WorldWideScience

Sample records for bayesian expectation maximization

  1. Bayesian tracking of multiple point targets using expectation maximization

    DEFF Research Database (Denmark)

    Selvan, Raghavendra

    The range of applications where target tracking is useful has grown well beyond the classical military and radar-based tracking applications. With the increasing enthusiasm in autonomous solutions for vehicular and robotics navigation, much of the maneuverability can be provided based on solutions … the measurements from sensors to choose the best data association hypothesis, from which the estimates of target trajectories can be obtained. In an ideal world, we could maintain all possible data association hypotheses from observing all measurements, and pick the best hypothesis. But, it turns out the number … joint density is maximized over the data association variables, or over the target state variables, two EM-based algorithms for tracking multiple point targets are derived, implemented and evaluated. In the first algorithm, the data association variable is integrated out, and the target states...

  2. Uncovering Gene Regulatory Networks from Time-Series Microarray Data with Variational Bayesian Structural Expectation Maximization

    Directory of Open Access Journals (Sweden)

    Huang Yufei

    2007-01-01

    Full Text Available We investigate in this paper reverse engineering of gene regulatory networks from time-series microarray data. We apply dynamic Bayesian networks (DBNs) for modeling cell cycle regulations. In developing a network inference algorithm, we focus on soft solutions that can provide a posteriori probability (APP) of network topology. In particular, we propose a variational Bayesian structural expectation maximization algorithm that can learn the posterior distribution of the network model parameters and topology jointly. We also show how the obtained APPs of the network topology can be used in a Bayesian data integration strategy to integrate two different microarray data sets. The proposed VBSEM algorithm has been tested on yeast cell cycle data sets. To evaluate the confidence of the inferred networks, we apply a moving block bootstrap method. The inferred network is validated by comparing it to the KEGG pathway map.

  3. Uncovering Gene Regulatory Networks from Time-Series Microarray Data with Variational Bayesian Structural Expectation Maximization

    Directory of Open Access Journals (Sweden)

    Isabel Tienda Luna

    2007-06-01

    Full Text Available We investigate in this paper reverse engineering of gene regulatory networks from time-series microarray data. We apply dynamic Bayesian networks (DBNs) for modeling cell cycle regulations. In developing a network inference algorithm, we focus on soft solutions that can provide a posteriori probability (APP) of network topology. In particular, we propose a variational Bayesian structural expectation maximization algorithm that can learn the posterior distribution of the network model parameters and topology jointly. We also show how the obtained APPs of the network topology can be used in a Bayesian data integration strategy to integrate two different microarray data sets. The proposed VBSEM algorithm has been tested on yeast cell cycle data sets. To evaluate the confidence of the inferred networks, we apply a moving block bootstrap method. The inferred network is validated by comparing it to the KEGG pathway map.

  4. Modelling Transcriptional Regulation with a Mixture of Factor Analyzers and Variational Bayesian Expectation Maximization

    Directory of Open Access Journals (Sweden)

    Kuang Lin

    2009-01-01

    Full Text Available Understanding the mechanisms of gene transcriptional regulation through analysis of high-throughput postgenomic data is one of the central problems of computational systems biology. Various approaches have been proposed, but most of them fail to address at least one of the following objectives: (1) allow for the fact that transcription factors are potentially subject to posttranscriptional regulation; (2) allow for the fact that transcription factors cooperate as a functional complex in regulating gene expression, and (3) provide a model and a learning algorithm with manageable computational complexity. The objective of the present study is to propose and test a method that addresses these three issues. The model we employ is a mixture of factor analyzers, in which the latent variables correspond to different transcription factors, grouped into complexes or modules. We pursue inference in a Bayesian framework, using the Variational Bayesian Expectation Maximization (VBEM) algorithm for approximate inference of the posterior distributions of the model parameters, and estimation of a lower bound on the marginal likelihood for model selection. We have evaluated the performance of the proposed method on three criteria: activity profile reconstruction, gene clustering, and network inference.

  5. Expectation Maximization Segmentation

    OpenAIRE

    Bergman, Niclas

    1998-01-01

    This report reviews the Expectation Maximization (EM) algorithm and applies it to the data segmentation problem, yielding the Expectation Maximization Segmentation (EMS) algorithm. The EMS algorithm requires batch processing of the data and can be applied to mode switching or jumping linear dynamical state space models. The EMS algorithm consists of an optimal fusion of fixed-interval Kalman smoothing and discrete optimization. The next section gives a short introduction to the EM algorithm with som...
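
    The record above, like most entries in this list, revolves around the same E-step/M-step alternation. For reference, the sketch below is a minimal, generic EM iteration for a two-component 1D Gaussian mixture in Python; it only illustrates that alternation, it is not the EMS algorithm from the report, and all names, initial values and test data in it are ours.

        import numpy as np

        def em_gmm_1d(x, n_iter=50):
            """Generic EM for a two-component 1D Gaussian mixture (illustrative only)."""
            # Crude initialization from the data.
            mu = np.array([x.min(), x.max()], dtype=float)
            var = np.array([x.var(), x.var()]) + 1e-6
            pi = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: posterior responsibility of each component for each point.
                dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
                resp = pi * dens
                resp /= resp.sum(axis=1, keepdims=True)
                # M-step: weighted maximum-likelihood updates of the parameters.
                nk = resp.sum(axis=0)
                mu = (resp * x[:, None]).sum(axis=0) / nk
                var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
                pi = nk / len(x)
            return pi, mu, var

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
            weights, means, variances = em_gmm_1d(data)
            print(np.round(weights, 3), np.round(means, 3), np.round(variances, 3))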

  6. Practical Privacy For Expectation Maximization

    OpenAIRE

    Park, Mijung; Foulds, Jimmy; Chaudhuri, Kamalika; Welling, Max

    2016-01-01

    Expectation maximization (EM) is an iterative algorithm that computes maximum likelihood and maximum a posteriori estimates for models with unobserved variables. While widely used, the iterative nature of EM presents challenges for privacy-preserving estimation. Multiple iterations are required to obtain accurate parameter estimates, yet each iteration increases the amount of noise that must be added to achieve a reasonable degree of privacy. We propose a practical algorithm that overcomes th...

  7. Spatial based Expectation Maximizing (EM)

    OpenAIRE

    Balafar M A

    2011-01-01

    Abstract Background: Expectation maximization (EM) is one of the common approaches for image segmentation. Methods: An improvement of the EM algorithm is proposed and its effectiveness for MRI brain image segmentation is investigated. In order to improve EM performance, the proposed algorithm incorporates neighbourhood information into the clustering process. First, an average image is obtained as neighbourhood information, and then it is incorporated into the clustering process. Also, as an option, us...

  8. Expectation maximization applied to GMTI convoy tracking

    Science.gov (United States)

    Koch, Wolfgang

    2002-08-01

    Collectively moving ground targets are typical of a military ground situation and have to be treated as separate aggregated entities. For a long-range ground surveillance application with airborne GMTI radar we in particular address the task of track maintenance for ground moving convoys consisting of a small number of individual vehicles. In the proposed approach the identity of the individual vehicles within the convoy is no longer stressed. Their kinematical state vectors are rather treated as internal degrees of freedom characterizing the convoy, which is considered as a collective unit. In this context, the Expectation Maximization technique (EM), originally developed for incomplete data problems in statistical inference and first applied to tracking applications by STREIT et al., seems to be a promising approach. We suggest embedding the EM algorithm into a more traditional Bayesian tracking framework for dealing with false or unwanted sensor returns. The proposed distinction between external and internal data association conflicts (i.e. those among the convoy vehicles) should also enable the application of sequential track extraction techniques introduced by Van Keuk for aircraft formations, providing estimates of the number of the individual convoy vehicles involved. Even with sophisticated signal processing methods (STAP: Space-Time Adaptive Processing), ground moving vehicles can well be masked by the sensor specific clutter notch (Doppler blinding). This physical phenomenon results in interfering fading effects, which can last over a longer series of sensor updates and therefore will seriously affect the track quality unless properly handled. Moreover, for ground moving convoys the phenomenon of Doppler blindness often superposes the effects induced by the finite resolution capability of the sensor. In many practical cases a separate modeling of resolution phenomena for convoy targets can therefore be omitted, provided the GMTI detection model is used

  9. Why Contextual Preference Reversals Maximize Expected Value

    Science.gov (United States)

    2016-01-01

    Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391

  10. Tracking Articulated Bodies using Generalized Expectation Maximization

    OpenAIRE

    Fossati, Andrea; Arnaud, Elise; Horaud, Radu; Fua, Pascal

    2008-01-01

    A Generalized Expectation Maximization (GEM) algorithm is used to retrieve the pose of a person from a monocular video sequence shot with a moving camera. After embedding the set of possible poses in a low dimensional space using Principal Component Analysis, the configuration that gives the best match to the input image is held as estimate for the current frame. This match is computed iterating GEM to assign edge pixels to the correct body part and to find the body pose that maximizes the li...

  11. Expectation-maximization for logistic regression

    OpenAIRE

    Scott, James G.; Sun, Liang

    2013-01-01

    We present a family of expectation-maximization (EM) algorithms for binary and negative-binomial logistic regression, drawing a sharp connection with the variational-Bayes algorithm of Jaakkola and Jordan (2000). Indeed, our results allow a version of this variational-Bayes approach to be re-interpreted as a true EM algorithm. We study several interesting features of the algorithm, and of this previously unrecognized connection with variational Bayes. We also generalize the approach to sparsi...

  12. Expectation Maximization and Mixture Modeling Tutorial

    OpenAIRE

    Dinov, Ivo D.

    2008-01-01

    This technical report describes the statistical method of expectation maximization (EM) for parameter estimation. Several 1D, 2D, 3D and n-D examples are presented in this document. Applications of the EM method are also demonstrated in the case of mixture modeling using interactive Java applets in 1D (e.g., curve fitting), 2D (e.g., point clustering and classification) and 3D (e.g., brain tissue classification).

  13. Expectation-Maximization Approach to Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    Piscataway: IEEE, 2011, pp. 559-566. ISBN 978-1-4244-9636-5. [IJCNN 2011. International Joint Conference on Neural Networks. San Jose (US), 31.07.2011-05.08.2011] R&D Projects: GA ČR GAP202/10/0262; GA ČR GA205/09/1079; GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean factor analysis * bars problem * dendritic inhibition * expectation-maximization * neural network application * statistics Subject RIV: IN - Informatics, Computer Science

  14. On the Expectation-Maximization Unfolding with Smoothing

    CERN Document Server

    Volobouev, Igor

    2014-01-01

    Error propagation formulae are derived for the expectation-maximization iterative unfolding algorithm regularized by a smoothing step. The effective number of parameters in the fit to the observed data is defined for unfolding procedures. Based upon this definition, the Akaike information criterion is proposed as a principle for choosing the smoothing parameters in an automatic, data-dependent manner. The performance and the frequentist coverage of the resulting method are investigated using simulated samples. A number of issues of general relevance to all unfolding techniques are discussed, including irreducible bias, uncertainty increase due to a data-dependent choice of regularization strength, and presentation of results.

  15. Joint Iterative Carrier Synchronization and Signal Detection Employing Expectation Maximization

    DEFF Research Database (Denmark)

    Zibar, Darko; de Carvalho, Luis Henrique Hecker; Estaran Tolosa, Jose Manuel;

    2014-01-01

    In this paper, joint estimation of carrier frequency, phase, signal means and noise variance, in a maximum likelihood sense, is performed iteratively by employing expectation maximization. The parameter estimation is soft decision driven and allows joint carrier synchronization and data detection. The algorithm is tested in a mixed line rate optical transmission scenario employing a dual polarization 448 Gb/s 16-QAM signal surrounded by eight on-off keying channels in a 50 GHz grid. It is shown that joint carrier synchronization and data detection are more robust towards optical transmitter impairments and nonlinear phase noise, compared to a digital phase-locked loop (PLL) followed by hard decisions. Additionally, soft decision driven joint carrier synchronization and detection offers an improvement of 0.5 dB in terms of input power compared to hard decision digital PLL based carrier...

  16. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using the maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both state and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using the EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
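
    The filter-then-refit loop described in this record (EKF for the states, MLE for the parameters, iterated) can be illustrated on a much simpler model. The sketch below assumes a scalar linear-Gaussian state-space model with known transition and process-noise parameters: the E-step is a Kalman filter plus RTS smoother, and the M-step is a closed-form update of the observation-noise variance. It is only a toy analogue of the approach in the record, not its implementation, and the model and variable names are made up.

        import numpy as np

        def em_linear_gaussian(y, a=1.0, q=0.01, n_iter=20):
            """Toy EM for x_t = a*x_{t-1} + w (var q), y_t = x_t + v (var r): estimate r."""
            T = len(y)
            r = np.var(y)  # initial guess for the observation-noise variance
            for _ in range(n_iter):
                # E-step, part 1: Kalman filter.
                ms_f, ps_f = np.zeros(T), np.zeros(T)
                m_pred, p_pred = np.zeros(T), np.zeros(T)
                m, p = y[0], r
                for t in range(T):
                    if t > 0:
                        m, p = a * m, a * a * p + q          # predict
                    m_pred[t], p_pred[t] = m, p
                    k = p / (p + r)                           # Kalman gain
                    m, p = m + k * (y[t] - m), (1 - k) * p    # update
                    ms_f[t], ps_f[t] = m, p
                # E-step, part 2: RTS smoother.
                ms, ps = ms_f.copy(), ps_f.copy()
                for t in range(T - 2, -1, -1):
                    g = ps_f[t] * a / p_pred[t + 1]
                    ms[t] = ms_f[t] + g * (ms[t + 1] - m_pred[t + 1])
                    ps[t] = ps_f[t] + g * g * (ps[t + 1] - p_pred[t + 1])
                # M-step: closed-form update of the observation-noise variance.
                r = np.mean((y - ms) ** 2 + ps)
            return r, ms

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            x = np.cumsum(rng.normal(0.0, 0.1, 400))          # latent random walk
            y = x + rng.normal(0.0, 0.5, 400)                 # noisy observations (true r = 0.25)
            r_hat, _ = em_linear_gaussian(y, q=0.01)
            print("estimated observation-noise variance:", round(float(r_hat), 3))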

  17. Parallel Expectation-Maximization Algorithm for Large Databases

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; SONG Han-tao; LU Yu-chang

    2006-01-01

    A new parallel expectation-maximization (EM) algorithm is proposed for large databases. The purpose of the algorithm is to accelerate the operation of the EM algorithm. As a well-known algorithm for estimation in generic statistical problems, the EM algorithm has been widely used in many domains, but it often requires significant computational resources. More elaborate methods are therefore needed to handle databases with a large number of records or high dimensionality. The parallel EM algorithm is based on partial E-steps and retains the standard convergence guarantee of EM. The algorithm fully utilizes the advantage of parallel computation. It was confirmed that the algorithm obtains about a 2.6× speedup compared with the standard EM algorithm when applied to large databases. The running time decreases nearly linearly as the number of processors increases.
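
    The parallelism exploited above rests on the fact that the E-step of a mixture model decomposes over data records: sufficient statistics can be accumulated per data block (or per worker) and summed before a single M-step. Below is a minimal single-process sketch of that decomposition for a 1D Gaussian mixture; it is not the paper's parallel implementation, and the block size and variable names are ours.

        import numpy as np

        def e_step_block(x_block, pi, mu, var):
            """Sufficient statistics of one data block for a 1D Gaussian mixture."""
            dens = np.exp(-0.5 * (x_block[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            resp = pi * dens
            resp /= resp.sum(axis=1, keepdims=True)
            # Per-block statistics: counts, weighted sums, weighted sums of squares.
            return (resp.sum(0),
                    (resp * x_block[:, None]).sum(0),
                    (resp * x_block[:, None] ** 2).sum(0))

        def em_blocked(x, n_components=2, block_size=10_000, n_iter=30):
            rng = np.random.default_rng(0)
            mu = rng.choice(x, n_components, replace=False)
            var = np.full(n_components, x.var())
            pi = np.full(n_components, 1.0 / n_components)
            for _ in range(n_iter):
                # E-step over blocks: each block could run on a different worker;
                # here they are processed sequentially and their statistics summed.
                stats = [e_step_block(x[i:i + block_size], pi, mu, var)
                         for i in range(0, len(x), block_size)]
                nk = sum(s[0] for s in stats)
                sx = sum(s[1] for s in stats)
                sxx = sum(s[2] for s in stats)
                # M-step uses only the aggregated statistics.
                mu = sx / nk
                var = sxx / nk - mu ** 2 + 1e-6
                pi = nk / len(x)
            return pi, mu, var

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            data = np.concatenate([rng.normal(0, 1, 50_000), rng.normal(5, 2, 50_000)])
            print(em_blocked(data))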

  18. Expectation-Maximization Binary Clustering for Behavioural Annotation.

    Science.gov (United States)

    Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic

    2016-01-01

    The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631

  19. Generalized expectation-maximization segmentation of brain MR images

    Science.gov (United States)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

    Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.

  20. The Noisy Expectation-Maximization Algorithm for Multiplicative Noise Injection

    Science.gov (United States)

    Osoba, Osonde; Kosko, Bart

    2016-03-01

    We generalize the noisy expectation-maximization (NEM) algorithm to allow arbitrary modes of noise injection besides just adding noise to the data. The noise must still satisfy a NEM positivity condition. This generalization includes the important special case of multiplicative noise injection. A generalized NEM theorem shows that all measurable modes of injecting noise will speed the average convergence of the EM algorithm if the noise satisfies a generalized NEM positivity condition. This noise-benefit condition has a simple quadratic form for Gaussian and Cauchy mixture models in the case of multiplicative noise injection. Simulations show a multiplicative-noise EM speed-up of more than 27% in a simple Gaussian mixture model. Injecting blind noise only slowed convergence. A related theorem gives a sufficient condition for an average EM noise benefit for arbitrary modes of noise injection if the data model comes from the general exponential family of probability density functions. A final theorem shows that injected noise slows EM convergence on average if the NEM inequalities reverse and the noise satisfies a negativity condition.

  1. Expectation maximization reconstruction for circular orbit cone-beam CT

    Science.gov (United States)

    Dong, Baoyu

    2008-03-01

    Cone-beam computed tomography (CBCT) is a technique for imaging cross-sections of an object using a series of X-ray measurements taken from different angles around the object. It has been widely applied in diagnostic medicine and industrial non-destructive testing. Traditional CT reconstructions are limited by many kinds of artifacts and can give unsatisfactory images. To reduce image noise and artifacts, we propose a statistical iterative approach for cone-beam CT reconstruction. First the theory of maximum likelihood estimation is extended to the X-ray scan, and an expectation-maximization (EM) formula is deduced for direct reconstruction of circular orbit cone-beam CT. Then the EM formula is implemented in cone-beam geometry for artifact reduction. The EM algorithm is a feasible iterative method based on the statistical properties of the Poisson distribution, and it can provide good quality reconstructions after a few iterations for cone-beam CT. Finally, experimental results with computer-simulated data and real CT data are presented to verify that our method is effective.
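
    For context, the multiplicative update at the heart of this kind of statistical reconstruction is the standard MLEM iteration for a linear projection model with Poisson data, x <- (x / (A^T 1)) * A^T (y / (A x)). The sketch below applies it to a tiny made-up dense system matrix; it is not the paper's cone-beam implementation.

        import numpy as np

        def mlem(A, y, n_iter=100):
            """Standard MLEM update: x <- (x / (A^T 1)) * A^T (y / (A x))."""
            x = np.ones(A.shape[1])            # non-negative initial image
            sens = A.sum(axis=0)               # sensitivity image A^T 1
            for _ in range(n_iter):
                proj = A @ x                   # forward projection
                ratio = y / np.maximum(proj, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            A = rng.uniform(0, 1, size=(40, 10))      # toy projection matrix
            x_true = rng.uniform(1, 5, size=10)
            y = rng.poisson(A @ x_true)               # Poisson-distributed measurements
            print("estimate:", np.round(mlem(A, y), 2))
            print("truth:   ", np.round(x_true, 2))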

  2. Nonlinear Impairment Compensation Using Expectation Maximization for PDM 16-QAM Systems

    DEFF Research Database (Denmark)

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo;

    2012-01-01

    We show experimentally that by using a non-linear signal processing based algorithm, expectation maximization, nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth.

  3. Expectation-Maximization for Learning Determinantal Point Processes

    OpenAIRE

    Gillenwater, Jennifer; Kulesza, Alex; Fox, Emily; Taskar, Ben

    2014-01-01

    A determinantal point process (DPP) is a probabilistic model of set diversity compactly parameterized by a positive semi-definite kernel matrix. To fit a DPP to a given task, we would like to learn the entries of its kernel matrix by maximizing the log-likelihood of the available data. However, log-likelihood is non-convex in the entries of the kernel matrix, and this learning problem is conjectured to be NP-hard. Thus, previous work has instead focused on more restricted convex learning sett...

  4. Boolean Factor Analysis by Expectation-Maximization Method

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    Heidelberg: Springer, 2013 - (Kudělka, M.; Pokorný, J.; Snášel, V.; Abraham, A.), pp. 243-254 ISBN 978-3-642-31602-9. ISSN 2194-5357. - (Advances in Intelligent Systems and Computing. 179). [IHCI 2011. International Conference on Intelligent Human Computer Interaction /3./. Prague (CZ), 29.08.2011-31.08.2011] R&D Projects: GA ČR GAP202/10/0262; GA ČR GA205/09/1079 Other grants: GA MŠk(CZ) ED1.1.00/02.0070 Institutional research plan: CEZ:AV0Z10300504 Keywords : neural networks * hidden pattern search * Boolean factor analysis * generative model * information redundancy * expectation-maximization Subject RIV: IN - Informatics, Computer Science

  5. Expectation-maximization for Bayes-adaptive POMDPs

    OpenAIRE

    Erik P. Vargo; Randy Cogill

    2015-01-01

    Bayes-adaptive POMDPs (BAPOMDPs) are partially observable Markov decision problems in which uncertainty in the state-transition and observation-emission probabilities can be captured by a prior distribution over the model parameters. Existing approaches to solving BAPOMDPs rely on model and trajectory sampling to guide exploration and, because of the curse of dimensionality, do not scale well when the degree of model uncertainty is large. In this paper, we begin by presenting two expectation-...

  6. Single-Trial Extraction of Pure Somatosensory Evoked Potential Based on Expectation Maximization Approach.

    Science.gov (United States)

    Chen, Wei; Chang, Chunqi; Hu, Yong

    2016-01-01

    It is of great importance for intraoperative monitoring to accurately extract somatosensory evoked potentials (SEPs) and track their changes quickly. Currently, multi-trial averaging is widely adopted for SEP signal extraction. However, because of the loss of variations related to SEP features across different trials, SEPs estimated in this way are not suitable for the purpose of real-time monitoring of every single trial of SEP. In order to handle this issue, a number of single-trial SEP extraction approaches have been developed in the literature, such as ARX and SOBI, but most of them have their performance limited due to insufficient utilization of the multi-trial and multi-condition structures of the signals. In this paper, a novel Bayesian model of SEP signals is proposed to make systematic use of multi-trial and multi-condition priors and other structural information in the signal by integrating both a cortical source propagation model and a SEP basis components model, and an Expectation Maximization (EM) algorithm is developed for single-trial SEP estimation under this model. Numerical simulations demonstrate that the developed method can provide reasonably good single-trial estimations of SEP as long as the signal-to-noise ratio (SNR) of the measurements is no worse than -25 dB. The effectiveness of the proposed method is further verified by its application to real SEP measurements of a number of different subjects during spinal surgeries. It is observed that using the proposed approach the main SEP features (i.e., latencies) can be reliably estimated on a single-trial basis, and thus the variation of latencies in different trials can be traced, which provides a solid support for surgical intraoperative monitoring. PMID:26742104

  7. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  8. Estimating Rigid Transformation Between Two Range Maps Using Expectation Maximization Algorithm

    CERN Document Server

    Zeng, Shuqing

    2012-01-01

    We address the problem of estimating a rigid transformation between two point sets, which is a key module for target tracking systems using Light Detection And Ranging (LiDAR). A fast implementation of the Expectation-maximization (EM) algorithm is presented whose complexity is O(N), with N the number of scan points.
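
    A common way to cast rigid registration as EM, which appears to be the spirit of this record, is to treat point correspondences as latent variables: the E-step computes soft assignments from Gaussian distances and the M-step solves a weighted Procrustes problem (SVD of a cross-covariance) for the rotation and translation. The 2D sketch below is that generic EM-ICP-style scheme, not the paper's O(N) LiDAR implementation; the annealing schedule and all constants are assumptions of ours.

        import numpy as np

        def em_rigid_2d(src, dst, sigma2=1.0, n_iter=30):
            """Soft-correspondence EM estimate of R, t such that dst ~= src @ R.T + t."""
            R, t = np.eye(2), np.zeros(2)
            for _ in range(n_iter):
                moved = src @ R.T + t
                # E-step: responsibility of each dst point for each src point.
                d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
                w = np.exp(-0.5 * d2 / sigma2)
                w /= w.sum(axis=1, keepdims=True)
                # M-step: weighted Procrustes via SVD of the cross-covariance.
                dst_hat = w @ dst                      # soft target for each src point
                mu_s, mu_d = src.mean(0), dst_hat.mean(0)
                H = (src - mu_s).T @ (dst_hat - mu_d)
                U, _, Vt = np.linalg.svd(H)
                D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
                R = Vt.T @ D @ U.T
                t = mu_d - R @ mu_s
                sigma2 = max(0.9 * sigma2, 1e-2)       # simple annealing of the match scale
            return R, t

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            src = rng.uniform(-5, 5, size=(200, 2))
            theta = 0.3
            R_true = np.array([[np.cos(theta), -np.sin(theta)],
                               [np.sin(theta),  np.cos(theta)]])
            dst = src @ R_true.T + np.array([1.0, -2.0]) + rng.normal(0, 0.05, (200, 2))
            R_est, t_est = em_rigid_2d(src, dst)
            print("estimated R:\n", np.round(R_est, 3))
            print("estimated t:", np.round(t_est, 3), "(true t was [1, -2])")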

  9. Anticipated utility and rational expectations as approximations of Bayesian decision making

    OpenAIRE

    Cogley, Timothy W.; Sargent, Thomas J.

    2005-01-01

    For a Markov decision problem in which unknown transition probabilities serve as hidden state variables, we study the quality of two approximations to the decision rule of a Bayesian who each period updates his subjective distribution over the transition probabilities by Bayes' law. The first is the usual rational expectations approximation that assumes that the decision maker knows the transition probabilities. The second approximation is a version of Kreps' (1998) anticipated utility mo...

  10. Bayesian Forecasting of US Growth using Basic Time Varying Parameter Models and Expectations Data

    OpenAIRE

    Basturk, Nalan; Ceyhan, Pinar; Dijk, Herman

    2014-01-01

    Time varying patterns in US growth are analyzed using various univariate model structures, starting from a naive model structure where all features change every period to a model where the slow variation in the conditional mean and changes in the conditional variance are specified together with their interaction, including survey data on expected growth in order to strengthen the information in the model. Use is made of a simulation based Bayesian inferential meth...

  11. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    Science.gov (United States)

    Rujirakul, Kanokmon; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in the reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems, that is, speed-ups of over nine and three times over PCA and Parallel PCA, respectively. PMID:24955405
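
    The covariance-free route to PCA alluded to above is usually associated with the EM algorithm for PCA (Roweis), which alternates two least-squares solves instead of forming and decomposing the covariance matrix. The sketch below shows only that generic EM-PCA iteration; it is not the PEM-PCA architecture described in the record, and the test data are synthetic.

        import numpy as np

        def em_pca(Y, k, n_iter=100):
            """Roweis-style EM for PCA: Y is (d, n), assumed centered; returns a basis (d, k)."""
            d, n = Y.shape
            rng = np.random.default_rng(0)
            W = rng.normal(size=(d, k))
            for _ in range(n_iter):
                # E-step: latent coordinates given the current basis.
                X = np.linalg.solve(W.T @ W, W.T @ Y)          # (k, n)
                # M-step: basis given the latent coordinates.
                W = Y @ X.T @ np.linalg.inv(X @ X.T)           # (d, k)
            # Orthonormalize; the span equals the leading principal subspace.
            Q, _ = np.linalg.qr(W)
            return Q

        if __name__ == "__main__":
            rng = np.random.default_rng(5)
            latent = rng.normal(size=(2, 500))
            mix = rng.normal(size=(10, 2))
            Y = mix @ latent + 0.05 * rng.normal(size=(10, 500))
            Y -= Y.mean(axis=1, keepdims=True)
            W = em_pca(Y, k=2)
            # Compare the recovered 2D subspace with the top-2 SVD subspace:
            # singular values near 1 mean the subspaces coincide.
            U = np.linalg.svd(Y, full_matrices=False)[0][:, :2]
            print(np.round(np.linalg.svd(U.T @ W, compute_uv=False), 4))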

  12. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    Directory of Open Access Journals (Sweden)

    Kanokmon Rujirakul

    2014-01-01

    Full Text Available Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in the reduction of the stages’ complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems, that is, speed-ups of over nine and three times over PCA and Parallel PCA, respectively.

  13. Detection of Moroccan Coastal Upwelling in SST images using the Expectation-Maximization

    OpenAIRE

    Tamim, Ayoub; Minaoui, Khalid; Daoudi, Khalid; Atillah, Abderrahman; Aboutajdine, Driss

    2014-01-01

    This paper proposes an unsupervised algorithm for automatic detection and segmentation of the upwelling region in the Moroccan Atlantic coast using Sea Surface Temperature (SST) satellite images. This has been done by exploring the Expectation-Maximization algorithm. The number of clusters that best reproduces the shape of the upwelling areas is selected using the two popular Davies-Bouldin and Dunn indices. An area opening technique is developed that is used to remov...

  14. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    OpenAIRE

    Dengfu Zhao; Zhiping Chen; Qihong Duan

    2010-01-01

    In many applications, the failure rate function may present a bathtub shape curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transi...

  15. Convergence of the Monte Carlo expectation maximization for curved exponential families

    OpenAIRE

    Fort, Gersende; Moulines, Eric

    2003-01-01

    The Monte Carlo expectation maximization (MCEM) algorithm is a versatile tool for inference in incomplete data models, especially when used in combination with Markov chain Monte Carlo simulation methods. In this contribution, the almost-sure convergence of the MCEM algorithm is established. It is shown, using uniform versions of ergodic theorems for Markov chains, that MCEM converges under weak conditions on the simulation kernel. Practical illustrations are presented, using a hybrid random ...

  16. Online Expectation Maximization based algorithms for inference in hidden Markov models

    OpenAIRE

    Le Corff, Sylvain; Fort, Gersende

    2011-01-01

    The Expectation Maximization (EM) algorithm is a versatile tool for model parameter estimation in latent data models. When processing large data sets or data streams, however, EM becomes intractable since it requires the whole data set to be available at each iteration of the algorithm. In this contribution, a new generic online EM algorithm for model parameter inference in general hidden Markov models is proposed. This new algorithm updates the parameter estimate after a block of observations i...

  17. Expectation-Maximization Method for EEG-Based Continuous Cursor Control

    Directory of Open Access Journals (Sweden)

    Yixiao Wang

    2007-01-01

    Full Text Available To develop effective learning algorithms for continuous prediction of cursor movement using EEG signals is a challenging research issue in brain-computer interface (BCI). In this paper, we propose a novel statistical approach based on the expectation-maximization (EM) method to learn the parameters of a classifier for EEG-based cursor control. To train a classifier for continuous prediction, trials in the training data set are first divided into segments. The difficulty is that the actual intention (label) at each time interval (segment) is unknown. To handle the uncertainty of the segment label, we treat the unknown labels as the hidden variables in the lower bound on the log posterior and maximize this lower bound via an EM-like algorithm. Experimental results have shown that the averaged accuracy of the proposed method is among the best.

  18. Expectation-Maximization Algorithms for Obtaining Estimations of Generalized Failure Intensity Parameters

    Directory of Open Access Journals (Sweden)

    Makram KRIT

    2016-01-01

    Full Text Available This paper presents several iterative methods based on the Stochastic Expectation-Maximization (EM) methodology in order to estimate parametric reliability models for random lifetime data. The methodology is related to Maximum Likelihood Estimates (MLE) in the case of missing data. A bathtub form of the failure intensity of a repairable system is presented, where the estimation of its parameters is considered through the EM algorithm. Field failure data from an industrial site are used to fit the model. Finally, interval estimation based on large-sample theory in the literature is discussed, and the examination of the actual coverage probabilities of these confidence intervals is presented using the Monte Carlo simulation method.

  19. 3D Ordered Subset Expectation Maximization (OSEM) Algorithm for a Compton camera

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Soo Mee; Lee, Jae Sung; Kim, Joong Hyun; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of); Lee, Mi No; Lee, Soo Jin [Paichai University, Daejeon (Korea, Republic of); Lee, Ju Hahn; Lee, Chun Sik [Chungang University, Seoul (Korea, Republic of); Kim, Chan Hyeong [Hanyang University, Seoul (Korea, Republic of)

    2007-07-01

    The Compton camera is a single-photon imaging device that employs electronic collimation based on the relationship between the energy transfer and the Compton scattering angle of the γ-ray in the detector. In this study, the expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, was applied to the problem of image reconstruction for a Compton camera, which is known to be computationally challenging. This study also compared several methods of constructing subsets for the optimal performance of these algorithms.
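
    Ordered-subsets acceleration simply cycles an MLEM-style update over disjoint subsets of the measurements, so each pass through the data yields several image updates instead of one. The sketch below shows that generic OSEM loop for a linear Poisson model with a simple interleaved subset construction; it is not a Compton-camera system model, and all matrices and sizes are made up.

        import numpy as np

        def osem(A, y, n_subsets=4, n_iter=10):
            """Ordered-subsets EM: one MLEM-style update per projection subset, cycled each pass."""
            x = np.ones(A.shape[1])
            subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]  # interleaved rows
            for _ in range(n_iter):
                for idx in subsets:
                    As, ys = A[idx], y[idx]
                    proj = As @ x
                    x *= (As.T @ (ys / np.maximum(proj, 1e-12))) / np.maximum(As.sum(axis=0), 1e-12)
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            A = rng.uniform(0, 1, size=(80, 16))
            x_true = rng.uniform(1, 4, size=16)
            y = rng.poisson(A @ x_true)
            print("estimate:", np.round(osem(A, y), 2))
            print("truth:   ", np.round(x_true, 2))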

  20. Impact of expectation-maximization reconstruction iterations on the diagnosis of temporal lobe epilepsy with PET

    OpenAIRE

    Floberg, John M.; Struck, Aaron F; Peters, Brooke K; Jaskowiak, Christine J; Perlman, Scott B; Hall, Lance T

    2012-01-01

    There is a well known tradeoff between image noise and image sharpness that is dependent on the number of iterations performed in ordered subset expectation maximization (OSEM) reconstruction of PET data. We aim to evaluate the impact of this tradeoff on the sensitivity and specificity of 18F-FDG PET for the diagnosis of temporal lobe epilepsy. A retrospective blinded reader study was performed on two OSEM reconstructions, using either 2 or 5 iterations, of 32 18F-FDG PET studies acquired at ...

  1. Nonlinear impairment compensation using expectation maximization for dispersion managed and unmanaged PDM 16-QAM transmission

    DEFF Research Database (Denmark)

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo; Borkowski, Robert; Caballero Jambrina, Antonio; Arlunno, Valeria; Schmidt, Mikkel Nørgaard; Guerrero Gonzalez, Neil; Mao, Bangning; Ye, Yabin; Larsen, Knud J.; Tafur Monroy, Idelfonso

    2012-01-01

    In this paper, we show numerically and experimentally that the expectation maximization (EM) algorithm is a powerful tool in combating system impairments such as fibre nonlinearities, inphase and quadrature (I/Q) modulator imperfections and laser linewidth. The EM algorithm is an iterative algorithm … that can be used to compensate for impairments which have an imprint on a signal constellation, i.e. rotation and distortion of the constellation points. The EM is especially effective for combating non-linear phase noise (NLPN), because NLPN severely distorts the signal constellation and...

  2. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    Science.gov (United States)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-04-01

    Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a priori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design.

  3. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    Science.gov (United States)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method, called the square-root cubature Kalman smoother (SCKS), for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton CKF (TNF-CKF), a recent robust method which works in the filtering sense.

  4. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    Full Text Available In many applications, the failure rate function may present a bathtub shape curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time of reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
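
    The quantity that the abstract says must be computed repeatedly, the exponential of the m×m upper-triangular transient generator, is what yields the likelihood of each failure time in a phase-type model: f(t) = alpha exp(T t) t0 with t0 = -T·1. The sketch below evaluates only that likelihood (not the EM iteration itself); the generator and initial distribution are illustrative values, not taken from the paper.

        import numpy as np
        from scipy.linalg import expm

        def phase_type_density(t, alpha, T):
            """Density of the absorption (failure) time of a CTMC with transient generator T."""
            exit_rates = -T @ np.ones(T.shape[0])          # rates into the absorbing state
            return float(alpha @ expm(T * t) @ exit_rates)

        if __name__ == "__main__":
            # Upper-triangular transient generator of a 3-phase model (illustrative values).
            T = np.array([[-2.0, 1.5, 0.3],
                          [0.0, -1.0, 0.8],
                          [0.0,  0.0, -0.5]])
            alpha = np.array([1.0, 0.0, 0.0])              # start in the first phase
            times = np.array([0.5, 1.0, 2.0, 4.0])
            log_lik = sum(np.log(phase_type_density(t, alpha, T)) for t in times)
            print(round(log_lik, 4))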

  5. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    International Nuclear Information System (INIS)

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  6. String-averaging expectation-maximization for maximum likelihood estimation in emission tomography

    International Nuclear Information System (INIS)

    We study the maximum likelihood model in emission tomography and propose a new family of algorithms for its solution, called string-averaging expectation-maximization (SAEM). In the string-averaging algorithmic regime, the index set of all underlying equations is split into subsets, called ‘strings’, and the algorithm separately proceeds along each string, possibly in parallel. Then, the end-points of all strings are averaged to form the next iterate. SAEM algorithms with several strings present better practical merits than the classical row-action maximum-likelihood algorithm. We present numerical experiments showing the effectiveness of the algorithmic scheme, using data of image reconstruction problems. Performance is evaluated from the computational cost and reconstruction quality viewpoints. A complete convergence theory is also provided. (paper)

  7. Weighted expectation maximization reconstruction algorithms with application to gated megavoltage tomography

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Jin [Department of Biomedical Engineering, Illinois Institute of Technology, 10 West 32nd Street, E1-116, Chicago, IL 60616 (United States); Shi Daxin [Department of Biomedical Engineering, Illinois Institute of Technology, 10 West 32nd Street, E1-116, Chicago, IL 60616 (United States); Anastasio, Mark A [Department of Biomedical Engineering, Illinois Institute of Technology, 10 West 32nd Street, E1-116, Chicago, IL 60616 (United States); Sillanpaa, Jussi [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, NY 10021 (United States); Chang Jenghwa [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, NY 10021 (United States)

    2005-11-07

    We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)

  8. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Angelis, Georgios I.; Matthews, Julian C.; Markiewicz, Pawel J.; Kotasidis, Fotis A. [Manchester Univ. (United Kingdom). Dept. of Cancer and Enabling Sciences; Lionheart, William R. [Manchester Univ. (United Kingdom). School of Mathematics; Reader, Andrew J. [McGill Univ., Montreal, QC (Canada). Brain Imaging Centre

    2011-07-01

    Recent studies have demonstrated the benefits of a resolution model within the reconstruction algorithm in an attempt to account for those effects that degrade the resolution of an image. However, these algorithms usually suffer from slower convergence rates due to the additional need to solve an image resolution deconvolution problem. In this work we investigate a newly proposed algorithm, which decouples the tomographic and image resolution problems within an image based expectation maximization (EM) framework. Results showed that convergence can be accelerated by interleaving multiple iterations of an image based EM algorithm solving the resolution model problem with EM iterations solving the tomographic problem. Minor differences are observed using the proposed nested algorithm compared to the single iteration normally performed, when the optimal number of iterations is used for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer iterations. This may be of particular benefit for slowly converging portions of the image. (orig.)

  9. Clustering performance comparison using K-means and expectation maximization algorithms

    Science.gov (United States)

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-01-01

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
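
    A minimal way to reproduce the kind of comparison described above is to fit scikit-learn's KMeans and GaussianMixture (the latter fitted by EM) to the same feature matrix and compare the resulting labels. The snippet below does this on synthetic two-cluster data rather than the paper's red-wine data, so the numbers it prints are only illustrative.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.mixture import GaussianMixture
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(7)
        # Synthetic stand-in for a feature table: two elongated clusters.
        X = np.vstack([rng.normal([0, 0], [1.0, 3.0], (300, 2)),
                       rng.normal([4, 0], [1.0, 3.0], (300, 2))])
        y_true = np.repeat([0, 1], 300)

        km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

        print("K-means vs truth:", adjusted_rand_score(y_true, km_labels))
        print("EM (GMM) vs truth:", adjusted_rand_score(y_true, gmm_labels))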

  10. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    Energy Technology Data Exchange (ETDEWEB)

    Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp [Bank of Tokyo-Mitsubishi UFJ, Ltd., Corporate Risk Management Division (Japan); Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp [Osaka University, Division of Mathematical Science for Social Systems, Graduate School of Engineering Science (Japan); Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it [Universita di Padova, Dipartimento di Matematica Pura ed Applicata (Italy)

    2013-02-15

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  11. Expectation Maximization and the retrieval of the atmospheric extinction coefficients by inversion of Raman lidar data

    CERN Document Server

    Garbarino, Sara; Massone, Anna Maria; Sannino, Alessia; Boselli, Antonella; Wang, Xuan; Spinelli, Nicola; Piana, Michele

    2016-01-01

    We consider the problem of retrieving the aerosol extinction coefficient from Raman lidar measurements. This is an ill-posed inverse problem that needs regularization, and we propose to use the Expectation-Maximization (EM) algorithm to provide stable solutions. Indeed, EM is an iterative algorithm that imposes a positivity constraint on the solution, and provides regularization if iterations are stopped early enough. We describe the algorithm and propose a stopping criterion inspired by a statistical principle. We then discuss its properties concerning the spatial resolution. Finally, we validate the proposed approach by using both synthetic data and experimental measurements; we compare the reconstructions obtained by EM with those obtained by the Tikhonov method, by the Levenberg-Marquardt method, as well as those obtained by combining data smoothing and numerical derivation.
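
    The positivity-preserving EM iteration used here for an ill-posed linear problem has the familiar multiplicative (Richardson-Lucy/MLEM-type) form, with the number of iterations acting as the regularization parameter. The sketch below shows that generic update with a crude relative-change stopping rule; it is not the paper's lidar retrieval nor its statistically motivated stopping criterion, and the forward operator and data are made up.

        import numpy as np

        def em_early_stopped(A, y, max_iter=500, tol=1e-4):
            """Multiplicative EM update for y ~ A x, x >= 0, with the iteration count as regularizer."""
            x = np.ones(A.shape[1])
            sens = A.sum(axis=0)
            prev = x.copy()
            for k in range(max_iter):
                x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / np.maximum(sens, 1e-12)
                # Crude stopping rule: stop when the iterates barely change.
                # (The record above proposes a statistically motivated criterion instead.)
                if np.linalg.norm(x - prev) < tol * np.linalg.norm(prev):
                    break
                prev = x.copy()
            return x, k + 1

        if __name__ == "__main__":
            rng = np.random.default_rng(8)
            A = np.tril(np.ones((60, 60))) * 0.1          # smoothing-like forward operator
            x_true = np.exp(-np.linspace(0, 3, 60))       # smooth, positive "profile"
            y = rng.poisson(200 * (A @ x_true)) / 200.0   # noisy measurements
            x_hat, iters = em_early_stopped(A, y)
            print(iters, float(np.abs(x_hat - x_true).mean()))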

  12. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time and that EM has more difficulty in recovering the true scale of the coefficients. In this paper, the analysis is extended from cross-sectional data to the less volatile case of panel data to explore the effect on the relative performance of the methods with several realizations of the random coefficients. In a series of Monte Carlo experiments, evidence suggested four main conclusions: (a) efficiency increased when the true variance-covariance matrix became diagonal, (b) EM was more robust to the curse of dimensionality in regard to efficiency and estimation time, (c) EM did not recover the true scale…

  13. Unsupervised Gaussian Mixture-Model With Expectation Maximization for Detecting Glaucomatous Progression in Standard Automated Perimetry Visual Fields

    Science.gov (United States)

    Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H.; Medeiros, Felipe A.; Zangwill, Linda M.; Weinreb, Robert N.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Bowd, Christopher

    2016-01-01

    Purpose To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM–progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. Methods GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Results Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. Conclusions GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Translational Relevance Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning. PMID:27152250

  14. An expectation/maximization nuclear vector replacement algorithm for automated NMR resonance assignments

    Energy Technology Data Exchange (ETDEWEB)

    Langmead, Christopher James [Dartmouth Computer Science Department (United States); Donald, Bruce Randall [Dartmouth Center for Structural Biology and Computational Chemistry (United States)], E-mail: brd@cs.dartmouth.edu

    2004-06-15

    We report an automated procedure for high-throughput NMR resonance assignment for a protein of known structure, or of an homologous structure. Our algorithm performs Nuclear Vector Replacement (NVR) by Expectation/Maximization (EM) to compute assignments. NVR correlates experimentally-measured NH residual dipolar couplings (RDCs) and chemical shifts to a given a priori whole-protein 3D structural model. The algorithm requires only uniform {sup 15}N-labelling of the protein, and processes unassigned H{sup N}-{sup 15}N HSQC spectra, H{sup N}-{sup 15}N RDCs, and sparse H{sup N}-H{sup N} NOE's (d{sub NN}s). NVR runs in minutes and efficiently assigns the (H{sup N},{sup 15}N) backbone resonances as well as the sparse d{sub NN}s from the 3D {sup 15}N-NOESY spectrum, in O(n{sup 3}) time. The algorithm is demonstrated on NMR data from a 76-residue protein, human ubiquitin, matched to four structures, including one mutant (homolog), determined either by X-ray crystallography or by different NMR experiments (without RDCs). NVR achieves an average assignment accuracy of over 99%. We further demonstrate the feasibility of our algorithm for different and larger proteins, using different combinations of real and simulated NMR data for hen lysozyme (129 residues) and streptococcal protein G (56 residues), matched to a variety of 3D structural models. Abbreviations: NMR, nuclear magnetic resonance; NVR, nuclear vector replacement; RDC, residual dipolar coupling; 3D, three-dimensional; HSQC, heteronuclear single-quantum coherence; H{sup N}, amide proton; NOE, nuclear Overhauser effect; NOESY, nuclear Overhauser effect spectroscopy; d{sub NN}, nuclear Overhauser effect between two amide protons; MR, molecular replacement; SAR, structure activity relation; DOF, degrees of freedom; nt., nucleotides; SPG, Streptococcal protein G; SO(3), special orthogonal (rotation) group in 3D; EM, Expectation/Maximization; SVD, singular value decomposition.

  15. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)

  16. Receiver operating characteristic (ROC) analysis of images reconstructed with iterative expectation maximization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Yasuyuki; Murase, Kenya [Osaka Medical Coll., Takatsuki (Japan). Graduate School; Higashino, Hiroshi; Sogabe, Ichiro; Sakamoto, Kana

    2001-12-01

    The quality of images reconstructed by means of the maximum likelihood-expectation maximization (ML-EM) and ordered subset (OS)-EM algorithms was examined as a function of parameters such as the number of iterations and subsets, and then compared with the quality of images reconstructed by the filtered back projection method. Phantoms showing signals inside signals, which mimicked single-photon emission computed tomography (SPECT) images of cerebral blood flow and myocardial perfusion, and phantoms showing signals around the signals obtained by SPECT of bone and tumor were used for the experiments. To determine signals for recognition, SPECT images in which the signals could be appropriately recognized with a combination of fewer iterations and subsets of different sizes and densities were evaluated by receiver operating characteristic (ROC) analysis. The results of the ROC analysis were applied to myocardial phantom experiments and scintigraphy of myocardial perfusion. Taking the image processing time into consideration, good SPECT images were obtained by OS-EM with 10 iterations and 5 subsets. This study will be helpful for the selection of parameters such as the number of iterations and subsets when using the ML-EM or OS-EM algorithms. (author)

  17. The indexing ambiguity in serial femtosecond crystallography (SFX) resolved using an expectation maximization algorithm

    Science.gov (United States)

    Liu, Haiguang; Spence, John C.H.

    2014-01-01

    Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these ‘stills’. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated. PMID:25485120
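
    The iteration sketched in this abstract alternates between merging a reference intensity model from the currently favoured indexing choices and re-scoring each pattern against that model. The toy code below illustrates only that alternation for a two-fold ambiguity; the twin operation, partiality model and sigmoid weighting are simplifying assumptions and not the published algorithm.

```python
import numpy as np

def resolve_indexing(patterns, twin_op, n_iter=30, sharpness=8.0, seed=0):
    """Toy EM-style resolution of a two-fold indexing ambiguity.

    patterns : (n_patterns, n_refl) partial intensities, NaN where unobserved
    twin_op  : involutory index permutation giving the alternative indexing
    Returns p[i], the weight for pattern i being in the reference setting.
    """
    rng = np.random.default_rng(seed)
    alt = patterns[:, twin_op]                     # every pattern re-indexed the other way
    p = rng.uniform(0.3, 0.7, patterns.shape[0])   # random start breaks the symmetry
    for _ in range(n_iter):
        # "M-step": merged model as a probability-weighted average of both settings
        w_ref = np.where(np.isnan(patterns), 0.0, p[:, None])
        w_alt = np.where(np.isnan(alt), 0.0, (1.0 - p)[:, None])
        num = (w_ref * np.nan_to_num(patterns) + w_alt * np.nan_to_num(alt)).sum(0)
        model = num / (w_ref.sum(0) + w_alt.sum(0) + 1e-12)
        # "E-step": score each pattern against the model under both settings
        for i in range(patterns.shape[0]):
            obs_r = ~np.isnan(patterns[i])
            obs_a = ~np.isnan(alt[i])
            c_ref = np.corrcoef(patterns[i][obs_r], model[obs_r])[0, 1]
            c_alt = np.corrcoef(alt[i][obs_a], model[obs_a])[0, 1]
            p[i] = 1.0 / (1.0 + np.exp(-sharpness * (c_ref - c_alt)))
    return p

# Synthetic check: roughly half of the partial "stills" are recorded in the twinned setting.
rng = np.random.default_rng(2)
n_refl, n_pat = 200, 60
truth = rng.gamma(2.0, 1.0, n_refl)
twin_op = np.arange(n_refl)[::-1]                  # stand-in involution, not a real symmetry
flipped = rng.random(n_pat) < 0.5
base = np.where(flipped[:, None], truth[twin_op][None, :], truth[None, :])
mask = rng.random((n_pat, n_refl)) < 0.4           # each still sees ~40% of reflections
patterns = np.where(mask, base * (1 + 0.1 * rng.standard_normal(base.shape)), np.nan)
p = resolve_indexing(patterns, twin_op)
agree = np.mean((p > 0.5) == ~flipped)             # global sign of the solution is arbitrary
print("fraction resolved consistently:", max(agree, 1 - agree))
```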

  18. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    Science.gov (United States)

    Bhaduri, Kanishka; Srivastava, Ashok N.

    2009-01-01

    This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment such as clustering, anomaly detection, target tracking to name a few. This technology is crucial for many emerging peer-to-peer applications for bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably-correct i.e. it converges to the same result compared to a similar centralized algorithm and can automatically adapt to changes to the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.

  19. Classification of Ultrasonic NDE Signals Using the Expectation Maximization (EM) and Least Mean Square (LMS) Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dae Won [Dankook University, Cheonan (Korea, Republic of)

    2005-02-15

    Ultrasonic inspection methods are widely used for detecting flaws in materials. The signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular methods involves the extraction of an appropriate set of features followed by the use of a neural network for the classification of the signals in the feature space. This paper describes an alternative approach which uses the least mean square (LMS) method and the expectation maximization (EM) algorithm with model-based deconvolution for classifying nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different, yet they must be discriminated to prevent a disaster such as contamination of the water or an explosion. A model-based deconvolution has been described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximization (SAGE) algorithm in conjunction with the Newton-Raphson method, which uses the Hessian parameter for fast convergence, to estimate the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and show reasonable performance.

  20. Speckle reduction for medical ultrasound images with an expectation-maximization framework

    Institute of Scientific and Technical Information of China (English)

    HOU Tao; WANG Yuanyuan; GUO Yi

    2011-01-01

    In view of the inherent speckle noise in medical ultrasound images, a speckle reduction method was proposed based on an expectation-maximization (EM) framework. First, the real component of the in-phase/quadrature (I/Q) ultrasound image is extracted. Then, it is used to blindly estimate the point spread function (PSF) of the imaging system. Finally, based on the EM framework, an iterative algorithm alternating between the Wiener filter and anisotropic diffusion (AD) is used to produce despeckled images. Comparison experiments are carried out on both simulated and in vivo ultrasound images. It is shown that, with respect to the I/Q image, the proposed method improves the speckle signal-to-noise ratio (S-SNR) and the edge preservation index (β) of the images by average factors of 1.94 and 7.52, respectively, while reducing the normalized mean-squared error (NMSE) by an average factor of 3.95. The simulation and in vivo results indicate that the proposed method has better overall performance than existing ones.
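
    The core of the despeckling scheme described above is an iteration that alternates a Wiener filtering step with edge-preserving anisotropic diffusion. The sketch below shows only that alternation on a toy speckled image, assuming SciPy's wiener filter and a simple Perona-Malik diffusion step; the blind PSF estimation and I/Q preprocessing of the paper are omitted, and all parameter values are placeholders.

```python
import numpy as np
from scipy.signal import wiener

def anisotropic_diffusion(img, n_steps=10, kappa=0.5, gamma=0.2):
    """Simple Perona-Malik diffusion with exponential conductance."""
    out = img.astype(float).copy()
    for _ in range(n_steps):
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        out += gamma * (np.exp(-(dn / kappa) ** 2) * dn +
                        np.exp(-(ds / kappa) ** 2) * ds +
                        np.exp(-(de / kappa) ** 2) * de +
                        np.exp(-(dw / kappa) ** 2) * dw)
    return out

def despeckle(img, n_outer=5, wiener_size=5):
    """Alternate a Wiener filtering step and an anisotropic diffusion step."""
    est = img.astype(float).copy()
    for _ in range(n_outer):
        est = wiener(est, mysize=wiener_size)   # restoration-flavoured step
        est = anisotropic_diffusion(est)        # edge-preserving smoothing step
    return est

# Toy speckled image: piecewise-constant scene times multiplicative Rayleigh noise.
rng = np.random.default_rng(3)
scene = np.full((128, 128), 0.5)
scene[32:96, 32:96] = 1.5
speckled = scene * rng.rayleigh(scale=1.0, size=scene.shape)
clean = despeckle(speckled)
```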

  1. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    Science.gov (United States)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-05-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method.

  2. The indexing ambiguity in serial femtosecond crystallography (SFX) resolved using an expectation maximization algorithm

    Directory of Open Access Journals (Sweden)

    Haiguang Liu

    2014-11-01

    Full Text Available Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these ‘stills’. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.

  3. Numerical estimation of adsorption energy distributions from adsorption isotherm data with the expectation-maximization method

    Energy Technology Data Exchange (ETDEWEB)

    Stanley, B.J.; Guiochon, G. [Tennessee Univ., Knoxville, TN (United States). Dept. of Chemistry]|[Oak Ridge National Lab., TN (United States)

    1993-08-01

    The expectation-maximization (EM) method of parameter estimation is used to calculate adsorption energy distributions of molecular probes from their adsorption isotherms. EM does not require prior knowledge of the distribution function or the isotherm, requires no smoothing of the isotherm data, and converges with high stability towards the maximum-likelihood estimate. The method is therefore robust and accurate at high iteration numbers. The EM algorithm is tested with simulated energy distributions corresponding to unimodal Gaussian, bimodal Gaussian, Poisson distributions, and the distributions resulting from Misra isotherms. Theoretical isotherms are generated from these distributions using the Langmuir model, and then chromatographic band profiles are computed using the ideal model of chromatography. Noise is then introduced in the theoretical band profiles comparable to those observed experimentally. The isotherm is then calculated using the elution-by-characteristic points method. The energy distribution given by the EM method is compared to the original one. Results are contrasted to those obtained with the House and Jaycock algorithm HILDA, and shown to be superior in terms of robustness, accuracy, and information theory. The effect of undersampling of the high-pressure/low-energy region of the adsorption is reported and discussed for the EM algorithm, as well as the effect of signal-to-noise ratio on the degree of heterogeneity that may be estimated experimentally.
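
    The estimation problem above is a linear deconvolution q(p) = ∫ θ(p, E) f(E) dE with a Langmuir kernel, solvable with the same multiplicative EM update sketched earlier in this listing. A minimal illustration follows; the prefactor, temperature, grids and synthetic distribution are invented placeholders, not the paper's conditions.

```python
import numpy as np

# Discretized Langmuir kernel: q(p_i) = sum_j theta(p_i, E_j) * f(E_j) * dE
R, T = 8.314, 300.0                        # gas constant (J/mol/K), temperature (K)
p = np.logspace(-3, 1, 60)                 # pressures (arbitrary units)
E = np.linspace(5e3, 3e4, 80)              # adsorption energies (J/mol)
dE = E[1] - E[0]
K = 1e-4 * np.exp(E / (R * T))             # Langmuir constants; prefactor is hypothetical
A = (K[None, :] * p[:, None]) / (1.0 + K[None, :] * p[:, None]) * dE

# Synthetic bimodal energy distribution and a noisy isotherm derived from it.
rng = np.random.default_rng(4)
f_true = np.exp(-((E - 1.2e4) / 2e3) ** 2) + 0.6 * np.exp(-((E - 2.2e4) / 1.5e3) ** 2)
q = A @ f_true * (1.0 + 0.01 * rng.standard_normal(p.size))

# Multiplicative EM update: no smoothing of the isotherm, positivity preserved.
f = np.ones_like(E)
for _ in range(2000):
    f *= A.T @ (q / (A @ f)) / A.sum(axis=0)
```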

  4. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher-computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible for linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically

  5. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
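
    Among the algorithms listed, the MAP-EM OSL update differs from plain EM only by a prior-gradient term added to the sensitivity in the denominator (Green's one-step-late scheme). A 1D sketch with a membrane (quadratic smoothness) prior is given below; the system matrix, phantom and penalty weight are arbitrary stand-ins, not a fan-beam SPECT projector.

```python
import numpy as np

def map_em_osl(A, y, beta=0.05, n_iter=100):
    """MAP-EM with Green's one-step-late update and a 1D membrane prior."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                                   # A^T 1
    for _ in range(n_iter):
        grad_U = 2.0 * x - np.roll(x, 1) - np.roll(x, -1)  # membrane-prior gradient at old x
        forward = np.clip(A @ x, 1e-12, None)
        x = x * (A.T @ (y / forward)) / np.clip(sens + beta * grad_U, 1e-12, None)
    return x

# Toy usage on a synthetic system standing in for a SPECT projector.
rng = np.random.default_rng(9)
A = rng.random((120, 64)); A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(64); x_true[20:30] = 5.0; x_true[45:50] = 2.0
y = rng.poisson(A @ x_true * 50.0) / 50.0
x_map = map_em_osl(A, y)
```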

  6. Association studies with imputed variants using expectation-maximization likelihood-ratio tests.

    Directory of Open Access Journals (Sweden)

    Kuan-Chieh Huang

    Full Text Available Genotype imputation has become standard practice in modern genetic studies. As sequencing-based reference panels continue to grow, increasingly more markers are being well or better imputed but at the same time, even more markers with relatively low minor allele frequency are being imputed with low imputation quality. Here, we propose new methods that incorporate imputation uncertainty for downstream association analysis, with improved power and/or computational efficiency. We consider two scenarios: I when posterior probabilities of all potential genotypes are estimated; and II when only the one-dimensional summary statistic, imputed dosage, is available. For scenario I, we have developed an expectation-maximization likelihood-ratio test for association based on posterior probabilities. When only imputed dosages are available (scenario II, we first sample the genotype probabilities from its posterior distribution given the dosages, and then apply the EM-LRT on the sampled probabilities. Our simulations show that type I error of the proposed EM-LRT methods under both scenarios are protected. Compared with existing methods, EM-LRT-Prob (for scenario I offers optimal statistical power across a wide spectrum of MAF and imputation quality. EM-LRT-Dose (for scenario II achieves a similar level of statistical power as EM-LRT-Prob and, outperforms the standard Dosage method, especially for markers with relatively low MAF or imputation quality. Applications to two real data sets, the Cebu Longitudinal Health and Nutrition Survey study and the Women's Health Initiative Study, provide further support to the validity and efficiency of our proposed methods.

  7. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of); Lee, Soo Jin [Paichai University, Daejeon (Korea, Republic of); Kim, Kyeong Min [Korea Institute of Radiology and Medical Sciences, Seoul (Korea, Republic of)

    2005-10-15

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions.

  8. Constrained expectation-maximization algorithm for stochastic inertial error modeling: study of feasibility

    International Nuclear Information System (INIS)

    Stochastic modeling is a challenging task for low-cost sensors whose errors can have complex spectral structures. This makes the tuning process of the INS/GNSS Kalman filter often sensitive and difficult. For example, first-order Gauss–Markov processes are very often used in inertial sensor models. But the estimation of their parameters is a non-trivial task if the error structure is mixed with other types of noises. Such an estimation is often attempted by computing and analyzing Allan variance plots. This contribution demonstrates solving situations when the estimation of error parameters by graphical interpretation is rather difficult. The novel strategy performs direct estimation of these parameters by means of the expectation-maximization (EM) algorithm. The algorithm results are first analyzed with a critical and practical point of view using simulations with typically encountered error signals. These simulations show that the EM algorithm seems to perform better than the Allan variance and offers a procedure to estimate first-order Gauss–Markov processes mixed with other types of noises. At the same time, the conducted tests revealed limits of this approach that are related to the convergence and stability issues. Suggestions are given to circumvent or mitigate these problems when complexity of error structure is 'reasonable'. This work also highlights the fact that the suggested approach via EM algorithm and the Allan variance may not be able to estimate the parameters of complex error models reasonably well and shows the need for new estimation procedures to be developed in this context. Finally, an empirical scenario is presented to support the former findings. There, the positive effect of using the more sophisticated EM-based error modeling on a filtered trajectory is highlighted

  9. Subset construction strategy for ordered-subset expectation-maximization reconstruction in compton camera imaging system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Soo Mee [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of); Lee, Jae Sung; Lee, Soo Jin [Paichai University, Daejeon (Korea, Republic of)

    2007-07-01

    In this study, the expectation maximization (EM) and ordered subset EM (OSEM) reconstruction algorithms have been applied to Compton projection data. For OSEM, we propose several methods for constructing subsets and compare the impact of each method on the reconstructed images in order to choose the proper subset construction method. The Compton camera consisted of three pairs of scatterer and absorber detectors which were parallel to each other. The detector pairs were positioned along the x-, y-, and z-axes at a radial offset of 10 cm. The 3-directional projection data of a 5-cylinder software phantom (64x64x64 array, 1.56 mm) were obtained from a Compton projector. In this study, we used iterative reconstruction algorithms such as EM and OSEM. For application of the OSEM algorithm to the Compton camera, we proposed three strategies for constructing the exclusive subsets: scattering angle-based subsets (OSEM-SA), detector-position-based subsets (OSEM-DP), and both scattering angle- and detector-position-based subsets (OSEM-AP). OSEM with 16, 64 and 128 subsets was performed through 16, 4, and 2 iterations, respectively. OSEM with 16 subsets and 4 iterations was equivalent to EM with 64 iterations, but the computation time was reduced by approximately 14 times. All three schemes for choosing the subsets in the OSEM algorithm yielded similar results in computation time, but the percent error for OSEM-SA was slightly larger than for the others. No significant change of percent error was observed as the subset number increased up to 128. The simulation results showed that the EM reconstruction algorithm is applicable to Compton camera data with sufficient counting statistics. OSEM significantly improved the computational efficiency and maintained the image quality of the standard EM reconstruction. The OSEM algorithms that included subsets based on the detection positions (OSEM-DP and OSEM-AP) provided slightly better results than OSEM-SA.
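
    The OSEM variants above differ only in how the projection data are partitioned into subsets; each sub-iteration applies the EM update using one subset. A generic sketch of that loop is shown below, with interleaved row subsets standing in for the scattering-angle or detector-position groupings of the study; the system matrix and phantom are synthetic placeholders.

```python
import numpy as np

def osem(A, y, n_subsets=8, n_iter=4):
    """Ordered-subsets EM: one multiplicative update per subset of projections."""
    m = A.shape[0]
    x = np.ones(A.shape[1])
    # Interleaved projection rows stand in for the scattering-angle (OSEM-SA) or
    # detector-position (OSEM-DP) groupings described in the abstract.
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            forward = np.clip(As @ x, 1e-12, None)
            x *= (As.T @ (ys / forward)) / np.clip(As.sum(axis=0), 1e-12, None)
    return x

# Synthetic stand-in data; 16 subsets x 4 iterations is roughly comparable to
# 64 full EM iterations, as noted in the abstract.
rng = np.random.default_rng(10)
A = rng.random((960, 128)); A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(128); x_true[30:40] = 4.0; x_true[80:85] = 8.0
y = rng.poisson(A @ x_true * 30.0) / 30.0
x_osem = osem(A, y, n_subsets=16, n_iter=4)
```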

  10. Subset construction strategy for ordered-subset expectation-maximization reconstruction in compton camera imaging system

    International Nuclear Information System (INIS)

    In this study, the expectation maximization (EM) and ordered subset EM (OSEM) reconstruction algorithms have been applied to Compton projection data. For OSEM, we propose several methods for constructing subsets and compare the impact of each method on the reconstructed images in order to choose the proper subset construction method. The Compton camera consisted of three pairs of scatterer and absorber detectors which were parallel to each other. The detector pairs were positioned along the x-, y-, and z-axes at a radial offset of 10 cm. The 3-directional projection data of a 5-cylinder software phantom (64x64x64 array, 1.56 mm) were obtained from a Compton projector. In this study, we used iterative reconstruction algorithms such as EM and OSEM. For application of the OSEM algorithm to the Compton camera, we proposed three strategies for constructing the exclusive subsets: scattering angle-based subsets (OSEM-SA), detector-position-based subsets (OSEM-DP), and both scattering angle- and detector-position-based subsets (OSEM-AP). OSEM with 16, 64 and 128 subsets was performed through 16, 4, and 2 iterations, respectively. OSEM with 16 subsets and 4 iterations was equivalent to EM with 64 iterations, but the computation time was reduced by approximately 14 times. All three schemes for choosing the subsets in the OSEM algorithm yielded similar results in computation time, but the percent error for OSEM-SA was slightly larger than for the others. No significant change of percent error was observed as the subset number increased up to 128. The simulation results showed that the EM reconstruction algorithm is applicable to Compton camera data with sufficient counting statistics. OSEM significantly improved the computational efficiency and maintained the image quality of the standard EM reconstruction. The OSEM algorithms that included subsets based on the detection positions (OSEM-DP and OSEM-AP) provided slightly better results than OSEM-SA.

  11. Recursive expectation-maximization clustering: A method for identifying buffering mechanisms composed of phenomic modules

    Science.gov (United States)

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance

  12. A Bayesian Analysis of GPS Guidance System in Precision Agriculture: The Role of Expectations

    OpenAIRE

    Khanal, Aditya R; Mishra, Ashok K.; Lambert, Dayton M.; Paudel, Krishna P.

    2013-01-01

    Farmers’ post-adoption responses to a technology are important for its continuation and diffusion in precision agriculture. We studied farmers’ decisions about the frequency of application of a GPS guidance system after adoption. Using a Cotton growers’ precision farming survey in the U.S. and Bayesian approaches, our study suggests that ‘meeting expectations’ plays an important positive role. Farmers’ income level, farm size, and farming occupation are other important factors in modeling GPS...

  13. Trend analysis of the power law process using Expectation-Maximization algorithm for data censored by inspection intervals

    International Nuclear Information System (INIS)

    Trend analysis is a common statistical method used to investigate the operation and changes of a repairable system over time. This method takes historical failure data of a system or a group of similar systems and determines whether the recurrent failures exhibit an increasing or decreasing trend. Most trend analysis methods proposed in the literature assume that the failure times are known, so the failure data is statistically complete; however, in many situations, such as hidden failures, failure times are subject to censoring. In this paper we assume that the failure process of a group of similar independent repairable units follows a non-homogenous Poisson process with a power law intensity function. Moreover, the failure data are subject to left, interval and right censoring. The paper proposes using the likelihood ratio test to check for trends in the failure data. It uses the Expectation-Maximization (EM) algorithm to find the parameters, which maximize the data likelihood in the case of null and alternative hypotheses. A recursive procedure is used to solve the main technical problem of calculating the expected values in the Expectation step. The proposed method is applied to a hospital's maintenance data for trend analysis of the components of a general infusion pump.

  14. Trend analysis of the power law process using Expectation-Maximization algorithm for data censored by inspection intervals

    Energy Technology Data Exchange (ETDEWEB)

    Taghipour, Sharareh, E-mail: sharareh@mie.utoronto.ca [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Rd., Toronto, Ont., M5S 3G8 (Canada); Banjevic, Dragan [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Rd., Toronto, Ont., M5S 3G8 (Canada)

    2011-10-15

    Trend analysis is a common statistical method used to investigate the operation and changes of a repairable system over time. This method takes historical failure data of a system or a group of similar systems and determines whether the recurrent failures exhibit an increasing or decreasing trend. Most trend analysis methods proposed in the literature assume that the failure times are known, so the failure data is statistically complete; however, in many situations, such as hidden failures, failure times are subject to censoring. In this paper we assume that the failure process of a group of similar independent repairable units follows a non-homogenous Poisson process with a power law intensity function. Moreover, the failure data are subject to left, interval and right censoring. The paper proposes using the likelihood ratio test to check for trends in the failure data. It uses the Expectation-Maximization (EM) algorithm to find the parameters, which maximize the data likelihood in the case of null and alternative hypotheses. A recursive procedure is used to solve the main technical problem of calculating the expected values in the Expectation step. The proposed method is applied to a hospital's maintenance data for trend analysis of the components of a general infusion pump.

  15. A Study on Many-Objective Optimization Using the Kriging-Surrogate-Based Evolutionary Algorithm Maximizing Expected Hypervolume Improvement

    Directory of Open Access Journals (Sweden)

    Chang Luo

    2015-01-01

    Full Text Available The many-objective optimization performance of the Kriging-surrogate-based evolutionary algorithm (EA), which maximizes the expected hypervolume improvement (EHVI) for updating the Kriging model, is investigated and compared with those using the expected improvement (EI) and estimation (EST) updating criteria in this paper. Numerical experiments are conducted on 3- to 15-objective DTLZ1-7 problems. In the experiments, an exact hypervolume calculating algorithm is used for the problems with fewer than six objectives, whereas an approximate hypervolume calculating algorithm based on Monte Carlo sampling is adopted for the problems with more objectives. The results indicate that, in the nonconstrained case, EHVI is a highly competitive updating criterion for the Kriging model and EA-based many-objective optimization, especially when the test problem is complex and the number of objectives or design variables is large.

  16. Application of an expectation maximization method to the reconstruction of X-ray-tube spectra from transmission data

    Energy Technology Data Exchange (ETDEWEB)

    Endrizzi, M., E-mail: m.endrizzi@ucl.ac.uk [Dipartimento di Fisica, Università di Siena, Via Roma 56, 53100 Siena (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Delogu, P. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Dipartimento di Fisica “E. Fermi”, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Oliva, P. [Dipartimento di Chimica e Farmacia, Università di Sassari, via Vienna 2, 07100 Sassari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, s.p. per Monserrato-Sestu Km 0.700, 09042 Monserrato (Italy)

    2014-12-01

    An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7–40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself and the possibility to add this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated. - Highlights: • An expectation maximization method was used together with the discrepancy principle. • The discrepancy principle is a suitable criterion for stopping the iteration. • The method can be applied to a variety of detectors/experimental conditions. • The minimum information required is the amount of noise that affects the data. • Improved results are achieved by inserting more information when available.
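
    For this transmission problem the forward model is a matrix of exponential attenuation factors, and the highlighted stopping rule halts the EM iteration once the residual reaches the estimated noise level (the discrepancy principle). The sketch below uses an invented power-law attenuation curve, spectrum and noise level purely for illustration; in practice the noise would have to be estimated from the measurements themselves.

```python
import numpy as np

# Hypothetical attenuation curve mu(E) for the filter material and thickness series t.
E = np.linspace(7.0, 40.0, 100)             # keV
mu = 50.0 * (E / 10.0) ** -3                # toy power-law attenuation (cm^-1)
t = np.linspace(0.0, 0.5, 30)               # filter thicknesses (cm)
A = np.exp(-mu[None, :] * t[:, None])       # transmission kernel

rng = np.random.default_rng(8)
s_true = np.exp(-((E - 20.0) / 6.0) ** 2)   # toy tube spectrum
noise = 0.005
y = A @ s_true * (1.0 + noise * rng.standard_normal(t.size))

# EM iteration stopped by the discrepancy principle; the noise norm is known here
# only because the data are simulated.
s = np.ones_like(E)
target = noise * np.linalg.norm(A @ s_true)
for k in range(5000):
    s *= A.T @ (y / np.clip(A @ s, 1e-12, None)) / A.sum(axis=0)
    if np.linalg.norm(A @ s - y) <= target:
        break
print(f"stopped after {k + 1} iterations")
```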

  17. Application of an expectation maximization method to the reconstruction of X-ray-tube spectra from transmission data

    International Nuclear Information System (INIS)

    An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7–40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself and the possibility to add this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated. - Highlights: • An expectation maximization method was used together with the discrepancy principle. • The discrepancy principle is a suitable criterion for stopping the iteration. • The method can be applied to a variety of detectors/experimental conditions. • The minimum information required is the amount of noise that affects the data. • Improved results are achieved by inserting more information when available

  18. Patch-based augmentation of Expectation-Maximization for brain MRI tissue segmentation at arbitrary age after premature birth.

    Science.gov (United States)

    Liu, Mengyuan; Kitsch, Averi; Miller, Steven; Chau, Vann; Poskitt, Kenneth; Rousseau, Francois; Shaw, Dennis; Studholme, Colin

    2016-02-15

    Accurate automated tissue segmentation of premature neonatal magnetic resonance images is a crucial task for quantification of brain injury and its impact on early postnatal growth and later cognitive development. In such studies it is common for scans to be acquired shortly after birth or later during the hospital stay and therefore occur at arbitrary gestational ages during a period of rapid developmental change. It is important to be able to segment any of these scans with comparable accuracy. Previous work on brain tissue segmentation in premature neonates has focused on segmentation at specific ages. Here we look at solving the more general problem using adaptations of age-specific atlas-based methods and evaluate this using a unique manually traced database of high resolution images spanning 20 gestational weeks of development. We examine the complementary strengths of age-specific atlas-based Expectation-Maximization approaches and patch-based methods for this problem and explore the development of two new hybrid techniques, patch-based augmentation of Expectation-Maximization with weighted fusion and a spatial variability constrained patch search. The former approach seeks to combine the advantages of both atlas- and patch-based methods by learning from the performance of the two techniques across the brain anatomy at different developmental ages, while the latter technique aims to use anatomical variability maps learnt from atlas training data to locally constrain the patch-based search range. The proposed approaches were evaluated using leave-one-out cross-validation. Compared with the conventional age-specific atlas-based segmentation and direct patch-based segmentation, both new approaches demonstrate improved accuracy in the automated labeling of cortical gray matter, white matter, ventricles and sulcal cortical-spinal fluid regions, while maintaining comparable results in deep gray matter. PMID:26702777

  19. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan

    2013-06-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration. Moreover, its numerical integration in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is therefore computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.

  20. Iterative three-dimensional expectation maximization restoration of single photon emission computed tomography images: Application in striatal imaging

    International Nuclear Information System (INIS)

    Single photon emission computed tomography imaging suffers from poor spatial resolution and high statistical noise. Consequently, the contrast of small structures is reduced, the visual detection of defects is limited and precise quantification is difficult. To improve the contrast, it is possible to include the spatially variant point spread function of the detection system into the iterative reconstruction algorithm. This kind of method is well known to be effective, but time consuming. We have developed a faster method to account for the spatial resolution loss in three dimensions, based on a postreconstruction restoration method. The method uses two steps. First, a noncorrected iterative ordered subsets expectation maximization (OSEM) reconstruction is performed and, in the second step, a three-dimensional (3D) iterative maximum likelihood expectation maximization (ML-EM) a posteriori spatial restoration of the reconstructed volume is done. In this paper, we compare the proposed method (OSEM-R) to the standard OSEM-3D method in three studies (two in simulation and one from experimental data). In the first two studies, contrast, noise, and visual detection of defects are studied. In the third study, a quantitative analysis is performed from data obtained with an anthropomorphic striatal phantom filled with 123-I. From the simulations, we demonstrate that contrast as a function of noise and lesion detectability are very similar for both the OSEM-3D and OSEM-R methods. In the experimental study, we obtained very similar values of activity-quantification ratios for different regions in the brain. The advantage of OSEM-R compared to OSEM-3D is a substantial gain of processing time. This gain depends on several factors. In a typical situation, for a 128x128 acquisition of 120 projections, OSEM-R is 13 or 25 times faster than OSEM-3D, depending on the calculation method used in the iterative restoration. In this paper, the OSEM-R method is tested with the approximation of depth independent
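
    The second step of the OSEM-R scheme, an a posteriori 3D ML-EM restoration of the reconstructed volume, is essentially Richardson-Lucy deconvolution with the system PSF. Below is a minimal sketch under a spatially invariant (depth-independent) Gaussian-PSF approximation; the PSF width, volume and iteration count are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rl_restore(volume, sigma, n_iter=20):
    """Richardson-Lucy / ML-EM restoration assuming a spatially invariant
    (depth-independent) Gaussian PSF of width sigma, in voxels."""
    data = np.clip(volume, 0.0, None).astype(float)
    est = np.clip(volume, 1e-6, None).astype(float)
    for _ in range(n_iter):
        blurred = np.clip(gaussian_filter(est, sigma), 1e-6, None)
        est *= gaussian_filter(data / blurred, sigma)   # Gaussian PSF is symmetric: H^T == H
    return est

# Toy "reconstructed" volume: a blurred, noisy sphere standing in for an OSEM result.
rng = np.random.default_rng(11)
z, y, x = np.mgrid[:48, :48, :48]
sphere = ((z - 24) ** 2 + (y - 24) ** 2 + (x - 24) ** 2 < 10 ** 2).astype(float)
recon = gaussian_filter(sphere, 2.0) + 0.02 * rng.standard_normal(sphere.shape)
restored = rl_restore(recon, sigma=2.0, n_iter=15)
```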

  1. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect. PMID:26758232

  2. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm

    Science.gov (United States)

    Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5× 0.5 cm2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.

  3. Hemodynamic segmentation of brain perfusion images with delay and dispersion effects using an expectation-maximization algorithm.

    Science.gov (United States)

    Lu, Chia-Feng; Guo, Wan-Yuo; Chang, Feng-Chi; Huang, Shang-Ran; Chou, Yen-Chun; Wu, Yu-Te

    2013-01-01

    Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods was based on the clustering of bolus transit-time profiles to discern areas of different tissues. However, the cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstances is critical to accurately evaluate the severity of the vascular disease. In this study, we improved the segmentation method of the expectation-maximization algorithm by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method under different levels of delay, dispersion, and noise of signal profiles in tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that the normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, the tissue at risk of infarct and the tissue with or without the complementary blood supply from the communicating arteries can be identified. PMID:23894386
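
    The methodological twist in this study is to initialize the EM fit of the multivariate Gaussian mixture with hierarchical clustering of whitened profiles. A minimal sketch of that initialization chain follows, using stand-in transit-time profiles; the data are synthetic and the cluster count of three is assumed for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.cluster.vq import whiten
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Stand-in bolus transit-time profiles: three tissue classes, 12 time points each.
profiles = np.vstack([rng.normal(m, 0.3, size=(150, 12)) for m in (0.0, 1.0, 2.5)])

Xw = whiten(profiles)                          # scale each time point to unit variance
Z = linkage(Xw, method="ward")                 # hierarchical clustering of whitened data
init_labels = fcluster(Z, t=3, criterion="maxclust")
means_init = np.vstack([Xw[init_labels == k].mean(axis=0) for k in (1, 2, 3)])

# EM fit of the multivariate Gaussian mixture, initialized from the hierarchy.
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      means_init=means_init, random_state=0).fit(Xw)
tissue_labels = gmm.predict(Xw)
```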

  4. Multi-Dimensional Features Reduction of Consistency Subset Evaluator on Unsupervised Expectation Maximization Classifier for Imaging Surveillance Application

    Directory of Open Access Journals (Sweden)

    Chue-Poh TAN

    2008-02-01

    Full Text Available This paper presents the application of multi-dimensional feature reduction with the Consistency Subset Evaluator (CSE) and Principal Component Analysis (PCA) and an Unsupervised Expectation Maximization (UEM) classifier for an imaging surveillance system. Recently, research in image processing has raised much interest in the security surveillance systems community. Weapon detection is one of the greatest challenges currently facing the community. To address this issue, the UEM classifier is applied to focus on the need for detecting dangerous weapons, while CSE and PCA are used to explore the usefulness of each feature and reduce the multi-dimensional features to simplified features with no underlying hidden structure. In this paper, we take advantage of the simplified features and the classifier to categorize image objects with the aim of detecting dangerous weapons effectively. To validate the effectiveness of the UEM classifier, several classifiers are used to compare the overall accuracy of the system, complemented by the feature reduction of CSE and PCA. These unsupervised classifiers include the Farthest First, Density-based Clustering and k-Means methods. The final outcome of this research clearly indicates that UEM is able to improve classification accuracy using the features extracted by the multi-dimensional feature reduction of CSE. It is also shown that PCA is able to speed up computation with the reduced dimensionality of the features, at the cost of a slight decrease in accuracy.
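
    Stripped of the application specifics, the pipeline described above is dimensionality reduction followed by an unsupervised EM (Gaussian mixture) clusterer. A minimal scikit-learn sketch with invented feature vectors is shown below; the CSE feature-selection step is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# Invented 64-dimensional feature vectors for two object classes.
X = np.vstack([rng.normal(0.0, 1.0, (200, 64)),
               rng.normal(0.8, 1.2, (120, 64))])

# Dimensionality reduction followed by an unsupervised EM (Gaussian mixture) clusterer.
model = make_pipeline(PCA(n_components=10),
                      GaussianMixture(n_components=2, random_state=0))
model.fit(X)
clusters = model.predict(X)
print("cluster sizes:", np.bincount(clusters))
```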

  5. Hemodynamic Segmentation of Brain Perfusion Images with Delay and Dispersion Effects Using an Expectation-Maximization Algorithm

    Science.gov (United States)

    Lu, Chia-Feng; Guo, Wan-Yuo; Chang, Feng-Chi; Huang, Shang-Ran; Chou, Yen-Chun; Wu, Yu-Te

    2013-01-01

    Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods was based on the clustering of bolus transit-time profiles to discern areas of different tissues. However, the cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstances is critical to accurately evaluate the severity of the vascular disease. In this study, we improved the segmentation method of the expectation-maximization algorithm by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method under different levels of delay, dispersion, and noise of signal profiles in tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that the normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, the tissue at risk of infarct and the tissue with or without the complementary blood supply from the communicating arteries can be identified. PMID:23894386

  6. Hemodynamic segmentation of brain perfusion images with delay and dispersion effects using an expectation-maximization algorithm.

    Directory of Open Access Journals (Sweden)

    Chia-Feng Lu

    Full Text Available Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods is based on clustering bolus transit-time profiles to discern areas of different tissues. However, cerebrovascular diseases may result in a delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstances is critical to accurately evaluate the severity of the vascular disease. In this study, we improved the segmentation method based on the expectation-maximization algorithm by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method under different levels of delay, dispersion, and noise in the signal profiles during tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that normal, delayed or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating the cerebral blood flow. Furthermore, the tissue at risk of infarct and the tissue with or without complementary blood supply from the communicating arteries can be identified.

  7. Evaluation of median filtering after reconstruction with maximum likelihood expectation maximization (ML-EM) by real space and frequency space

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, Keiichi; Fujita, Toru; Oogari, Koji [Kyoto Univ. (Japan). Hospital

    2002-05-01

    Maximum likelihood expectation maximization (ML-EM) image quality is sensitive to the number of iterations, because a large number of iterations leads to images with checkerboard noise. The use of median filtering in the reconstruction process allows both noise reduction and edge preservation. We examined the value of median filtering after reconstruction with ML-EM by comparing it with filtered back projection (FBP) with a ramp filter and with ML-EM without filtering. SPECT images were obtained with a dual-head gamma camera. The acquisition time was changed from 10 to 200 seconds per frame to examine the effect of the count statistics on the quality of the reconstructed images. First, images were reconstructed with ML-EM, changing the number of iterations from 1 to 150 in each study. Additionally, median filtering was applied following reconstruction with ML-EM. The quality of the reconstructed images was evaluated in terms of normalized mean square error (NMSE) values and two-dimensional power spectrum analysis. Median filtering after reconstruction by the ML-EM method provided stable NMSE values even when the number of iterations was increased. The signal component of the image was close to the reference image for any number of iterations. Median filtering after reconstruction with ML-EM was useful in reducing noise, with a resolution similar to that achieved by reconstruction with FBP and a ramp filter. Especially in images with poor count statistics, median filtering after reconstruction with ML-EM is effective as a simple, widely available method. (author)
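
    For orientation, the ML-EM update iterated in studies of this kind, followed by post-reconstruction median filtering, can be written in a few lines. This is a generic sketch (the system matrix A, sinogram y and image shape are hypothetical), not the authors' clinical implementation.

```python
# Minimal ML-EM sketch with post-reconstruction median filtering (illustrative).
# `A` is a hypothetical system matrix mapping image pixels to projection bins,
# `y` the measured sinogram counts.
import numpy as np
from scipy.ndimage import median_filter

def mlem(A, y, n_iter=50, shape=(128, 128)):
    x = np.ones(A.shape[1])                    # non-negative initial image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)    # measured / estimated projections
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x.reshape(shape)

# Median filtering after reconstruction suppresses checkerboard noise
# while largely preserving edges:
# img = median_filter(mlem(A, y, n_iter=100), size=3)
```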

  8. Increase of Tc-99m RBC SPECT sensitivity for small liver hemangioma using ordered subset expectation maximization technique

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Tae Joo; Bong, Jung Kyun; Kim, Hee Joung; Kim, Myung Jin; Lee, Jong Doo [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2002-12-01

    RBC blood pool SPECT has been used to diagnose focal liver lesions such as hemangioma owing to its high specificity. However, low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced to obtain tomographic images for clinical application. We compared this new modified iterative reconstruction method, OSEM, with conventional filtered back projection (FBP) in imaging of liver hemangioma. Sixty-four projection data sets were acquired using a dual-head gamma camera in 28 lesions of 24 patients with cavernous hemangioma of the liver, and these raw data were transferred to a LINUX-based personal computer. After replacement of the header file in Interfile format, OSEM was performed under various conditions of subsets (1, 2, 4, 8, 16, and 32) and iteration numbers (1, 2, 4, 8, and 16) to obtain the best setting for liver imaging. The best condition for imaging in our investigation was considered to be 4 iterations and 16 subsets. All the images were then processed by both FBP and OSEM. Three experts reviewed these images without any information. According to the blind review of 28 lesions, OSEM images revealed image quality at least equal to or better than that of FBP in nearly all cases. Although there was no significant difference in the detection of large lesions of more than 3 cm, 5 lesions of 1.5 to 3 cm in diameter were detected by OSEM only. However, both techniques failed to depict 4 small lesions of less than 1.5 cm. OSEM revealed better contrast and definition in the depiction of liver hemangioma as well as higher sensitivity in the detection of small lesions. Furthermore, this reconstruction method does not require a high-performance computer system or a long reconstruction time; therefore, OSEM appears to be a good method that can be applied to RBC blood pool SPECT for the diagnosis of liver hemangioma.

  9. Genetic interaction motif finding by expectation maximization – a novel statistical model for inferring gene modules from synthetic lethality

    Directory of Open Access Journals (Sweden)

    Ye Ping

    2005-12-01

    Full Text Available Abstract Background Synthetic lethality experiments identify pairs of genes with complementary function. More direct functional associations (for example, greater probability of membership in a single protein complex) may be inferred between genes that share synthetic lethal interaction partners than between genes that are directly synthetic lethal. Probabilistic algorithms that identify gene modules based on motif discovery are highly appropriate for the analysis of synthetic lethal genetic interaction data and have great potential in integrative analysis of heterogeneous datasets. Results We have developed Genetic Interaction Motif Finding (GIMF), an algorithm for unsupervised motif discovery from synthetic lethal interaction data. Interaction motifs are characterized by position weight matrices and optimized through expectation maximization. Given a seed gene, GIMF performs a nonlinear transform on the input genetic interaction data and automatically assigns genes to the motif or non-motif category. We demonstrate the capacity to extract known and novel pathways for Saccharomyces cerevisiae (budding yeast). Annotations suggested for several uncharacterized genes are supported by recent experimental evidence. GIMF is efficient in computation, requires no training and automatically down-weights promiscuous genes with high degrees. Conclusion GIMF effectively identifies pathways from synthetic lethality data with several unique features. It is most suitable for building gene modules around seed genes. Optimal choice of one single model parameter allows construction of gene networks with different levels of confidence, and the impact of hub genes is automatically down-weighted. The generic probabilistic framework of GIMF may be used to group other types of biological entities such as proteins based on stochastic motifs. Analysis of the strongest motifs discovered by the algorithm indicates that synthetic lethal interactions are depleted between genes within a motif, suggesting that synthetic

  10. Hybrid metaheuristic approaches to the expectation maximization for estimation of the hidden Markov model for signal modeling.

    Science.gov (United States)

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2014-10-01

    The expectation maximization (EM) is the standard training algorithm for the hidden Markov model (HMM). However, EM faces a local convergence problem in HMM estimation. This paper attempts to overcome this problem of EM and proposes hybrid metaheuristic approaches to EM for HMM. In our earlier research, a hybrid of a constraint-based evolutionary learning approach to EM (CEL-EM) improved HMM estimation. In this paper, we propose a hybrid simulated annealing stochastic version of EM (SASEM) that combines simulated annealing (SA) with EM. The novelty of our approach is that we develop a mathematical reformulation of HMM estimation by introducing a stochastic step between the EM steps and combine SA with EM to provide better control over the acceptance of stochastic and EM steps for better HMM estimation. We also extend our earlier work and propose a second hybrid, which is a combination of an EA and the proposed SASEM (EA-SASEM). The proposed EA-SASEM uses the best constraint-based EA strategies from CEL-EM and the stochastic reformulation of HMM. The complementary properties of EA and SA and the stochastic reformulation of HMM in SASEM provide EA-SASEM with sufficient potential to find better estimates for HMM. To the best of our knowledge, this type of hybridization and mathematical reformulation has not been explored in the context of EM and HMM training. The proposed approaches have been evaluated through comprehensive experiments to justify their effectiveness in signal modeling using the speech corpus TIMIT. Experimental results show that the proposed approaches obtain higher recognition accuracies than the EM algorithm and CEL-EM as well. PMID:24686310
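
    The control structure described above, an annealing rule deciding whether to accept a stochastic move taken between EM steps, can be sketched as follows. The three helper functions are trivial stand-ins (estimating a single Gaussian mean), not a real Baum-Welch update for an HMM, so this conveys only the SA-around-EM idea rather than the SASEM algorithm itself.

```python
# Conceptual sketch of an SA-wrapped EM loop: a stochastic move is taken
# after each EM step and accepted or rejected with a Boltzmann criterion.
import math
import random
import numpy as np

data = np.random.normal(loc=3.0, scale=1.0, size=500)

def em_step(theta, x):           # stand-in for one EM (Baum-Welch) update
    return x.mean()

def perturb(theta, scale=0.5):   # stochastic step taken between EM steps
    return theta + random.gauss(0.0, scale)

def loglik(theta, x):            # stand-in for the model log-likelihood
    return -np.sum((x - theta) ** 2)

def sa_em(x, theta0=0.0, T0=1.0, cooling=0.95, n_outer=100):
    theta, T = theta0, T0
    for _ in range(n_outer):
        candidate = perturb(em_step(theta, x))
        delta = loglik(candidate, x) - loglik(theta, x)
        # Accept improvements always; accept worse moves with probability exp(delta/T).
        if delta > 0 or random.random() < math.exp(delta / T):
            theta = candidate
        T *= cooling             # annealing schedule
    return theta
```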

  11. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youngrok [Iowa State Univ., Ames, IA (United States)

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well to select the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
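
    A toy version of the underlying idea, EM for a survival-time mixture in which some samples carry known class labels and the rest are soft-assigned, is sketched below for a two-component exponential mixture. The variable names and the simple hard-label treatment are assumptions; the EM-OCML/EM-PCML/EM-HCML/EM-CPCML variants themselves are not reproduced here.

```python
# Toy EM for a two-component exponential survival mixture with optional
# partial labels.  `t` holds survival times; `label[i]` is 0 or 1 when the
# class is known and -1 when it is unknown.
import numpy as np

def em_exp_mixture(t, label, n_iter=200):
    pi = np.array([0.5, 0.5])
    lam = np.array([1.0 / t.mean(), 2.0 / t.mean()])      # crude initial rates
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        dens = pi * lam * np.exp(-np.outer(t, lam))        # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # Partially labeled samples get hard (known) responsibilities.
        known = label >= 0
        resp[known] = np.eye(2)[label[known]]
        # M-step: update mixing proportions and exponential rates.
        pi = resp.mean(axis=0)
        lam = resp.sum(axis=0) / (resp * t[:, None]).sum(axis=0)
    return pi, lam
```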

  12. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory

  13. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction.

    Science.gov (United States)

    Karakatsanis, Nicolas A; Casey, Michael E; Lodge, Martin A; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate K i as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting K i images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit K i bias of sPatlak analysis at regions with non-negligible (18)F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published (18)F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced K i target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in K i % bias and improved TBR were
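
    For readers unfamiliar with Patlak analysis, the indirect (post-reconstruction) sPatlak estimate that the 4D method improves upon is an ordinary linear fit of normalized tissue activity against "stretched time". The sketch below assumes hypothetical arrays ct (tissue time-activity curve), cp (plasma input function) and tm (frame mid-times) restricted to the late, linear frames.

```python
# Standard (indirect) Patlak estimate of the influx rate Ki from already
# reconstructed time frames -- the baseline the 4D method above accelerates.
import numpy as np

def patlak_ki(ct, cp, tm):
    # Trapezoidal running integral of the plasma input function.
    integral = np.concatenate(([0.0],
                               np.cumsum(np.diff(tm) * 0.5 * (cp[1:] + cp[:-1]))))
    x = integral / cp            # "stretched time"
    y = ct / cp                  # normalized tissue activity
    ki, intercept = np.polyfit(x, y, 1)    # slope = Ki, intercept = V0
    return ki, intercept
```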

  14. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Heike [Institut National de Recherche en Informatique et en Automatique (INRIA), Asclepios Project, Sophia Antipolis (France); University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Hamburg (Germany); Pennec, Xavier; Ayache, Nicholas [Institut National de Recherche en Informatique et en Automatique (INRIA), Asclepios Project, Sophia Antipolis (France); Ehrhardt, Jan; Handels, Heinz [University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Hamburg (Germany)

    2008-03-15

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest points (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory

  15. MaxBin: an automated binning method to recover individual genomes from metagenomes using an expectation-maximization algorithm

    Science.gov (United States)

    2014-01-01

    Background Recovering individual genomes from metagenomic datasets allows access to uncultivated microbial populations that may have important roles in natural and engineered ecosystems. Understanding the roles of these uncultivated populations has broad application in ecology, evolution, biotechnology and medicine. Accurate binning of assembled metagenomic sequences is an essential step in recovering the genomes and understanding microbial functions. Results We have developed a binning algorithm, MaxBin, which automates the binning of assembled metagenomic scaffolds using an expectation-maximization algorithm after the assembly of metagenomic sequencing reads. Binning of simulated metagenomic datasets demonstrated that MaxBin had high levels of accuracy in binning microbial genomes. MaxBin was used to recover genomes from metagenomic data obtained through the Human Microbiome Project, which demonstrated its ability to recover genomes from real metagenomic datasets with variable sequencing coverages. Application of MaxBin to metagenomes obtained from microbial consortia adapted to grow on cellulose allowed genomic analysis of new, uncultivated, cellulolytic bacterial populations, including an abundant myxobacterial population distantly related to Sorangium cellulosum that possessed a much smaller genome (5 MB versus 13 to 14 MB) but has a more extensive set of genes for biomass deconstruction. For the cellulolytic consortia, the MaxBin results were compared to binning using emergent self-organizing maps (ESOMs) and differential coverage binning, demonstrating that it performed comparably to these methods but had distinct advantages in automation, resolution of related genomes and sensitivity. Conclusions The automatic binning software that we developed successfully classifies assembled sequences in metagenomic datasets into recovered individual genomes. The isolation of dozens of species in cellulolytic microbial consortia, including a novel species of

  16. Clinical evaluation of iterative reconstruction (ordered-subset expectation maximization) in dynamic positron emission tomography: quantitative effects on kinetic modeling with N-13 ammonia in healthy subjects

    DEFF Research Database (Denmark)

    Hove, Jens D; Rasmussen, Rune; Freiberg, Jacob; Holm, Søren; Kelbaek, Henning; Kofoed, Klaus

    2008-01-01

    BACKGROUND: The purpose of this study was to investigate the quantitative properties of ordered-subset expectation maximization (OSEM) on kinetic modeling with nitrogen 13 ammonia compared with filtered backprojection (FBP) in healthy subjects. METHODS AND RESULTS: Cardiac N-13 ammonia positron e...

  17. Dividing organelle tracks into Brownian and motor-driven intervals by variational maximization of the Bayesian evidence.

    Science.gov (United States)

    Martin, Matthew J; Smelser, Amanda M; Holzwarth, George

    2016-04-01

    Many organelles and vesicles in live cells move in a start-stop manner when observed for ~10 s by optical microscopy. Changes in velocity and directional persistence of such particles are a potentially rich source of insight into the mechanisms leading to the start and stop states. Unbiased assessment of the most probable number of states, the properties of each state, and the most probable state for the particle at each moment can be accomplished by variational Bayesian methods combined with a hidden Markov model and a Gaussian mixture model. Our track analysis method, "vbTRACK", applied this combination of methods to particle velocity v or changes in the direction of travel evaluated from simulated tracks and from tracks of peroxisomes in live cells. When tested with numerical data, vbTRACK reliably determined the number of states, the mean and variance of the velocity or the direction of travel for each state, and the most probable state during each frame. When applied to the tracks of peroxisomes in live cells, some tracks separated into two states, one with high velocity and directionality, the other approximately Brownian. Other tracks of particles in live cells separated into several diffusive states with distinct diffusion constants. PMID:26538332
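
    vbTRACK combines a hidden Markov model with a variationally fitted Gaussian mixture; the sketch below illustrates only the variational-mixture half on frame-to-frame speeds, using scikit-learn's BayesianGaussianMixture as a stand-in. The track array, frame interval dt and state cap are assumptions, not the authors' code.

```python
# Sketch of the variational Gaussian-mixture part only (the hidden Markov
# model of vbTRACK is omitted).  `track` is a hypothetical (n_frames, 2)
# array of particle positions acquired at a fixed frame interval `dt`.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def classify_speeds(track, dt, max_states=5):
    speeds = np.linalg.norm(np.diff(track, axis=0), axis=1) / dt
    # The Dirichlet-process prior prunes unneeded components, so the number
    # of motion states does not have to be fixed in advance.
    bgm = BayesianGaussianMixture(n_components=max_states,
                                  weight_concentration_prior_type="dirichlet_process")
    states = bgm.fit_predict(speeds.reshape(-1, 1))
    return states, bgm.means_.ravel()
```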

  18. A Bayesian Approach to Interactive Retrieval

    Science.gov (United States)

    Tague, Jean M.

    1973-01-01

    A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…

  19. Deriving the largest expected number of elementary particles in the standard model from the maximal compact subgroup H of the exceptional Lie group E7(-5)

    International Nuclear Information System (INIS)

    The maximal number of elementary particles which could be expected to be found within a modestly extended energy scale of the standard model was found using various methods to be N = 69. In particular, using E-infinity theory the present Author found the exact transfinite expectation value to be ⟨N⟩ = ᾱ0/2 ≅ 69, where ᾱ0 = 137.082039325 is the exact inverse fine structure constant. In the present work we show among other things how to derive the exact integer value 69 from the exceptional Lie symmetry groups hierarchy. It is found that the relevant number is given by dim H = 69, where H is the maximal compact subspace of E7(-5), so that N = dim H = 69 while dim E7 = 133

  20. Non-linear spatio-temporal filtering of dynamic PET data using a 4-dimensional Gaussian filter and expectation-maximization deconvolution

    OpenAIRE

    Floberg, J M; Holden, J.E.

    2013-01-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines 4-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without intro...

  1. Maximizing Expected Base Pair Accuracy in RNA Secondary Structure Prediction by Joining Stochastic Context-Free Grammars Method

    Directory of Open Access Journals (Sweden)

    Shahira M. Habashy

    2012-03-01

    Full Text Available The identification of RNA secondary structures has been among the most exciting recent developments in biology and medical science. Prediction of RNA secondary structure is a fundamental problem in computational structural biology. For several decades, free energy minimization has been the most popular method for prediction from a single sequence. It is based on a set of empirical free energy change parameters derived from experiments using a nearest-neighbor model. Accurate prediction of RNA secondary structure from the base sequence is an unsolved computational challenge. The accuracy of predictions made by free energy minimization is limited by the quality of the energy parameters in the underlying free energy model. More recently, stochastic context-free grammars (SCFGs) have emerged as an alternative probabilistic methodology for modeling RNA structure. Unlike physics-based methods, which rely on thousands of experimentally measured thermodynamic parameters, SCFGs use fully automated statistical learning algorithms to derive model parameters. This paper proposes a new algorithm that computes the base pairing pattern for an RNA molecule. Complex internal structures in RNA are fully taken into account. It supports the calculation of stochastic context-free grammars (SCFGs) and equilibrium concentrations of duplex structures. This new algorithm is compared with the dynamic programming benchmark mfold and with the Tfold and MaxExpect algorithms. The results showed that the proposed algorithm achieved better performance with respect to sensitivity and positive predictive value.

  2. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    Science.gov (United States)

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed. PMID:25416456
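
    The Van der Pol oscillator used as the benchmark above is easy to simulate, which is the natural first step before experimenting with any parameter-recovery scheme. The snippet below (parameter values are arbitrary) only generates trajectories and does not implement the SAEM algorithm itself.

```python
# Simulate the Van der Pol oscillator, the benchmark dynamical system named
# above, with scipy; irregular sampling can be mimicked by thinning t_eval.
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y, mu=1.0):
    x, v = y
    return [v, mu * (1.0 - x**2) * v - x]

sol = solve_ivp(van_der_pol, t_span=(0.0, 20.0), y0=[2.0, 0.0],
                args=(1.5,), t_eval=np.linspace(0.0, 20.0, 400))
# sol.y[0] holds x(t); sol.y[1] holds the velocity component.
```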

  3. Results of clinical receiver operating characteristic study comparing ordered subset expectation maximization and filtered back projection images in FDG PET studies of hepatocellular carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Tae Joo; Lee, Jong Doo; Kim, Hee Joung; Kim, Myung Jin; Yoo, Hyung Sik [College of Medicine, Yonsei Univ., Seoul (Korea, Republic of)

    2000-07-01

    The aim of this study was to validate the usefulness of ordered subset expectation maximization (OSEM) compared with filtered back projection (FBP) in terms of diagnostic ability for hepatocellular carcinoma (HCC). The data of fifty-seven patients with HCC and 62 patients with normal liver were reconstructed using both OSEM and FBP. The mean age of the patient group was 54.4±1.5 years. All patients underwent whole body and liver scans after the injection of 10 mCi of (F-18)FDG using a dedicated whole body PET camera (GE, Advance). Interpretation of the PET images was performed by 3 observers with random exposure to normal and diseased cases. A receiver operating characteristic (ROC) study was used for validation of the results. The area under the ROC curve (Az) revealed statistically significant differences between the two methods (p<0.05). In PET studies of patients with HCC, OSEM showed better results than conventional FBP in terms of lesion detectability.

  4. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    International Nuclear Information System (INIS)

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. (paper)

  5. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    International Nuclear Information System (INIS)

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For a Compton camera, especially one with a large number of readout channels, image reconstruction presents a major challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment

  6. The comparison of ordered subset expectation maximization and filtered back projection technique for RBC blood pool SPECT in detection of liver hemangioma

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Tae Joo; Kim, Hee Joung; Bong, Jung Kyun; Lee, Jong Doo [College of Medicine, Yonsei Univ., Seoul (Korea, Republic of)

    2000-07-01

    Ordered subset expectation maximization (OSEM) is a new iterative reconstruction technique for tomographic images that can reduce the reconstruction time compared with the conventional iterative method. We applied this method to RBC blood pool SPECT and tried to validate the usefulness of OSEM in the detection of liver hemangioma in comparison with filtered back projection (FBP). A 64-projection SPECT study was acquired over 360° by dual-head cameras after the injection of 750 MBq of Tc-99m RBC. OSEM was performed with various conditions of subsets (1, 2, 4, 8, 16 and 32) and iteration numbers (1, 2, 4, 8 and 16) to obtain the best set for lesion detection. OSEM was performed in 17 lesions of 15 patients with liver hemangioma and compared with FBP images. Two nuclear medicine physicians reviewed these results independently. The best setting for the images was 4 iterations and 16 subsets. In general, OSEM revealed more homogeneous images than FBP. Eighty-eight percent (15/17) of OSEM images were superior or equal to FBP in anatomic resolution. According to the blind review of the images, 70.5% (12/17) of OSEM images were better in contrast (4/17), anatomic detail (4/17) or both (2/17). Two small lesions were detected by OSEM only, and another 2 small lesions were not depicted by either method. The remaining 3 lesions revealed no difference in image quality. OSEM can provide better image quality as well as better results in the detection of liver hemangioma than the conventional FBP technique.

  7. Population genetic analysis of bi-allelic structural variants from low-coverage sequence data with an expectation-maximization algorithm

    Science.gov (United States)

    2014-01-01

    Background Population genetics and association studies usually rely on a set of known variable sites that are then genotyped in subsequent samples, because it is easier to genotype than to discover the variation. This is also true for structural variation detected from sequence data. However, the genotypes at known variable sites can only be inferred with uncertainty from low coverage data. Thus, statistical approaches that infer genotype likelihoods, test hypotheses, and estimate population parameters without requiring accurate genotypes are becoming popular. Unfortunately, the current implementations of these methods are intended to analyse only single nucleotide and short indel variation, and they usually assume that the two alleles in a heterozygous individual are sampled with equal probability. This is generally false for structural variants detected with paired ends or split reads. Therefore, the population genetics of structural variants cannot be studied, unless a painstaking and potentially biased genotyping is performed first. Results We present svgem, an expectation-maximization implementation to estimate allele and genotype frequencies, calculate genotype posterior probabilities, and test for Hardy-Weinberg equilibrium and for population differences, from the numbers of times the alleles are observed in each individual. Although applicable to single nucleotide variation, it aims at bi-allelic structural variation of any type, observed by either split reads or paired ends, with arbitrarily high allele sampling bias. We test svgem with simulated and real data from the 1000 Genomes Project. Conclusions svgem makes it possible to use low-coverage sequencing data to study the population distribution of structural variants without having to know their genotypes. Furthermore, this advance allows the combined analysis of structural and nucleotide variation within the same genotype-free statistical framework, thus preventing biases introduced by genotype
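
    The flavor of such a genotype-free EM can be conveyed with a much-reduced example: estimating the alternate-allele frequency from per-individual alt/total observation counts under a Hardy-Weinberg prior, allowing a biased sampling rate for heterozygotes. The array names and the simple binomial observation model are assumptions for illustration; svgem itself covers more cases (split reads, paired ends, population tests).

```python
# Sketch of an EM estimator of the alternate-allele frequency from per-individual
# alt/total counts, with a possibly biased allele sampling rate in heterozygotes.
import numpy as np
from scipy.stats import binom

def em_allele_freq(alt, total, bias=0.5, err=0.01, n_iter=100):
    p = 0.5                                               # initial allele frequency
    probs = np.array([err, bias, 1.0 - err])              # P(alt obs | RR, RA, AA)
    for _ in range(n_iter):
        prior = np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])   # Hardy-Weinberg
        # E-step: posterior genotype probabilities per individual.
        lik = binom.pmf(alt[:, None], total[:, None], probs[None, :])
        post = prior * lik
        post /= post.sum(axis=1, keepdims=True)
        # M-step: expected alternate-allele count over 2N chromosomes.
        p = (post @ np.array([0.0, 1.0, 2.0])).sum() / (2 * len(alt))
    return p, post
```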

  8. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    Energy Technology Data Exchange (ETDEWEB)

    Barbee, David L; Holden, James E; Nickles, Robert J; Jeraj, Robert [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, 1111 Highland Ave, Madison, WI 53705 (United States); Flynn, Ryan T [Department of Radiation Oncology, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, Iowa City, IA 52245 (United States)], E-mail: barbee.david@gmail.com

    2010-01-07

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV
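
    The spatially invariant EM deconvolution that SV-PVC generalizes is the classical Richardson-Lucy iteration; a minimal sketch with a symmetric Gaussian PSF is given below for orientation. The PSF width, iteration count and stopping remark are illustrative, not the paper's spatially varying kernels or its automatic stopping rule.

```python
# Spatially *invariant* EM deconvolution (Richardson-Lucy) with a Gaussian PSF,
# shown only to illustrate the correction-ratio structure that SV-PVC extends.
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(image, sigma, n_iter=20):
    estimate = np.full(image.shape, float(image.mean()))
    for _ in range(n_iter):
        blurred = gaussian_filter(estimate, sigma)        # forward model H x
        ratio = image / np.maximum(blurred, 1e-12)
        # For a symmetric Gaussian PSF, H^T is again a Gaussian blur.
        estimate *= gaussian_filter(ratio, sigma)
        # A stopping rule could monitor the net change of the correction
        # factor gaussian_filter(ratio, sigma) between iterations.
    return estimate
```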

  9. Is there a role for expectation maximization imputation in addressing missing data in research using WOMAC questionnaire? Comparison to the standard mean approach and a tutorial

    Directory of Open Access Journals (Sweden)

    Rutledge John

    2011-05-01

    Full Text Available Abstract Background Standard mean imputation for missing values in the Western Ontario and McMaster (WOMAC) Osteoarthritis Index limits the use of collected data and may lead to bias. Probability model-based imputation methods overcome such limitations but were never before applied to the WOMAC. In this study, we compare imputation results for the Expectation Maximization method (EM) and the mean imputation method for the WOMAC in a cohort of total hip replacement patients. Methods WOMAC data on a consecutive cohort of 2062 patients scheduled for surgery were analyzed. Rates of missing values in each of the WOMAC items from this large cohort were used to create missing patterns in the subset of patients with complete data. EM and the WOMAC's method of imputation were then applied to fill the missing values. Summary score statistics for both methods are then described through box-plots and contrasted with the complete case (CC) analysis and the true score (TS). This process was repeated using a smaller sample of 200 randomly drawn patients with a higher missing rate (5 times the rates of missing values observed in the 2062 patients, capped at 45%). Results The rate of missing values per item ranged from 2.9% to 14.5%, and 1339 patients had complete data. The probability model-based EM imputed a score for all subjects while the WOMAC's imputation method did not. Mean subscale scores were very similar for both imputation methods and were similar to the true score; however, the EM method results were more consistent with the TS after simulation. This difference became more pronounced as the number of items in a subscale increased and the sample size decreased. Conclusions The EM method provides a better alternative to the WOMAC imputation method. The EM method is more accurate and imputes data to create a complete data set. These features are very valuable for patient-reported outcomes research in which resources are limited and the WOMAC score is used in a multivariate
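
    A bare-bones EM-style imputation under a multivariate normal model, of the kind contrasted with WOMAC mean imputation above, can be sketched as follows. The input layout is assumed, and the small conditional-covariance correction to the M-step is omitted for brevity, so this is illustrative rather than the study's analysis code.

```python
# Minimal EM-style imputation under a multivariate-normal model.
# `X` is an (n, p) float array of item scores with np.nan marking missing values.
import numpy as np

def em_impute(X, n_iter=50):
    X = X.copy()
    miss = np.isnan(X)
    # Start from column-mean imputation.
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        sigma = np.cov(X, rowvar=False, bias=True)
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # E-step: conditional mean of the missing entries given the observed ones.
            s_oo_inv = np.linalg.pinv(sigma[np.ix_(o, o)])
            X[i, m] = mu[m] + sigma[np.ix_(m, o)] @ s_oo_inv @ (X[i, o] - mu[o])
        # (The full EM M-step also adds the conditional covariance of the
        #  imputed entries to `sigma`; omitted here for brevity.)
    return X
```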

  10. Interest of the ordered subsets expectation maximization (OS-EM) algorithm in pinhole single-photon emission tomography reconstruction: a phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Vanhove, C.; Franken, P.R.; Everaert, H.; Bossuyt, A. [Div. of Nuclear Medicine, University Hospital, Free University of Brussels (AZ VUB), Brussels (Belgium); Defrise, M.; Deconinck, F. [Division of Experimental Medical Imaging, Free University of Brussels (VUB), Brussels (Belgium)

    2000-02-01

    Pinhole single-photon emission tomography (SPET) has been proposed to improve the trade-off between sensitivity and resolution for small organs located in close proximity to the pinhole aperture. This technique is hampered by artefacts in the non-central slices. These artefacts are caused by truncation and by the fact that the pinhole SPET data collected in a circular orbit do not contain sufficient information for exact reconstruction. The ordered subsets expectation maximization (OS-EM) algorithm is a potential solution to these problems. In this study a three-dimensional OS-EM algorithm was implemented for data acquired on a single-head gamma camera equipped with a pinhole collimator (PH OS-EM). The aim of this study was to compare the PH OS-EM algorithm with the filtered back-projection algorithm of Feldkamp, Davis and Kress (FDK) and with the conventional parallel-hole geometry as a whole, using a line source phantom, Picker's thyroid phantom and a phantom mimicking the human cervical column. Correction for the angular dependency of the sensitivity in the pinhole geometry was based on a uniform flood acquisition. The projection data were shifted according to the measured centre of rotation. No correction was made for attenuation, scatter or distance-dependent camera resolution. The resolution measured with the line source phantom showed a significant improvement with PH OS-EM as compared with FDK, especially in the axial direction. Using Picker's thyroid phantom, one iteration with eight subsets was sufficient to obtain images with similar noise levels in uniform regions of interest to those obtained with the FDK algorithm. With these parameters the reconstruction time was 2.5 times longer than for the FDK method. Furthermore, there was a reduction in the artefacts caused by the circular orbit SPET acquisition. The images obtained from the phantom mimicking the human cervical column indicated that the improvement in image quality with PH OS-EM is

  11. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    International Nuclear Information System (INIS)

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated

  12. From Wald to Savage: homo economicus becomes a Bayesian statistician

    OpenAIRE

    Giocoli, Nicola

    2011-01-01

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. A rational agent in an economic model is one who maximizes her subjective expected utility and consistently revises her beliefs according to Bayes’s rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is far from trivial and of great historiographic im...

  13. Optimal Expectation

    OpenAIRE

    Brunnermeier, Markus K.; Jonathan A. Parker

    2004-01-01

    This Paper introduces a tractable, structural model of subjective beliefs. Forward-looking agents care about expected future utility flows, and hence have higher current felicity if they believe that better outcomes are more likely. On the other hand, biased expectations lead to poorer decisions and worse realized outcomes on average. Optimal expectations balance these forces by maximizing average felicity. A small bias in beliefs typically leads to first-order gains due to increased anticipa...

  14. Bayesian state estimation using generalized coordinates

    Science.gov (United States)

    Balaji, Bhashyam; Friston, Karl

    2011-06-01

    This paper reviews a simple solution to the continuous-discrete Bayesian nonlinear state estimation problem that has been proposed recently. The key ideas are analytic noise processes, variational Bayes, and the formulation of the problem in terms of generalized coordinates of motion. Some of the algorithms, specifically dynamic expectation maximization and variational filtering, have been shown to outperform existing approaches like extended Kalman filtering and particle filtering. A pedagogical review of the theoretical formulation is presented, with an emphasis on concepts that are not as widely known in the filtering literature. We illustrate the application of these concepts using a numerical example.

  15. A Bayesian Probabilistic Framework for Rain Detection

    Directory of Open Access Journals (Sweden)

    Chen Yao

    2014-06-01

    Full Text Available Heavy rain deteriorates the video quality of outdoor imaging equipment. In order to improve video clearness, image-based and sensor-based methods are adopted for rain detection. In earlier literature, image-based detection methods fall into spatially based and temporally based categories. In this paper, we propose a new image-based method that exploits spatio-temporal united constraints in a Bayesian framework. In our framework, rain temporal motion is assumed to be Pathological Motion (PM), which is better suited to the time-varying character of rain streaks. Temporal displaced-frame discontinuity and a spatial Gaussian mixture model are utilized in the whole framework. An iterated expectation maximization method is used for estimating the Gaussian parameters. Pixel state estimation is performed by an iterated optimization method in a Bayesian probability formulation. The experimental results highlight the advantage of our method in rain detection.
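
    The spatial Gaussian mixture component of such a framework reduces, in its simplest form, to EM for a two-component mixture of pixel values (for example, rain-brightened versus background intensities). The hand-rolled sketch below shows the E- and M-steps explicitly; it is not the authors' full spatio-temporal model, and the input array is hypothetical.

```python
# Bare-bones EM for a two-component 1-D Gaussian mixture over pixel values `x`.
import numpy as np

def em_gmm_1d(x, n_iter=100):
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each pixel.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```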

  16. Bayesian Network Based Fault Prognosis via Bond Graph Modeling of High-Speed Railway Traction Device

    Directory of Open Access Journals (Sweden)

    Yunkai Wu

    2015-01-01

    In order to predict component-level faults accurately for a high-speed railway traction system, a fault prognosis approach via Bayesian network and bond graph modeling techniques is proposed. The inherent structure of a railway traction system is represented by a bond graph model, based on which a multilayer Bayesian network is developed for fault propagation analysis and fault prediction. For complete and incomplete data sets, two different parameter learning algorithms, Bayesian estimation and the expectation maximization (EM) algorithm, are adopted to determine the conditional probability tables of the Bayesian network. The proposed prognosis approach, using Pearl's polytree propagation algorithm for joint probability reasoning, can predict the failure probabilities of leaf nodes based on the current status of root nodes. Verification results in a high-speed railway traction simulation system demonstrate the effectiveness of the proposed approach.

  17. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Science.gov (United States)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  18. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Energy Technology Data Exchange (ETDEWEB)

    Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  19. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    International Nuclear Information System (INIS)

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm

  20. Sparse Bayesian learning in ISAR tomography imaging

    Institute of Scientific and Technical Information of China (English)

    SU Wu-ge; WANG Hong-qiang; DENG Bin; WANG Rui-jun; QIN Yu-liang

    2015-01-01

    Inverse synthetic aperture radar (ISAR) imaging can be regarded as a narrow-band version of computer aided tomography (CT). The traditional CT imaging algorithms for ISAR, including the polar format algorithm (PFA) and the convolution back projection algorithm (CBP), usually suffer from high sidelobes and low resolution. ISAR tomography image reconstruction within a sparse Bayesian framework is considered here. Firstly, the sparse ISAR tomography imaging model is established in light of CT imaging theory. Then, by using the compressed sensing (CS) principle, a high resolution ISAR image can be achieved with a limited number of pulses. Since the performance of existing CS-based ISAR imaging algorithms is sensitive to the user parameter, the existing algorithms are inconvenient to use in practice. It is well known that the Bayesian recovery formalism named sparse Bayesian learning (SBL) acts as an effective tool in regression and classification, which uses an efficient expectation maximization procedure to estimate the necessary parameters and retains the preferable property of the l0-norm diversity measure. Motivated by this, a fully automated ISAR tomography imaging algorithm based on SBL is proposed. Experimental results based on simulated and electromagnetic (EM) data illustrate the effectiveness and the superiority of the proposed algorithm over the existing algorithms.
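
    The EM procedure at the heart of SBL alternates a Gaussian posterior update for the sparse coefficients with a re-estimation of their per-coefficient variances. A compact sketch is given below, where the measurement matrix Phi, the data vector y and the fixed noise variance are assumptions; practical ISAR implementations typically also update the noise level and exploit structure for speed.

```python
# Compact sparse-Bayesian-learning (SBL) EM iteration: posterior update for the
# coefficients, then hyperparameter update gamma_i = |mu_i|^2 + Sigma_ii.
import numpy as np

def sbl(Phi, y, noise_var=1e-2, n_iter=100):
    n_atoms = Phi.shape[1]
    gamma = np.ones(n_atoms)
    for _ in range(n_iter):
        # Posterior covariance and mean of the coefficients given gamma.
        Sigma = np.linalg.inv(Phi.conj().T @ Phi / noise_var + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.conj().T @ y / noise_var
        # EM hyperparameter update (floored to keep 1/gamma finite).
        gamma = np.maximum(np.abs(mu) ** 2 + np.real(np.diag(Sigma)), 1e-12)
    return mu, gamma
```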

  1. Entropy Maximization

    Indian Academy of Sciences (India)

    K B Athreya

    2009-09-01

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf that satisfy $\\int f h_i \\, d\\mu = \\lambda_i$ for $i=1,2,\\ldots,k$ the maximizer of entropy is an $f_0$ that is proportional to $\\exp(\\sum c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
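
    As a concrete instance of result (ii) (a worked example of my own, not taken from the abstract), take a single constraint function h_1(x) = x on [0, ∞) with prescribed mean μ; the stated form exp(c_1 h_1) then forces the exponential density:

```latex
\[
\max_{f}\; -\int_0^\infty f(x)\log f(x)\,dx
\quad\text{subject to}\quad
\int_0^\infty f(x)\,dx = 1,\qquad
\int_0^\infty x\,f(x)\,dx = \mu
\]
\[
\Longrightarrow\quad
f_0(x)\;\propto\;\exp(c_1 x)
\quad\text{with } c_1 = -\tfrac{1}{\mu},
\qquad\text{i.e.}\quad
f_0(x)=\frac{1}{\mu}\,e^{-x/\mu},\quad x\ge 0.
\]
```

    In other words, the exponential density is the maximum-entropy density on [0, ∞) with a fixed mean.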

  2. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  3. Sparse Bayesian Learning for DOA Estimation with Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Jisheng Dai

    2015-10-01

    Full Text Available Sparse Bayesian learning (SBL has given renewed interest to the problem of direction-of-arrival (DOA estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs. Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.

  4. Bayesian Thought in Early Modern Detective Stories: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes

    OpenAIRE

    Kadane, Joseph B.

    2009-01-01

    This paper reviews the maxims used by three early modern fictional detectives: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes. It finds similarities between these maxims and Bayesian thought. Poe's Dupin uses ideas very similar to Bayesian game theory. Sherlock Holmes' statements also show thought patterns justifiable in Bayesian terms.

  5. Bayesian Thought in Early Modern Detective Stories: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes

    CERN Document Server

    Kadane, Joseph B

    2010-01-01

    This paper reviews the maxims used by three early modern fictional detectives: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes. It finds similarities between these maxims and Bayesian thought. Poe's Dupin uses ideas very similar to Bayesian game theory. Sherlock Holmes' statements also show thought patterns justifiable in Bayesian terms.

  6. Bayesian statistics

    OpenAIRE

    Draper, D.

    2001-01-01

    © 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary Definition of the Subject and Introduction The Bayesian Statistical Paradigm Three Examples Comparison with the Frequentist Statistical Paradigm Future Directions Bibliography

  7. Regularized variational Bayesian learning of echo state networks with delay&sum readout.

    Science.gov (United States)

    Shutin, Dmitriy; Zechner, Christoph; Kulkarni, Sanjeev R; Poor, H Vincent

    2012-04-01

    In this work, a variational Bayesian framework for efficient training of echo state networks (ESNs) with automatic regularization and delay&sum (D&S) readout adaptation is proposed. The algorithm uses classical batch learning of ESNs. By treating the network echo states as fixed basis functions parameterized with delay parameters, we propose a variational Bayesian ESN training scheme. The variational approach allows for a seamless combination of sparse Bayesian learning ideas and a variational Bayesian space-alternating generalized expectation-maximization (VB-SAGE) algorithm for estimating parameters of superimposed signals. While the former method realizes automatic regularization of ESNs, which also determines which echo states and input signals are relevant for "explaining" the desired signal, the latter method provides a basis for joint estimation of D&S readout parameters. The proposed training algorithm can naturally be extended to ESNs with fixed filter neurons. It also generalizes the recently proposed expectation-maximization-based D&S readout adaptation method. The proposed algorithm was tested on synthetic data prediction tasks as well as on dynamic handwritten character recognition. PMID:22168555

  8. Expectation-maximization (EM) Algorithm Based on IMM Filtering with Adaptive Noise Covariance

    Institute of Scientific and Technical Information of China (English)

    Lei Ming; Han Chongzhao

    2006-01-01

    A novel method under the interacting multiple model (IMM) filtering framework is presented in this paper, in which the expectation-maximization (EM) algorithm is used to identify the process noise covariance Q online. In existing IMM filtering theory, the matrix Q is determined from design experience, but Q actually changes with the state of the maneuvering target. It is also strongly influenced by the environment around the target, i.e., it varies with time. Therefore, an experience-based covariance Q cannot accurately represent the influence of state noise during maneuvers. Firstly, it is assumed that the evolved state and the initial conditions of the system can be modeled using Gaussian distributions, even though the dynamic system has a nonlinear measurement equation, and the EM algorithm based on IMM filtering with online identification of Q is proposed. Secondly, a truncated error analysis is performed. Finally, Monte Carlo simulation results show that the proposed algorithm outperforms existing algorithms and that the tracking precision for maneuvering targets is improved effectively.
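
    The abstract describes the idea of re-estimating Q by EM without giving the equations. The sketch below (Python, my own illustration for a plain linear-Gaussian model rather than the paper's IMM setting) shows the standard EM update of a process-noise covariance using a Kalman filter/RTS smoother E-step:

```python
import numpy as np

def em_update_Q(y, A, H, R, Q_init, x0, P0, n_iter=20):
    """EM re-estimation of the process-noise covariance Q in a linear-Gaussian
    state-space model (illustrative sketch of the general idea, not the paper's
    IMM-specific algorithm).

    Model: x_t = A x_{t-1} + w_t, w_t ~ N(0, Q);  y_t = H x_t + v_t, v_t ~ N(0, R).
    y is a (T, dim_y) array of measurements.
    """
    T = len(y)
    n = A.shape[0]
    Q = Q_init.copy()
    for _ in range(n_iter):
        # --- E-step: Kalman filter ---
        xf = np.zeros((T, n)); Pf = np.zeros((T, n, n))
        xp = np.zeros((T, n)); Pp = np.zeros((T, n, n))
        x, P = x0.copy(), P0.copy()
        for t in range(T):
            xp[t] = A @ x
            Pp[t] = A @ P @ A.T + Q
            S = H @ Pp[t] @ H.T + R
            K = Pp[t] @ H.T @ np.linalg.inv(S)
            x = xp[t] + K @ (y[t] - H @ xp[t])
            P = (np.eye(n) - K @ H) @ Pp[t]
            xf[t], Pf[t] = x, P
        # --- E-step: RTS smoother (with lag-one cross-covariances) ---
        xs = xf.copy(); Ps = Pf.copy()
        C = np.zeros((T, n, n))          # C[t] = Cov(x_t, x_{t-1} | all data)
        for t in range(T - 2, -1, -1):
            J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
            xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
            C[t + 1] = Ps[t + 1] @ J.T
        # --- M-step: Q maximizing the expected complete-data log-likelihood ---
        Q_new = np.zeros((n, n))
        for t in range(1, T):
            Ex_t   = Ps[t]     + np.outer(xs[t], xs[t])
            Ex_tm1 = Ps[t - 1] + np.outer(xs[t - 1], xs[t - 1])
            Ex_cr  = C[t]      + np.outer(xs[t], xs[t - 1])
            Q_new += Ex_t - Ex_cr @ A.T - A @ Ex_cr.T + A @ Ex_tm1 @ A.T
        Q = Q_new / (T - 1)
    return Q
```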

  9. Despeckling medical ultrasound images based on an expectation maximization framework

    Institute of Scientific and Technical Information of China (English)

    Hou Tao; Wang Yuanyuan; Guo Yi

    2011-01-01

    In view of the inherent speckle noise in medical ultrasound images, a despeckling method was proposed based on an expectation maximization (EM) framework. Firstly, the real part is extracted from the in-phase/quadrature (I/Q) ultrasound image. Then, the point spread function is blindly estimated from this real image. Lastly, based on the EM framework, an iterative algorithm alternating between Wiener filtering and anisotropic diffusion is used to produce despeckled images. Comparison experiments were carried out on both simulated and in vitro ultrasound images using the proposed method and existing ones. The proposed method improved the speckle signal-to-noise ratio (S-SNR) and the edge preservation index (β) of I/Q images by average factors of 1.94 and 7.52, respectively, and reduced the normalized mean-squared error (NMSE) by an average factor of 3.95. The simulation and in vitro results indicate that the proposed method has a better overall performance than existing ones.
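
    The abstract outlines an alternation between Wiener filtering and anisotropic diffusion inside an EM framework, but the exact E- and M-steps are not given. The following Python sketch only mirrors that structure (blind PSF estimation is omitted, and all parameter values are placeholders rather than the authors' settings):

```python
import numpy as np

def wiener_deconv(img, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution with a given PSF (illustrative)."""
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

def anisotropic_diffusion(img, n_iter=10, kappa=0.1, lam=0.2):
    """Perona-Malik diffusion: smooths homogeneous regions while preserving edges."""
    u = img.copy()
    for _ in range(n_iter):
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        c = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conduction
        u = u + lam * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return u

def despeckle(img, psf):
    """One restoration + regularization pass (a loose structural sketch of the
    alternation described in the abstract, not the authors' update equations)."""
    restored = wiener_deconv(img, psf)     # restoration w.r.t. the estimated PSF
    return anisotropic_diffusion(restored) # speckle smoothing that preserves edges
```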

  10. Bayesian Monitoring.

    OpenAIRE

    Kirstein, Roland

    2005-01-01

    This paper presents a modification of the inspection game: the "Bayesian Monitoring" model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...

  11. On maximal massive 3D supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Bergshoeff, Eric A; Rosseel, Jan [Centre for Theoretical Physics, University of Groningen, Nijenborgh 4, 9747 AG Groningen (Netherlands); Hohm, Olaf [Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Townsend, Paul K, E-mail: E.A.Bergshoeff@rug.n, E-mail: ohohm@mit.ed, E-mail: j.rosseel@rug.n, E-mail: P.K.Townsend@damtp.cam.ac.u [Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)

    2010-12-07

    We construct, at the linearized level, the three-dimensional (3D) N=4 supersymmetric 'general massive supergravity' and the maximally supersymmetric N=8 'new massive supergravity'. We also construct the maximally supersymmetric linearized N=7 topologically massive supergravity, although we expect N=6 to be maximal at the nonlinear level.

  12. On Maximal Massive 3D Supergravity

    CERN Document Server

    Bergshoeff, Eric A; Rosseel, Jan; Townsend, Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  13. On maximal massive 3D supergravity

    International Nuclear Information System (INIS)

    We construct, at the linearized level, the three-dimensional (3D) N=4 supersymmetric 'general massive supergravity' and the maximally supersymmetric N=8 'new massive supergravity'. We also construct the maximally supersymmetric linearized N=7 topologically massive supergravity, although we expect N=6 to be maximal at the nonlinear level.

  14. Profit Maximization over Social Networks

    OpenAIRE

    Lu, Wei; Lakshmanan, Laks V. S.

    2012-01-01

    Influence maximization is the problem of finding a set of influential users in a social network such that the expected spread of influence under a certain propagation model is maximized. Much of the previous work has neglected the important distinction between social influence and actual product adoption. However, as recognized in the management science literature, an individual who gets influenced by social acquaintances may not necessarily adopt a product (or technology), due, e.g., to mone...

  15. A Test of Profit Maximization

    OpenAIRE

    Asplund, Björn Marcus

    2007-01-01

    This paper aims at testing the maintained assumption that firms' objective is to maximize the expected net present value (ENPV) of profits. The idea is to examine pricing behaviour of a monopolist facing a dynamic demand where current sales influence future demand. Empirically, I estimate an Euler equation implied by maximization of ENPV of profits on data from the Swedish Tobacco Monopoly's sales of moist snuff (an addictive tobacco product) during the period 1917-1959. It is found that the ...

  16. Stochastic Expectation Propagation

    OpenAIRE

    Li, Yingzhen; Hernandez-Lobato, Jose Miguel; Turner, Richard E.

    2015-01-01

    Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it...

  17. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic: While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data: Emphasizing probability as an alternative to Boolean

  18. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  19. BayesLCA: An R Package for Bayesian Latent Class Analysis

    Directory of Open Access Journals (Sweden)

    Arthur White

    2014-11-01

    Full Text Available The BayesLCA package for R provides tools for performing latent class analysis within a Bayesian setting. Three methods for fitting the model are provided, incorporating an expectation-maximization algorithm, Gibbs sampling and a variational Bayes approximation. The article briefly outlines the methodology behind each of these techniques and discusses some of the technical difficulties associated with them. Methods to remedy these problems are also described. Visualization methods for each of these techniques are included, as well as criteria to aid model selection.
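
    For readers unfamiliar with the underlying model, a minimal EM fit of a binary-item latent class model looks roughly as follows (a generic Python sketch of an EM fit for this model class, not the BayesLCA R interface itself; variable names are mine):

```python
import numpy as np

def lca_em(X, G, n_iter=200, seed=0):
    """EM for a latent class model with binary items (illustrative sketch).

    X is an (N, M) 0/1 data matrix and G the number of latent classes.
    """
    rng = np.random.default_rng(seed)
    N, M = X.shape
    pi = np.full(G, 1.0 / G)                      # class weights
    theta = rng.uniform(0.25, 0.75, size=(G, M))  # item probabilities per class
    for _ in range(n_iter):
        # E-step: posterior class memberships (responsibilities)
        log_lik = (X @ np.log(theta).T) + ((1 - X) @ np.log(1 - theta).T)  # (N, G)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        z = np.exp(log_post)
        z /= z.sum(axis=1, keepdims=True)
        # M-step: update class weights and item probabilities
        Nk = z.sum(axis=0)
        pi = Nk / N
        theta = np.clip((z.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, z
```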

  20. Bayesian semi-blind component separation for foreground removal in interferometric 21-cm observations

    CERN Document Server

    Zhang, Le; Karakci, Ata; Korotkov, Andrei; Sutter, P M; Timbie, Peter T; Tucker, Gregory S; Wandelt, Benjamin D

    2016-01-01

    We present in this paper a new Bayesian semi-blind approach for foreground removal in observations of the 21-cm signal with interferometers. The technique, which we call HIEMICA (HI Expectation-Maximization Independent Component Analysis), is an extension of the Independent Component Analysis (ICA) technique developed for two-dimensional (2D) CMB maps to three-dimensional (3D) 21-cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21-cm signal as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21-cm intensity mapping observations. Based on ...

  1. Discriminative variable subsets in Bayesian classification with mixture models, with application in flow cytometry studies.

    Science.gov (United States)

    Lin, Lin; Chan, Cliburn; West, Mike

    2016-01-01

    We discuss the evaluation of subsets of variables for the discriminative evidence they provide in multivariate mixture modeling for classification. The novel development of Bayesian classification analysis presented is partly motivated by problems of design and selection of variables in biomolecular studies, particularly involving widely used assays of large-scale single-cell data generated using flow cytometry technology. For such studies and for mixture modeling generally, we define discriminative analysis that overlays fitted mixture models using a natural measure of concordance between mixture component densities, and define an effective and computationally feasible method for assessing and prioritizing subsets of variables according to their roles in discrimination of one or more mixture components. We relate the new discriminative information measures to Bayesian classification probabilities and error rates, and exemplify their use in Bayesian analysis of Dirichlet process mixture models fitted via Markov chain Monte Carlo methods as well as using a novel Bayesian expectation-maximization algorithm. We present a series of theoretical and simulated data examples to fix concepts and exhibit the utility of the approach, and compare with prior approaches. We demonstrate application in the context of automatic classification and discriminative variable selection in high-throughput systems biology using large flow cytometry datasets. PMID:26040910

  2. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    International Nuclear Information System (INIS)

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution (the normalized emission intensity of the spatial Poisson process) is considered as a spatial probability density, and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as the base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function, so there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals such as the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.

  3. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    Science.gov (United States)

    Barat, Éric; Dautremer, Thomas

    2007-11-01

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution (the normalized emission intensity of the spatial Poisson process) is considered as a spatial probability density, and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as the base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function, so there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals such as the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.

  4. Maximizing System Throughput by Cooperative Sensing in Cognitive Radio Networks

    CERN Document Server

    Li, Shuang; Ekici, Eylem; Shroff, Ness

    2011-01-01

    Cognitive radio networks (CRNs) allow unlicensed users to opportunistically access the licensed spectrum without causing disruptive interference to the primary users (PUs). One of the main challenges in CRNs is the ability to detect PU transmissions. Recent works have suggested the use of secondary user (SU) cooperation over individual sensing to improve sensing accuracy. In this paper, we consider a CRN consisting of a single PU and multiple SUs to study the problem of maximizing the total expected system throughput. We propose a Bayesian decision rule based algorithm to solve the problem optimally with a constant time complexity. To prioritize PU transmissions, we re-formulate the throughput maximization problem by adding a constraint on the PU throughput. The constrained optimization problem is shown to be NP-hard and solved via a greedy algorithm with pseudo-polynomial time complexity that achieves strictly greater than 1/2 of the optimal solution. We also investigate the case for which a constraint is put on th...
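
    The paper's optimal rule is not reproduced in the abstract; as a generic illustration of Bayesian fusion of cooperative sensing reports (function and parameter names are mine, not the authors'), the secondary users' binary reports can be combined into a posterior on primary-user activity and transmissions gated on expected gain versus collision cost:

```python
import numpy as np

def posterior_pu_active(reports, p_active, pd, pf):
    """Posterior probability that the primary user is transmitting, given binary
    sensing reports from the secondary users (generic Bayesian fusion, illustrative
    only). pd/pf are each SU's detection and false-alarm probabilities."""
    r = np.asarray(reports, dtype=float)
    log_on = np.log(p_active) + np.sum(r * np.log(pd) + (1 - r) * np.log(1 - pd))
    log_off = np.log(1 - p_active) + np.sum(r * np.log(pf) + (1 - r) * np.log(1 - pf))
    m = max(log_on, log_off)
    p_on, p_off = np.exp(log_on - m), np.exp(log_off - m)
    return p_on / (p_on + p_off)

def transmit_decision(reports, p_active, pd, pf, su_rate, collision_cost):
    """Allow the SU transmission only if the expected throughput gain outweighs
    the expected cost of colliding with the primary user."""
    p_on = posterior_pu_active(reports, p_active, pd, pf)
    return (1 - p_on) * su_rate > p_on * collision_cost
```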

  5. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: new chapter on Bayesian network classifiers; new section on object-oriente

  6. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes and...... largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  7. Bayesian Network--Response Regression

    OpenAIRE

    WANG, LU; Durante, Daniele; Dunson, David B.

    2016-01-01

    There is an increasing interest in learning how human brain networks vary with continuous traits (e.g., personality, cognitive abilities, neurological disorders), but flexible procedures to accomplish this goal are limited. We develop a Bayesian semiparametric model, which combines low-rank factorizations and Gaussian process priors to allow flexible shifts of the conditional expectation for a network-valued random variable across the feature space, while including subject-specific random eff...

  8. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well-known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms. PMID:27046838

  9. Optimal Bayesian Experimental Design for Combustion Kinetics

    KAUST Repository

    Huan, Xun

    2011-01-04

    Experimental diagnostics play an essential role in the development and refinement of chemical kinetic models, whether for the combustion of common complex hydrocarbons or of emerging alternative fuels. Questions of experimental design—e.g., which variables or species to interrogate, at what resolution and under what conditions—are extremely important in this context, particularly when experimental resources are limited. This paper attempts to answer such questions in a rigorous and systematic way. We propose a Bayesian framework for optimal experimental design with nonlinear simulation-based models. While the framework is broadly applicable, we use it to infer rate parameters in a combustion system with detailed kinetics. The framework introduces a utility function that reflects the expected information gain from a particular experiment. Straightforward evaluation (and maximization) of this utility function requires Monte Carlo sampling, which is infeasible with computationally intensive models. Instead, we construct a polynomial surrogate for the dependence of experimental observables on model parameters and design conditions, with the help of dimension-adaptive sparse quadrature. Results demonstrate the efficiency and accuracy of the surrogate, as well as the considerable effectiveness of the experimental design framework in choosing informative experimental conditions.
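
    The abstract's utility function is an expected information gain, which in practice is often estimated by nested Monte Carlo before any surrogate is introduced. A minimal sketch of that baseline estimator follows (Python; `sample_prior`, `simulate` and `log_likelihood` are placeholder callables, not the paper's code):

```python
import numpy as np

def expected_information_gain(design, sample_prior, simulate, log_likelihood,
                              n_outer=500, n_inner=500, seed=0):
    """Nested Monte Carlo estimator of the expected information gain of a design,
    EIG(d) = E_{theta, y | d}[ log p(y | theta, d) - log p(y | d) ].
    (A standard estimator for Bayesian optimal experimental design; illustrative only.)
    """
    rng = np.random.default_rng(seed)
    eig = 0.0
    for _ in range(n_outer):
        theta = sample_prior(rng)
        y = simulate(theta, design, rng)
        log_cond = log_likelihood(y, theta, design)
        # inner Monte Carlo estimate of the log evidence log p(y | d)
        inner = np.array([log_likelihood(y, sample_prior(rng), design)
                          for _ in range(n_inner)])
        log_evidence = np.logaddexp.reduce(inner) - np.log(n_inner)
        eig += log_cond - log_evidence
    return eig / n_outer
```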

  10. A new approach for Bayesian model averaging

    Institute of Scientific and Technical Information of China (English)

    TIAN XiangJun; XIE ZhengHui; WANG AiHui; YANG XiaoChun

    2012-01-01

    Bayesian model averaging (BMA) is a recently proposed statistical method for calibrating forecast ensembles from numerical weather models. However, successful implementation of BMA requires accurate estimates of the weights and variances of the individual competing models in the ensemble. Two methods, namely the Expectation-Maximization (EM) and the Markov Chain Monte Carlo (MCMC) algorithms, are widely used for BMA model training. Both methods have their own respective strengths and weaknesses. In this paper, we first modify the BMA log-likelihood function with the aim of removing the additional limitation that requires that the BMA weights add to one, and then use a limited memory quasi-Newtonian algorithm for solving the nonlinear optimization problem, thereby formulating a new approach for BMA (referred to as BMA-BFGS). Several groups of multi-model soil moisture simulation experiments from three land surface models show that the performance of BMA-BFGS is similar to the MCMC method in terms of simulation accuracy, and that both are superior to the EM algorithm. On the other hand, the computational cost of the BMA-BFGS algorithm is substantially less than for MCMC and is almost equivalent to that for EM.
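
    For orientation, the EM training of BMA weights and a common forecast variance can be sketched as below (Python; this is a textbook Gaussian-mixture formulation with member bias correction omitted, not the BMA-BFGS method proposed in the paper):

```python
import numpy as np
from scipy.stats import norm

def bma_em(forecasts, obs, n_iter=200):
    """EM estimation of BMA weights and a common forecast variance (illustrative).

    forecasts: (T, K) ensemble member forecasts; obs: (T,) verifying observations.
    Mixture model: p(y_t) = sum_k w_k N(y_t; f_{tk}, sigma2).
    """
    T, K = forecasts.shape
    w = np.full(K, 1.0 / K)
    sigma2 = np.var(obs - forecasts.mean(axis=1))
    for _ in range(n_iter):
        # E-step: responsibility of each member for each observation
        dens = norm.pdf(obs[:, None], loc=forecasts, scale=np.sqrt(sigma2))  # (T, K)
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: weights (sum to one by construction) and common variance
        w = z.mean(axis=0)
        sigma2 = np.sum(z * (obs[:, None] - forecasts) ** 2) / T
    return w, sigma2
```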

  11. Bayesian data analysis

    CERN Document Server

    Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B

    2013-01-01

    FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear

  12. Bayesian Mediation Analysis

    OpenAIRE

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...

  13. Review and expectation on development and use of Idesia polycarpa Maxim. var. vestita Diels in China

    Institute of Scientific and Technical Information of China (English)

    Wu Quanzhen

    2011-01-01

    Idesia polycarpa Maxim. var. vestita Diels is a wild deciduous tree that can serve as both an oil-producing and a lumber-supplying species. The oil from its fruit can be used as a raw material for the medicine, chemical, and plant-energy industries. The tree's fruit yield is large and its oil content is high. Its lumber is suitable for furniture manufacture and board houses, and the tree is also a favorable choice for environmental greening. This paper reviews the course of the development and use of Idesia polycarpa in China and looks ahead to its future. It recommends that, alongside large-scale planting, scientific research on its propagation, cultivation, picking, storage, oil extraction, oil refining, deep processing of oil products, and comprehensive utilization of by-products be strengthened. In particular, an innovative process that effectively removes the oil's bitterness and unpleasant smell is key to the success of its development and use.

  14. Bayesian Games with Intentions

    OpenAIRE

    Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael

    2016-01-01

    We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

  15. A new perspective to rational expectations: Maximin rational expectations equilibrium

    OpenAIRE

    Castro, Luciano I.; Pesce, Marialaura; Nicholas C. Yannelis

    2010-01-01

    We introduce a new notion of rational expectations equilibrium (REE) called maximin rational expectations equilibrium (MREE), which is based on the maximin expected utility (MEU) formulation. In particular, agents maximize maximin expected utility conditioned on their own private information and the information that the equilibrium prices generate. Maximin equilibrium allocations need not be measurable with respect to the private information of each individual and with respect to the infor...

  16. SAR imaging via iterative adaptive approach and sparse Bayesian learning

    Science.gov (United States)

    Xue, Ming; Santiago, Enrique; Sedehi, Matteo; Tan, Xing; Li, Jian

    2009-05-01

    We consider sidelobe reduction and resolution enhancement in synthetic aperture radar (SAR) imaging via an iterative adaptive approach (IAA) and a sparse Bayesian learning (SBL) method. The nonparametric weighted least squares based IAA algorithm is a robust and user parameter-free adaptive approach originally proposed for array processing. We show that it can be used to form enhanced SAR images as well. SBL has been used as a sparse signal recovery algorithm for compressed sensing. It has been shown in the literature that SBL is easy to use and can recover sparse signals more accurately than the l1-based optimization approaches, which require delicate choice of the user parameter. We consider using a modified expectation maximization (EM) based SBL algorithm, referred to as SBL-1, which is based on a three-stage hierarchical Bayesian model. SBL-1 is not only more accurate than benchmark SBL algorithms, but also converges faster. SBL-1 is used to further enhance the resolution of the SAR images formed by IAA. Both IAA and SBL-1 are shown to be effective, requiring only a limited number of iterations, and have no need for polar-to-Cartesian interpolation of the SAR collected data. This paper characterizes the achievable performance of these two approaches by processing the complex backscatter data from both a sparse case study and a backhoe vehicle in free space with different aperture sizes.

  17. Bayesian analysis for extreme climatic events: A review

    Science.gov (United States)

    Chu, Pao-Shin; Zhao, Xin

    2011-11-01

    This article reviews Bayesian analysis methods applied to extreme climatic data. We particularly focus on applications to three different problems related to extreme climatic events including detection of abrupt regime shifts, clustering tropical cyclone tracks, and statistical forecasting for seasonal tropical cyclone activity. For identifying potential change points in an extreme event count series, a hierarchical Bayesian framework involving three layers - data, parameter, and hypothesis - is formulated to demonstrate the posterior probability of the shifts throughout the time. For the data layer, a Poisson process with a gamma distributed rate is presumed. For the hypothesis layer, multiple candidate hypotheses with different change-points are considered. To calculate the posterior probability for each hypothesis and its associated parameters we developed an exact analytical formula, a Markov Chain Monte Carlo (MCMC) algorithm, and a more sophisticated reversible jump Markov Chain Monte Carlo (RJMCMC) algorithm. The algorithms are applied to several rare event series: the annual tropical cyclone or typhoon counts over the central, eastern, and western North Pacific; the annual extremely heavy rainfall event counts at Manoa, Hawaii; and the annual heat wave frequency in France. Using an Expectation-Maximization (EM) algorithm, a Bayesian clustering method built on a mixture Gaussian model is applied to objectively classify historical, spaghetti-like tropical cyclone tracks (1945-2007) over the western North Pacific and the South China Sea into eight distinct track types. A regression based approach to forecasting seasonal tropical cyclone frequency in a region is developed. Specifically, by adopting large-scale environmental conditions prior to the tropical cyclone season, a Poisson regression model is built for predicting seasonal tropical cyclone counts, and a probit regression model is alternatively developed toward a binary classification problem. With a non
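
    A stripped-down version of the data layer described here is a Poisson count series with a gamma-distributed rate, for which segment marginal likelihoods and a single change-point posterior are available in closed form. The following Python sketch illustrates that computation (my own simplified two-segment example, not the paper's multi-hypothesis or RJMCMC machinery):

```python
import numpy as np
from scipy.special import gammaln

def log_marglik_poisson_gamma(counts, a=1.0, b=1.0):
    """Log marginal likelihood of an i.i.d. Poisson count segment with a
    Gamma(a, b) prior (rate parameterization) on the event rate."""
    counts = np.asarray(counts)
    S, T = counts.sum(), len(counts)
    return (a * np.log(b) - gammaln(a)
            + gammaln(a + S) - (a + S) * np.log(b + T)
            - gammaln(counts + 1).sum())

def changepoint_posterior(counts, a=1.0, b=1.0):
    """Posterior over a single change-point location under a uniform prior
    (a two-segment toy version of the hierarchical change-point idea above)."""
    T = len(counts)
    log_post = np.array([
        log_marglik_poisson_gamma(counts[:tau], a, b)
        + log_marglik_poisson_gamma(counts[tau:], a, b)
        for tau in range(1, T)
    ])
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()
```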

  18. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price co...

  19. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  20. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  1. Attention in a bayesian framework

    DEFF Research Database (Denmark)

    Whiteley, Louise Emma; Sahani, Maneesh

    2012-01-01

    include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental...... settings, where cues shape expectations about a small number of upcoming stimuli and thus convey "prior" information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its......The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of...

  2. Tactile length contraction as Bayesian inference.

    Science.gov (United States)

    Tong, Jonathan; Ngo, Vy; Goldreich, Daniel

    2016-08-01

    To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process. PMID:27121574
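
    A minimal Gaussian version of such an observer makes the predicted effect explicit: with a zero-mean low-speed prior, the posterior length estimate is a shrunken copy of the measured length, and the shrinkage grows as the inter-tap interval shortens or localization noise grows. The sketch below is my own illustration (parameter names are not from the paper):

```python
import numpy as np

def perceived_length(measured_length, t, sigma_s, sigma_v):
    """Posterior mean length under a zero-mean low-speed prior (a minimal Gaussian
    sketch of the Bayesian observer idea in the abstract).

    Prior on true length, given elapsed time t:  l ~ N(0, (sigma_v * t)**2)
    Measurement: two taps localized with noise sigma_s each, so the measured
    length has variance 2 * sigma_s**2.
    """
    prior_var = (sigma_v * t) ** 2
    meas_var = 2.0 * sigma_s ** 2
    shrinkage = prior_var / (prior_var + meas_var)
    return shrinkage * measured_length

# Shorter inter-tap intervals (or noisier localization) give stronger contraction:
# perceived_length(10.0, t=1.0, sigma_s=1.0, sigma_v=10.0) ~= 9.8
# perceived_length(10.0, t=0.1, sigma_s=1.0, sigma_v=10.0) ~= 3.3
```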

  3. Phenomenology of Maximal and Near-Maximal Lepton Mixing

    CERN Document Server

    González-Garciá, M Concepción; Nir, Yosef; Smirnov, Yu A

    2001-01-01

    We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other ($x$=tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter $\\epsilon\\equiv1-2\\sin^2\\theta_{ex}$ and quantify the present experimental status for $|\\epsilon|<0.3$. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for $10^{-8}$ eV$^2\\lsim\\Delta m^2\\lsim2\\times10^{-7}$ eV$^2$. In the mass ranges $\\Delta m^2\\gsim 1.5\\times10^{-5}$ eV$^2$ and $4\\times10^{-10}$ eV$^2\\lsim\\Delta m^2\\lsim2\\times10^{-7}$ eV$^2$ the full interval $|\\epsilon|<0.3$ is allowed within 4$\\sigma$. We suggest ways to measure $\\epsilon$ in future experiments. The observable that is most sensitive to $\\epsilon$ is the rate [NC]/[CC] in combination with the Day-Night asymmetry in the SNO detector. With theoretical and statistical uncertainties, the expected accuracy after 5 years is $\\Delta \\epsilon\\sim 0.07$. We also...

  4. Expectation Propagation

    OpenAIRE

    Raymond, Jack; Manoel, Andre; Opper, Manfred

    2014-01-01

    Variational inference is a powerful concept that underlies many iterative approximation algorithms; expectation propagation, mean-field methods and belief propagation were all central themes at the school that can be perceived from this unifying framework. The lectures of Manfred Opper introduce the archetypal example of Expectation Propagation, before establishing the connection with the other approximation methods. Corrections by expansion about the expectation propagation are then explain...

  5. Evolutionary Expectations

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    The concept of evolutionary expectations descends from cue learning psychology, synthesizing ideas on rational expectations with ideas on bounded rationality, to provide support for these ideas simultaneously. Evolutionary expectations are rational, but within cognitive bounds. Moreover, they are...... cognitive bounds will perceive business opportunities identically. In addition, because cues provide information about latent causal structures of the environment, changes in causality must be accompanied by changes in cognitive representations if adaptation is to be maintained. The concept of evolutionary...

  6. On-line Bayesian System Identification

    OpenAIRE

    Romeres, Diego; Prando, Giulia; Pillonetto, Gianluigi; Chiuso, Alessandro

    2016-01-01

    We consider an on-line system identification setting, in which new data become available at given time steps. In order to meet real-time estimation requirements, we propose a tailored Bayesian system identification procedure, in which the hyper-parameters are still updated through Marginal Likelihood maximization, but after only one iteration of a suitable iterative optimization algorithm. Both gradient methods and the EM algorithm are considered for the Marginal Likelihood optimization. We c...

  7. Unequal Expectations

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    stratification, I argue that students facing significant educational transitions form their educational expectations by taking into account the foreseeable, yet inherently uncertain, consequences of potential educational pathways. This process of expectation formation, I posit, involves evaluations of the...... relation between the self and educational prospects; evaluations that are socially bounded in that students take their family's social position into consideration when forming their educational expectations. One important consequence of this learning process is that equally talented students tend to make...... different educational choices according to their family background. IEO thus appears to be mediated by the expectations students hold for their futures. Taken together, this research agenda argues that both researchers and policy-makers need to consider the expectation-based origin of educational...

  8. Maximization, learning, and economic behavior.

    Science.gov (United States)

    Erev, Ido; Roth, Alvin E

    2014-07-22

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182

  9. The Bayesian Bootstrap

    OpenAIRE

    Rubin, Donald B.

    1981-01-01

    The Bayesian bootstrap is the Bayesian analogue of the bootstrap. Instead of simulating the sampling distribution of a statistic estimating a parameter, the Bayesian bootstrap simulates the posterior distribution of the parameter; operationally and inferentially the methods are quite similar. Because both methods of drawing inferences are based on somewhat peculiar model assumptions and the resulting inferences are generally sensitive to these assumptions, neither method should be applied wit...

  10. Shattered expectations

    DEFF Research Database (Denmark)

    Hall, Elisabeth O C; Aagaard, Hanne; Larsen, Jette Schilling

    2008-01-01

    was conducted using Noblit and Hare’s methodological approach. Results: The metasynthesis shows that confidence in breastfeeding is shaped by shattered expectations and is affected on an immediate level by mothers’ expectations, the network and the breastfeeding experts and on a discourse level by the...... mothers who do not breastfeed and by organising knowledge about breastfeeding in a specific way. Conclusion: The individual mother is responsible for the success of breastfeeding and the discourses are hiding that general perceptions and descriptions of breastfeeding undermines the mothers’ confidence in...... breastfeeding and leads to shattered expectations....

  11. Two Expectation-Maximization Algorithms for Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2014-01-01

    Vol. 130, 23 April (2014), pp. 83-97. ISSN 0925-2312 R&D Projects: GA ČR GAP202/10/0262 Other grants: GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Institutional research plan: CEZ:AV0Z10300504 Keywords: Boolean Factor analysis * Binary Matrix factorization * Neural networks * Binary data model * Dimension reduction * Bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014

  12. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  13. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  14. Exploiting correlation and budget constraints in Bayesian multi-armed bandit optimization

    OpenAIRE

    Hoffman, Matthew W.; Shahriari, Bobak; De Freitas, Nando

    2013-01-01

    We address the problem of finding the maximizer of a nonlinear smooth function, that can only be evaluated point-wise, subject to constraints on the number of permitted function evaluations. This problem is also known as fixed-budget best arm identification in the multi-armed bandit literature. We introduce a Bayesian approach for this problem and show that it empirically outperforms both the existing frequentist counterpart and other Bayesian optimization methods. The Bayesian approach place...
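
    The abstract does not detail the proposed acquisition strategy; as a simple point of reference (not the authors' method), a fixed-budget Bayesian bandit loop with Beta posteriors and Thompson-sampling allocation can be sketched as follows:

```python
import numpy as np

def thompson_best_arm(pull, n_arms, budget, seed=0):
    """Fixed-budget Bayesian allocation for Bernoulli arms using Thompson sampling,
    returning the arm with the highest posterior mean at the end of the budget.
    This is a generic baseline sketch; `pull(arm)` must return a 0/1 reward.
    """
    rng = np.random.default_rng(seed)
    alpha = np.ones(n_arms)   # Beta posterior parameters (successes + 1)
    beta = np.ones(n_arms)    # Beta posterior parameters (failures + 1)
    for _ in range(budget):
        theta = rng.beta(alpha, beta)   # sample a plausible mean for each arm
        arm = int(np.argmax(theta))     # allocate the next evaluation
        r = pull(arm)
        alpha[arm] += r
        beta[arm] += 1 - r
    return int(np.argmax(alpha / (alpha + beta)))
```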

  15. Life Expectancy

    OpenAIRE

    Zakrzewski, Sonia

    2015-01-01

    We give simple upper and lower bounds on life expectancy. In a life-table population, if e(0) is the life expectancy at birth, M is the median length of life, and e(M) is the expected remaining life at age M, then (M+e(M))/2≤e(0)≤M+e(M)/2. In general, for any age x, if e(x) is the expected remaining life at age x, and ℓ(x) is the fraction of a cohort surviving to age x at least, then (x+e(x))∙ℓ(x)≤e(0)≤x+ℓ(x)∙e(x). For any two ages 0≤w≤x≤ω, (x-w+e(x))∙ℓ(x)/ℓ(w)≤e(w)≤x-w+e(x)∙ℓ(x)/ℓ(w). These...
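
    The general inequality can be checked numerically on a toy life table; the snippet below (Python, my own illustration with a linear survival curve and a crude discrete approximation of e(x)) verifies the w = 0 case:

```python
import numpy as np

# Toy check of (x + e(x)) * l(x) <= e(0) <= x + l(x) * e(x) on a linear survival curve.
omega = 100.0
ages = np.arange(0, int(omega) + 1, dtype=float)
l = 1.0 - ages / omega                                   # survival fraction l(x), l(0) = 1
e = np.array([l[i:].sum() / l[i] if l[i] > 0 else 0.0    # crude discrete e(x)
              for i in range(len(ages))])

x = 40
lower = (ages[x] + e[x]) * l[x]
upper = ages[x] + l[x] * e[x]
print(lower <= e[0] <= upper)   # True for this toy life table
```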

  16. Expectation Formation

    OpenAIRE

    Basit Zafar; Theresa Kuchler

    2015-01-01

    We use novel survey panel data to estimate how personal experiences affect expectations about aggregate economic outcomes in housing and labor markets. We exploit cross-sectional and time series variation in differences in locally experienced house prices to show that respondents systematically extrapolate from personally experienced home prices when asked for their expectations about US house price development. In addition, higher volatility of locally experienced house prices causes respond...

  17. Growth Expectation

    OpenAIRE

    Ippei Fujiwara

    2008-01-01

    For a long time, changes in expectations about the future have been thought to be significant sources of economic fluctuations, as argued by Pigou (1926). Although creating such an expectation-driven cycle (the Pigou cycle) in equilibrium business cycle models was considered to be a difficult challenge, as pointed out by Barro and King (1984), recently, several researchers have succeeded in producing the Pigou cycle by balancing the tension between the wealth effect and the substitution effec...

  18. On Maximal Injectivity

    Institute of Scientific and Technical Information of China (English)

    Ming Yi WANG; Guo ZHAO

    2005-01-01

    A right R-module E over a ring R is said to be maximally injective in case for any maximal right ideal m of R, every R-homomorphism f : m → E can be extended to an R-homomorphism f' : R → E. In this paper, we first construct an example to show that maximal injectivity is a proper generalization of injectivity. Then we prove that any right R-module over a left perfect ring R is maximally injective if and only if it is injective. We also give a partial affirmative answer to Faith's conjecture by further investigating the property of maximally injective rings. Finally, we get an approximation to Faith's conjecture, which asserts that every injective right R-module over any left perfect right self-injective ring R is the injective hull of a projective submodule.

  19. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  20. The Minimum Expectation Selection Problem

    OpenAIRE

    Eppstein, David; Lueker, George

    2001-01-01

    We define the min-min expectation selection problem (resp. max-min expectation selection problem) to be that of selecting k out of n given discrete probability distributions, to minimize (resp. maximize) the expectation of the minimum value resulting when independent random variables are drawn from the selected distributions. We assume each distribution has finitely many atoms. Let d be the number of distinct values in the support of the distributions. We show that if d is a constant greater ...

  1. Bayesian second law of thermodynamics

    Science.gov (United States)

    Bartolotta, Anthony; Carroll, Sean M.; Leichenauer, Stefan; Pollack, Jason

    2016-08-01

    We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρ_m, ρ) + ⟨Q⟩_{F|m} ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and ⟨Q⟩_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.

  2. Quantum Inference on Bayesian Networks

    Science.gov (United States)

    Yoder, Theodore; Low, Guang Hao; Chuang, Isaac

    2014-03-01

    Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speed up sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time O(nmP(e)^{-1}), depending critically on P(e), the probability the evidence might occur in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking O(n2^m P(e)^{-1/2}) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
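
    For orientation, the classical baseline referred to here is rejection sampling, whose cost per accepted sample grows as 1/P(e). A minimal sketch on a hypothetical two-node network (Rain -> WetGrass, with made-up probabilities) illustrates both the posterior estimate and that cost.

      import random

      random.seed(0)

      # Hypothetical two-node Bayesian network: Rain -> WetGrass (probabilities are illustrative).
      P_RAIN = 0.2
      P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

      def rejection_sample(evidence_wet=True):
          """Return one sample of Rain consistent with the evidence, plus the attempt count.
          The expected number of attempts per accepted sample is 1 / P(evidence)."""
          attempts = 0
          while True:
              attempts += 1
              rain = random.random() < P_RAIN
              wet = random.random() < P_WET_GIVEN_RAIN[rain]
              if wet == evidence_wet:
                  return rain, attempts

      samples = [rejection_sample() for _ in range(10_000)]
      p_rain_given_wet = sum(rain for rain, _ in samples) / len(samples)
      mean_attempts = sum(att for _, att in samples) / len(samples)
      print(f"P(Rain | WetGrass) ~ {p_rain_given_wet:.3f}, attempts/sample ~ {mean_attempts:.2f}")
      # Exact values: P(WetGrass) = 0.26, so P(Rain | WetGrass) = 0.18/0.26 ~ 0.69,
      # and the classical cost per sample scales like 1/P(e) ~ 3.85 attempts.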

  3. Asymptotics of robust utility maximization

    CERN Document Server

    Knispel, Thomas

    2012-01-01

    For a stochastic factor model we maximize the long-term growth rate of robust expected power utility with parameter λ ∈ (0,1). Using duality methods the problem is reformulated as an infinite time horizon, risk-sensitive control problem. Our results characterize the optimal growth rate, an optimal long-term trading strategy and an asymptotic worst-case model in terms of an ergodic Bellman equation. With these results we propose a duality approach to a "robust large deviations" criterion for optimal long-term investment.

  4. On Fuzzy Bayesian Inference

    OpenAIRE

    Frühwirth-Schnatter, Sylvia

    1990-01-01

    In the paper at hand we apply it to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we will discuss a fuzzy valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes' estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)

  5. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  6. On maximal curves

    CERN Document Server

    Fuhrmann, R; Torres, F; Fuhrmann, Rainer; Garcia, Arnaldo; Torres, Fernando

    1996-01-01

    We study arithmetical and geometrical properties of maximal curves, that is, curves defined over the finite field F_{q^2} whose number of F_{q^2}-rational points reaches the Hasse-Weil upper bound. Under a hypothesis on non-gaps at a rational point, we prove that maximal curves are F_{q^2}-isomorphic to y^q + y = x^m, for some m ∈ Z^+. As a consequence we show that a maximal curve of genus g = (q-1)^2/4 is F_{q^2}-isomorphic to the curve y^q + y = x^{(q+1)/2}.

  7. Optimistic expectations

    DEFF Research Database (Denmark)

    Holm, Claus

    Young Australians’ post-school futures are uncertain, insecure and fluid in relation to working life. But if you think that this is the recipe for a next generation of depressed young Australians, you may be wrong. A new book documents that young people are characterised by optimism, but their expectations of the future differ from those of their parents.

  8. Great Expectations

    NARCIS (Netherlands)

    Dickens, Charles

    2005-01-01

    One of Dickens's most renowned and enjoyable novels, Great Expectations tells the story of Pip, an orphan boy who wishes to transcend his humble origins and finds himself unexpectedly given the opportunity to live a life of wealth and respectability. Over the course of the tale, in which Pip encount

  9. Great Expectations

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The past year marks robust economic growth for Latin America and rapid development in cooperation with China. The future in this partnership looks bright Latin America's economy is expected to grow by 4.3 percent in 2005, according to the projection of the Economic Commission for Latin America and the Caribbean. This fig-

  10. Once Again, Maxims

    Directory of Open Access Journals (Sweden)

    Rudiger Bubner

    1998-12-01

    Even though the maxims' theory is not at the center of Kant's ethics, it is the unavoidable basis of the categorical imperative's formulation. Kant leans on the transmitted representations of modern moral theory. During the last decades, the notion of maxims has received more attention, due to the philosophy of language's debates on rules, and due to action theory's interest in this notion. I hereby briefly expound my views in these discussions.

  11. Survival versus Profit Maximization in a Dynamic Stochastic Experiment

    OpenAIRE

    Ryan Oprea

    2014-01-01

    Subjects in a laboratory experiment withdraw earnings from a cash reserve evolving according to an arithmetic Brownian motion in near‐continuous time. Aggressive withdrawal policies expose subjects to risk of bankruptcy, but the policy that maximizes expected earnings need not maximize the odds of survival. When profit maximization is consistent with high rates of survival (HS parameters), subjects adjust decisively towards the optimum. When survival and profit maximization are sharply at odd...

  12. From Wald to Savage: homo economicus becomes a Bayesian statistician.

    Science.gov (United States)

    Giocoli, Nicola

    2013-01-01

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. An economic agent is deemed rational when she maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. The latter's acknowledged fiasco to achieve a reinterpretation of traditional inference techniques along subjectivist and behaviorist lines raises the puzzle of how a failed project in statistics could turn into such a big success in economics. Possible answers call into play the emphasis on consistency requirements in neoclassical theory and the impact of the postwar transformation of U.S. business schools. PMID:23165740

  13. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.
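
    A minimal sketch of the two-stage (nested) Monte Carlo estimate of expected information gain for a toy one-parameter Gaussian model; the forward model, prior and design values below are illustrative stand-ins, not the combustion-kinetics problems of the record.

      import numpy as np

      rng = np.random.default_rng(1)
      NOISE = 0.1

      def forward(theta, d):
          """Toy forward model evaluated at design d."""
          return theta * np.exp(-d)

      def log_lik(y, theta, d):
          resid = y - forward(theta, d)
          return -0.5 * (resid / NOISE) ** 2 - np.log(NOISE * np.sqrt(2.0 * np.pi))

      def expected_information_gain(d, n_outer=2000, n_inner=2000):
          """Nested Monte Carlo estimate of E_{theta,y}[log p(y|theta,d) - log p(y|d)]."""
          theta = rng.standard_normal(n_outer)                      # prior draws (outer loop)
          y = forward(theta, d) + NOISE * rng.standard_normal(n_outer)
          log_like = log_lik(y, theta, d)
          theta_inner = rng.standard_normal(n_inner)                # fresh prior draws (inner loop)
          log_evidence = np.array([
              np.log(np.mean(np.exp(log_lik(yi, theta_inner, d)))) for yi in y
          ])
          return np.mean(log_like - log_evidence)

      for d in (0.0, 1.0, 2.0):
          print(f"design d = {d}: estimated information gain {expected_information_gain(d):.3f}")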

  14. Bayesian ensemble refinement by replica simulations and reweighting

    CERN Document Server

    Hummer, Gerhard

    2015-01-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We find that the strength of the restraint scales with the number of replicas and we show that this sca...

  15. Bayesian ensemble refinement by replica simulations and reweighting

    Science.gov (United States)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  16. A Bayesian Game-Theoretic Approach for Distributed Resource Allocation in Fading Multiple Access Channels

    Directory of Open Access Journals (Sweden)

    Gaoning He

    2010-01-01

    A Bayesian game-theoretic model is developed to design and analyze the resource allocation problem in K-user fading multiple access channels (MACs), where the users are assumed to selfishly maximize their average achievable rates with incomplete information about the fading channel gains. In such a game-theoretic study, the central question is whether a Bayesian equilibrium exists, and if so, whether the network operates efficiently at the equilibrium point. We prove that there exists exactly one Bayesian equilibrium in our game. Furthermore, we study the network sum-rate maximization problem by assuming that the users coordinate according to a symmetric strategy profile. This result also serves as an upper bound for the Bayesian equilibrium. Finally, simulation results are provided to show the network efficiency at the unique Bayesian equilibrium and to compare it with other strategies.

  17. Practical Bayesian Tomography

    CERN Document Server

    Granade, Christopher; Cory, D G

    2015-01-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  18. Social optimality in quantum Bayesian games

    Science.gov (United States)

    Iqbal, Azhar; Chappell, James M.; Abbott, Derek

    2015-10-01

    A significant aspect of the study of quantum strategies is the exploration of the game-theoretic solution concept of the Nash equilibrium in relation to the quantization of a game. Pareto optimality is a refinement on the set of Nash equilibria. A refinement on the set of Pareto optimal outcomes is known as social optimality in which the sum of players' payoffs is maximized. This paper analyzes social optimality in a Bayesian game that uses the setting of generalized Einstein-Podolsky-Rosen experiments for its physical implementation. We show that for the quantum Bayesian game a direct connection appears between the violation of Bell's inequality and the social optimal outcome of the game and that it attains a superior socially optimal outcome.

  19. Bayesian exploratory factor analysis

    OpenAIRE

    Gabriella Conti; Sylvia Frühwirth-Schnatter; James Heckman; Rémi Piatek

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study c...

  20. Bayesian Exploratory Factor Analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...

  1. Bayesian Exploratory Factor Analysis

    OpenAIRE

    Gabriella Conti; Sylvia Fruehwirth-Schnatter; Heckman, James J.; Remi Piatek

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo s...

  2. Bayesian exploratory factor analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo st...

  3. Bayesian exploratory factor analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...

  4. Nonparametric Bayesian Logic

    OpenAIRE

    Carbonetto, Peter; Kisynski, Jacek; De Freitas, Nando; Poole, David L

    2012-01-01

    The Bayesian Logic (BLOG) language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends BLOG by adopting generative processes over function spaces - known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By expl...

  5. Bayesian default probability models

    OpenAIRE

    Andrlíková, Petra

    2014-01-01

    This paper proposes a methodology for default probability estimation for low default portfolios, where the statistical inference may become troublesome. The author suggests using logistic regression models with the Bayesian estimation of parameters. The piecewise logistic regression model and Box-Cox transformation of credit risk score is used to derive the estimates of probability of default, which extends the work by Neagu et al. (2009). The paper shows that the Bayesian models are more acc...

  6. Inverse Problems in a Bayesian Setting

    KAUST Repository

    Matthies, Hermann G.

    2016-02-13

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ)—the propagation of uncertainty through a computational (forward) model—are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. We give a detailed account of this approach via conditional approximation, various approximations, and the construction of filters. Together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update in form of a filter is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and nonlinear Bayesian update in form of a filter on some examples.
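
    A minimal sketch of the linear (Kalman-gain) form of the conditional-expectation update applied to plain Monte Carlo samples; the record's method works with functional/polynomial-chaos representations rather than samples and also covers a nonlinear filter, neither of which is shown here. The quadratic forward map and observed value are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      # Prior ensemble of the uncertain parameter and its forward-model predictions.
      n = 5000
      x = rng.normal(1.0, 0.5, size=n)       # prior samples of the parameter
      h = x**2                                # forward (observation) model, here nonlinear
      sigma_obs = 0.2
      y_obs = 1.44                            # observed value (illustrative)

      # Linear Bayesian update (gain form): K = Cov(x, h) / (Var(h) + sigma_obs^2).
      c_xh = np.cov(x, h)[0, 1]
      c_hh = h.var(ddof=1)
      gain = c_xh / (c_hh + sigma_obs**2)

      # Update each prior sample with a perturbed observation (ensemble-style filter).
      perturbed = y_obs + sigma_obs * rng.standard_normal(n)
      x_post = x + gain * (perturbed - h)

      print(f"prior mean {x.mean():.3f} -> posterior mean {x_post.mean():.3f}")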

  7. Maximal avalanches in the Bak-Sneppen model

    OpenAIRE

    Gillett, Alexis; Meester, Ronald; van der Wal, Peter

    2006-01-01

    We study the durations of the avalanches in the maximal avalanche decomposition of the Bak-Sneppen evolution model. We show that all the avalanches in this maximal decomposition have infinite expectation, but only `barely', in the sense that if we made the appropriate threshold a tiny amount smaller (in a certain sense), then the avalanches would have finite expectation. The first of these results is somewhat surprising, since simulations suggest finite expectations.

  8. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with...

  9. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    Science.gov (United States)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of the ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.
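
    A minimal sketch of the standard (non-pattern-coupled) sparse Bayesian learning loop with EM hyperparameter updates, just to show the basic machinery; the two-dimensional pattern-coupled prior of the record is not implemented, and the problem sizes and noise level are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic sparse recovery problem: y = A @ x_true + noise.
      n_meas, n_dim = 60, 128
      A = rng.standard_normal((n_meas, n_dim)) / np.sqrt(n_meas)
      x_true = np.zeros(n_dim)
      x_true[[10, 40, 90]] = [3.0, -2.0, 1.5]
      noise_var = 0.01
      y = A @ x_true + np.sqrt(noise_var) * rng.standard_normal(n_meas)

      # Sparse Bayesian learning: x_i ~ N(0, gamma_i), EM updates of the gamma_i.
      gamma = np.ones(n_dim)
      for _ in range(50):
          # E-step: Gaussian posterior of x for the current hyperparameters.
          Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))
          mu = Sigma @ A.T @ y / noise_var
          # M-step: each prior variance is set to the posterior second moment.
          gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-10)

      print("largest recovered coefficients at indices:", sorted(np.argsort(-np.abs(mu))[:3]))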

  10. Learning what to expect (in visual perception)

    OpenAIRE

    Seriès, Peggy; Seitz, Aaron R.

    2013-01-01

    Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however, for example: How fast do priors change over tim...

  11. Bayesian Inference and Optimal Design in the Sparse Linear Model

    OpenAIRE

    Seeger, Matthias; Steinke, Florian; Tsuda, Koji

    2007-01-01

    The sparse linear model has seen many successful applications in Statistics, Machine Learning, and Computational Biology, such as identification of gene regulatory networks from micro-array expression data. Prior work has either approximated Bayesian inference by expensive Markov chain Monte Carlo, or replaced it by point estimation. We show how to obtain a good approximation to Bayesian analysis efficiently, using the Expectation Propagation method. We also address the problems of optimal de...

  12. Maximizing profit using recommender systems

    CERN Document Server

    Das, Aparna; Ricketts, Daniel

    2009-01-01

    Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings and demographic data, without considering the profitability of the items being recommended. In this work we study the question of how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach uses the output of any traditional recommender system and adjusts it according to item profitabilities. Our approach is parameterized so the vendor can control how much the profit-aware recommendation can deviate from the traditional recommendation. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
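
    A minimal sketch of the kind of adjustment described: blend a traditional recommender's relevance score with normalized item profit, with a parameter that bounds how far the ranking drifts from the pure-relevance one. The blending rule, scores and item names are illustrative, not the paper's exact scheme.

      def profit_adjusted_ranking(relevance, profit, alpha=0.2, top_k=3):
          """Re-rank items by (1 - alpha) * relevance + alpha * normalized profit.

          relevance, profit: dicts mapping item id -> score; alpha in [0, 1] controls
          how far the result may drift from the traditional (pure relevance) ranking.
          """
          max_profit = max(profit.values())
          score = {item: (1 - alpha) * relevance[item] + alpha * profit[item] / max_profit
                   for item in relevance}
          return sorted(score, key=score.get, reverse=True)[:top_k]

      relevance = {"A": 0.90, "B": 0.80, "C": 0.75, "D": 0.30}   # traditional recommender output
      profit = {"A": 1.0, "B": 5.0, "C": 9.0, "D": 2.0}          # vendor profit per item
      print(profit_adjusted_ranking(relevance, profit, alpha=0.0))   # ['A', 'B', 'C']
      print(profit_adjusted_ranking(relevance, profit, alpha=0.3))   # ['C', 'B', 'A']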

  13. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
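
    The computational core of such a linearized inversion is the closed-form Gaussian posterior for a linear forward model with Gaussian prior and noise; a generic sketch follows, with a random operator G standing in for the convolutional/weak-contrast Zoeppritz operator of the record.

      import numpy as np

      rng = np.random.default_rng(4)

      # d = G m + e, with Gaussian prior m ~ N(mu_m, S_m) and noise e ~ N(0, S_e).
      n_data, n_model = 40, 10
      G = rng.standard_normal((n_data, n_model))
      mu_m = np.zeros(n_model)
      S_m = np.eye(n_model)
      noise_var = 0.05
      S_e = noise_var * np.eye(n_data)

      m_true = rng.standard_normal(n_model)
      d = G @ m_true + np.sqrt(noise_var) * rng.standard_normal(n_data)

      # Explicit Gaussian posterior: covariance and expectation in closed form.
      S_e_inv = np.linalg.inv(S_e)
      S_post = np.linalg.inv(G.T @ S_e_inv @ G + np.linalg.inv(S_m))
      mu_post = S_post @ (G.T @ S_e_inv @ d + np.linalg.inv(S_m) @ mu_m)

      # Exact 95% prediction intervals follow directly from the posterior covariance.
      half_width = 1.96 * np.sqrt(np.diag(S_post))
      print("fraction of parameters inside their 95% interval:",
            np.mean(np.abs(mu_post - m_true) <= half_width))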

  14. A Theory of Bayesian Decision Making

    OpenAIRE

    Karni, Edi

    2009-01-01

    This paper presents a complete, choice-based, axiomatic Bayesian decision theory. It introduces a new choice set consisting of information-contingent plans for choosing actions and bets and subjective expected utility model with effect-dependent utility functions and action-dependent subjective probabilities which, in conjunction with the updating of the probabilities using Bayes' rule, gives rise to a unique prior and a set of action-dependent posterior probabilities representing the decisio...

  15. Quantum-Inspired Maximizer

    Science.gov (United States)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, the quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables, using classical methods. Such optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger value of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).

  16. Bayesian Inference for Neighborhood Filters With Application in Denoising.

    Science.gov (United States)

    Huang, Chao-Tsung

    2015-11-01

    Range-weighted neighborhood filters are useful and popular for their edge-preserving property and simplicity, but they are originally proposed as intuitive tools. Previous works needed to connect them to other tools or models for indirect property reasoning or parameter estimation. In this paper, we introduce a unified empirical Bayesian framework to do both directly. A neighborhood noise model is proposed to reason and infer the Yaroslavsky, bilateral, and modified non-local means filters by joint maximum a posteriori and maximum likelihood estimation. Then, the essential parameter, range variance, can be estimated via model fitting to the empirical distribution of an observable chi scale mixture variable. An algorithm based on expectation-maximization and quasi-Newton optimization is devised to perform the model fitting efficiently. Finally, we apply this framework to the problem of color-image denoising. A recursive fitting and filtering scheme is proposed to improve the image quality. Extensive experiments are performed for a variety of configurations, including different kernel functions, filter types and support sizes, color channel numbers, and noise types. The results show that the proposed framework can fit noisy images well and the range variance can be estimated successfully and efficiently. PMID:26259244
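
    For reference, the simplest member of this family, the Yaroslavsky filter: each pixel is replaced by a range-weighted average over a square neighborhood, with the range variance as the key parameter. In the sketch below it is fixed by hand on a synthetic image rather than estimated by the record's EM/quasi-Newton model fitting.

      import numpy as np

      def yaroslavsky_filter(img, radius=3, range_var=0.02):
          """Range-weighted neighborhood filter for a 2-D grayscale image in [0, 1]."""
          h, w = img.shape
          out = np.empty_like(img)
          padded = np.pad(img, radius, mode="reflect")
          for i in range(h):
              for j in range(w):
                  window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                  weights = np.exp(-(window - img[i, j]) ** 2 / (2.0 * range_var))
                  out[i, j] = np.sum(weights * window) / np.sum(weights)
          return out

      rng = np.random.default_rng(5)
      clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))        # simple synthetic ramp image
      noisy = clean + 0.05 * rng.standard_normal(clean.shape)
      denoised = yaroslavsky_filter(noisy)
      print("mean abs error before:", np.abs(noisy - clean).mean(),
            "after:", np.abs(denoised - clean).mean())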

  17. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
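
    A minimal sketch of the EM iteration for BMA with Gaussian kernels centered on the member forecasts; the training data are synthetic, bias correction is skipped, and the DREAM/MCMC alternative examined in the record is not shown.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic training set: verifying observations y and K member forecasts.
      T, K = 500, 3
      y = rng.normal(15.0, 3.0, size=T)
      forecasts = np.column_stack([y + rng.normal(0.0, s, size=T) for s in (1.0, 2.0, 4.0)])

      # EM for the BMA weights w_k and kernel variances var_k.
      w = np.full(K, 1.0 / K)
      var = np.full(K, 1.0)
      for _ in range(100):
          # E-step: responsibility of member k for observation t.
          dens = w * np.exp(-0.5 * (y[:, None] - forecasts) ** 2 / var) / np.sqrt(2 * np.pi * var)
          z = dens / dens.sum(axis=1, keepdims=True)
          # M-step: re-estimate weights and variances from the responsibilities.
          w = z.mean(axis=0)
          var = (z * (y[:, None] - forecasts) ** 2).sum(axis=0) / z.sum(axis=0)

      print("BMA weights:", np.round(w, 3))   # the most accurate member gets the largest weight
      print("BMA stdevs :", np.round(np.sqrt(var), 2))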

  18. Thermodynamically consistent Bayesian analysis of closed biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-11-01

    Background: Estimating the rate constants of a biochemical reaction system with known stoichiometry from noisy time series measurements of molecular concentrations is an important step for building predictive models of cellular function. Inference techniques currently available in the literature may produce rate constant values that defy necessary constraints imposed by the fundamental laws of thermodynamics. As a result, these techniques may lead to biochemical reaction systems whose concentration dynamics could not possibly occur in nature. Therefore, development of a thermodynamically consistent approach for estimating the rate constants of a biochemical reaction system is highly desirable. Results: We introduce a Bayesian analysis approach for computing thermodynamically consistent estimates of the rate constants of a closed biochemical reaction system with known stoichiometry given experimental data. Our method employs an appropriately designed prior probability density function that effectively integrates fundamental biophysical and thermodynamic knowledge into the inference problem. Moreover, it takes into account experimental strategies for collecting informative observations of molecular concentrations through perturbations. The proposed method employs a maximization-expectation-maximization algorithm that provides thermodynamically feasible estimates of the rate constant values and computes appropriate measures of estimation accuracy. We demonstrate various aspects of the proposed method on synthetic data obtained by simulating a subset of a well-known model of the EGF/ERK signaling pathway, and examine its robustness under conditions that violate key assumptions. Software, coded in MATLAB®, which implements all Bayesian analysis techniques discussed in this paper, is available free of charge at http://www.cis.jhu.edu/~goutsias/CSS%20lab/software.html. Conclusions: Our approach provides an attractive statistical methodology for

  19. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
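
    A minimal sketch of the classical (non-Bayesian) least squares deconvolution step that the record generalizes: many spectral lines are modeled as scaled copies of one common profile, and the profile is recovered by weighted least squares. The Gaussian-process prior and per-bin uncertainties of the record are omitted; line strengths and noise levels are synthetic.

      import numpy as np

      rng = np.random.default_rng(7)

      # Common line profile Z(v) on a velocity grid, shared by all spectral lines.
      v = np.linspace(-50.0, 50.0, 101)
      z_true = -0.4 * np.exp(-v**2 / (2 * 8.0**2))           # a single absorption profile

      # Each of K lines is a scaled copy of Z plus its own noise.
      K = 200
      weights = rng.uniform(0.2, 1.0, size=K)                 # line strengths (from a mask)
      noise = rng.uniform(0.01, 0.05, size=K)                 # per-line noise std
      lines = weights[:, None] * z_true + noise[:, None] * rng.standard_normal((K, v.size))

      # Weighted least-squares LSD profile: for each velocity bin,
      #   Z(v) = sum_k w_k V_k(v) / s_k^2  /  sum_k w_k^2 / s_k^2.
      num = (weights[:, None] / noise[:, None]**2 * lines).sum(axis=0)
      den = (weights**2 / noise**2).sum()
      z_lsd = num / den

      print("rms error of LSD profile:", np.sqrt(np.mean((z_lsd - z_true)**2)))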

  20. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  1. Bayesian Adaptive Exploration

    CERN Document Server

    Loredo, T J

    2004-01-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative "maximum entropy sampling" strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two "toy" problems with simulated data (measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object) show the approach can significantly improve observational eff...

  2. Bayesian and frequentist inequality tests

    OpenAIRE

    David M. Kaplan; Zhuo, Longhao

    2016-01-01

    Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (and normal). We compare Bayesian and frequentist hypothesis tests of inequality restrictions in such cases. For finite-dimensional parameters, if the null hypothesis is that the parameter vector lies in a certain half-space, then the Bayesian test has (frequentist) size α; if the null hypothesis is any other convex subspace, then the Bayesian test...

  3. Bayesian multiple target tracking

    CERN Document Server

    Streit, Roy L

    2013-01-01

    This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements

  4. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  5. Social group utility maximization

    CERN Document Server

    Gong, Xiaowen; Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b

  6. Bayesian Geostatistical Design

    DEFF Research Database (Denmark)

    Diggle, Peter; Lophaven, Søren Nymand

    2006-01-01

    locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...

  7. Bayesian Filters in Practice

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, S.

    Bratislava: Slovak University of Technology in Bratislava, 2010, pp. 217-222. ISBN 978-80-227-3353-3. [Robotics in Education. Bratislava (SK), 16.09.2010-17.09.2010] Institutional research plan: CEZ:AV0Z20760514 Keywords: mobile robot localization * bearing only beacons * Bayesian filters Subject RIV: JD - Computer Applications, Robotics

  8. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimenta...

  9. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  10. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution as a...

  11. Bayesian Adaptive Exploration

    Science.gov (United States)

    Loredo, Thomas J.

    2004-04-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative "maximum entropy sampling" strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two "toy" problems with simulated data (measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object) show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.

  12. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  13. HEMI: Hyperedge Majority Influence Maximization

    CERN Document Server

    Gangal, Varun; Narayanam, Ramasuri

    2016-01-01

    In this work, we consider the problem of influence maximization on a hypergraph. We first extend the Independent Cascade (IC) model to hypergraphs, and prove that the traditional influence maximization problem remains submodular. We then present a variant of the influence maximization problem (HEMI) where one seeks to maximize the number of hyperedges, a majority of whose nodes are influenced. We prove that HEMI is non-submodular under the diffusion model proposed.

  14. Hierarchical Bayesian sparse image reconstruction with application to MRFM

    CERN Document Server

    Dobigeon, Nicolas; Tourneret, Jean-Yves

    2008-01-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g. by maximizing the estimated posterior distribution. In our fully Bayesian approach the posteriors of all the parameters are available. Thus our algorithm provides more information than other previously proposed sparse reconstr...

  15. Maximal Acceleration Is Nonrotating

    CERN Document Server

    Page, D N

    1998-01-01

    In a stationary axisymmetric spacetime, the angular velocity of a stationary observer that Fermi-Walker transports its acceleration vector is also the angular velocity that locally extremizes the magnitude of the acceleration of such an observer, and conversely if the spacetime is also symmetric under reversing both t and phi together. Thus a congruence of Nonrotating Acceleration Worldlines (NAW) is equivalent to a Stationary Congruence Accelerating Locally Extremely (SCALE). These congruences are defined completely locally, unlike the case of Zero Angular Momentum Observers (ZAMOs), which requires knowledge around a symmetry axis. The SCALE subcase of a Stationary Congruence Accelerating Maximally (SCAM) is made up of stationary worldlines that may be considered to be locally most nearly at rest in a stationary axisymmetric gravitational field. Formulas for the angular velocity and other properties of the SCALEs are given explicitly on a generalization of an equatorial plane, infinitesimally near a symmetry...

  16. Quantum stochastic calculus with maximal operator domains

    OpenAIRE

    Lindsay, J Martin; Attal, Stéphane

    2004-01-01

    Quantum stochastic calculus is extended in a new formulation in which its stochastic integrals achieve their natural and maximal domains. Operator adaptedness, conditional expectations and stochastic integrals are all defined simply in terms of the orthogonal projections of the time filtration of Fock space, together with sections of the adapted gradient operator. Free from exponential vector domains, our stochastic integrals may be satisfactorily composed yielding quantum Itô formulas for op...

  17. Sketching with Test Scores and Submodular Maximization

    OpenAIRE

    Sekar, Shreyas; Vojnovic, Milan; Yun, Se-Young

    2016-01-01

    We consider the problem of maximizing a submodular set function that can be expressed as the expected value of a function of an n-size collection of independent random variables with given prior distributions. This is a combinatorial optimization problem that arises in many applications, including the team selection problem that arises in the context of online labour platforms. We consider a class of approximation algorithms that are restricted to use some statistics of the prior distributi...

  18. Bayesian parameter estimation for effective field theories

    CERN Document Server

    Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A

    2015-01-01

    We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  19. Bayesian parameter estimation for effective field theories

    Science.gov (United States)

    Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.

    2016-07-01

    We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  20. Choquet expectation and Peng's g-expectation

    OpenAIRE

    Chen, Zengjing; Tao CHEN; Davison, Matt

    2005-01-01

    In this paper we consider two ways to generalize the mathematical expectation of a random variable, the Choquet expectation and Peng’s g-expectation. An open question has been, after making suitable restrictions to the class of random variables acted on by the Choquet expectation, for what class of expectation do these two definitions coincide? In this paper we provide a necessary and sufficient condition which proves that the only expectation which lies in both classes is the traditional lin...

  1. Rational Expectations Equilibria: Existence and Representation

    OpenAIRE

    Bhowmik, Anuj; Cao, Jiling

    2015-01-01

    In this paper, we continue to explore the equilibrium theory under ambiguity. For a model of a pure exchange and asymmetric information economy with a measure space of agents whose exogenous uncertainty is described by a complete probability space, we establish a representation theorem for a Bayesian or maximin rational expectations equilibrium allocation in terms of a state-wise Walrasian equilibrium allocation. This result also strengthens the theorems on the existence and representation of...

  2. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows growing importance of probability as coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  3. Bayesian Magic in Asteroseismology

    Science.gov (United States)

    Kallinger, T.

    2015-09-01

    Only a few years ago asteroseismic observations were so rare that scientists had plenty of time to work on individual data sets. They could tune their algorithms in any possible way to squeeze out the last bit of information. Nowadays this is impossible. With missions like MOST, CoRoT, and Kepler we basically drown in new data every day. To handle this in a sufficient way statistical methods become more and more important. This is why Bayesian techniques started their triumph march across asteroseismology. I will go with you on a journey through Bayesian Magic Land, that brings us to the sea of granulation background, the forest of peakbagging, and the stony alley of model comparison.

  4. Optimizing Nuclear Reaction Analysis (NRA) using Bayesian Experimental Design

    OpenAIRE

    von Toussaint, U.; Schwarz-Selinger, T.; Gori, S.

    2008-01-01

    Nuclear Reaction Analysis with ³He holds the promise to measure Deuterium depth profiles up to large depths. However, the extraction of the depth profile from the measured data is an ill-posed inversion problem. Here we demonstrate how Bayesian Experimental Design can be used to optimize the number of measurements as well as the measurement energies to maximize the information gain. Comparison of the inversion properties of the optimized design with standard settings reveals huge possi...

  5. A Bayesian experimental design approach to structural health monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Flynn, Eric [UCSD; Todd, Michael [UCSD

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we will outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.

  6. Unified Maximally Natural Supersymmetry

    CERN Document Server

    Huang, Junwu

    2016-01-01

    Maximally Natural Supersymmetry, an unusual weak-scale supersymmetric extension of the Standard Model based upon the inherently higher-dimensional mechanism of Scherk-Schwarz supersymmetry breaking (SSSB), possesses remarkably good fine tuning given present LHC limits. Here we construct a version with precision $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ unification: $\\sin^2 \\theta_W(M_Z) \\simeq 0.231$ is predicted to $\\pm 2\\%$ by unifying $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ into a 5D $SU(3)_{\\rm EW}$ theory at a Kaluza-Klein scale of $1/R_5 \\sim 4.4\\,{\\rm TeV}$, where SSSB is simultaneously realised. Full unification with $SU(3)_{\\rm C}$ is accommodated by extending the 5D theory to a $N=4$ supersymmetric $SU(6)$ gauge theory on a 6D rectangular orbifold at $1/R_6 \\sim 40 \\,{\\rm TeV}$. TeV-scale states beyond the SM include exotic charged fermions implied by $SU(3)_{\\rm EW}$ with masses lighter than $\\sim 1.2\\,{\\rm TeV}$, and squarks in the mass range $1.4\\,{\\rm TeV} - 2.3\\,{\\rm TeV}$, providing distinct signature...

  7. Bayesian Nonparametric Graph Clustering

    OpenAIRE

    Banerjee, Sayantan; Akbani, Rehan; Baladandayuthapani, Veerabhadran

    2015-01-01

    We present clustering methods for multivariate data exploiting the underlying geometry of the graphical structure between variables. As opposed to standard approaches that assume known graph structures, we first estimate the edge structure of the unknown graph using Bayesian neighborhood selection approaches, wherein we account for the uncertainty of graphical structure learning through model-averaged estimates of the suitable parameters. Subsequently, we develop a nonparametric graph cluster...

  8. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  9. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...

  10. Bayesian Generalized Rating Curves

    OpenAIRE

    Helgi Sigurðarson 1985

    2014-01-01

    A rating curve is a curve or a model that describes the relationship between water elevation, or stage, and discharge at an observation site in a river. The rating curve is fit from paired observations of stage and discharge. The rating curve then predicts discharge given observations of stage; this methodology is applied because stage is substantially easier to observe directly than discharge. In this thesis a statistical rating curve model is proposed working within the framework of Bayesian...

  11. Heteroscedastic Treed Bayesian Optimisation

    OpenAIRE

    Assael, John-Alexander M.; Wang, Ziyu; Shahriari, Bobak; De Freitas, Nando

    2014-01-01

    Optimising black-box functions is important in many disciplines, such as tuning machine learning models, robotics, finance and mining exploration. Bayesian optimisation is a state-of-the-art technique for the global optimisation of black-box functions which are expensive to evaluate. At the core of this approach is a Gaussian process prior that captures our belief about the distribution over functions. However, in many cases a single Gaussian process is not flexible enough to capture non-stat...

  12. Efficient Bayesian Phase Estimation

    Science.gov (United States)

    Wiebe, Nathan; Granade, Chris

    2016-07-01

    We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.

  13. Bayesian Word Sense Induction

    OpenAIRE

    Brody, Samuel; Lapata, Mirella

    2009-01-01

    Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...

  14. Bayesian Neural Word Embedding

    OpenAIRE

    Barkan, Oren

    2016-01-01

    Recently, several works in the domain of natural language processing presented successful methods for word embedding. Among them, the Skip-gram (SG) with negative sampling, known also as Word2Vec, advanced the state-of-the-art of various linguistics tasks. In this paper, we propose a scalable Bayesian neural word embedding algorithm that can be beneficial to general item similarity tasks as well. The algorithm relies on a Variational Bayes solution for the SG objective and a detailed step by ...

  15. Learning Functions and Approximate Bayesian Computation Design: ABCD

    Directory of Open Access Journals (Sweden)

    Markus Hainy

    2014-08-01

    A general approach to Bayesian learning revisits some classical results on which functionals of a prior distribution are expected to increase, in a preposterior sense. The results are applied to information functionals of the Shannon type and to a class of functionals based on expected distance. A close connection is made between the latter and a metric embedding theory due to Schoenberg and others. For the Shannon type, there is a connection to majorization theory for distributions. A computational method is described for solving generalized optimal experimental design problems arising from the learning framework; it is based on a version of the well-known approximate Bayesian computation (ABC) method for carrying out the Bayesian analysis by Monte Carlo simulation. Some simple examples are given.
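
    The workhorse behind such ABC-based design computations is plain rejection ABC. A minimal sketch in Python is given below; it is an illustration under assumed ingredients (a Gaussian forward model, the sample mean as summary statistic, and an arbitrary tolerance), not the design algorithm of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Assumed "observed" data: 50 draws from a Gaussian with unknown mean.
      y_obs = rng.normal(2.0, 1.0, size=50)
      s_obs = y_obs.mean()                      # summary statistic

      def simulate(theta, n=50):
          """Forward model: n Gaussian observations with mean theta."""
          return rng.normal(theta, 1.0, size=n)

      def abc_rejection(n_draws=20000, eps=0.05):
          """Keep prior draws whose simulated summary lands within eps of the observed one."""
          prior_draws = rng.normal(0.0, 5.0, size=n_draws)   # vague Gaussian prior
          accepted = [t for t in prior_draws
                      if abs(simulate(t).mean() - s_obs) < eps]
          return np.array(accepted)

      post = abc_rejection()
      print(f"accepted {post.size} draws, approximate posterior mean {post.mean():.3f}")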

  16. Bayesian Attractor Learning

    Science.gov (United States)

    Wiegerinck, Wim; Schoenaker, Christiaan; Duane, Gregory

    2016-04-01

    Recently, methods for model fusion by dynamically combining model components in an interactive ensemble have been proposed. In these proposals, fusion parameters have to be learned from data. One can view these systems as parametrized dynamical systems. We address the question of learnability of dynamical systems with respect to both short term (vector field) and long term (attractor) behavior. In particular we are interested in learning in the imperfect model class setting, in which the ground truth has a higher complexity than the models, e.g. due to unresolved scales. We take a Bayesian point of view and define a joint log-likelihood that consists of two terms: one is the vector field error and the other is the attractor error, for which we take the L1 distance between the stationary distributions of the model and the assumed ground truth. In the context of linear models (like so-called weighted supermodels), and assuming a Gaussian error model in the vector fields, vector field learning leads to a tractable Gaussian solution. This solution can then be used as a prior for the next step, Bayesian attractor learning, in which the attractor error is used as a log-likelihood term. Bayesian attractor learning is implemented by elliptical slice sampling, a sampling method for systems with a Gaussian prior and a non-Gaussian likelihood. Simulations with a partially observed driven Lorenz 63 system illustrate the approach.
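
    The elliptical slice sampling step used for the attractor-learning stage is compact enough to sketch. The Python fragment below follows the standard algorithm of Murray, Adams and MacKay (2010) for a zero-mean Gaussian prior; the two-dimensional prior covariance and the toy non-Gaussian log-likelihood are placeholder assumptions, not the supermodel likelihood of the abstract.

      import numpy as np

      rng = np.random.default_rng(1)

      def ess_step(f, chol_prior, log_lik):
          """One elliptical slice sampling update for a zero-mean Gaussian prior."""
          nu = chol_prior @ rng.standard_normal(f.size)       # auxiliary prior draw
          log_y = log_lik(f) + np.log(rng.uniform())          # slice level
          theta = rng.uniform(0.0, 2.0 * np.pi)
          theta_min, theta_max = theta - 2.0 * np.pi, theta
          while True:
              f_new = f * np.cos(theta) + nu * np.sin(theta)  # move along the ellipse
              if log_lik(f_new) > log_y:
                  return f_new
              # shrink the angular bracket towards the current state and retry
              if theta < 0.0:
                  theta_min = theta
              else:
                  theta_max = theta
              theta = rng.uniform(theta_min, theta_max)

      # Toy example: correlated 2-D Gaussian prior, Laplace-like (non-Gaussian) likelihood.
      chol = np.linalg.cholesky(np.array([[1.0, 0.5], [0.5, 1.0]]))
      log_lik = lambda f: -np.sum(np.abs(f - 1.0))

      f = np.zeros(2)
      samples = []
      for _ in range(5000):
          f = ess_step(f, chol, log_lik)
          samples.append(f)
      print("posterior sample mean:", np.mean(samples, axis=0))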

  17. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  18. Unique inclusions of maximal C-clones in maximal clones

    OpenAIRE

    Behrisch, Mike; Vargas-García, Edith

    2014-01-01

    $\\mathit{C}$-clones are polymorphism sets of so-called clausal relations, a special type of relations on a finite domain, which first appeared in connection with constraint satisfaction problems in [Creignou et al. 2008]. We completely describe the relationship w.r.t. set inclusion between maximal $\\mathit{C}$-clones and maximal clones. As a main result we obtain that for every maximal $\\mathit{C}$-clone there exists exactly one maximal clone in which it is contained. A precise description of...

  19. Unbounded Bayesian Optimization via Regularization

    OpenAIRE

    Shahriari, Bobak; Bouchard-Côté, Alexandre; De Freitas, Nando

    2015-01-01

    Bayesian optimization has recently emerged as a popular and efficient tool for global optimization and hyperparameter tuning. Currently, the established Bayesian optimization practice requires a user-defined bounding box which is assumed to contain the optimizer. However, when little is known about the probed objective function, it can be difficult to prescribe such bounds. In this work we modify the standard Bayesian optimization framework in a principled way to allow automatic resizing of t...

  20. Bayesian optimization for materials design

    OpenAIRE

    Frazier, Peter I.; Wang, Jialei

    2015-01-01

    We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets. Bayesian optimization guides the choice of experiments during materials design and discovery to find good material designs in as few experiments as possible. We focus on the case when materials designs are parameterized by a low-dimensional vector. Bayesian optimization is built on a statistical technique called Gaussian pro...

  1. PAC-Bayesian Analysis of Martingales and Multiarmed Bandits

    CERN Document Server

    Seldin, Yevgeny; Shawe-Taylor, John; Peters, Jan; Auer, Peter

    2011-01-01

    We present two alternative ways to apply PAC-Bayesian analysis to sequences of dependent random variables. The first is based on a new lemma that makes it possible to bound expectations of convex functions of certain dependent random variables by expectations of the same functions of independent Bernoulli random variables. This lemma provides an alternative tool to the Hoeffding-Azuma inequality for bounding the concentration of martingale values. Our second approach is based on integration of the Hoeffding-Azuma inequality with PAC-Bayesian analysis. We also introduce a way to apply PAC-Bayesian analysis in situations of limited feedback. We combine the new tools to derive PAC-Bayesian generalization and regret bounds for the multiarmed bandit problem. Although our regret bound is not yet as tight as state-of-the-art regret bounds based on other well-established techniques, our results significantly expand the range of potential applications of PAC-Bayesian analysis and introduce a new analysis tool to reinforcement learning and many ...

  2. Optimal design under uncertainty of a passive defense structure against snow avalanches: from a general Bayesian framework to a simple analytical model

    Directory of Open Access Journals (Sweden)

    N. Eckert

    2008-10-01

    For snow avalanches, passive defense structures are generally designed by considering high return period events. In this paper, taking inspiration from other natural hazards, an alternative method based on the maximization of the economic benefit of the defense structure is proposed. A general Bayesian framework is described first. Special attention is given to the problem of taking the poor local information into account in the decision-making process. Therefore, simplifying assumptions are made. The avalanche hazard is represented by a Peak Over Threshold (POT) model. The influence of the dam is quantified in terms of runout distance reduction with a simple relation derived from small-scale experiments using granular media. The costs corresponding to dam construction and the damage to the element at risk are roughly evaluated for each dam height-hazard value pair, with damage evaluation corresponding to the maximal expected loss. Both the classical and the Bayesian risk functions can then be computed analytically. The results are illustrated with a case study from the French avalanche database. A sensitivity analysis is performed and modelling assumptions are discussed in addition to possible further developments.
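
    The decision step can be condensed into a few lines: for each candidate dam height, add the construction cost to the expected damage under the hazard model and keep the height minimizing the total. The sketch below is a Python illustration with invented cost figures, an invented runout-reduction rule and a generic extreme-value sample in place of the paper's calibrated POT model and French case-study data.

      import numpy as np

      rng = np.random.default_rng(2)
      heights = np.linspace(0.0, 15.0, 31)              # candidate dam heights (m)

      # Illustrative assumptions, not the paper's values:
      construction_cost = 50.0 * heights                # construction cost grows with height
      damage_if_reached = 2000.0                        # loss if an avalanche reaches the element at risk
      runout = rng.gumbel(loc=80.0, scale=30.0, size=100_000)   # sampled runout distances (m)
      element_position = 120.0                          # abscissa of the element at risk (m)

      def expected_damage(h):
          # Toy runout-reduction rule: the dam shortens runout by 5 m per metre of height.
          reduced = runout - 5.0 * h
          return damage_if_reached * np.mean(reduced > element_position)

      total_risk = construction_cost + np.array([expected_damage(h) for h in heights])
      print(f"height minimizing total expected cost: {heights[np.argmin(total_risk)]:.1f} m")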

  3. Bayesian Network Based XP Process Modelling

    Directory of Open Access Journals (Sweden)

    Mohamed Abouelela

    2010-07-01

    A Bayesian Network based mathematical model has been used for modelling the Extreme Programming software development process. The model is capable of predicting the expected finish time and the expected defect rate for each XP release. Therefore, it can be used to determine the success/failure of any XP project. The model takes into account the effect of three XP practices, namely: Pair Programming, Test Driven Development and Onsite Customer practices. The model's predictions were validated against two case studies. Results show the precision of our model, especially in predicting the project finish time.

  4. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  5. Decentralized Distributed Bayesian Estimation

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    Praha: ÚTIA AVČR, v.v.i, 2011 - (Janžura, M.; Ivánek, J.). s. 16-16 [7th International Workshop on Data–Algorithms–Decision Making. 27.11.2011-29.11.2011, Mariánská] R&D Projects: GA ČR 102/08/0567; GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : estimation * distributed estimation * model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/dedecius-decentralized distributed bayesian estimation.pdf

  6. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU

  7. Computationally efficient Bayesian tracking

    Science.gov (United States)

    Aughenbaugh, Jason; La Cour, Brian

    2012-06-01

    In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.

  8. Improved iterative Bayesian unfolding

    CERN Document Server

    D'Agostini, G

    2010-01-01

    This paper reviews the basic ideas behind a Bayesian unfolding published some years ago and improves their implementation. In particular, uncertainties are now treated at all levels by probability density functions and their propagation is performed by Monte Carlo integration. Thus, small numbers are better handled and the final uncertainty does not rely on the assumption of normality. Theoretical and practical issues concerning the iterative use of the algorithm are also discussed. The new program, implemented in the R language, is freely available, together with sample scripts to play with toy models.

  9. Bayesian inference tools for inverse problems

    Science.gov (United States)

    Mohammad-Djafari, Ali

    2013-08-01

    In this paper, first the basics of Bayesian inference with a parametric model of the data are presented. Then, the needed extensions are given for dealing with inverse problems, in particular linear models such as deconvolution or image reconstruction in Computed Tomography (CT). The main point to discuss then is the prior modeling of signals and images. A classification of these priors is presented, first into separable and Markovian models and then into simple or hierarchical models with hidden variables. For practical applications, we also need to consider the estimation of the hyperparameters. Finally, we see that we have to infer simultaneously on the unknowns, the hidden variables and the hyperparameters. Very often, the expression of this joint posterior law is too complex to be handled directly. Indeed, we can rarely obtain analytical solutions for point estimators such as the Maximum A Posteriori (MAP) or Posterior Mean (PM). Three main tools can then be used: Laplace approximation (LAP), Markov Chain Monte Carlo (MCMC) and Bayesian Variational Approximations (BVA). To illustrate all these aspects, we consider a deconvolution problem where we know that the input signal is sparse and propose to use a Student-t prior for it. Then, to handle the Bayesian computations with this model, we use the property that the Student-t distribution can be modelled as an infinite mixture of Gaussians, introducing hidden variables which are the variances. Then, the expression of the joint posterior of the input signal samples, the hidden variables (which are here the inverse variances of those samples) and the hyperparameters of the problem (for example the variance of the noise) is given. From this point, we present the joint maximization by alternate optimization and the three possible approximation methods. Finally, the proposed methodology is applied in different applications such as mass spectrometry, spectrum estimation of quasi-periodic biological signals and
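
    The scale-mixture trick mentioned above is easy to verify numerically: a Student-t variable can be produced by drawing a hidden variance from an inverse-gamma distribution and then drawing a Gaussian with that variance. The short Python check below is only an illustration of this representation, not the paper's deconvolution code.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      nu = 3.0                      # degrees of freedom of the Student-t prior
      n = 200_000

      # Hierarchy: v ~ Inverse-Gamma(nu/2, nu/2),  x | v ~ N(0, v)  =>  x ~ Student-t(nu)
      v = stats.invgamma(a=nu / 2.0, scale=nu / 2.0).rvs(size=n, random_state=rng)
      x_mixture = rng.normal(0.0, np.sqrt(v))

      x_direct = stats.t(df=nu).rvs(size=n, random_state=rng)

      # The two constructions should agree; compare a few quantiles.
      for q in (0.5, 0.9, 0.99):
          print(q, np.quantile(x_mixture, q), np.quantile(x_direct, q))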

  10. Approximate maximizers of intricacy functionals

    CERN Document Server

    Buzzi, Jerome

    2009-01-01

    G. Edelman, O. Sporns, and G. Tononi introduced in theoretical biology the neural complexity of a family of random variables. This functional is a special case of intricacy, i.e., an average of the mutual information of subsystems whose weights have good mathematical properties. Moreover, its maximum value grows at a definite speed with the size of the system. In this work, we compute exactly this speed of growth by building "approximate maximizers" subject to an entropy condition. These approximate maximizers work simultaneously for all intricacies. We also establish some properties of arbitrary approximate maximizers, in particular the existence of a threshold in the size of subsystems of approximate maximizers: most smaller subsystems are almost equidistributed, most larger subsystems determine the full system. The main ideas are a random construction of almost maximizers with a high statistical symmetry and the consideration of entropy profiles, i.e., the average entropies of sub-systems of a given size. ...

  11. A Bayesian framework for active artificial perception.

    Science.gov (United States)

    Ferreira, João Filipe; Lobo, Jorge; Bessière, Pierre; Castelo-Branco, Miguel; Dias, Jorge

    2013-04-01

    In this paper, we present a Bayesian framework for the active multimodal perception of 3-D structure and motion. The design of this framework finds its inspiration in the role of the dorsal perceptual pathway of the human brain. Its composing models build upon a common egocentric spatial configuration that is naturally fitting for the integration of readings from multiple sensors using a Bayesian approach. In the process, we contribute efficient and robust probabilistic solutions for cyclopean geometry-based stereovision and auditory perception based only on binaural cues, modeled using a consistent formalization that allows their hierarchical use as building blocks for the multimodal sensor fusion framework. We explicitly or implicitly address the most important challenges of sensor fusion using this framework, for vision, audition, and vestibular sensing. Moreover, interaction and navigation require maximal awareness of spatial surroundings, which, in turn, is obtained through active attentional and behavioral exploration of the environment. The computational models described in this paper will support the construction of a simultaneously flexible and powerful robotic implementation of multimodal active perception to be used in real-world applications, such as human-machine interaction or mobile robot navigation. PMID:23014760

  12. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration by a couple of simplified examples.

  13. Adaptive Dynamic Bayesian Networks

    Energy Technology Data Exchange (ETDEWEB)

    Ng, B M

    2007-10-26

    A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.

  14. Bayesian analysis toolkit - BAT

    International Nuclear Information System (INIS)

    Statistical treatment of data is an essential part of any data analysis and interpretation. Different statistical methods and approaches can be used, however the implementation of these approaches is complicated and at times inefficient. The Bayesian analysis toolkit (BAT) is a software package developed in C++ framework that facilitates the statistical analysis of the data using Bayesian theorem. The tool evaluates the posterior probability distributions for models and their parameters using Markov Chain Monte Carlo which in turn provide straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as simulated annealing, allow extraction of the global mode of the posterior. BAT sets a well-tested environment for flexible model definition and also includes a set of predefined models for standard statistical problems. The package is interfaced to other software packages commonly used in high energy physics, such as ROOT, Minuit, RooStats and CUBA. We present a general overview of BAT and its algorithms. A few physics examples are shown to introduce the spectrum of its applications. In addition, new developments and features are summarized.

  15. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    OpenAIRE

    Desak Putu Eka Pratiwi

    2015-01-01

    A maxim is a principle that must be obeyed by all participants textually and interpersonally in order to have a smooth communication process. Conversational maxims are divided into four types, namely the maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner of speaking. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to his speaking partner. Violation of a maxim in a conversation will result in an awkward impression. Th...

  16. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    Directory of Open Access Journals (Sweden)

    Wei Ji Ma

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
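
    A toy version of the word-recognition model makes the dimensionality argument concrete: word prototypes are points in a d-dimensional feature space, the auditory and visual cues are the prototype corrupted by independent Gaussian noise, and recognition selects the word with the highest posterior. All numbers below are invented for illustration; only the structure follows the abstract.

      import numpy as np

      rng = np.random.default_rng(4)

      def recognition_rate(d, sigma_a, sigma_v=1.0, n_words=50, n_trials=2000):
          """Fraction of trials in which the posterior-maximizing word is the one spoken."""
          words = rng.standard_normal((n_words, d))            # word prototypes in feature space
          correct = 0
          for _ in range(n_trials):
              k = rng.integers(n_words)                        # spoken word
              x_a = words[k] + sigma_a * rng.standard_normal(d)   # noisy auditory cue
              x_v = words[k] + sigma_v * rng.standard_normal(d)   # noisy visual cue
              # Flat prior over words: log posterior = sum of the two Gaussian log-likelihoods
              log_post = (-np.sum((x_a - words) ** 2, axis=1) / (2 * sigma_a ** 2)
                          - np.sum((x_v - words) ** 2, axis=1) / (2 * sigma_v ** 2))
              correct += int(np.argmax(log_post) == k)
          return correct / n_trials

      for sigma_a in (0.5, 2.0, 8.0):                          # low, moderate, high auditory noise
          print(sigma_a, recognition_rate(d=2, sigma_a=sigma_a),
                recognition_rate(d=20, sigma_a=sigma_a))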

  17. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    We argue here why CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ=90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violation phase Φ takes the value Φ=90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that CP violating Phase Φ is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)

  18. Implications of maximal Jarlskog invariant and maximal CP violation

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Jauregui, E. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Universidad Nacional Autonoma de Mexico (UNAM), Cuernavaca (Mexico). Inst. de Fisica

    2001-04-01

    We argue here why CP violating phase {phi} in the quark mixing matrix is maximal, that is, {phi}=90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase {phi}. We find that all squared moduli of the quark mixing elements have a singular point when the CP violation phase {phi} takes the value {phi}=90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that CP violating Phase {phi} is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)

  19. Principles of maximally classical and maximally realistic quantum mechanics

    Indian Academy of Sciences (India)

    S M Roy

    2002-08-01

    Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, a maximally realistic quantum mechanics can have quantum probabilities of no more than N + 1 complete commuting sets (CCS) of observables coexisting as marginals of one positive phase space density. Here I formulate a stationary principle which gives a nonperturbative definition of a maximally classical as well as maximally realistic phase space density. I show that the maximally classical trajectories are in fact exactly classical in the simple examples of coherent states and bound states of an oscillator and Gaussian free particle states. In contrast, it is known that the de Broglie–Bohm realistic theory gives highly nonclassical trajectories.

  20. Implications of maximal Jarlskog invariant and maximal CP violation

    CERN Document Server

    Rodríguez-Jáuregui, E

    2001-01-01

    We argue here why CP violating phase Phi in the quark mixing matrix is maximal, that is, Phi=90 degrees. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Phi. We find that all squared moduli of the quark mixing elements have a singular point when the CP violation phase Phi takes the value Phi=90 degrees. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that CP violating Phase Phi is maximal in order to let nature treat democratically all of the quark mixing matrix moduli.

  1. Goal! Profit maximization and win maximization in football leagues

    OpenAIRE

    Pedro Garcia-del-Barrio; Stefan Szymanski

    2006-01-01

    In this paper we estimate the best responses of football clubs to the choices of other clubs in Spanish and English leagues over the period 1994-2004. We find that choices are more closely approximated by win maximization than by profit maximization in both the short term and the long term. We examine club characteristics that might explain variations in choices between Spanish clubs.

  2. Book review: Bayesian analysis for population ecology

    Science.gov (United States)

    Link, William A.

    2011-01-01

    Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)

  3. Maximal superintegrability of Benenti systems

    International Nuclear Information System (INIS)

    For a class of Hamiltonian systems, naturally arising in the modern theory of separation of variables, we establish their maximal superintegrability by explicitly constructing the additional integrals of motion. (letter to the editor)

  4. Maximal flow in possibilistic networks

    International Nuclear Information System (INIS)

    This study addresses the determination of maximal flow in possibilistic networks with multiple sinks. To do so, a time-continuous optimization problem is formulated and an effective approach to solve it is introduced.

  5. On the maximal diphoton width

    Science.gov (United States)

    Salvio, Alberto; Staub, Florian; Strumia, Alessandro; Urbano, Alfredo

    2016-03-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  6. Penrose limits and maximal supersymmetry

    International Nuclear Information System (INIS)

    We show that the maximally supersymmetric pp-wave of IIB superstring and M-theories can be obtained as a Penrose limit of the supersymmetric AdSxS solutions. In addition, we find that in a certain large tension limit, the geometry seen by a brane probe in an AdSxS background is either Minkowski space or a maximally supersymmetric pp-wave. (letter to the editor)

  7. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\\gamma\\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  8. Maximal Flavor Violation in Super-GUTs

    CERN Document Server

    Ellis, John; Velasco-Sevilla, Liliana

    2016-01-01

    We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses $m_0$ specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses $m_{1/2}$, as is expected in no-scale models, the dominant effects of renormalization between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to $m_{1/2}$ and generation-independent. In this case, the input scalar masses $m_0$ may violate flavour maximally, a scenario we call MaxFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.

  9. Stock Price Booms and Expected Capital Gains

    OpenAIRE

    Adam, Klaus; Beutel, Johannes; Marcet, Albert

    2014-01-01

    The booms and busts in U.S. stock prices over the post-war period can to a large extent be explained by fluctuations in investors' subjective capital gains expectations. Survey measures of these expectations display excessive optimism at market peaks and excessive pessimism at market troughs. Formally incorporating subjective price beliefs into an otherwise standard asset pricing model with utility maximizing investors, we show how subjective belief dynamics can temporarily delink stock price...

  10. Maximization of Tsallis entropy in the combinatorial formulation

    International Nuclear Information System (INIS)

    This paper presents the mathematical reformulation for maximization of the Tsallis entropy $S_q$ in the combinatorial sense. More concretely, we generalize the original derivation of the Maxwell-Boltzmann distribution law to Tsallis statistics by means of the corresponding generalized multinomial coefficient. Our results reveal that maximization of $S_{2-q}$ under the usual expectation, or of $S_q$ under the $q$-average using the escort expectation, are naturally derived from the combinatorial formulations for Tsallis statistics with respective combinatorial dualities, that is, one for additive duality and the other for multiplicative duality.
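
    For reference, and using standard definitions rather than anything reproduced from the paper: the Tsallis entropy of a distribution $p_i$ is $S_q = (1 - \sum_i p_i^q)/(q-1)$, which recovers the Shannon entropy $-\sum_i p_i \ln p_i$ in the limit $q \to 1$; the escort ($q$-average) expectation of an observable $E$ is $\langle E \rangle_q = \sum_i p_i^q E_i / \sum_j p_j^q$. Maximizing $S_q$ subject to such a constraint replaces the Boltzmann exponential by a $q$-exponential form, $p_i \propto [1-(1-q)\beta' E_i]^{1/(1-q)}$ for an effective inverse temperature $\beta'$, which reduces to $e^{-\beta E_i}$ as $q \to 1$.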

  11. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  12. Competitive equilibrium hyperinflation under rational expectations

    OpenAIRE

    Sallum, Elvia Mureb; Barbosa, Fernando de Holanda; Cunha, Alexandre Barros da

    2005-01-01

    This paper shows that a competitive equilibrium model, where a representative agent maximizes welfare, expectations are rational and markets are in equilibrium, can account for several stylized facts of hyperinflation. The theory is built by combining two hypotheses, namely, a fiscal crisis that requires printing money to finance an increasing public deficit and a predicted change in an unsustainable fiscal regime.

  13. Continuous expected utility for arbitrary state spaces

    NARCIS (Netherlands)

    P.P. Wakker

    1985-01-01

    Subjective expected utility maximization with continuous utility is characterized, extending the result of Wakker (1984, Journal of Mathematical Psychology) to infinite state spaces. In Savage (1954, The Foundations of Statistics) the main restriction, P6, requires structure for the state space, e.g

  14. A localization model to localize multiple sources using Bayesian inference

    Science.gov (United States)

    Dunham, Joshua Rolv

    Accurate localization of a sound source in a room setting is important in both psychoacoustics and architectural acoustics. Binaural models have been proposed to explain how the brain processes and utilizes the interaural time differences (ITDs) and interaural level differences (ILDs) of sound waves arriving at the ears of a listener in determining source location. Recent work shows that applying Bayesian methods to this problem is proving fruitful. In this thesis, pink noise samples are convolved with head-related transfer functions (HRTFs) and compared to combinations of one and two anechoic speech signals convolved with different HRTFs or binaural room impulse responses (BRIRs) to simulate room positions. Through exhaustive calculation of Bayesian posterior probabilities and a maximum likelihood approach, model selection determines the number of sources present, and parameter estimation yields the azimuthal direction of the source(s).
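
    The model-selection step can be illustrated with a heavily stripped-down example: instead of full HRTF comparisons, suppose the data are interaural-time-difference estimates in a handful of frequency bands, and compare the marginal likelihoods of a one-source and a two-source model by summing over a grid of candidate azimuths. Everything below (the sine ITD model, noise level and azimuths) is an assumption made for the sketch.

      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(5)

      def itd(az_deg, max_itd=0.66e-3):
          """Toy interaural time difference (seconds) as a function of azimuth (degrees)."""
          return max_itd * np.sin(np.deg2rad(az_deg))

      sigma = 5e-5                          # assumed noise on each band's ITD estimate
      grid = np.linspace(-90.0, 90.0, 181)  # candidate azimuths, flat prior

      # Simulated observation: ITDs in 8 bands produced by two sources at -40 and +25 degrees.
      bands = rng.choice(np.array([-40.0, 25.0]), size=8)
      obs = itd(bands) + sigma * rng.standard_normal(8)

      def log_lik_one(az):
          return np.sum(-0.5 * ((obs - itd(az)) / sigma) ** 2)

      def log_lik_two(az1, az2):
          # Each band is attributed to either source with probability 1/2 (a mixture).
          l1 = -0.5 * ((obs - itd(az1)) / sigma) ** 2
          l2 = -0.5 * ((obs - itd(az2)) / sigma) ** 2
          return np.sum(np.logaddexp(l1, l2) - np.log(2.0))

      # Marginal likelihoods under flat priors over the azimuth grid(s).
      log_ev1 = logsumexp([log_lik_one(a) for a in grid]) - np.log(grid.size)
      log_ev2 = logsumexp([[log_lik_two(a1, a2) for a2 in grid] for a1 in grid]) - 2 * np.log(grid.size)
      print("log Bayes factor (two sources vs one):", log_ev2 - log_ev1)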

  15. Bayesian Calibration of Generalized Pools of Predictive Distributions

    Directory of Open Access Journals (Sweden)

    Roberto Casarin

    2016-03-01

    Decision-makers often consult different experts to build reliable forecasts on variables of interest. Combining more opinions and calibrating them to maximize the forecast accuracy is consequently a crucial issue in several economic problems. This paper applies a Bayesian beta mixture model to derive a combined and calibrated density function using random calibration functionals and random combination weights. In particular, it compares the application of linear, harmonic and logarithmic pooling in the Bayesian combination approach. The three combination schemes, i.e., linear, harmonic and logarithmic, are studied in simulation examples with multimodal densities and an empirical application with a large database of stock data. All of the experiments show that in a beta mixture calibration framework, the three combination schemes are substantially equivalent, achieving calibration, and no clear preference for one of them appears. The financial application shows that the linear pooling together with beta mixture calibration achieves the best results in terms of calibrated forecast.
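
    The three pooling schemes have simple closed forms, which the following Python fragment evaluates on a discretized support before any beta-mixture calibration step; the two expert densities and the weights are invented for illustration.

      import numpy as np
      from scipy import stats

      x = np.linspace(-6.0, 6.0, 1201)

      # Two expert predictive densities and combination weights (illustrative values).
      f1 = stats.norm(-1.0, 1.0).pdf(x)
      f2 = stats.norm(2.0, 0.7).pdf(x)
      w1, w2 = 0.6, 0.4

      def normalize(g):
          return g / np.trapz(g, x)

      linear = normalize(w1 * f1 + w2 * f2)              # linear opinion pool
      logarithmic = normalize(f1 ** w1 * f2 ** w2)       # logarithmic (geometric) pool
      harmonic = normalize(1.0 / (w1 / f1 + w2 / f2))    # harmonic pool

      for name, g in [("linear", linear), ("logarithmic", logarithmic), ("harmonic", harmonic)]:
          print(f"{name:12s} combined mean {np.trapz(x * g, x):+.3f}")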

  16. Bayesian image reconstruction for emission tomography based on median root prior

    International Nuclear Information System (INIS)

    The aim of the present study was to investigate a new type of Bayesian one-step-late reconstruction method which utilizes a median root prior (MRP). The method favours images which have locally monotonous radioactivity concentrations. The new reconstruction algorithm was applied to ideal simulated data, phantom data and some patient examinations with PET. The same projection data were reconstructed with filtered back-projection (FBP) and maximum likelihood-expectation maximization (ML-EM) methods for comparison. The MRP method provided good-quality images with a similar resolution to the FBP method with a ramp filter, and at the same time the noise properties were as good as with Hann-filtered FBP images. The typical artefacts seen in FBP reconstructed images outside of the object were completely removed, as was the grainy noise inside the object. Quantitatively, the resulting average regional radioactivity concentrations in a large region of interest in images produced by the MRP method corresponded to the FBP and ML-EM results, but at the pixel-by-pixel level the MRP method proved to be the most accurate of the tested methods. In contrast to other iterative reconstruction methods, e.g. ML-EM, the MRP method was not sensitive to the number of iterations nor to the adjustment of reconstruction parameters. Only the Bayesian parameter β had to be set. The proposed MRP method is much simpler to compute than the methods described previously, both with regard to the parameter settings and in terms of general use. The new MRP reconstruction method was shown to produce high-quality quantitative emission images with only one parameter setting in addition to the number of iterations. (orig.)
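
    The update rule behind the method is short: a standard ML-EM step is computed and then divided by a one-step-late penalty that compares each pixel with the median of its neighbourhood, so locally monotonous regions are not penalized. The 1-D Python sketch below is only an illustration of that structure, with an assumed Gaussian-blur system matrix, neighbourhood size and β, not the authors' PET implementation.

      import numpy as np
      from scipy.ndimage import median_filter

      rng = np.random.default_rng(6)

      # Toy 1-D emission problem: blurring system matrix A and Poisson projection data y.
      n = 64
      truth = np.zeros(n); truth[20:30] = 5.0; truth[40:55] = 2.0
      A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
      A /= A.sum(axis=1, keepdims=True)
      y = rng.poisson(A @ truth)

      beta = 0.3                       # strength of the median root prior (assumed)
      lam = np.ones(n)                 # initial image
      sens = A.sum(axis=0)             # sensitivity image

      for _ in range(50):
          lam_em = lam * (A.T @ (y / np.maximum(A @ lam, 1e-12))) / sens   # ML-EM step
          med = np.maximum(median_filter(lam, size=5), 1e-12)              # local median
          lam = lam_em / (1.0 + beta * (lam - med) / med)                  # one-step-late MRP correction

      print("reconstruction RMSE:", np.sqrt(np.mean((lam - truth) ** 2)))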

  17. Bayesian Methods and Universal Darwinism

    OpenAIRE

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of...

  18. Portfolio Allocation for Bayesian Optimization

    OpenAIRE

    Brochu, Eric; Hoffman, Matthew W.; De Freitas, Nando

    2010-01-01

    Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several differen...

  19. Neuroanatomy, neurology and Bayesian networks

    OpenAIRE

    Bielza Lozoya, Maria Concepcion

    2014-01-01

    Bayesian networks are data mining models with clear semantics and a sound theoretical foundation. In this keynote talk we will pinpoint a number of neuroscience problems that can be addressed using Bayesian networks. In neuroanatomy, we will show computer simulation models of dendritic trees and classification of neuron types, both based on morphological features. In neurology, we will present the search for genetic biomarkers in Alzheimer's disease and the prediction of health-related qualit...

  20. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended...

  1. Bayesian Interpretations of Heteroskedastic Consistent Covariance Estimators Using the Informed Bayesian Bootstrap

    OpenAIRE

    Dale Poirier

    2008-01-01

    This paper provides Bayesian rationalizations for White’s heteroskedastic consistent (HC) covariance estimator and various modifications of it. An informed Bayesian bootstrap provides the statistical framework.

  2. Distinguishability of maximally entangled states

    International Nuclear Information System (INIS)

    In 2x2, more than two orthogonal Bell states with a single copy can never be discriminated with certainty if only local operations and classical communication (LOCC) are allowed. We show here that more than d numbers of pairwise orthogonal maximally entangled states in dxd, which are in canonical form, used by Bennett et al. [Phys. Rev. Lett. 70, 1895 (1993)], can never be discriminated with certainty by LOCC, when single copies of the states are provided. Interestingly we show here that all orthogonal maximally entangled states, which are in canonical form, can be discriminated with certainty by LOCC if and only if two copies of each of the states are provided. We provide here a conjecture regarding the highly nontrivial problem of local distinguishability of any d or fewer numbers of pairwise orthogonal maximally entangled states in dxd (in the single copy case)

  3. Bayesian Networks as a Decision Tool for O&M of Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Nielsen, Jannie Jessen; Sørensen, John Dalsgaard

    2010-01-01

    Costs of operation and maintenance (O&M) of offshore wind turbines are large. This paper presents how influence diagrams can be used to assist in rational decision making for O&M. An influence diagram is a graphical representation of a decision tree based on Bayesian Networks. Bayesian Networks offer efficient Bayesian updating of a damage model when imperfect information from inspections/monitoring is available. The extension to an influence diagram offers the calculation of expected utilities for decision alternatives, and can be used to find the optimal strategy among different alternatives...

  4. Expectations, Life Expectancy, and Economic Behavior

    OpenAIRE

    Hamermesh, Daniel S.

    1982-01-01

    Unlike price expectations, which are central to macroeconomic theory and have been examined extensively using survey data, the formation of individuals' horizons, which is central to the theory of life-cycle behavior, has been completely neglected. This is especially surprising since life expectancy of adults has increased especially rapidly in Western countries in the past ten years. This study presents the results of analyzing responses by two groups--economists and a random sample--to a ques...

  5. Inflation Expectations in Nepal

    OpenAIRE

    T.P. Koirala Ph.D.

    2008-01-01

    There is a significant positive relationship between inflation and inflation expectations in Nepal, where the latter variable has been generated under the Adaptive Expectation Hypothesis (AEH). Using 33 annual observations of actual inflation from 1973 to 2006, a one percent increase in inflation expectations has a 0.83 percent impact on contemporaneous inflation. The forecastability of inflation expectations on current inflation is higher than that of the expected inflation proxied by one-period la...

  6. Nonparametric Bayesian Classification

    CERN Document Server

    Coram, M A

    2002-01-01

    A Bayesian approach to the classification problem is proposed in which random partitions play a central role. It is argued that the partitioning approach has the capacity to take advantage of a variety of large-scale spatial structures, if they are present in the unknown regression function $f_0$. An idealized one-dimensional problem is considered in detail. The proposed nonparametric prior uses random split points to partition the unit interval into a random number of pieces. This prior is found to provide a consistent estimate of the regression function in the $L^p$ topology, for any $1 \\leq p < \\infty$, and for arbitrary measurable $f_0:[0,1] \\rightarrow [0,1]$. A Markov chain Monte Carlo (MCMC) implementation is outlined and analyzed. Simulation experiments are conducted to show that the proposed estimate compares favorably with a variety of conventional estimators. A striking resemblance between the posterior mean estimate and the bagged CART estimate is noted and discussed. For higher dimensions, a ...

  7. BAT - Bayesian Analysis Toolkit

    International Nuclear Information System (INIS)

    One of the most vital steps in any data analysis is the statistical analysis and comparison with the prediction of a theoretical model. The many uncertainties associated with the theoretical model and the observed data require a robust statistical analysis tool. The Bayesian Analysis Toolkit (BAT) is a powerful statistical analysis software package based on Bayes' Theorem, developed to evaluate the posterior probability distribution for models and their parameters. It implements Markov Chain Monte Carlo to get the full posterior probability distribution that in turn provides a straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as Simulated Annealing, allow evaluation of the global mode of the posterior. BAT is developed in C++ and allows for a flexible definition of models. A set of predefined models covering standard statistical cases is also included in BAT. It has been interfaced to other commonly used software packages such as ROOT, Minuit, RooStats and CUBA. An overview of the software and its algorithms is provided along with several physics examples to cover a range of applications of this statistical tool. Future plans, new features and recent developments are briefly discussed.

  8. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...... grid nodes and grid-node artifacts and the method accommodates a wide range of grid distortions including: large-scale warping, varying row/column spacing, as well as nonrigid random fluctuations of the grid nodes. The methodology is demonstrated in two case studies concerning (1) localization of DNA...

  9. Bayesian estimation of isotopic age differences

    International Nuclear Information System (INIS)

    Isotopic dating is subject to uncertainties arising from counting statistics and experimental errors. These uncertainties are additive when an isotopic age difference is calculated. If large, they can lead to no significant age difference by classical statistics. In many cases, relative ages are known because of stratigraphic order or other clues. Such information can be used to establish a Bayes estimate of age difference which will include prior knowledge of age order. Age measurement errors are assumed to be log-normal and a noninformative but constrained bivariate prior for two true ages in known order is adopted. True-age ratio is distributed as a truncated log-normal variate. Its expected value gives an age-ratio estimate, and its variance provides credible intervals. Bayesian estimates of ages are different and in correct order even if measured ages are identical or reversed in order. For example, age measurements on two samples might both yield 100 ka with coefficients of variation of 0.2. Bayesian estimates are 22.7 ka for age difference with a 75% credible interval of [4.4, 43.7] ka

  10. Bayesian estimation of isotopic age differences

    Energy Technology Data Exchange (ETDEWEB)

    Curl, R.L.

    1988-08-01

    Isotopic dating is subject to uncertainties arising from counting statistics and experimental errors. These uncertainties are additive when an isotopic age difference is calculated. If large, they can lead to no significant age difference by classical statistics. In many cases, relative ages are known because of stratigraphic order or other clues. Such information can be used to establish a Bayes estimate of age difference which will include prior knowledge of age order. Age measurement errors are assumed to be log-normal and a noninformative but constrained bivariate prior for two true ages in known order is adopted. True-age ratio is distributed as a truncated log-normal variate. Its expected value gives an age-ratio estimate, and its variance provides credible intervals. Bayesian estimates of ages are different and in correct order even if measured ages are identical or reversed in order. For example, age measurements on two samples might both yield 100 ka with coefficients of variation of 0.2. Bayesian estimates are 22.7 ka for age difference with a 75% credible interval of (4.4, 43.7) ka.
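
    The construction can be mimicked with a short Monte Carlo sketch: true ages receive a diffuse prior constrained to the known order, the measured ages are log-normal around the true ages, and the posterior of the age difference is summarized by its mean and a 75% credible interval. The Python fragment below uses the numbers quoted in the abstract (two measurements of 100 ka with a coefficient of variation of 0.2) but an assumed flat prior range, so it only approximates, and need not reproduce, the 22.7 ka figure.

      import numpy as np

      rng = np.random.default_rng(7)

      m1 = m2 = 100.0                          # measured ages (ka)
      cv = 0.2
      sigma = np.sqrt(np.log(1.0 + cv ** 2))   # log-normal shape parameter from the CV

      # Diffuse prior on the true ages over an assumed range, constrained to t2 >= t1.
      n = 500_000
      t1 = rng.uniform(1.0, 500.0, n)
      t2 = rng.uniform(1.0, 500.0, n)
      t1, t2 = t1[t2 >= t1], t2[t2 >= t1]

      def log_lik(measured, true):
          # Measurement model: log(measured) ~ N(log(true), sigma^2)
          return -0.5 * ((np.log(measured) - np.log(true)) / sigma) ** 2

      w = np.exp(log_lik(m1, t1) + log_lik(m2, t2))
      w /= w.sum()

      diff = t2 - t1
      order = np.argsort(diff)
      cdf = np.cumsum(w[order])
      lo, hi = np.interp([0.125, 0.875], cdf, diff[order])       # central 75% interval
      print(f"posterior mean difference {np.sum(w * diff):.1f} ka, 75% interval [{lo:.1f}, {hi:.1f}] ka")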

  11. Joint essential maximal numerical range

    International Nuclear Information System (INIS)

    The notion of essential maximal numerical range of a single operator was introduced by Fong and was further studied by Khan. The aim of this paper is to generalize this notion to an n-tuple of operators and prove certain results analogous to the single operator case. (author). 7 refs

  12. Case studies in Bayesian microbial risk assessments

    Directory of Open Access Journals (Sweden)

    Turner Joanne

    2009-12-01

    Full Text Available Abstract Background The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. Methods We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). Results We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0, 11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0, 11). In the second
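
    A minimal sketch of the generic Monte Carlo uncertainty-propagation pattern used in such assessments; every distribution and the exponential dose-response model below are placeholder assumptions, not the case study's fitted farm, pasteurisation or consumption models:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100000

      # Uncertain inputs (placeholder distributions, not the case-study values).
      concentration = rng.lognormal(mean=-2.0, sigma=1.0, size=n)    # organisms per ml
      serving_ml = rng.normal(200.0, 50.0, size=n).clip(min=0.0)     # consumption per event
      r = rng.beta(2.0, 2000.0, size=n)                              # dose-response parameter

      dose = concentration * serving_ml
      p_ill = 1.0 - np.exp(-r * dose)  # exponential dose-response model

      print("mean risk per serving:", p_ill.mean())
      print("95% uncertainty interval:", np.quantile(p_ill, [0.025, 0.975]))

    Policy scenarios would then be compared by rerunning the same propagation with the input distributions altered to reflect each hypothetical exposure change.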

  13. International Portfolio Diversification with Generalized Expected Utility Preferences

    OpenAIRE

    Joshua Aizenman

    1997-01-01

    This paper revisits the Home Bias Puzzle -- the relatively low international diversification of portfolios. We suggest that part of the diversification puzzle may be due to reliance on the conventional CAPM model as the benchmark predicting patterns of diversification. We compare the asset diversification patterns of agents who maximize a generalized expected utility (GEU) to the diversification of agents who maximize the conventional expected utility (EU). Specifically, we derive the p...

  14. Complexity analysis of accelerated MCMC methods for Bayesian inversion

    Science.gov (United States)

    Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M.

    2013-08-01

    The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the

  15. Bayesian individualization via sampling-based methods.

    Science.gov (United States)

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
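
    A toy sketch of the decision-theoretic recipe described above: draw posterior samples of an individual pharmacokinetic parameter, evaluate a loss for each candidate dosage regimen by Monte Carlo, and pick the minimizer. The one-compartment steady-state model, quadratic loss and all numbers here are assumptions for illustration only, not the paper's population model or loss function:

      import numpy as np

      rng = np.random.default_rng(0)

      # Posterior samples of the patient's clearance (L/h); in practice these would
      # come from a Bayesian fit to the sparse concentration measurements.
      clearance = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=5000)

      target = 10.0                       # desired steady-state concentration (mg/L)
      doses = np.linspace(200, 2000, 50)  # candidate doses (mg per dosing interval)
      tau = 12.0                          # dosing interval (h)

      def expected_loss(dose):
          css = dose / (clearance * tau)       # steady-state average concentration
          return np.mean((css - target) ** 2)  # quadratic loss, averaged over the posterior

      best = min(doses, key=expected_loss)
      print("dose minimizing expected loss:", best)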

  16. Safety culture in Bayesian and legal contexts

    International Nuclear Information System (INIS)

    While contemplating the similarities between the law of torts and concepts of safety, the author realized that there was a close correspondence between the law of negligence and the way safety ought to be generally defined. This definition of safety is provided herein. A safety culture must have an adequate definition of safety in order to function most effectively. This paper provides a practical definition of safety that answers the question 'How safe is safe enough?' The development rests on two bases: the subjectivistic-Bayesian definition of probability and certain legal definitions primarily from the tort law of negligence. The development also leads to the conclusion that one cannot generally expect greater specificity in determining how safe is safe enough than one finds in the legal definition of liability under the tort of negligence. It then follows that some of the public's aversion to complex technical undertakings is rooted in its typically intuitive and vague notions concerning safety

  17. Sublinear expectation linear regression

    OpenAIRE

    Lin, Lu; Shi, Yufeng; Wang, Xin; Yang, Shuzhen

    2013-01-01

    Nonlinear expectation, including sublinear expectation as its special case, is a new and original framework of probability theory and has potential applications in some scientific fields, especially in finance risk measure and management. Under the nonlinear expectation framework, however, the related statistical models and statistical inferences have not yet been well established. The goal of this paper is to construct the sublinear expectation regression and investigate its statistical infe...

  18. Promises and Expectations

    OpenAIRE

    Florian Ederer; Alexander Stremitzer

    2014-01-01

    We investigate why people keep their promises in the absence of external enforcement mechanisms and reputational effects. In a controlled laboratory experiment we show that exogenous variation of second-order expectations (promisors' expectations about promisees' expectations that the promise will be kept) leads to a significant change in promisor behavior. We provide clean evidence that a promisor's aversion to disappointing a promisee's expectation leads her to keep her promise. We propose ...

  19. Bayesian modeling using WinBUGS

    CERN Document Server

    Ntzoufras, Ioannis

    2009-01-01

    A hands-on introduction to the principles of Bayesian modeling using WinBUGS Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov Chain Monte Carlo algorithms in Bayesian inference Generalized linear models Bayesian hierarchical models Predictive distribution and model checking Bayesian model and variable evaluation Computational notes and screen captures illustrate the use of both WinBUGS as well as R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...

  20. Semi-Supervised Bayesian Classification of Materials with Impact-Echo Signals

    Directory of Open Access Journals (Sweden)

    Jorge Igual

    2015-05-01

    Full Text Available The detection and identification of internal defects in a material require the use of some technology that translates the hidden interior damages into observable signals with different signature-defect correspondences. We apply impact-echo techniques for this purpose. The materials are classified according to their defective status (homogeneous, one defect or multiple defects and kind of defect (hole or crack, passing through or not. Every specimen is impacted by a hammer, and the spectrum of the propagated wave is recorded. This spectrum is the input data to a Bayesian classifier that is based on the modeling of the conditional probabilities with a mixture of Gaussians. The parameters of the Gaussian mixtures and the class probabilities are estimated using an extended expectation-maximization algorithm. The advantage of our proposal is that it is flexible, since it obtains good results for a wide range of models even under little supervision; e.g., it obtains a harmonic average of precision and recall value of 92.38% given only a 10% supervision ratio. We test the method with real specimens made of aluminum alloy. The results show that the algorithm works very well. This technique could be applied in many industrial problems, such as the optimization of the marble cutting process.
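
    The record relies on an extended expectation-maximization algorithm for Gaussian mixtures; as a plain, unsupervised 1-D sketch of the E-step/M-step alternation that such classifiers build on (not the paper's semi-supervised extension, and on synthetic data rather than impact-echo spectra):

      import numpy as np

      def em_gmm_1d(x, k=2, n_iter=100, seed=0):
          """Plain expectation-maximization for a 1-D Gaussian mixture."""
          rng = np.random.default_rng(seed)
          mu = rng.choice(x, size=k, replace=False)
          var = np.full(k, x.var())
          w = np.full(k, 1.0 / k)
          for _ in range(n_iter):
              # E-step: responsibilities of each component for each point.
              dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
              resp = dens / dens.sum(axis=1, keepdims=True)
              # M-step: update weights, means and variances from the responsibilities.
              nk = resp.sum(axis=0)
              w = nk / len(x)
              mu = (resp * x[:, None]).sum(axis=0) / nk
              var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
          return w, mu, var

      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 0.5, 200)])
      print(em_gmm_1d(x))

    A classifier in this spirit would fit one such mixture per class-conditional density and assign a spectrum to the class with the highest posterior probability.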

  1. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre C. R. Martins

    2006-11-01

    Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.

  2. Bayesian Methods and Universal Darwinism

    CERN Document Server

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...

  3. Bayesian methods for proteomic biomarker development

    Directory of Open Access Journals (Sweden)

    Belinda Hernández

    2015-12-01

    In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.

  4. Do New York Dairy Farmers Maximize Profits?

    OpenAIRE

    Tauer, Loren W.

    1994-01-01

    Varian's Weak Axiom of Profit Maximization was used to determine whether each of 49 New York dairy farms displayed behavior consistent with profit maximization. The results indicate that most were only moderately successful in maximizing profits. Characteristics of the farms did not strongly differentiate those that were better at maximizing profits.

  5. Note on maximal distance separable codes

    Institute of Scientific and Technical Information of China (English)

    YANG; Jian-sheng; WANG; De-xiu; JIN; Qing-fang

    2009-01-01

    In this paper, the maximal length of maximal distance separable (MDS) codes is studied, and a new upper bound formula for the maximal length of MDS codes is obtained. In particular, the exact values of the maximal length of MDS codes for some parameters are given.

  6. Experimental design for efficient identification of gene regulatory networks using sparse Bayesian models

    Directory of Open Access Journals (Sweden)

    Tsuda Koji

    2007-11-01

    Full Text Available Abstract Background Identifying large gene regulatory networks is an important task, while the acquisition of data through perturbation experiments (e.g., gene switches, RNAi, heterozygotes) is expensive. It is thus desirable to use an identification method that effectively incorporates available prior knowledge – such as sparse connectivity – and that allows experiments to be designed such that maximal information is gained from each one. Results Our main contributions are twofold: a method for consistent inference of network structure is provided, incorporating prior knowledge about sparse connectivity. The algorithm is time efficient and robust to violations of model assumptions. Moreover, we show how to use it for optimal experimental design, reducing the number of required experiments substantially. We employ sparse linear models, and show how to perform full Bayesian inference for these. We not only estimate a single maximum likelihood network, but compute a posterior distribution over networks, using a novel variant of the expectation propagation method. The representation of uncertainty enables us to do effective experimental design in a standard statistical setting: experiments are selected so that they are maximally informative. Conclusion Few methods have addressed the design issue so far. Compared to the most well-known one, our method is more transparent, and is shown to perform qualitatively better. In the former, hard and unrealistic constraints have to be placed on the network structure for mere computational tractability, while no such constraints are required in our method. We demonstrate reconstruction and optimal experimental design capabilities on tasks generated from realistic non-linear network simulators. The methods described in the paper are available as a Matlab package at http://www.kyb.tuebingen.mpg.de/sparselinearmodel.
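
    The paper's method uses sparse priors and expectation propagation; as a much simpler sketch of the "select the most informative next experiment" idea, a conjugate Gaussian (ridge-like) Bayesian linear model can score candidate experiments by their current posterior predictive variance. The prior strength, noise level and synthetic data below are assumptions, and the variance criterion is a generic optimal-design heuristic rather than the paper's procedure:

      import numpy as np

      def posterior(X, y, alpha=1.0, noise=0.1):
          """Gaussian posterior over weights for a linear model with a N(0, I/alpha) prior."""
          d = X.shape[1]
          precision = alpha * np.eye(d) + X.T @ X / noise
          cov = np.linalg.inv(precision)
          mean = cov @ X.T @ y / noise
          return mean, cov

      def most_informative(candidates, cov):
          """Pick the candidate input whose prediction is currently most uncertain."""
          pred_var = np.einsum("ij,jk,ik->i", candidates, cov, candidates)
          return int(np.argmax(pred_var))

      rng = np.random.default_rng(0)
      X = rng.normal(size=(5, 3))              # experiments performed so far
      y = X @ np.array([1.0, 0.0, -2.0]) + 0.1 * rng.normal(size=5)
      candidates = rng.normal(size=(20, 3))    # possible next perturbation experiments

      mean, cov = posterior(X, y)
      print("next experiment index:", most_informative(candidates, cov))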

  7. Bayesian test and Kuhn's paradigm

    Institute of Scientific and Technical Information of China (English)

    Chen Xiaoping

    2006-01-01

    Kuhn's theory of paradigm reveals a pattern of scientific progress, in which normal science alternates with scientific revolution. But Kuhn greatly underrated the function of scientific testing in his pattern, because he focuses all his attention on the hypothetico-deductive schema instead of the Bayesian schema. This paper employs the Bayesian schema to re-examine Kuhn's theory of paradigm, to uncover its logical and rational components, and to illustrate the tensional structure of logic and belief, rationality and irrationality, in the process of scientific revolution.

  8. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  9. Natural selection maximizes Fisher information

    OpenAIRE

    Frank, Steven A.

    2009-01-01

    In biology, information flows from the environment to the genome by the process of natural selection. But it has not been clear precisely what sort of information metric properly describes natural selection. Here, I show that Fisher information arises as the intrinsic metric of natural selection and evolutionary dynamics. Maximizing the amount of Fisher information about the environment captured by the population leads to Fisher's fundamental theorem of natural selection, the most profound st...

  10. Supply Theory sans Profit Maximization

    OpenAIRE

    Dasgupta, Indraneel

    2009-01-01

    We utilize the analytical construct of a stochastic supply function to provide an aggregate representation of a finite collection of standard deterministic supply functions. We introduce a consistency postulate for a stochastic supply function that may be satisfied even if no underlying deterministic supply function is rationalizable in terms of profit maximization. Our consistency postulate is nonetheless equivalent to a stochastic expansion of supply inequality, which summarizes the predict...

  11. Strategy to maximize maintenance operation

    OpenAIRE

    Espinoza, Michael

    2005-01-01

    This project presents a strategic analysis to maximize maintenance operations in Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving its overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...

  12. Approximate maximizers of intricacy functionals

    OpenAIRE

    Buzzi, Jerome; Zambotti, Lorenzo

    2009-01-01

    G. Edelman, O. Sporns, and G. Tononi introduced in theoretical biology the neural complexity of a family of random variables. This functional is a special case of intricacy, i.e., an average of the mutual information of subsystems whose weights have good mathematical properties. Moreover, its maximum value grows at a definite speed with the size of the system. In this work, we compute exactly this speed of growth by building "approximate maximizers" subject to an entropy condition. These appr...

  13. Profit maximization and supermodular technology

    OpenAIRE

    Christopher P. Chambers; Echenique, Federico

    2009-01-01

    A dataset is a list of observed factor inputs and prices for a technology; profits and production levels are unobserved. We obtain necessary and sufficient conditions for a dataset to be consistent with profit maximization under a monotone and concave revenue based on the notion of cyclic monotonicity. Our result implies that monotonicity and concavity cannot be tested, and that one cannot decide if a firm is competitive based on factor demands. We also introduce a condition, cyclic supermodu...

  14. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  15. Bayesian networks and food security - An introduction

    NARCIS (Netherlands)

    Stein, A.

    2004-01-01

    This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclical graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision sup

  16. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis; Wąsowski, Andrzej

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity computat... a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code.
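
    As a small supporting example of entropy for Markov chains (a textbook computation, not the record's Interval Markov Chain synthesis), the entropy rate of a fixed ergodic chain is the stationary-distribution-weighted entropy of its transition rows; the transition matrix below is made up:

      import numpy as np

      def entropy_rate(P):
          """Entropy rate (bits per step) of an ergodic Markov chain with transition matrix P."""
          # Stationary distribution: left eigenvector of P for eigenvalue 1.
          vals, vecs = np.linalg.eig(P.T)
          pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
          pi = pi / pi.sum()
          # Entropy of each transition row, with 0 * log 0 treated as 0.
          logP = np.log2(P, out=np.zeros_like(P), where=P > 0)
          row_entropy = -(P * logP).sum(axis=1)
          return float(pi @ row_entropy)

      P = np.array([[0.9, 0.1],
                    [0.5, 0.5]])
      print(entropy_rate(P))

    Maximizing such an entropy over all chains respecting interval constraints on P is the harder synthesis problem the record addresses.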

  17. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    C. Dimitrakakis

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more st

  18. Bayesian SAR imaging

    Science.gov (United States)

    Chen, Zhaofu; Tan, Xing; Xue, Ming; Li, Jian

    2010-04-01

    We introduce a maximum a posteriori (MAP) algorithm and a sparse learning via iterative minimization (SLIM) algorithm to synthetic aperture radar (SAR) imaging. Both MAP and SLIM are sparse signal recovery algorithms with excellent sidelobe suppression and high resolution properties. The former cyclically maximizes the a posteriori probability density function for a given sparsity promoting prior, while the latter cyclically minimizes a regularized least squares cost function. We show how MAP and SLIM can be adapted to the SAR imaging application and used to enhance the image quality. We evaluate the performance of MAP and SLIM using the simulated complex-valued backscattered data from a backhoe vehicle. The numerical results show that both MAP and SLIM satisfactorily suppress the sidelobes and yield higher resolution than the conventional matched filter or delay-and-sum (DAS) approach. MAP and SLIM outperform the widely used compressive sampling matching pursuit (CoSaMP) algorithm, which requires the delicate choice of user parameters. Compared with the recently developed iterative adaptive approach (IAA), MAP and SLIM are computationally more efficient, especially with the help of fast Fourier transform (FFT). Also, the a posteriori distribution given by the algorithms provides us with a basis for the analysis of the statistical properties of the SAR image pixels.
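
    MAP and SLIM are the specific algorithms described above; as a generic stand-in for sparsity-promoting regularized least squares in the same spirit (real-valued data here, whereas SAR backscatter is complex-valued, and with an l1 penalty rather than the paper's priors), an iterative soft-thresholding (ISTA) sketch on synthetic data:

      import numpy as np

      def ista(A, y, lam=0.1, n_iter=200):
          """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              grad = A.T @ (A @ x - y)
              z = x - grad / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
          return x

      rng = np.random.default_rng(0)
      A = rng.normal(size=(50, 200))
      x_true = np.zeros(200)
      x_true[[3, 70, 150]] = [2.0, -1.5, 1.0]
      y = A @ x_true + 0.01 * rng.normal(size=50)
      x_hat = ista(A, y)
      print("indices of largest recovered coefficients:", np.argsort(np.abs(x_hat))[-3:])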

  19. Bayesian Analysis of Experimental Data

    Directory of Open Access Journals (Sweden)

    Lalmohan Bhar

    2013-10-01

    Full Text Available Analysis of experimental data from a Bayesian point of view has been considered. Appropriate methodology has been developed for application to designed experiments. A Normal-Gamma distribution has been considered as the prior distribution. The developed methodology has been applied to real experimental data taken from long-term fertilizer experiments.

  20. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis Linda

    2006-01-01

    configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...

  1. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis

    configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed in...

  2. ANALYSIS OF BAYESIAN CLASSIFIER ACCURACY

    Directory of Open Access Journals (Sweden)

    Felipe Schneider Costa

    2013-01-01

    Full Text Available The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario utilized (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable-selection process based on the chi-squared test to verify the existence of dependence between variables in the data model, in order to identify the reasons which prevent a Bayesian network from providing good performance. A detailed analysis of the data is also proposed, unlike other existing work, as well as adjustments in the case of limit values between two adjacent classes. Furthermore, variable weights are used in the calculation of a posteriori probabilities, calculated with a mutual information function. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in error rate was observed. The naïve Bayesian network showed a drop in error rate from twenty-five percent to five percent, relative to the initial results of the classification process. In the hierarchical network, the error rate not only dropped by fifteen percent but the final result came to zero.
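
    A minimal Gaussian naive Bayes sketch with a chi-squared dependence check on a discretized feature, in the spirit of the variable-selection step described above; this is not the study's implementation, and the data, binning and helper names are invented for illustration:

      import numpy as np
      from scipy.stats import chi2_contingency

      def fit_gaussian_nb(X, y):
          """Per-class prior, mean and variance for a Gaussian naive Bayes model."""
          stats = {}
          for c in np.unique(y):
              Xc = X[y == c]
              stats[c] = (len(Xc) / len(X), Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
          return stats

      def predict(stats, X):
          scores = []
          for c, (prior, mu, var) in stats.items():
              logp = np.log(prior) - 0.5 * np.sum(
                  np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
              scores.append(logp)
          return np.array(list(stats))[np.argmax(np.array(scores), axis=0)]

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(2, 1, (100, 3))])
      y = np.array([0] * 100 + [1] * 100)

      # Chi-squared dependence check between a discretized feature and the class.
      binned = np.digitize(X[:, 0], np.quantile(X[:, 0], [0.25, 0.5, 0.75]))
      table = np.array([[np.sum((binned == b) & (y == c)) for b in range(4)] for c in (0, 1)])
      print("chi2 p-value for feature 0:", chi2_contingency(table)[1])

      stats = fit_gaussian_nb(X, y)
      print("training accuracy:", np.mean(predict(stats, X) == y))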

  3. Bayesian Agglomerative Clustering with Coalescents

    OpenAIRE

    Teh, Yee Whye; Daumé III, Hal; Roy, Daniel

    2009-01-01

    We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over others, and demonstrate our approach in document clustering and phylolinguistics.

  4. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new...

  5. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
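
    As a standard worked example of the maximum entropy principle reviewed above (a textbook illustration, not taken from the record itself): among densities on [0, \infty) with a fixed mean \mu, the entropy maximizer is the exponential distribution. The problem is

        \max_{p} \; -\int_0^\infty p(x)\ln p(x)\,dx
        \quad \text{subject to} \quad \int_0^\infty p(x)\,dx = 1, \qquad \int_0^\infty x\,p(x)\,dx = \mu .

    Stationarity of the Lagrangian gives \ln p(x) = -1 - \lambda_0 - \lambda_1 x, so p(x) \propto e^{-\lambda_1 x}; imposing the two constraints fixes p(x) = (1/\mu)\, e^{-x/\mu}, whose differential entropy is 1 + \ln\mu.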

  6. Householders’ Inflation Expectations

    OpenAIRE

    Andrea Brischetto; Gordon de Brouwer

    1999-01-01

    Inflation expectations have wide-reaching effects on the macroeconomy and are an important part of the transmission of monetary policy. This paper analyses the Melbourne Institute survey of householders’ inflation expectations. Householders’ average inflation expectations vary with personal characteristics. People with better access to information or more developed information-processing skills – such as professionals, those with more education, or older people – tend to have lower and more a...

  7. Reasoning about Expectation

    OpenAIRE

    Halpern, Joseph Y.; Pucella, Riccardo

    2014-01-01

    Expectation is a central notion in probability theory. The notion of expectation also makes sense for other notions of uncertainty. We introduce a propositional logic for reasoning about expectation, where the semantics depends on the underlying representation of uncertainty. We give sound and complete axiomatizations for the logic in the case that the underlying representation is (a) probability, (b) sets of probability measures, (c) belief functions, and (d) possibility measures. We show th...

  8. Expectations and Share Prices

    OpenAIRE

    Edwin J. Elton; Martin J. Gruber; Mustafa Gultekin

    1981-01-01

    It is generally believed that security prices are determined by expectations concerning firm and economic variables. Despite this belief there is very little research examining expectational data. In this paper we examine how expectations concerning earnings per share affect share prices. We first show that knowledge concerning analysts' forecasts of earnings per share cannot by itself lead to excess returns. Any information contained in the consensus estimate of earnings per share is already i...

  9. Bayesian data assimilation in shape registration

    KAUST Repository

    Cotter, C J

    2013-03-28

    In this paper we apply a Bayesian framework to the problem of geodesic curve matching. Given a template curve, the geodesic equations provide a mapping from initial conditions for the conjugate momentum onto topologically equivalent shapes. Here, we aim to recover the well-defined posterior distribution on the initial momentum which gives rise to observed points on the target curve; this is achieved by explicitly including a reparameterization in the formulation. Appropriate priors are chosen for the functions which together determine this field and the positions of the observation points, the initial momentum p0 and the reparameterization vector field ν, informed by regularity results about the forward model. Having done this, we illustrate how maximum likelihood estimators can be used to find regions of high posterior density, but also how we can apply recently developed Markov chain Monte Carlo methods on function spaces to characterize the whole of the posterior density. These illustrative examples also include scenarios where the posterior distribution is multimodal and irregular, leading us to the conclusion that knowledge of a state of global maximal posterior density does not always give us the whole picture, and full posterior sampling can give better quantification of likely states and the overall uncertainty inherent in the problem. © 2013 IOP Publishing Ltd.

  10. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
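
    A toy illustration of the rejection-sampling view of Bayesian updating that BUS reinterprets (this is not the BUS implementation and omits the FORM/IS/SuS accelerations described above); prior samples are accepted with probability proportional to the likelihood, and a rare-event probability is then read off the posterior samples. All distributions and numbers are assumptions:

      import numpy as np

      rng = np.random.default_rng(0)

      # Prior over an uncertain capacity parameter.
      prior = rng.normal(5.0, 1.0, size=200000)

      # Observed data: a measurement of 4.2 with noise standard deviation 0.5.
      obs, noise = 4.2, 0.5
      like = np.exp(-0.5 * ((prior - obs) / noise) ** 2)   # likelihood, bounded above by 1

      # Classical rejection sampling: accept each prior sample with probability
      # likelihood divided by its upper bound (here 1).
      accept = rng.uniform(size=prior.size) < like
      posterior = prior[accept]

      # Rare event: the capacity falls below a demand of 3.0.
      print("posterior samples retained:", posterior.size)
      print("P(capacity < 3.0) ~", np.mean(posterior < 3.0))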

  11. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process. The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software. Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
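
    For reference, the frequentist Kappa coefficient mentioned above can be computed directly from a rater-agreement table; this is not Broemeling's Bayesian treatment, and the two-rater table below is made up:

      import numpy as np

      def cohens_kappa(table):
          """Cohen's kappa from a rater-agreement contingency table."""
          table = np.asarray(table, dtype=float)
          n = table.sum()
          p_observed = np.trace(table) / n
          p_expected = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n ** 2
          return (p_observed - p_expected) / (1.0 - p_expected)

      # Two raters classifying 100 cases as positive/negative.
      table = [[40, 10],
               [ 5, 45]]
      print("kappa =", cohens_kappa(table))  # (0.85 - 0.5) / 0.5 = 0.7

    A Bayesian treatment would instead place a prior on the cell probabilities and report the posterior distribution of kappa rather than this single point estimate.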

  12. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network and inference is performed by constructing a junction tree from this network. In this paper we propose a method for translating directly from object oriented Bayesian networks to junction trees, avoiding the intermediate translation. We pursue two main purposes: firstly, to maintain the original structure organized in an instance tree...

  13. Blending Bayesian and frequentist methods according to the precision of prior information with an application to hypothesis testing

    CERN Document Server

    Bickel, David R

    2011-01-01

    The following zero-sum game between nature and a statistician blends Bayesian methods with frequentist methods such as p-values and confidence intervals. Nature chooses a posterior distribution consistent with a set of possible priors. At the same time, the statistician selects a parameter distribution for inference with the goal of maximizing the minimum Kullback-Leibler information gained over a confidence distribution or other benchmark distribution. An application to testing a simple null hypothesis leads the statistician to report a posterior probability of the hypothesis that is informed by both Bayesian and frequentist methodology, each weighted according to how well the prior is known. Since neither the Bayesian approach nor the frequentist approach is entirely satisfactory in situations involving partial knowledge of the prior distribution, the proposed procedure reduces to a Bayesian method given complete knowledge of the prior, to a frequentist method given complete ignorance about the prior, and to a...

  14. Subjective Expected Utility with Non-Increasing Risk Aversion

    NARCIS (Netherlands)

    P.P. Wakker (Peter)

    1989-01-01

    It is shown that assumptions about risk aversion, usually studied under the pre-supposition of expected utility maximization, have a surprising extra merit at an earlier stage of the measurement work: together with the sure-thing principle, these assumptions imply subjective expected uti

  15. Unbiased Adaptive Expectation Schemes

    OpenAIRE

    Antonio Palestrini; Mauro Gallegati

    2015-01-01

    There are situations in which the old-fashioned adaptive expectation process seems to provide a good description of agents' behavior (Chow, 2011). Unfortunately, this expectation scheme may not satisfy the necessary rationality condition (unconditional mean-zero error). This paper shows how to simply fix the problem by introducing a bias correction term.

  16. Maximally coherent mixed states: Complementarity between maximal coherence and mixedness

    Science.gov (United States)

    Singh, Uttam; Bera, Manabendra Nath; Dhar, Himadri Shekhar; Pati, Arun Kumar

    2015-05-01

    Quantum coherence is a key element in topical research on quantum resource theories and a primary facilitator for design and implementation of quantum technologies. However, the resourcefulness of quantum coherence is severely restricted by environmental noise, which is indicated by the loss of information in a quantum system, measured in terms of its purity. In this work, we derive the limits imposed by the mixedness of a quantum system on the amount of quantum coherence that it can possess. We obtain an analytical trade-off between the two quantities that upperbound the maximum quantum coherence for fixed mixedness in a system. This gives rise to a class of quantum states, "maximally coherent mixed states," whose coherence cannot be increased further under any purity-preserving operation. For the above class of states, quantum coherence and mixedness satisfy a complementarity relation, which is crucial to understand the interplay between a resource and noise in open quantum systems.

  17. Convergence to Rational Expectations in a Stationary Linear Game.

    OpenAIRE

    Jordan, J S

    1992-01-01

    This paper describes several learning processes which converge, with probability one, to the rational expectations (Bayesian-Nash) equilibrium of a stationary linear game. The learning processes include a test for convergence to equilibrium and a method for changing the parameters of the process when nonconvergence is indicated. This self-stabilization property eliminates the need to impose stability conditions on the economic environment. Convergence to equilibrium is proved for two types of...

  18. Bayesian exploration for intelligent identification of textures

    Directory of Open Access Journals (Sweden)

    Jeremy A. Fishel

    2012-06-01

    Full Text Available In order to endow robots with humanlike abilities to characterize and identify objects, they must be provided with tactile sensors and intelligent algorithms to select, control and interpret data from useful exploratory movements. Humans make informed decisions on the sequence of exploratory movements that would yield the most information for the task, depending on what the object may be and prior knowledge of what to expect from possible exploratory movements. This study is focused on texture discrimination, a subset of a much larger group of exploratory movements and percepts that humans use to discriminate, characterize, and identify objects. Using a testbed equipped with a biologically inspired tactile sensor (the BioTac®) we produced sliding movements similar to those that humans make when exploring textures. Measurements of tactile vibrations and reaction forces when exploring textures were used to extract measures of textural properties inspired from psychophysical literature (traction, roughness, and fineness). Different combinations of normal force and velocity were identified to be useful for each of these three properties. A total of 117 textures were explored with these three movements to create a database of prior experience to use for identifying these same textures in future encounters. When exploring a texture, the discrimination algorithm adaptively selects the optimal movement to make and property to measure based on previous experience to differentiate the texture from a set of plausible candidates, a process we call Bayesian exploration. Performance of 99.6% in correctly discriminating pairs of similar textures was found to exceed human capabilities. Absolute classification from the entire set of 117 textures generally required a small number of well-chosen exploratory movements (median = 5) and yielded a 95.4% success rate. The method of Bayesian exploration developed and tested in this paper may generalize well to other
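
    A schematic sketch of the sequential pattern described above: maintain a belief over candidate textures, choose the exploratory movement whose simulated outcome reduces posterior entropy the most on average, and update the belief by Bayes' rule. The Gaussian outcome models, movement set and all numbers are invented stand-ins, not BioTac data or the paper's algorithm:

      import numpy as np

      rng = np.random.default_rng(0)

      # Each of 3 candidate textures predicts a Gaussian outcome for each of 3 movements.
      means = np.array([[0.2, 1.0, 3.0],
                        [0.3, 2.0, 3.1],
                        [0.9, 1.1, 3.0]])
      sigma = 0.3
      belief = np.full(3, 1.0 / 3.0)

      def entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def expected_posterior_entropy(belief, movement, n_mc=2000):
          """Average entropy of the updated belief over simulated outcomes of one movement."""
          total = 0.0
          for _ in range(n_mc):
              texture = rng.choice(len(belief), p=belief)
              obs = rng.normal(means[texture, movement], sigma)
              like = np.exp(-0.5 * ((obs - means[:, movement]) / sigma) ** 2)
              post = belief * like
              total += entropy(post / post.sum())
          return total / n_mc

      best_move = min(range(3), key=lambda m: expected_posterior_entropy(belief, m))
      print("most informative movement:", best_move)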

  19. Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods

    OpenAIRE

    Zhu, Weixuan

    2016-01-01

    The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of two-parameter Poisson-Dirichlet process. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...

  20. Maximizing competition with market enhancements

    International Nuclear Information System (INIS)

    This session presented highlights from 5 guest speakers who commented on ways to maximize competition with market enhancement. The moderator noted that at the four-month mark, the open electricity market in Ontario is operating reasonably well. The role of energy standards in an open electricity market was discussed with reference to FERC Orders 888, the Oasis 2A Notice of Public Rulemaking, Order 2000, and FERC's Standard Market Design. It was noted that since there is significant electricity trade between Ontario and its neighbours, particularly in the United States, many aspects of wholesale electricity market design rules will have to be standardized to ensure compatibility and to ensure that transactions can continue smoothly. It was emphasized that the open market will be driven by supply and demand and that the price of electricity should not be a deterrent for investors. figs

  1. Constraint Propagation as Information Maximization

    CERN Document Server

    Abdallah, A Nait

    2012-01-01

    Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As an illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.

  2. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Full Text Available A maxim is a principle that must be obeyed by all participants textually and interpersonally in order to have a smooth communication process. Conversational maxims are divided into four, namely the maxim of quality, the maxim of quantity, the maxim of relevance, and the maxim of manner of speaking. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to his speaking partner. Violation of a maxim in a conversation will result in an awkward impression. Examples of violation are given information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations of TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with an informal method. The results of this study show the interesting fact that the violation of maxims in the conversations found in the advertisements actually makes the advertisements very attractive and gives them high value.

  3. Expect Respect: Healthy Relationships

    Science.gov (United States)

    Expect Respect: Healthy Relationships. Signs of ... good about what happens when they are together. Respect: You ask each other what you want to ...

  4. Bayesian approach to rough set

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes an approach to training rough set models using a Bayesian framework trained with the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted through sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested by estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.

  5. Bayesian Sampling using Condition Indicators

    DEFF Research Database (Denmark)

    Faber, Michael H.; Sørensen, John Dalsgaard

    2002-01-01

    allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion......The problem of control quality of components is considered for the special case where the acceptable failure rate is low, the test costs are high and where it may be difficult or impossible to test the condition of interest directly. Based on the classical control theory and the concept of...... condition indicators introduced by Benjamin and Cornell (1970) a Bayesian approach to quality control is formulated. The formulation is then extended to the case where the quality control is based on sampling of indirect information about the condition of the components, i.e. condition indicators. This...

  6. BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS

    Directory of Open Access Journals (Sweden)

    Thordis Linda Thorarinsdottir

    2011-05-01

    Full Text Available In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed in detail for 3 X 3 and 5 X 5 configurations and examples of the performance of the procedure are given.

  7. Bayesian Seismology of the Sun

    CERN Document Server

    Gruberbauer, Michael

    2013-01-01

    We perform a Bayesian grid-based analysis of the solar l=0,1,2 and 3 p modes obtained via BiSON in order to deliver the first Bayesian asteroseismic analysis of the solar composition problem. We do not find decisive evidence to prefer either of the contending chemical compositions, although the revised solar abundances (AGSS09) are more probable in general. We do find indications for systematic problems in standard stellar evolution models, unrelated to the consequences of inadequate modelling of the outer layers on the higher-order modes. The seismic observables are best fit by solar models that are several hundred million years older than the meteoritic age of the Sun. Similarly, meteoritic age calibrated models do not adequately reproduce the observed seismic observables. Our results suggest that these problems will affect any asteroseismic inference that relies on a calibration to the Sun.

  8. Bayesian priors for transiting planets

    CERN Document Server

    Kipping, David M

    2016-01-01

    As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...

  9. Bayesian Inference for Radio Observations

    CERN Document Server

    Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin

    2015-01-01

    (Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...

  10. Bayesian inference on proportional elections.

    Science.gov (United States)

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
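
    A toy Monte Carlo sketch of the kind of question addressed above (the probability that a party is represented), using a Dirichlet posterior over vote shares and a simple 5% threshold as a stand-in for the actual Brazilian seat-distribution rule; the poll counts are made up:

      import numpy as np

      rng = np.random.default_rng(0)

      # Poll counts for four parties plus "others" (made-up numbers).
      counts = np.array([480, 310, 120, 60, 30])
      prior = np.ones_like(counts)          # uniform Dirichlet prior

      # Posterior over vote shares is Dirichlet(counts + prior).
      shares = rng.dirichlet(counts + prior, size=100000)

      # Simplified representation rule: a party is represented if its share
      # exceeds 5% (a placeholder for the real seat-allocation procedure).
      threshold = 0.05
      prob_represented = (shares > threshold).mean(axis=0)
      for i, p in enumerate(prob_represented):
          print(f"party {i}: P(representation) = {p:.3f}")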

  11. Bayesian inference of models and hyper-parameters for robust optic-flow estimation

    OpenAIRE

    Héas, Patrick; Herzet, Cédric; Memin, Etienne

    2012-01-01

    International audience Selecting optimal models and hyper-parameters is crucial for accurate optic-flow estimation. This paper provides a solution to the problem in a generic Bayesian framework. The method is based on a conditional model linking the image intensity function, the unknown velocity field, hyper-parameters and the prior and likelihood motion models. Inference is performed on each of the three levels of this so-defined hierarchical model by maximization of the marginalized a...

  12. Bayesian Estimation of Negative Binomial Parameters with Applications to RNA-Seq Data

    OpenAIRE

    Leon-Novelo, Luis; Claudio FUENTES; Emerson, Sarah

    2015-01-01

    RNA-Seq data characteristically exhibits large variances, which need to be appropriately accounted for in the model. We first explore the effects of this variability on the maximum likelihood estimator (MLE) of the overdispersion parameter of the negative binomial distribution, and propose instead the use of an estimator obtained via maximization of the marginal likelihood in a conjugate Bayesian framework. We show, via simulation studies, that the marginal MLE can better control this variation ...

  13. Upper expectation parametric regression

    OpenAIRE

    Lin, Lu; Dong, Ping; Song, Yunquan; Zhu, Lixing

    2014-01-01

    Every observation may follow a distribution that is randomly selected in a class of distributions. It is called the distribution uncertainty. This is a fact acknowledged in some research fields such as financial risk measure. Thus, the classical expectation is not identifiable in general. In this paper, a distribution uncertainty is defined, and then an upper expectation regression is proposed, which can describe the relationship between extreme events and relevant covariates under the framewo...

  14. Expectation Traps and Discretion

    OpenAIRE

    V. V. Chari; Lawrence J. Christiano; Martin Eichenbaum

    1996-01-01

    We argue that discretionary monetary policy exposes the economy to welfare-decreasing instability. It does so by creating the potential for private expectations about the response of monetary policy to exogenous shocks to be self-fulfilling. Among the many equilibria that are possible, some have good welfare properties. But others exhibit welfare-decreasing volatility in output and employment. We refer to the latter type of equilibria as expectation traps. In effect, our paper presents a new ...

  15. Retirement Incentives and Expectations

    OpenAIRE

    Sewin Chan; Ann Huff Stevens

    2001-01-01

    This paper investigates the responsiveness of individuals' retirement expectations to forward-looking measures of pension wealth accumulations. While most of the existing literature on retirement has used cross-sectional variation to identify the effects of pension and Social Security wealth on retirement behavior, we estimate fixed-effects regressions to control for unobserved heterogeneity that might be correlated with retirement plans and wealth. As expected, we find significant effects of...

  16. Subjective Life Expectancy

    OpenAIRE

    Nicky Nicholls; Alexander Zimper

    2014-01-01

    Individuals' subjective life-expectancy, as elicited in large-scale surveys, shows underestimation of survival chances at young versus overestimation at old ages. These distorted perceptions of objective survival chances may cause young people to save too little and old people to accumulate too much wealth late in life with respect to the rational expectations benchmark model. Alternative explanations for these differences between perceived and objective survival chances include cognitive sho...

  17. Heterogeneity in Expected Longevities

    OpenAIRE

    Josep Pijoan-Mas; José-Víctor Ríos-Rull

    2012-01-01

    We develop a new methodology to compute differences in the expected longevity of individuals who are in different socioeconomic groups at age 50. We deal with two main problems associated with the standard use of life expectancy: that people’s socioeconomic characteristics evolve over time and that there is a time trend that reduces mortality over time. Using HRS data for individuals from different cohorts, we estimate a hazard model for survival with time-varying stochastic endogenous covari...

  18. Measuring Inflation Expectations

    OpenAIRE

    Olivier Armantier; Wändi Bruine de Bruin; Simon Potter; Giorgio Topa; Wilbert Van der Klaauw; Basit Zafar

    2013-01-01

    To conduct monetary policy, central banks around the world increasingly rely on measures of public inflation expectations. In this article, we review findings from an ongoing initiative at the Federal Reserve Bank of New York aimed at improving the measurement and our understanding of household inflation expectations through surveys. We discuss the importance of question wording and the usefulness of new questions to elicit an individual’s distribution of inflation beliefs. We present evidenc...

  19. Maximizing System Throughput by Cooperative Sensing in Cognitive Radio Networks

    OpenAIRE

    Li, Shuang; Zheng, Zizhan; Ekici, Eylem; Shroff, Ness

    2011-01-01

    Cognitive Radio Networks allow unlicensed users to opportunistically access the licensed spectrum without causing disruptive interference to the primary users (PUs). One of the main challenges in CRNs is the ability to detect PU transmissions. Recent works have suggested the use of secondary user (SU) cooperation over individual sensing to improve sensing accuracy. In this paper, we consider a CRN consisting of a single PU and multiple SUs to study the problem of maximizing the total expected...

  20. A Bayesian Nonparametric IRT Model

    OpenAIRE

    Karabatsos, George

    2015-01-01

    This paper introduces a flexible Bayesian nonparametric Item Response Theory (IRT) model, which applies to dichotomous or polytomous item responses, and which can apply to either unidimensional or multidimensional scaling. This is an infinite-mixture IRT model, with person ability and item difficulty parameters, and with a random intercept parameter that is assigned a mixing distribution, with mixing weights a probit function of other person and item parameters. As a result of its flexibility...

  1. Bayesian segmentation of hyperspectral images

    CERN Document Server

    Mohammadpour, Adel; Mohammad-Djafari, Ali

    2007-01-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  2. Bayesian segmentation of hyperspectral images

    Science.gov (United States)

    Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali

    2004-11-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.
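
    A single-site Gibbs sweep is one standard way to implement MCMC for such a Potts-regularised classification; the sketch below is a generic illustration under assumed Gaussian class likelihoods and an assumed coupling value, not the authors' actual sampler.

        import numpy as np

        rng = np.random.default_rng(1)

        def gibbs_sweep(labels, image, means, sigma, beta, n_classes):
            """One Gibbs sweep over a Potts-regularised pixel classification:
            P(label k | rest) is proportional to
            N(pixel | mu_k, sigma^2) * exp(beta * #neighbours with label k)."""
            h, w = labels.shape
            for i in range(h):
                for j in range(w):
                    neigh = []
                    if i > 0:
                        neigh.append(labels[i - 1, j])
                    if i < h - 1:
                        neigh.append(labels[i + 1, j])
                    if j > 0:
                        neigh.append(labels[i, j - 1])
                    if j < w - 1:
                        neigh.append(labels[i, j + 1])
                    neigh = np.array(neigh)
                    log_p = np.empty(n_classes)
                    for k in range(n_classes):
                        log_lik = -0.5 * ((image[i, j] - means[k]) / sigma) ** 2
                        log_p[k] = log_lik + beta * np.sum(neigh == k)
                    p = np.exp(log_p - log_p.max())
                    p /= p.sum()
                    labels[i, j] = rng.choice(n_classes, p=p)
            return labels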

  3. Bayesian Stable Isotope Mixing Models

    OpenAIRE

    Parnell, Andrew C.; Phillips, Donald L.; Bearhop, Stuart; Semmens, Brice X.; Ward, Eric J.; Moore, Jonathan W.; Andrew L Jackson; Inger, Richard

    2012-01-01

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixture. The most widely used application is quantifying the diet of organisms based on the food sources they have been observed to consume. At the centre of the multivariate statistical model we propose is a compositional m...

  4. Bayesian estimation of turbulent motion

    OpenAIRE

    Héas, P.; Herzet, C.; Mémin, E.; Heitz, D.; P. D. Mininni

    2013-01-01

    Based on physical laws describing the multi-scale structure of turbulent flows, this article proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing some scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate motion, regularization hyper-parameters, and to select the ...

  5. Elements of Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  6. Skill Rating by Bayesian Inference

    OpenAIRE

    Di Fatta, Giuseppe; Haworth, Guy McCrossan; Regan, Kenneth W.

    2009-01-01

    Systems Engineering often involves computer modelling the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game-domain of Chess. Bayesian methods are used to infer the distribution of players’ skill levels from the moves they play rather than from their competitive results. The approach is used on large sets of ...

  7. Topics in Nonparametric Bayesian Statistics

    OpenAIRE

    2003-01-01

    The intersection set of Bayesian and nonparametric statistics was almost empty until about 1973, but now seems to be growing at a healthy rate. This chapter gives an overview of various theoretical and applied research themes inside this field, partly complementing and extending recent reviews of Dey, Müller and Sinha (1998) and Walker, Damien, Laud and Smith (1999). The intention is not to be complete or exhaustive, but rather to touch on research areas of interest, partly by example.

  8. Cover Tree Bayesian Reinforcement Learning

    OpenAIRE

    Tziortziotis, Nikolaos; Dimitrakakis, Christos; Blekas, Konstantinos

    2013-01-01

    This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration po...

  9. Bayesian kinematic earthquake source models

    Science.gov (United States)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.

    2009-12-01

    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
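
    The tempering-and-resampling idea can be illustrated with a toy transitional MCMC sketch (a generic sequential scheme with a made-up two-parameter likelihood and tuning constants; it is not the authors' parallel TMCMC implementation):

        import numpy as np

        rng = np.random.default_rng(2)

        def log_likelihood(theta):
            # toy two-parameter misfit standing in for the geodetic data likelihood (assumption)
            return -0.5 * np.sum((theta - np.array([1.0, -2.0])) ** 2, axis=-1) / 0.3 ** 2

        def log_prior(theta):
            return -0.5 * np.sum(theta ** 2, axis=-1) / 10.0 ** 2

        n_particles, step = 2000, 0.2
        theta = rng.normal(0.0, 10.0, size=(n_particles, 2))   # draws from the prior
        betas = np.linspace(0.0, 1.0, 21)                       # tempering ladder

        for b_prev, b in zip(betas[:-1], betas[1:]):
            # reweight by the tempered-likelihood increment and resample
            logw = (b - b_prev) * log_likelihood(theta)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            theta = theta[rng.choice(n_particles, size=n_particles, p=w)]
            # one symmetric random-walk Metropolis move per particle at inverse temperature b
            prop = theta + step * rng.normal(size=theta.shape)
            log_acc = (b * log_likelihood(prop) + log_prior(prop)
                       - b * log_likelihood(theta) - log_prior(theta))
            accept = np.log(rng.uniform(size=n_particles)) < log_acc
            theta[accept] = prop[accept]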

  10. Bayesian Kernel Mixtures for Counts

    OpenAIRE

    Canale, Antonio; David B Dunson

    2011-01-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviatio...

  11. Bayesian Optimization for Adaptive MCMC

    OpenAIRE

    Mahendran, Nimalan; Wang, Ziyu; Hamze, Firas; De Freitas, Nando

    2011-01-01

    This paper proposes a new randomized strategy for adaptive MCMC using Bayesian optimization. This approach applies to non-differentiable objective functions and trades off exploration and exploitation to reduce the number of potentially costly objective function evaluations. We demonstrate the strategy in the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters o...

  12. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael;

    2009-01-01

    …and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability....

  13. Quantile pyramids for Bayesian nonparametrics

    OpenAIRE

    2009-01-01

    P\\'{o}lya trees fix partitions and use random probabilities in order to construct random probability measures. With quantile pyramids we instead fix probabilities and use random partitions. For nonparametric Bayesian inference we use a prior which supports piecewise linear quantile functions, based on the need to work with a finite set of partitions, yet we show that the limiting version of the prior exists. We also discuss and investigate an alternative model based on the so-called substitut...

  14. Space Shuttle RTOS Bayesian Network

    Science.gov (United States)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

    With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network are extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores

  15. Bayesian analysis of contingency tables

    OpenAIRE

    Gómez Villegas, Miguel A.; González Pérez, Beatriz

    2005-01-01

    The display of the data by means of contingency tables is used in different approaches to statistical inference, for example, to broach the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses versus bilateral alternatives in contingency tables. Given independent samples of two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first...

  16. Bayesian Credit Ratings (new version)

    OpenAIRE

    Paola Cerchiello; Paolo Giudici

    2013-01-01

    In this contribution we aim at improving ordinal variable selection in the context of causal models. In this regard, we propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate, and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric, and, thus, keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal ...

  17. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  18. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...

  19. Bayesian networks with applications in reliability analysis

    OpenAIRE

    Langseth, Helge

    2002-01-01

    A common goal of the papers in this thesis is to propose, formalize and exemplify the use of Bayesian networks as a modelling tool in reliability analysis. The papers span work in which Bayesian networks are merely used as a modelling tool (Paper I), work where models are specially designed to utilize the inference algorithms of Bayesian networks (Paper II and Paper III), and work where the focus has been on extending the applicability of Bayesian networks to very large domains (Paper IV and ...

  20. A Maximally Supersymmetric Kondo Model

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC

    2012-02-17

    We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.

  1. Maximizing the optical network capacity.

    Science.gov (United States)

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-01

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572

  2. Performance expectation plan

    Energy Technology Data Exchange (ETDEWEB)

    Ray, P.E.

    1998-09-04

    This document outlines the significant accomplishments of fiscal year 1998 for the Tank Waste Remediation System (TWRS) Project Hanford Management Contract (PHMC) team. Opportunities for improvement to better meet some performance expectations have been identified. The PHMC has performed at an excellent level in administration of leadership, planning, and technical direction. The contractor has met and made notable improvement in attaining customer satisfaction in mission execution. This document includes the team's recommendation that the PHMC TWRS Performance Expectation Plan evaluation rating for fiscal year 1998 be an Excellent.

  3. Query complexity in expectation

    OpenAIRE

    Kaniewski, J.; Lee, Troy; Wolf,

    2014-01-01

    We study the query complexity of computing a function f:{0,1}^n-->R_+ in expectation. This requires the algorithm on input x to output a nonnegative random variable whose expectation equals f(x), using as few queries to the input x as possible. We exactly characterize both the randomized and the quantum query complexity by two polynomial degrees, the nonnegative literal degree and the sum-of-squares degree, respectively. We observe that the quantum complexity can be unboundedly smaller than t...

  4. The ultimatum game and expected utility maximization: In view of attachment theory

    OpenAIRE

    Almakias, Shaul; Avi WEISS

    2009-01-01

    In this paper we import a mainstream psychological theory, known as attachment theory, into economics and show the implications of this theory for economic behavior by individuals in the ultimatum bargaining game. Attachment theory examines the psychological tendency to seek proximity to another person, to feel secure when that person is present, and to feel anxious when that person is absent. An individual's attachment style can be classified along two-dimensional axes, one representing attac...

  5. Efficient Retrieval of Text for Biomedical Domain using Expectation Maximization Algorithm

    Directory of Open Access Journals (Sweden)

    Sumit Vashishtha

    2011-11-01

    Data mining, a branch of computer science [1], is the process of extracting patterns from large data sets by combining methods from statistics and artificial intelligence with database management. Data mining is seen as an increasingly important tool by modern business to transform data into business intelligence giving an informational advantage. Biomedical text retrieval refers to text retrieval techniques applied to biomedical resources and literature available in the biomedical and molecular biology domain. The volume of published biomedical research, and therefore the underlying biomedical knowledge base, is expanding at an increasing rate. Biomedical text retrieval is a way to aid researchers in coping with information overload. By discovering predictive relationships between different pieces of extracted data, data-mining algorithms can be used to improve the accuracy of information extraction. However, textual variation due to typos, abbreviations, and other sources can prevent the productive discovery and utilization of hard-matching rules. Recent methods of soft clustering can exploit predictive relationships in textual data. This paper presents a technique for using a soft-clustering data mining algorithm to increase the accuracy of biomedical text extraction. Experimental results demonstrate that this approach improves text extraction more effectively than hard keyword matching rules.
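
    The soft-clustering step can be illustrated with a minimal EM sketch for a mixture of multinomials over document-term counts (the model choice, pseudo-count and iteration budget are assumptions for illustration, not the paper's exact algorithm):

        import numpy as np

        def em_multinomial_mixture(counts, n_clusters, n_iter=50, seed=0):
            """EM for a mixture of multinomials over a document-term count matrix."""
            rng = np.random.default_rng(seed)
            n_docs, n_terms = counts.shape
            pi = np.full(n_clusters, 1.0 / n_clusters)                 # mixing weights
            phi = rng.dirichlet(np.ones(n_terms), size=n_clusters)     # per-cluster term distributions

            for _ in range(n_iter):
                # E-step: responsibilities r[d, k] = P(cluster k | document d)
                log_r = np.log(pi) + counts @ np.log(phi).T
                log_r -= log_r.max(axis=1, keepdims=True)
                r = np.exp(log_r)
                r /= r.sum(axis=1, keepdims=True)

                # M-step: re-estimate mixing weights and term distributions
                pi = r.mean(axis=0)
                phi = r.T @ counts + 1e-6          # small pseudo-count for numerical stability
                phi /= phi.sum(axis=1, keepdims=True)

            return pi, phi, r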

  6. Apprehensions and expectations

    DEFF Research Database (Denmark)

    Hansen, Magnus Rotvit Perlt

    2016-01-01

    We report on the initial findings from a qualitative user expectations study of a Patient Data Management System implementation in an Intensive Care Unit in a Swedish hospital. By drawing on grounded theory we take an open focus on the concepts of fears and beliefs and find that specifically the ...

  7. Expecting a Soft Landing

    Institute of Scientific and Technical Information of China (English)

    Lu Zhongyuan

    2011-01-01

    China's recent slowdown is the result of short-term moderation of the overheated real economy, and the growth rate is still within a normal range. China's economic growth rate is expected to exceed 9 percent this year. A healthy economic growth rate features fluctuations in a reasonable range that is determined by growth potential.

  8. Heterogeneity in expected longevities.

    Science.gov (United States)

    Pijoan-Mas, Josep; Ríos-Rull, José-Víctor

    2014-12-01

    We develop a new methodology to compute differences in the expected longevity of individuals of a given cohort who are in different socioeconomic groups at a certain age. We address the two main problems associated with the standard use of life expectancy: (1) that people's socioeconomic characteristics change, and (2) that mortality has decreased over time. Our methodology uncovers substantial heterogeneity in expected longevities, yet much less heterogeneity than what arises from the naive application of life expectancy formulae. We decompose the longevity differences into differences in health at age 50, differences in the evolution of health with age, and differences in mortality conditional on health. Remarkably, education, wealth, and income are health-protecting but have very little impact on two-year mortality rates conditional on health. Married people and nonsmokers, however, benefit directly in their immediate mortality. Finally, we document an increasing time trend of the socioeconomic gradient of longevity in the period 1992-2008, and we predict an increase in the socioeconomic gradient of mortality rates for the coming years. PMID:25391225

  9. The subjectivity of scientists and the Bayesian statistical approach

    CERN Document Server

    Press, James S

    2001-01-01

    Comparing and contrasting the reality of subjectivity in the work of history's great scientists and the modern Bayesian approach to statistical analysisScientists and researchers are taught to analyze their data from an objective point of view, allowing the data to speak for themselves rather than assigning them meaning based on expectations or opinions. But scientists have never behaved fully objectively. Throughout history, some of our greatest scientific minds have relied on intuition, hunches, and personal beliefs to make sense of empirical data-and these subjective influences have often a

  10. Bayesian phylogeography finds its roots.

    Directory of Open Access Journals (Sweden)

    Philippe Lemey

    2009-09-01

    As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in a Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.

  11. Bayesian Methods and Universal Darwinism

    Science.gov (United States)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent Champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the Operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that Systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints in the form of adaptations imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the Operations of a succession of Darwinian processes.

  12. Bayesian Query-Focused Summarization

    CERN Document Server

    Daumé, Hal

    2009-01-01

    We present BayeSum (for ``Bayesian summarization''), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.

  13. Numeracy, frequency, and Bayesian reasoning

    Directory of Open Access Journals (Sweden)

    Gretchen B. Chapman

    2009-02-01

    Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even comprehension and manipulation of natural frequencies requires a certain threshold of numeracy abilities, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
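
    A small worked example of the two framings (the numbers are illustrative, not taken from the study) shows that they encode the same Bayesian update:

        # Hypothetical screening example (illustrative numbers only).
        # Probability framing: Bayes' rule with single-event probabilities.
        p_disease = 0.01
        p_pos_given_disease = 0.90
        p_pos_given_healthy = 0.08
        p_pos = p_disease * p_pos_given_disease + (1 - p_disease) * p_pos_given_healthy
        p_disease_given_pos = p_disease * p_pos_given_disease / p_pos

        # Natural-frequency framing: the same update stated as counts out of 1000 people.
        diseased_and_pos = 1000 * p_disease * p_pos_given_disease        # about 9 people
        healthy_and_pos = 1000 * (1 - p_disease) * p_pos_given_healthy   # about 79 people
        p_from_frequencies = diseased_and_pos / (diseased_and_pos + healthy_and_pos)

        assert abs(p_disease_given_pos - p_from_frequencies) < 1e-12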

  14. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
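
    For the common exponential-kernel parameterisation, the conditional intensity referred to above leads to the log-likelihood sketched below (generic notation, not necessarily the paper's); in a Bayesian treatment this would be combined with priors on the parameters and explored by MCMC.

        import numpy as np

        def hawkes_loglik(times, T, mu, alpha, beta):
            """Log-likelihood of event times on [0, T] for a Hawkes process with
            conditional intensity lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
            times = np.asarray(times)
            loglik = 0.0
            for k, t in enumerate(times):
                excitation = np.sum(np.exp(-beta * (t - times[:k])))
                loglik += np.log(mu + alpha * excitation)
            # compensator: the integral of lambda(t) over [0, T]
            compensator = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
            return loglik - compensator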

  15. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....

  16. Collaborative Kalman Filtration: Bayesian Perspective

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil

    Lisbon, Portugal: Institute for Systems and Technologies of Information, Control and Communication (INSTICC), 2014, pp. 468-474. ISBN 978-989-758-039-0. [11th International Conference on Informatics in Control, Automation and Robotics - ICINCO 2014. Vienna (AT), 01.09.2014-03.09.2014] R&D Projects: GA ČR(CZ) GP14-06678P Institutional support: RVO:67985556 Keywords: Bayesian analysis * Kalman filter * distributed estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/AS/dedecius-0431324.pdf

  17. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    Energy Technology Data Exchange (ETDEWEB)

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
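
    A minimal sketch of the underlying idea, ranking candidate test allocations by expected posterior variance under a Beta-Binomial model (the priors, the budget and the two-component example are assumptions, and this is far simpler than the full Bayes-risk machinery described above):

        import numpy as np
        from scipy.stats import betabinom

        def expected_posterior_var(a, b, n_extra):
            """Expected posterior variance of a Beta(a, b) reliability after n_extra further
            pass/fail tests, averaging over the prior-predictive outcomes."""
            if n_extra == 0:
                return a * b / ((a + b) ** 2 * (a + b + 1))
            ks = np.arange(n_extra + 1)
            probs = betabinom.pmf(ks, n_extra, a, b)   # prior predictive for the number of passes
            post_vars = (a + ks) * (b + n_extra - ks) / (
                (a + b + n_extra) ** 2 * (a + b + n_extra + 1))
            return np.sum(probs * post_vars)

        # Hypothetical: two components with Beta priors and a budget of 6 tests in total.
        priors = [(3.0, 1.0), (8.0, 2.0)]
        best = min(
            ((n1, 6 - n1) for n1 in range(7)),
            key=lambda alloc: sum(expected_posterior_var(a, b, n)
                                  for (a, b), n in zip(priors, alloc)))
        print("best allocation:", best)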

  18. Ownership structure, profit maximization, and competitive behavior

    OpenAIRE

    Vroom, Govert; Mccann, Brian T.

    2009-01-01

    We question the broad applicability of the assumption of profit maximization as the goal of the firm and investigate how variance in objective functions across different ownership structures affects competitive behavior. While prior work in agency theory has argued that firms may fail to engage in profit maximizing behaviors due to misalignment between the goals of owners and managers, we contend that we are unlikely to observe pure profit maximizing behavior even in the case of the perfect a...

  19. Maximal inequalities for demimartingales and their applications

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, we establish some maximal inequalities for demimartingales which generalize and improve the results of Christofides. The maximal inequalities for demimartingales are used as key inequalities to establish other results including Doob’s type maximal inequality for demimartingales, strong laws of large numbers and growth rate for demimartingales and associated random variables. At last, we give an equivalent condition of uniform integrability for demisubmartingales.

  20. Maximal inequalities for demimartingales and their applications

    Institute of Scientific and Technical Information of China (English)

    WANG XueJun; HU ShuHe

    2009-01-01

    In this paper, we establish some maximal inequalities for demimartingales which generalize and improve the results of Christofides. The maximal inequalities for demimartingales are used as key inequalities to establish other results including Doob's type maximal inequality for demimartingales, strong laws of large numbers and growth rate for demimartingales and associated random variables. At last, we give an equivalent condition of uniform integrability for demisubmartingales.

  1. Inclusive Fitness Maximization: An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John A.; BOSSERT, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  2. Optimal Perceived Timing: Integrating Sensory Information with Dynamically Updated Expectations.

    Science.gov (United States)

    Di Luca, Massimiliano; Rhodes, Darren

    2016-01-01

    The environment has a temporal structure, and knowing when a stimulus will appear translates into increased perceptual performance. Here we investigated how the human brain exploits temporal regularity in stimulus sequences for perception. We find that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted. Stimuli presented earlier than expected are perceptually delayed, whereas stimuli presented on time and later than expected are perceptually accelerated. This result suggests that the brain regularizes slightly deviant stimuli with an asymmetry that leads to the perceptual acceleration of expected stimuli. We present a Bayesian model for the combination of dynamically-updated expectations, in the form of a priori probability of encountering future stimuli, with incoming sensory information. The asymmetries in the results are accounted for by the asymmetries in the distributions involved in the computational process. PMID:27385184
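
    The core of such a combination can be sketched as precision-weighted fusion of a Gaussian prior over the expected onset time with a noisy sensory measurement (the numbers are made up, and the paper's actual model includes asymmetric components not reproduced here):

        # Gaussian prior (onset expected from the regular sequence) combined with a
        # Gaussian likelihood (noisy sensed onset), all times in milliseconds.
        prior_mean, prior_sd = 500.0, 30.0     # expectation built up from the sequence
        sensed_time, sensory_sd = 470.0, 40.0  # stimulus arrives earlier than expected

        w_prior = 1.0 / prior_sd ** 2
        w_sense = 1.0 / sensory_sd ** 2
        perceived = (w_prior * prior_mean + w_sense * sensed_time) / (w_prior + w_sense)
        # perceived is about 489 ms: the early stimulus is pulled toward the expected
        # time, i.e. it is perceptually delayed, as described above.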

  3. Optimal Perceived Timing: Integrating Sensory Information with Dynamically Updated Expectations

    Science.gov (United States)

    Di Luca, Massimiliano; Rhodes, Darren

    2016-01-01

    The environment has a temporal structure, and knowing when a stimulus will appear translates into increased perceptual performance. Here we investigated how the human brain exploits temporal regularity in stimulus sequences for perception. We find that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted. Stimuli presented earlier than expected are perceptually delayed, whereas stimuli presented on time and later than expected are perceptually accelerated. This result suggests that the brain regularizes slightly deviant stimuli with an asymmetry that leads to the perceptual acceleration of expected stimuli. We present a Bayesian model for the combination of dynamically-updated expectations, in the form of a priori probability of encountering future stimuli, with incoming sensory information. The asymmetries in the results are accounted for by the asymmetries in the distributions involved in the computational process. PMID:27385184

  4. Bayesian credible interval construction for Poisson statistics

    Institute of Scientific and Technical Information of China (English)

    ZHU Yong-Sheng

    2008-01-01

    The construction of the Bayesian credible (confidence) interval for a Poisson observable including both the signal and background with and without systematic uncertainties is presented. Introducing the conditional probability satisfying the requirement of the background not larger than the observed events to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
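
    A generic numerical construction of such an interval, for a flat prior on the signal and no systematic uncertainties (a sketch of the idea, not the BPOCI routine):

        import numpy as np

        def poisson_signal_interval(n_obs, background, cl=0.68, s_max=50.0, n_grid=20001):
            """Central Bayesian credible interval for a Poisson signal s >= 0 with known
            background b and a flat prior on s: posterior(s) propto (s + b)**n * exp(-(s + b))."""
            s = np.linspace(0.0, s_max, n_grid)
            log_post = n_obs * np.log(s + background) - (s + background)
            post = np.exp(log_post - log_post.max())
            cdf = np.cumsum(post)
            cdf /= cdf[-1]
            lo = s[np.searchsorted(cdf, (1 - cl) / 2)]
            hi = s[np.searchsorted(cdf, 1 - (1 - cl) / 2)]
            return lo, hi

        print(poisson_signal_interval(n_obs=5, background=1.2))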

  5. Bayesian Decision Theoretical Framework for Clustering

    Science.gov (United States)

    Chen, Mo

    2011-01-01

    In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…

  6. Bayesian Statistics for Biological Data: Pedigree Analysis

    Science.gov (United States)

    Stanfield, William D.; Carlton, Matthew A.

    2004-01-01

    Bayes' formula is applied to the biological problem of pedigree analysis to show that Bayes' formula and non-Bayesian or "classical" methods of probability calculation give different answers. First-year college students of biology can be introduced to Bayesian statistics.
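
    A classic pedigree-style calculation in this spirit (a standard textbook carrier example, not necessarily the article's own): a woman whose mother is a known carrier of an X-linked recessive allele has prior probability 1/2 of being a carrier, and observing two unaffected sons updates that probability through Bayes' formula.

        # Prior: P(carrier) = 1/2 (mother is a known carrier of an X-linked recessive allele).
        # Data: two sons, both unaffected.
        # Likelihoods: P(2 unaffected sons | carrier) = (1/2)**2, P(2 unaffected sons | not carrier) = 1.
        p_carrier = 0.5
        lik_carrier = 0.5 ** 2
        lik_not_carrier = 1.0

        posterior = (p_carrier * lik_carrier) / (
            p_carrier * lik_carrier + (1 - p_carrier) * lik_not_carrier)
        print(posterior)   # 0.2 once the sons' phenotypes are taken into account, versus the prior 0.5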

  7. Using Bayesian Networks to Improve Knowledge Assessment

    Science.gov (United States)

    Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra

    2013-01-01

    In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…

  8. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks, explains how such models for complex networks can be derived, and points out relevant literature....

  9. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan

    2004-01-01

    We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \\primula\\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and ...

  10. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark

    We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by eva...

  11. Bayesian analysis of exoplanet and binary orbits

    CERN Document Server

    Schulze-Hartung, Tim; Henning, Thomas

    2012-01-01

    We introduce BASE (Bayesian astrometric and spectroscopic exoplanet detection and characterisation tool), a novel program for the combined or separate Bayesian analysis of astrometric and radial-velocity measurements of potential exoplanet hosts and binary stars. The capabilities of BASE are demonstrated using all publicly available data of the binary Mizar A.

  12. Computational methods for Bayesian model choice

    OpenAIRE

    Robert, Christian P.; Wraith, Darren

    2009-01-01

    In this note, we shortly survey some recent approaches on the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.

  13. Spiking the expectancy profiles

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Loui, Psyche; Vuust, Peter;

    …statistical learning, causing comparatively sharper key profiles in musicians, we hypothesised that musical learning can be modelled as a process of entropy reduction through experience. Specifically, implicit learning of statistical regularities allows reduction in the relative entropy (i.e. symmetrised Kullback-Leibler or Jensen-Shannon Divergence) between listeners’ prior expectancy profiles and probability distributions of a musical style or of stimuli used in short-term experiments. Five previous probe-tone experiments with musicians and non-musicians were revisited. In Experiments 1-2 participants … We find group differences in entropy corresponding to degree and relevance of musical training and within-participant decreases after short-term exposure to novel music. Thus, whereas inexperienced listeners make high-entropy predictions, statistical learning over varying timescales enables listeners to generate melodic expectations with reduced entropy…

  14. Spiking the expectancy profiles

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Loui, Psyche; Vuust, Peter;

    Melodic expectations have long been quantified using expectedness ratings. Motivated by statistical learning and sharper key profiles in musicians, we model musical learning as a process of reducing the relative entropy between listeners' prior expectancy profiles and probability distributions of a given musical style or of stimuli used in short-term experiments. Five previous probe-tone experiments with musicians and non-musicians are revisited. Exp. 1-2 used jazz, classical and hymn melodies. Exp. 3-5 collected ratings before and after exposure to 5, 15 or 400 novel melodies generated from a finite-state grammar using the Bohlen-Pierce scale. We find group differences in entropy corresponding to degree and relevance of musical training and within-participant decreases after short-term exposure. Thus, whereas inexperienced listeners make high-entropy predictions by default, statistical...
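
    The symmetrised divergence named in these records can be computed directly; a minimal sketch for two discrete expectancy profiles (the example profiles are invented for illustration):

        import numpy as np

        def jensen_shannon(p, q):
            """Jensen-Shannon divergence (a symmetrised, smoothed relative entropy)
            between two discrete probability distributions, in bits."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            p, q = p / p.sum(), q / q.sum()
            m = 0.5 * (p + q)
            kl = lambda a, b: np.sum(a[a > 0] * np.log2(a[a > 0] / b[a > 0]))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        # hypothetical probe-tone expectancy profiles over the 12 pitch classes
        listener = np.array([8, 6, 2, 4, 1, 5, 1, 7, 2, 3, 1, 2], float)
        style = np.array([9, 1, 4, 1, 6, 5, 1, 8, 1, 4, 1, 3], float)
        print(jensen_shannon(listener, style))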

  15. Earnings and Expected Returns

    OpenAIRE

    Owen Lamont

    1996-01-01

    The aggregate dividend payout ratio forecasts aggregate excess returns on both stocks and corporate bonds in post-war US data. Both high corporate profits and high stock prices forecast low excess returns on equities. When the payout ratio is high, expected returns are high. The payout ratio's correlation with business conditions gives it predictive power for returns; it contains information about future stock and bond returns that is not captured by other variables. The payout ratio is usefu...

  16. Directed expected utility networks

    OpenAIRE

    Leonelli, Manuele; Smith, Jim Q.

    2016-01-01

    A variety of statistical graphical models have been defined to represent the conditional independences underlying a random vector of interest. Similarly, many different graphs embedding various types of preferential independences, as for example conditional utility independence and generalized additive independence, have more recently started to appear. In this paper we define a new graphical model, called a directed expected utility network, whose edges depict both probabilistic and utility ...

  17. Preconditioning in Expectation

    OpenAIRE

    Cohen, Michael B.; Kyng, Rasmus; Pachocki, Jakub W.; Peng, Richard; Rao, Anup

    2014-01-01

    We show that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods. When applied to graph Laplacians, this leads to ultra-sparsifiers that in expectation behave as the nearly-optimal ones given by [Kolla-Makarychev-Saberi-Teng STOC`10]. Combining this with the recursive preconditioning framework by [Spielman-Teng STOC`04] and improved embedding algorithms, this leads to algorithms that solve symmetric diagonally dominant...

  18. Expectation Particle Belief Propagation

    OpenAIRE

    Lienart, Thibaut; Teh, Yee Whye; Doucet, Arnaud

    2015-01-01

    We propose an original particle-based implementation of the Loopy Belief Propagation (LBP) algorithm for pairwise Markov Random Fields (MRF) on a continuous state space. The algorithm adaptively constructs efficient proposal distributions approximating the local beliefs at each node of the MRF. This is achieved by considering proposal distributions in the exponential family whose parameters are updated iteratively in an Expectation Propagation (EP) framework. The proposed particle scheme provid...

  19. EXPECTATION SCHOOL ATTENDANCE

    OpenAIRE

    BURGSTALLEROVÁ, Kateřina

    2011-01-01

    In the theoretical part of this work I map, on the basis of the specialist literature, the problem of starting school attendance. I describe the pre-school child from the perspective of developmental psychology and mention possible adaptation problems. I also try to clarify the difference between primary and pre-primary education and how each is conceived in the curriculum documents. In the practical part, with the aid of qualitative interviews, I try to find out what the expectations of future...

  20. Cumulative Conditional Expectation Index

    OpenAIRE

    Fernández, M; González-López, V. A.

    2015-01-01

    In this paper we study the cumulative conditional expectation function (CCEF) in the copula context. It is shown how to compute the CCEF in terms of the cumulative copula function; this natural representation allows us to deduce some useful properties, for instance with applications to convex combinations of copulas. We introduce approximations of the CCEF based on Bernstein polynomial copulas. We also introduce estimators for the CCEF, constructed through Bernstein polynomial estimators for copulas. T...

  1. Kitchen Worktop Expectations

    OpenAIRE

    Lövstrand, Christoffer; Nilsson, Daniel

    2013-01-01

    IKEA was founded in 1943 by Ingvar Kamprad and is currently retailing in 44 different countries around the globe. With the implementation of a 25-year warranty, the importance of validating quality has increased in order to satisfy the customer. The aim of this thesis has therefore been to identify the critical factors for kitchen worktops through the expectations of the customers. In addition, the product development process was investigated to gain an understanding of how IKEA deals with cus...

  2. Expectations for Cancun Conference

    Institute of Scientific and Technical Information of China (English)

    DING ZHITAO

    2010-01-01

    Compared with the great hopes raised by the Copenhagen Climate Conference in 2009, the 2010 UN Climate Change Conference in Cancun aroused fewer expectations. However, the international community is still waiting for a positive outcome that will benefit humankind as a whole. The Cancun conference is another important opportunity for all the participants to advance the Bali Road Map negotiations after last year's meeting in Copenhagen, which failed to reach a legally binding treaty for the years beyond 2012.

  3. Expectations from structural genomics.

    OpenAIRE

    Brenner, S. E.; Levitt, M.

    2000-01-01

    Structural genomics projects aim to provide an experimental structure or a good model for every protein in all completed genomes. Most of the experimental work for these projects will be directed toward proteins whose fold cannot be readily recognized by simple sequence comparison with proteins of known structure. Based on the history of proteins classified in the SCOP structure database, we expect that only about a quarter of the early structural genomics targets will have a new fold. Among ...

  4. Inflation in maximal gauged supergravities

    International Nuclear Information System (INIS)

    We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value sc. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index ns of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range ns=0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10−3 and is close to the value in the Starobinsky model

  5. Unsupervised Bayesian decomposition of multiunit EMG recordings using Tabu search.

    Science.gov (United States)

    Ge, Di; Le Carpentier, Eric; Farina, Dario

    2010-03-01

    Intramuscular electromyography (EMG) signals are usually decomposed with semiautomatic procedures that involve the interaction with an expert operator. In this paper, a Bayesian statistical model and a maximum a posteriori (MAP) estimator are used to solve the problem of multiunit EMG decomposition in a fully automatic way. The MAP estimation exploits both the likelihood of the reconstructed EMG signal and some physiological constraints, such as the discharge pattern regularity and the refractory period of muscle fibers, as prior information integrated in a Bayesian framework. A Tabu search is proposed to efficiently tackle the nondeterministic polynomial-time-hard problem of optimization w.r.t the motor unit discharge patterns. The method is fully automatic and was tested on simulated and experimental EMG signals. Compared with the semiautomatic decomposition performed by an expert operator, the proposed method resulted in an accuracy of 90.0% +/- 3.8% when decomposing single-channel intramuscular EMG signals recorded from the abductor digiti minimi muscle at contraction forces of 5% and 10% of the maximal force. The method can also be applied to the automatic identification and classification of spikes from other neural recordings. PMID:19457743
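
    The following toy sketch illustrates only the Tabu search ingredient mentioned above: a binary discharge pattern is improved by single-sample flips, with a tabu tenure forbidding recently used moves. The objective is a placeholder (agreement with a synthetic train plus a crude refractory penalty), not the paper's EMG likelihood.

```python
# Toy Tabu search over a binary discharge pattern: flip one sample at a time,
# keep a tabu list of recently flipped positions, track the best score seen.
import numpy as np

rng = np.random.default_rng(1)
true = (rng.random(60) < 0.15).astype(int)       # hypothetical "true" spike train

def score(pattern):
    # Placeholder posterior: reward agreement with `true`, penalise adjacent
    # spikes (a crude stand-in for the refractory-period prior).
    refractory_violations = np.sum(pattern[:-1] * pattern[1:])
    return np.sum(pattern == true) - 3 * refractory_violations

current = np.zeros_like(true)
best, best_score = current.copy(), score(current)
tabu, tenure = {}, 7
for it in range(300):
    moves = [i for i in range(len(current)) if tabu.get(i, -1) < it]
    gains = []
    for i in moves:
        cand = current.copy(); cand[i] ^= 1
        gains.append((score(cand), i))
    s, i = max(gains)                            # best non-tabu move, even if worse
    current[i] ^= 1
    tabu[i] = it + tenure                        # forbid re-flipping for a while
    if s > best_score:
        best, best_score = current.copy(), s

print("best score:", int(best_score), "errors:", int(np.sum(best != true)))
```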

  6. DPpackage: Bayesian Semi- and Nonparametric Modeling in R

    Directory of Open Access Journals (Sweden)

    Alejandro Jara

    2011-04-01

    Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian nonparametric and semiparametric models in R, DPpackage. Currently, DPpackage includes models for marginal and conditional density estimation, receiver operating characteristic curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison and for eliciting the precision parameter of the Dirichlet process prior, and a general purpose Metropolis sampling algorithm. To maximize computational efficiency, the actual sampling for each model is carried out using compiled C, C++ or Fortran code.

  7. Flexible and efficient implementations of Bayesian independent component analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    covariance matrix. Parameter optimization is handled by variants of the expectation maximization (EM) algorithm: overrelaxed adaptive EM and the easy gradient recipe. These retain the simplicity of EM while converging faster. The required expectations over the source posterior, the sufficient statistics, are...... estimated with mean field methods: variational and the expectation consistent (EC) framework. We describe the derivation of the EC framework for ICA in detail and give empirical results demonstrating the improved performance. The paper is accompanied by the publicly available Matlab toolbox icaMF. (c) 2007...

  8. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  9. Setting Customer Expectation in Service Delivery: An Integrated Marketing-Operations Perspective

    OpenAIRE

    Teck H. Ho; Yu-Sheng Zheng

    2004-01-01

    Service firms have increasingly been competing for market share on the basis of delivery time. Many firms now choose to set customer expectation by announcing their maximal delivery time. Customers will be satisfied if their perceived delivery times are shorter than their expectations. This gap model of service quality is used in this paper to study how a firm might choose a delivery-time commitment to influence its customer expectation, and delivery quality in order to maximize its market sh...
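
    A hedged toy of the trade-off the abstract describes: quoting a longer delivery time makes the promise easier to keep but attracts fewer customers. The linear demand curve, the lognormal delivery times and the objective (attracted share times on-time probability) are assumptions chosen for illustration, not the model of the paper.

```python
# Illustrative only: pick a quoted delivery time T to maximise
# (market share attracted by the quote) x (probability the promise is kept).
import math

def attracted_share(T, T_max=10.0):
    # Longer quoted times attract fewer customers (assumed linear demand).
    return max(0.0, 1.0 - T / T_max)

def p_on_time(T, mu=1.0, sigma=0.5):
    # P(lognormal delivery time <= T), i.e. the chance the commitment is met.
    if T <= 0:
        return 0.0
    z = (math.log(T) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

best = max((attracted_share(T) * p_on_time(T), T)
           for T in [0.5 * k for k in range(1, 20)])
print(f"expected satisfied share {best[0]:.3f} at quoted time T={best[1]}")
```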

  10. A survey of Bayesian predictive methods for model assessment, selection and comparison

    Directory of Open Access Journals (Sweden)

    Aki Vehtari

    2012-01-01

    To date, several methods exist in the statistical literature for model assessment, which purport themselves specifically as Bayesian predictive methods. The decision theoretic assumptions on which these methods are based are not always clearly stated in the original articles, however. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data.
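
    One concrete instance of the expected-utility view discussed in such surveys is the log pointwise predictive density estimated from posterior draws. The sketch below computes it for a Gaussian model with simulated data and simulated draws; both are invented and stand in for an actual posterior sample.

```python
# Log pointwise predictive density (lppd) from posterior draws, for a Gaussian
# observation model with unit variance; data and draws are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, size=50)                # observed data
mu_draws = rng.normal(y.mean(), 0.2, size=4000)  # fake posterior draws of the mean

# lppd = sum_i log( mean_s p(y_i | theta_s) )
log_lik = -0.5 * (y[None, :] - mu_draws[:, None]) ** 2 - 0.5 * np.log(2 * np.pi)
lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
print(f"lppd = {lppd:.2f}  (higher means better expected predictive utility)")
```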

  11. Maximum-expectation matching under recourse

    OpenAIRE

    Pedroso, João Pedro; Ikeda, Shiro

    2016-01-01

    This paper addresses the problem of maximizing the expected size of a matching in the case of unreliable vertices and/or edges. The assumption is that upon failure, remaining vertices that have not been matched may be subject to a new assignment. This process may be repeated a given number of times, and the objective is to end with the overall maximum number of matched vertices. The origin of this problem is in kidney exchange programs, going on in several countries, where a vertex is an inco...
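
    A Monte Carlo sketch of the quantity being maximized, under assumptions: a small bipartite graph, independent failures of right-hand vertices only, and full rematching of the survivors (one round of recourse). It estimates the expected matching size of this simple policy rather than solving the paper's optimization problem.

```python
# Estimate the expected matching size when right-hand vertices fail
# independently and the surviving graph is rematched from scratch.
import random

edges = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [0, 3]}   # left vertex -> right vertices

def max_matching(adj, right_alive):
    # Kuhn's augmenting-path algorithm for maximum bipartite matching.
    match_right = {}
    def try_assign(u, seen):
        for v in adj.get(u, []):
            if v in right_alive and v not in seen:
                seen.add(v)
                if v not in match_right or try_assign(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False
    return sum(try_assign(u, set()) for u in adj)

random.seed(3)
p_fail, trials = 0.3, 5000
sizes = []
for _ in range(trials):
    alive = {v for v in range(4) if random.random() > p_fail}
    sizes.append(max_matching(edges, alive))
print("expected matching size over failures:", sum(sizes) / trials)
```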

  12. BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.

    Science.gov (United States)

    Khakabimamaghani, Sahand; Ester, Martin

    2016-01-01

    The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods are provided in the literature for patient stratification, which is the central task of SM, however, there are still significant open issues. First, it is still unclear if integrating different datatypes will help in detecting disease subtypes more accurately, and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data. PMID:26776199

  13. Modelling and Forecasting Health Expectancy

    OpenAIRE

    Májer, István

    2012-01-01

    Life expectancy of a human population measures the expected (or average) remaining years of life at a given age. Life expectancy can be defined by two forms of measurement: the period and the cohort life expectancy. The period life expectancy represents the mortality conditions at a specific moment in time, in comparison to the cohort life expectancy, which depicts the life history of a specific group of individuals (born in the same year). Period life expectancies are frequently ...

  14. Glacier Surface Monitoring by Maximizing Mutual Information

    Science.gov (United States)

    Erten, E.; Rossi, C.; Hajnsek, I.

    2012-07-01

    The literature has found and described that Polarimetric Synthetic Aperture Radar (PolSAR) images add valuable information for temporal scene characterization compared with single-channel SAR. However, despite a number of recent studies focusing on single-polarized glacier monitoring, the potential of polarimetry to estimate the surface velocity of glaciers has not been explored, owing to the complex mechanism of polarization through glacier/snow. In this paper, a new approach to the problem of monitoring glacier surface velocity is proposed by means of temporal PolSAR images, using a basic concept from information theory: Mutual Information (MI). The proposed polarimetric tracking method applies the MI to measure the statistical dependence between temporal polarimetric images, which is assumed to be maximal if the images are geometrically aligned. Since the proposed polarimetric tracking method is very general, it can be applied to any kind of multivariate remote sensing data such as multi-spectral optical and single-channel SAR images. The proposed polarimetric tracking is then used to retrieve the surface velocity of the Aletsch glacier located in Switzerland and of the Inyltshik glacier in Kyrgyzstan with two different SAR sensors: the Envisat C-band (single polarized) and the DLR airborne L-band (fully polarimetric) systems, respectively. Investigating the effect of the number of channels (polarimetry) on tracking demonstrated that the presence of snow, as expected, affects the location of the phase center in different polarizations, for example when tracking with temporal HH rather than temporal VV channels. In short, a change in the polarimetric signature of the scatterer can shift the phase center, raising the question of how much of the observed displacement is motion rather than penetration. In this paper, it is shown that, by considering the multi-channel SAR statistics, it is possible to optimally separate these contributions.
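
    A single-channel, single-dimension stand-in for the idea: mutual information between two images is computed from their joint grey-level histogram and maximized over candidate offsets. The synthetic image, noise level and histogram binning are arbitrary choices, and no polarimetry is involved.

```python
# Recover a horizontal shift between two noisy images by maximising the
# mutual information of their joint grey-level histogram over candidate shifts.
import numpy as np

rng = np.random.default_rng(4)
img = rng.random((64, 64))
true_shift = 5
moved = np.roll(img, true_shift, axis=1) + 0.05 * rng.standard_normal((64, 64))

def mutual_information(a, b, bins=16):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

scores = {s: mutual_information(img, np.roll(moved, -s, axis=1)) for s in range(-8, 9)}
print("estimated shift:", max(scores, key=scores.get), "(true:", true_shift, ")")
```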

  15. Chinese students' great expectations

    DEFF Research Database (Denmark)

    Thøgersen, Stig

    2013-01-01

    to interpret their own educational histories and prior experiences, while at the same time making use of imaginaries of 'Western' education to redefine themselves as independent individuals in an increasingly globalised and individualised world. Through a case study of prospective pre-school teachers preparing...... to study abroad, the article shows how personal, professional and even national goals are closely interwoven. Students expect education abroad to be a personally transformative experience, but rather than defining their goals of individual freedom and creativity in opposition to the authoritarian political...... system, they think of themselves as having a role in the transformation of Chinese attitudes to education and parent-child relations....

  16. Thresholds and expectation thresholds

    OpenAIRE

    Jeff Kahn; Gil Kalai

    2006-01-01

    Consider a random graph G in G(n,p) and the graph property: G contains a copy of a specific graph H. (Note: H depends on n; a motivating example: H is a Hamiltonian cycle.) Let q be the minimal value for which the expected number of copies of H' in G is at least 1/2 for every subgraph H' of H. Let p be the value for which the probability that G contains a copy of H is 1/2. Conjecture: p/q = O(log n). Related conjectures for general Boolean functions, and a possible connection with discrete is...

  17. Agreeing on expectations

    DEFF Research Database (Denmark)

    Nielsen, Christian; Bentsen, Martin Juul

    Commitment and trust are often mentioned as important aspects of creating a perception of reliability between counterparts. In the context of university-industry collaborations (UICs), agreeing on ambitions and expectations is paramount to achieving outcomes that are equally valuable to all parties... involved. Despite this, our initial probing indicated that such covenants rarely exist. As such, this paper draws on project management theory and proposes the possibility of structuring assessments of potential partners before university-industry collaborations are brought to life. Our analysis suggests

  18. The maximal forward peak in elastic scattering

    International Nuclear Information System (INIS)

    The maximal sharpness of the forward peak in elastic scattering is constrained by unitarity. This is done by minimizing the average of t. The data are within 15% of the maximal sharpness. Our bound and an earlier lower bound on the forward slope constitute a severe restriction on the shape of the forward peak. (author)

  19. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit-maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer-driven) corporate social responsibility is an optimal choice compatible with profit-maximizing behavior.

  20. Maximal Hypersurfaces in Spacetimes with Translational Symmetry

    CERN Document Server

    Bulawa, Andrew

    2016-01-01

    We consider four-dimensional vacuum spacetimes which admit a free isometric spacelike R-action. Taking a quotient with respect to the R-action produces a three-dimensional quotient spacetime. We establish several results regarding maximal hypersurfaces (spacelike hypersurfaces of zero mean curvature) in quotient spacetimes. First, we show that complete noncompact maximal hypersurfaces must either be flat cylinders S^1 x R or conformal to the Euclidean plane. Second, we establish a positive mass theorem for certain maximal hypersurfaces. Finally, while it is meaningful to use a bounded lapse when adopting the maximal hypersurface gauge condition in the four-dimensional (asymptotically flat) setting, it is shown here that nontrivial quotient spacetimes admit the maximal hypersurface gauge only with an unbounded lapse.

  1. ATLAS: Exceeding all expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    “One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS.   The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...

  2. Probability via expectation

    CERN Document Server

    Whittle, Peter

    1992-01-01

    This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...

  3. Comparison of Bayesian Land Surface Temperature algorithm performance with Terra MODIS observations

    CERN Document Server

    Morgan, J A

    2009-01-01

    An approach to land surface temperature (LST) estimation that relies upon Bayesian inference has been validated against multiband infrared radiometric imagery from the Terra MODIS instrument. Bayesian LST estimators are shown to reproduce standard MODIS product LST values starting from a parsimoniously chosen (hence, uninformative) range of prior band emissivity knowledge. Two estimation methods have been tested. The first is the iterative contraction mapping of joint expectation values for LST and surface emissivity described in a previous paper. In the second method, the Bayesian algorithm is reformulated as a Maximum A-Posteriori (MAP) search for the maximum joint a-posteriori probability for LST, given observed sensor aperture radiances and a-priori probabilities for LST and emissivity. Two MODIS data granules each for daytime and nighttime were used for the comparison. The granules were chosen to be largely cloud-free, with limited vertical relief in those portions of the granules fo...
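
    A toy version of the MAP formulation, under assumptions: one thermal band near 11 microns, a Gaussian radiance likelihood and weak Gaussian priors on temperature and emissivity, with the posterior maximized by a brute-force grid search. It is meant only to illustrate the MAP search, not the MODIS processing of the paper.

```python
# Simulate a single-band radiance with the Planck function, then grid-search
# temperature and emissivity for the maximum a-posteriori pair.
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
lam = 11e-6                                           # ~11 micron band

def planck(T):
    return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k * T)) - 1.0)

true_T, true_eps, noise = 300.0, 0.97, 0.02
obs = true_eps * planck(true_T) * (1 + noise * np.random.default_rng(5).standard_normal())

T_grid = np.linspace(270, 330, 241)
e_grid = np.linspace(0.90, 1.00, 101)
TT, EE = np.meshgrid(T_grid, e_grid, indexing="ij")
model = EE * planck(TT)
log_post = (-0.5 * ((obs - model) / (noise * obs)) ** 2       # Gaussian likelihood
            - 0.5 * ((TT - 300.0) / 20.0) ** 2                # weak prior on T
            - 0.5 * ((EE - 0.95) / 0.05) ** 2)                # weak prior on emissivity
i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
print(f"MAP estimate: T={T_grid[i]:.1f} K, emissivity={e_grid[j]:.3f}")
```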

  4. THE BOUNDEDNESS OF MAXIMAL BOCHNER-RIESZ OPERATOR AND MAXIMAL COMMUTATOR ON MORREY TYPE SPACES

    Institute of Scientific and Technical Information of China (English)

    Yuqing Liu; Dongxiang Chen

    2008-01-01

    The boundedness of the maximal Bochner-Riesz operator $B^{\delta}_{*}$ and that of the maximal commutator $B^{\delta}_{b,*}$ generated by this operator and a Lipschitz function on the classical Morrey space and the generalized Morrey space are established.

  5. Inference algorithms and learning theory for Bayesian sparse factor analysis

    International Nuclear Information System (INIS)

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  6. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    Science.gov (United States)

    Chen, Peng; Schwab, Christoph

    2016-07-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov-Galerkin high-fidelity ("HiFi") discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation

  7. Preliminary investigation to use Bayesian networks in predicting NOx, CO, CO2 and HC emissions

    International Nuclear Information System (INIS)

    A Bayesian network was used to characterize Lister-Petter diesel combustion engine emissions. Three sets of tests were conducted: (1) full open throttle; (2) 68 per cent closed throttle; and (3) 58 per cent closed throttle. The first test simulated normal lean burning conditions, while the last 2 tests simulated a clogged air filter. Experiments were conducted in an engine generator assembly with a fixed speed governor of 1500 rpm. Electrochemical sensors were used to detect nitrogen oxide (NOx); carbon dioxide (CO2); carbon monoxide (CO); hydrocarbons; and particulate matter. Engine oil, engine outlet, and engine inlet and exhaust temperatures were digitally measured. Data from 20 experimental sets of tests were used to train and test the model and to predict emission levels. The Bayesian network model was built using input variables and measured output parameters related to the exhaust components. Human knowledge was used to build relationships between defined nodes and a path condition algorithm. An expectation-maximization algorithm was used. Results of the validation study showed that the Bayesian network accurately predicted emissions levels. It was concluded that it is possible to predict engine emission outputs with probable acceptable levels using Bayesian network modelling techniques and limited experimental data. 33 refs., 3 tabs., 8 figs

  8. Bayesian networks in educational assessment

    CERN Document Server

    Almond, Russell G; Steinberg, Linda S; Yan, Duanli; Williamson, David M

    2015-01-01

    Bayesian inference networks, a synthesis of statistics and expert systems, have advanced reasoning under uncertainty in medicine, business, and social sciences. This innovative volume is the first comprehensive treatment exploring how they can be applied to design and analyze innovative educational assessments. Part I develops Bayes nets’ foundations in assessment, statistics, and graph theory, and works through the real-time updating algorithm. Part II addresses parametric forms for use with assessment, model-checking techniques, and estimation with the EM algorithm and Markov chain Monte Carlo (MCMC). A unique feature is the volume’s grounding in Evidence-Centered Design (ECD) framework for assessment design. This “design forward” approach enables designers to take full advantage of Bayes nets’ modularity and ability to model complex evidentiary relationships that arise from performance in interactive, technology-rich assessments such as simulations. Part III describes ECD, situates Bayes nets as ...

  9. Quantum Bayesianism at the Perimeter

    CERN Document Server

    Fuchs, Christopher A

    2010-01-01

    The author summarizes the Quantum Bayesian viewpoint of quantum mechanics, developed originally by C. M. Caves, R. Schack, and himself. It is a view crucially dependent upon the tools of quantum information theory. Work at the Perimeter Institute for Theoretical Physics continues the development and is focused on the hard technical problem of finding a good representation of quantum mechanics purely in terms of probabilities, without amplitudes or Hilbert-space operators. The best candidate representation involves a mysterious entity called a symmetric informationally complete quantum measurement. Contemplation of it gives a way of thinking of the Born Rule as an addition to the rules of probability theory, applicable when one gambles on the consequences of interactions with physical systems. The article ends by outlining some directions for future work.

  10. Bayesian Kernel Mixtures for Counts.

    Science.gov (United States)

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
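
    The rounding construction is easy to state: a count y receives the mass of the underlying Gaussian kernel on [y, y+1), with everything below zero folded into y = 0. The two-component mixture below is invented purely to show the resulting probability mass function.

```python
# Probability mass function of a mixture of rounded Gaussian kernels.
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def rounded_gaussian_pmf(y, mu, sigma):
    upper = norm_cdf(y + 1, mu, sigma)
    lower = 0.0 if y == 0 else norm_cdf(y, mu, sigma)   # fold mass below 0 into y = 0
    return upper - lower

def mixture_pmf(y, weights, mus, sigmas):
    return sum(w * rounded_gaussian_pmf(y, m, s)
               for w, m, s in zip(weights, mus, sigmas))

pmf = [mixture_pmf(y, [0.6, 0.4], [1.2, 6.0], [0.8, 1.5]) for y in range(15)]
print("total mass over 0..14:", round(sum(pmf), 4))
print("P(Y=0), P(Y=1):", round(pmf[0], 4), round(pmf[1], 4))
```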

  11. Hedging Strategies for Bayesian Optimization

    CERN Document Server

    Brochu, Eric; de Freitas, Nando

    2010-01-01

    Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It is able to do this by sampling the objective using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We describe the method, which we call GP-Hedge, and show that this method almost always outperforms the best individual acquisition function.
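
    A stripped-down illustration of the portfolio idea, with the Gaussian process replaced by three simple proposal strategies and the reward taken as the objective value of each strategy's proposal (a hindsight simplification). Only the multiplicative Hedge update itself follows the method named in the abstract; the 1-D objective and the strategies are invented.

```python
# Hedge-style portfolio over candidate point-proposal strategies on [0, 1].
import numpy as np

rng = np.random.default_rng(6)

def objective(x):
    return -(x - 0.7) ** 2                       # hidden 1-D objective

def explore(X, Y):
    return rng.random()                          # uniform exploration

def exploit(X, Y):
    best = X[int(np.argmax(Y))]                  # perturb the incumbent
    return float(np.clip(best + 0.05 * rng.standard_normal(), 0.0, 1.0))

def fill_gaps(X, Y):
    xs = sorted(X)
    if len(xs) < 2:
        return rng.random()
    i = int(np.argmax(np.diff(xs)))              # midpoint of the largest gap
    return (xs[i] + xs[i + 1]) / 2.0

strategies, eta = [explore, exploit, fill_gaps], 1.0
weights = np.ones(len(strategies))
X = [rng.random()]
Y = [objective(X[0])]
for t in range(40):
    proposals = [s(X, Y) for s in strategies]
    k = rng.choice(len(strategies), p=weights / weights.sum())
    X.append(proposals[k])
    Y.append(objective(proposals[k]))
    # Reward each strategy by the value of its own proposal (simplified reward).
    rewards = np.array([objective(p) for p in proposals])
    weights *= np.exp(eta * (rewards - rewards.max()))  # Hedge update

print(f"best x found: {X[int(np.argmax(Y))]:.3f}  (optimum at 0.700)")
```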

  12. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  13. On Bayesian System Reliability Analysis

    International Nuclear Information System (INIS)

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs

  14. Bayesian anti-sparse coding

    CERN Document Server

    Elvira, Clément; Dobigeon, Nicolas

    2015-01-01

    Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications such as digital communications. Anti-sparse regularization can be naturally expressed through an $\ell_{\infty}$-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties as well as three random variate generators for this distribution are derived. Then this probability distribution is used as a prior to promote anti-sparsity in a Gaussian linear inverse problem, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo (MCMC) algorithms are proposed to generate samples according to the posterior distribution. The first one is a standard Gibbs sampler. The seco...
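
    A small sketch of the density in question, p(x) proportional to exp(-lambda * ||x||_inf), sampled here with a plain random-walk Metropolis chain rather than the dedicated samplers proposed in the paper; lambda, the dimension and the step size are arbitrary choices.

```python
# Random-walk Metropolis sampling of the "democratic" (anti-sparse) prior.
import numpy as np

rng = np.random.default_rng(7)
lam, dim, step = 3.0, 4, 0.5

def log_prior(x):
    return -lam * np.max(np.abs(x))              # log p(x) up to a constant

x = np.zeros(dim)
samples = []
for it in range(20000):
    prop = x + step * rng.standard_normal(dim)
    if np.log(rng.random()) < log_prior(prop) - log_prior(x):
        x = prop
    samples.append(x.copy())
samples = np.array(samples[5000:])               # drop burn-in

# Anti-sparsity: coefficients tend to share a similar magnitude.
print("mean |x_i| per coordinate:", np.round(np.mean(np.abs(samples), axis=0), 3))
```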

  15. State Information in Bayesian Games

    CERN Document Server

    Cuff, Paul

    2009-01-01

    Two-player zero-sum repeated games are well understood. Computing the value of such a game is straightforward. Additionally, if the payoffs are dependent on a random state of the game known to one, both, or neither of the players, the resulting value of the game has been analyzed under the framework of Bayesian games. This investigation considers the optimal performance in a game when a helper is transmitting state information to one of the players. Encoding information for an adversarial setting (game) requires a different result than rate-distortion theory provides. Game theory has accentuated the importance of randomization (mixed strategy), which does not find a significant role in most communication modems and source coding codecs. Higher rates of communication, used in the right way, allow the message to include the necessary random component useful in games.

  16. Customer experiences and expectations

    International Nuclear Information System (INIS)

    Customer experiences and expectations from competition and cogeneration in the power industry were reviewed by Charles Morton, Director of Energy at CPC International, by describing Casco's decision to get into cogeneration in the early 1990s in three small corn milling plants in Cardinal, London and Port Colborne, Ontario, mainly as a result of the threat of a 40 per cent increase in power prices. He stressed that the cost competitiveness of cogeneration is entirely site-specific, but it is generally more attractive in larger facilities that operate 24 hours a day, where grid power is expensive or unreliable. Because it is reliable, cogeneration holds out the prospect of increased production up-time, as well as offering a hedge against higher energy costs, reducing the company's variable costs when incoming revenues fall short of costs, and providing an additional tool in head-to-head competition

  17. Energy providers: customer expectations

    International Nuclear Information System (INIS)

    The deregulation of the gas and electric power industries, and how it will impact on customer service and pricing rates was discussed. This paper described the present situation, reviewed core competencies, and outlined future expectations. The bottom line is that major energy consumers are very conscious of energy costs and go to great lengths to keep them under control. At the same time, solutions proposed to reduce energy costs must benefit all classes of consumers, be they industrial, commercial, institutional or residential. Deregulation and competition at an accelerated pace is the most likely answer. This may be forced by external forces such as foreign energy providers who are eager to enter the Canadian energy market. It is also likely that the competition and convergence between gas and electricity is just the beginning, and may well be overshadowed by other deregulated industries as they determine their core competencies

  18. Cooperative extensions of the Bayesian game

    CERN Document Server

    Ichiishi, Tatsuro

    2006-01-01

    This is the very first comprehensive monograph in a burgeoning, new research area - the theory of cooperative games with incomplete information with emphasis on the solution concept of Bayesian incentive compatible strong equilibrium that encompasses the concept of the Bayesian incentive compatible core. Built upon the concepts and techniques in the classical static cooperative game theory and in the non-cooperative Bayesian game theory, the theory constructs and analyzes in part the powerful n-person game-theoretical model characterized by coordinated strategy-choice with individualistic ince

  19. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  20. Supra-Bayesian Combination of Probability Distributions

    Czech Academy of Sciences Publication Activity Database

    Sečkárová, Vladimíra

    Veszprém : University of Pannonia, 2010, s. 112-117. ISBN 978-615-5044-00-7. [11th International PhD Workshop on Systems and Control. Veszprém (HU), 01.09.2010-03.09.2010] R&D Projects: GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : Supra-Bayesian approach * sharing of probabilistic information * Bayesian decision making Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2010/AS/seckarova-supra-bayesian combination of probability distributions.pdf

  1. Bayesian Soft Sensing in Cold Sheet Rolling

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Jirsa, Ladislav

    Praha: ÚTIA AV ČR, v.v.i, 2010. s. 45-45. [6th International Workshop on Data–Algorithms–Decision Making. 2.12.2010-4.12.2010, Jindřichův Hradec] R&D Projects: GA MŠk(CZ) 7D09008 Institutional research plan: CEZ:AV0Z10750506 Keywords : soft sensor * bayesian statistics * bayesian model averaging Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2010/AS/dedecius-bayesian soft sensing in cold sheet rolling.pdf

  2. Continuous subjective expected utility with non-additive probabilities

    NARCIS (Netherlands)

    P.P. Wakker (Peter)

    1989-01-01

    textabstractA well-known theorem of Debreu about additive representations of preferences is applied in a non-additive context, to characterize continuous subjective expected utility maximization for the case where the probability measures may be non-additive. The approach of this paper does not need

  3. Experiences and Expectations of Biographical Time among Young Athletes

    OpenAIRE

    Phoenix, Cassandra; Smith, Brett; Sparkes, Andrew C.

    2007-01-01

    In this article, we explore how biographical time is storied by a particular group of young athletes in relation to their experiences and expectations of embodied ageing. The data suggest that at present, as able and sporting bodies, their everyday experiences are framed by the cyclical, maximizing, and disciplined notions of time associated with the social organization of sport. In their middle y...

  4. The Diagnosis of Reciprocating Machinery by Bayesian Networks

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A Bayesian Network is a reasoning tool based on probability theory and has many advantages that other reasoning tools do not have. This paper discusses the basic theory of Bayesian networks and studies the problems in constructing Bayesian networks. The paper also constructs a Bayesian diagnosis network of a reciprocating compressor. The example helps us to draw a conclusion that Bayesian diagnosis networks can diagnose reciprocating machinery effectively.

  5. Collaborative decision-analytic framework to maximize resilience of tidal marshes to climate change

    Directory of Open Access Journals (Sweden)

    Karen M. Thorne

    2015-03-01

    Decision makers that are responsible for stewardship of natural resources face many challenges, which are complicated by uncertainty about impacts from climate change, expanding human development, and intensifying land uses. A systematic process for evaluating the social and ecological risks, trade-offs, and cobenefits associated with future changes is critical to maximize resilience and conserve ecosystem services. This is particularly true in coastal areas where human populations and landscape conversion are increasing, and where intensifying storms and sea-level rise pose unprecedented threats to coastal ecosystems. We applied collaborative decision analysis with a diverse team of stakeholders who preserve, manage, or restore tidal marshes across the San Francisco Bay estuary, California, USA, as a case study. Specifically, we followed a structured decision-making approach and, using expert judgment, developed alternative management strategies to increase the capacity and adaptability to manage tidal marsh resilience while considering uncertainties through 2050. Because sea-level rise projections are relatively confident to 2050, we focused on uncertainties regarding intensity and frequency of storms and funding. Elicitation methods allowed us to make predictions in the absence of fully compatible models and to assess short- and long-term trade-offs. Specifically we addressed two questions. (1) Can collaborative decision analysis lead to consensus among a diverse set of decision makers responsible for environmental stewardship and faced with uncertainties about climate change, funding, and stakeholder values? (2) What is an optimal strategy for the conservation of tidal marshes, and what strategy is robust to the aforementioned uncertainties? We found that when taking this approach, consensus was reached among the stakeholders about the best management strategies to maintain tidal marsh integrity. A Bayesian decision network revealed that a

  6. Collaborative decision-analytic framework to maximize resilience of tidal marshes to climate change

    Science.gov (United States)

    Thorne, Karen M.; Mattsson, Brady J.; Takekawa, John Y.; Cummings, Jonathan; Crouse, Debby; Block, Giselle; Bloom, Valary; Gerhart, Matt; Goldbeck, Steve; Huning, Beth; Sloop, Christina; Stewart, Mendel; Taylor, Karen; Valoppi, Laura

    2015-01-01

    Decision makers that are responsible for stewardship of natural resources face many challenges, which are complicated by uncertainty about impacts from climate change, expanding human development, and intensifying land uses. A systematic process for evaluating the social and ecological risks, trade-offs, and cobenefits associated with future changes is critical to maximize resilience and conserve ecosystem services. This is particularly true in coastal areas where human populations and landscape conversion are increasing, and where intensifying storms and sea-level rise pose unprecedented threats to coastal ecosystems. We applied collaborative decision analysis with a diverse team of stakeholders who preserve, manage, or restore tidal marshes across the San Francisco Bay estuary, California, USA, as a case study. Specifically, we followed a structured decision-making approach and, using expert judgment, developed alternative management strategies to increase the capacity and adaptability to manage tidal marsh resilience while considering uncertainties through 2050. Because sea-level rise projections are relatively confident to 2050, we focused on uncertainties regarding intensity and frequency of storms and funding. Elicitation methods allowed us to make predictions in the absence of fully compatible models and to assess short- and long-term trade-offs. Specifically we addressed two questions. (1) Can collaborative decision analysis lead to consensus among a diverse set of decision makers responsible for environmental stewardship and faced with uncertainties about climate change, funding, and stakeholder values? (2) What is an optimal strategy for the conservation of tidal marshes, and what strategy is robust to the aforementioned uncertainties? We found that when taking this approach, consensus was reached among the stakeholders about the best management strategies to maintain tidal marsh integrity. A Bayesian decision network revealed that a strategy

  7. An Intuitive Dashboard for Bayesian Network Inference

    Science.gov (United States)

    Reddy, Vikas; Charisse Farr, Anna; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K. D. V.

    2014-03-01

    Current Bayesian network software packages provide a good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing and at times it could be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user-interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using QT and SMILE libraries in C++.

  8. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
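
    For a finite set of alternative models the bookkeeping is simple: posterior model probabilities follow from prior weights and marginal likelihoods by Bayes' rule, exactly as for any discrete parameter. The prior weights and log marginal likelihoods below are made-up numbers used only to show the computation.

```python
# Posterior model probabilities over a finite model set (invented numbers).
import numpy as np

prior = np.array([0.5, 0.3, 0.2])                 # P(M_k)
log_marginal = np.array([-102.4, -101.1, -105.8]) # log p(data | M_k)

log_post = np.log(prior) + log_marginal
post = np.exp(log_post - np.logaddexp.reduce(log_post))  # normalise in log space
for k, p in enumerate(post, start=1):
    print(f"P(M{k} | data) = {p:.3f}")
```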

  9. An Intuitive Dashboard for Bayesian Network Inference

    International Nuclear Information System (INIS)

    Current Bayesian network software packages provide a good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing and at times it could be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user-interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using QT and SMILE libraries in C++

  10. Bayesian Control for Concentrating Mixed Nuclear Waste

    OpenAIRE

    Welch, Robert L.; Smith, Clayton

    2013-01-01

    A control algorithm for batch processing of mixed waste is proposed based on conditional Gaussian Bayesian networks. The network is compiled during batch staging for real-time response to sensor input.

  11. Learning Bayesian networks for discrete data

    KAUST Repository

    Liang, Faming

    2009-02-01

    Bayesian networks have received much attention in the recent literature. In this article, we propose an approach to learn Bayesian networks using the stochastic approximation Monte Carlo (SAMC) algorithm. Our approach has two nice features. Firstly, it possesses the self-adjusting mechanism and thus essentially avoids the local-trap problem suffered by conventional MCMC simulation-based approaches in learning Bayesian networks. Secondly, it falls into the class of dynamic importance sampling algorithms; the network features can be inferred by dynamically weighted averaging the samples generated in the learning process, and the resulting estimates can have much lower variation than the single model-based estimates. The numerical results indicate that our approach can mix much faster over the space of Bayesian networks than the conventional MCMC simulation-based approaches. © 2008 Elsevier B.V. All rights reserved.

  12. Expectations from ethics

    International Nuclear Information System (INIS)

    Prof. Patricia Fleming centred her presentation on ethical expectations in regulating safety for future generations. The challenge is to find a just solution, one that provides for a defensible approach to inter-generational equity. The question on equity is about whether we are permitted to treat generations differently and to still meet the demands of justice. And the question must be asked regarding these differences: 'in what ways do they make a moral difference?' She asked what exactly is meant by the ethical principle 'Radioactive waste shall be managed in such a way that predicted impacts on the health of future generations will not be greater than relevant levels of impact that are acceptable today'. Some countries have proposed different standards for different time periods, either implicitly or explicitly. In doing so, have they preserved our standards of justice or have they abandoned them? Prof. Fleming identified six points to provide some moral maps which might be used to negotiate our way to a just solution to the disposal of nuclear waste. (author)

  13. The threshold EM algorithm for parameter learning in bayesian network with incomplete data

    CERN Document Server

    Lamine, Fradj Ben; Mahjoub, Mohamed Ali

    2012-01-01

    Bayesian networks (BN) are used in a wide range of applications, but they have one issue concerning parameter learning. In real applications, training data are often incomplete or some nodes are hidden. To deal with this problem, many parameter-learning algorithms have been suggested, such as EM, Gibbs sampling and RBE. In order to limit the search space and escape from local maxima produced by executing the EM algorithm, this paper presents a parameter-learning algorithm that is a fusion of the EM and RBE algorithms. This algorithm incorporates the range of a parameter into the EM algorithm. This range is calculated by the first step of the RBE algorithm, allowing a regularization of each parameter in the Bayesian network after the maximization step of the EM algorithm. The threshold EM algorithm is applied to brain tumor diagnosis and shows some advantages and disadvantages over the EM algorithm.
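
    A rough stand-in for the idea of constraining EM with parameter ranges: ordinary EM for a two-component Gaussian mixture, with every parameter clipped to a pre-computed admissible interval after each M-step. The data and intervals are invented, and the clipping is a simplification of the RBE-derived bounds described above, not the authors' algorithm.

```python
# EM for a shared-variance two-component 1-D Gaussian mixture, with each
# parameter clipped to an externally supplied admissible range after the M-step.
import numpy as np

rng = np.random.default_rng(8)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

# Admissible ranges (assumed known): (low, high) for each parameter.
bounds = {"pi": (0.2, 0.8), "mu0": (-4, 0), "mu1": (0, 5), "sd": (0.5, 3.0)}
pi, mu0, mu1, sd = 0.5, -1.0, 1.0, 2.0

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for it in range(50):
    # E-step: responsibilities of component 1.
    p1 = pi * normal_pdf(data, mu1, sd)
    p0 = (1 - pi) * normal_pdf(data, mu0, sd)
    r = p1 / (p0 + p1)
    # M-step followed by clipping to the admissible ranges.
    pi  = float(np.clip(r.mean(), *bounds["pi"]))
    mu0 = float(np.clip(np.sum((1 - r) * data) / np.sum(1 - r), *bounds["mu0"]))
    mu1 = float(np.clip(np.sum(r * data) / np.sum(r), *bounds["mu1"]))
    var = (np.sum((1 - r) * (data - mu0) ** 2) + np.sum(r * (data - mu1) ** 2)) / len(data)
    sd  = float(np.clip(np.sqrt(var), *bounds["sd"]))

print(f"pi={pi:.2f}, mu0={mu0:.2f}, mu1={mu1:.2f}, sd={sd:.2f}")
```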

  14. Bipartite Bell Inequality and Maximal Violation

    International Nuclear Information System (INIS)

    We present new Bell inequalities for arbitrary dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. The Bell inequality is capable of detecting quantum entanglement of both pure and mixed quantum states more effectively. (general)

  15. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Multimedia

    HR Division

    2000-01-01

    Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maximal: The revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...

  16. Corporate governance structure and shareholder wealth maximization

    Directory of Open Access Journals (Sweden)

    Kwadwo Boateng Prempeh

    2015-04-01

    Over the past two decades the ideology of shareholder value has become entrenched as a principle of corporate governance among companies. A well-established corporate governance system suggests effective control and accounting systems, stringent monitoring, an effective regulatory mechanism and efficient utilisation of firms' resources, resulting in improved performance. The object of the research presented in this paper is to provide empirical evidence on the effects of corporate governance on shareholder value maximization of the listed companies in Ghana. Data from ten companies listed on the Ghana Stock Exchange covering the period 2003-2007 were used, and the analysis was done within the panel data framework. The dependent variables, dividend per share and dividend yield, are used as measures of shareholder wealth maximization, and the relation between corporate governance and shareholder wealth maximization is investigated. The regression results show that both board size and board independence have a statistically significant relationship with shareholder wealth maximization.

  17. Bayesian Variable Selection in Spatial Autoregressive Models

    OpenAIRE

    Jesus Crespo Cuaresma; Philipp Piribauer

    2015-01-01

    This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. We present two alternative approaches which can be implemented using Gibbs sampling methods in a straightforward way and allow us to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. In a simulation study we show that the variable selection approaches tend to outperform existing Bayesian model averaging tech...

  18. Bayesian Analysis of Multivariate Probit Models

    OpenAIRE

    Siddhartha Chib; Edward Greenberg

    1996-01-01

    This paper provides a unified simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods, and maximum likelihood estimates are obtained by a Markov chain Monte Carlo version of the E-M algorithm. Computation of Bayes factors from the simulation output is also considered. The methods are applied to a bivariate data set, to a 534-subject, four-year longitudinal dat...

  19. Kernel Bayesian Inference with Posterior Regularization

    OpenAIRE

    Song, Yang; Jun ZHU; Ren, Yong

    2016-01-01

    We propose a vector-valued regression problem whose solution is equivalent to the reproducing kernel Hilbert space (RKHS) embedding of the Bayesian posterior distribution. This equivalence provides a new understanding of kernel Bayesian inference. Moreover, the optimization problem induces a new regularization for the posterior embedding estimator, which is faster and has comparable performance to the squared regularization in kernel Bayes' rule. This regularization coincides with a former th...

  20. Fitness inheritance in the Bayesian optimization algorithm

    OpenAIRE

    Pelikan, Martin; Sastry, Kumara

    2004-01-01

    This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions...
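
    A toy of fitness inheritance on its own (the Bayesian-network model building of BOA is replaced here by plain uniform crossover): only a fraction of offspring receive a true evaluation, the rest inherit the average fitness of their two parents, and the number of true evaluations spent is reported. The OneMax problem and all constants are arbitrary illustrative choices.

```python
# Fitness inheritance demo: most offspring inherit an estimated fitness
# (parents' average); only some are truly evaluated.
import random

random.seed(9)
N_BITS, POP, P_INHERIT, GENS = 40, 60, 0.7, 30
evaluations = 0

def onemax(ind):
    return sum(ind)

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
fit = []
for ind in pop:
    fit.append(onemax(ind)); evaluations += 1

for g in range(GENS):
    new_pop, new_fit = [], []
    for _ in range(POP):
        i, j = random.sample(range(POP), 2)
        child = [random.choice(pair) for pair in zip(pop[i], pop[j])]
        if random.random() < P_INHERIT:
            f = (fit[i] + fit[j]) / 2            # inherited (cheap) estimate
        else:
            f = onemax(child); evaluations += 1  # true (expensive) evaluation
        new_pop.append(child); new_fit.append(f)
    # Truncation selection on the combined population.
    combined = sorted(zip(pop + new_pop, fit + new_fit), key=lambda t: t[1], reverse=True)
    pop = [list(t[0]) for t in combined[:POP]]
    fit = [t[1] for t in combined[:POP]]

print("best (possibly estimated) fitness:", max(fit), "of", N_BITS,
      "- true evaluations used:", evaluations)
```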