WorldWideScience

Sample records for bayesian expectation maximization

  1. Teams ranking of Malaysia Super League using Bayesian expectation maximization for Generalized Bradley Terry Model

    Science.gov (United States)

    Nor, Shahdiba Binti Md; Mahmud, Zamalia

    2016-10-01

    The analysis of sports data has always aroused great interest among statisticians, and sports data have been investigated from different perspectives, often aiming at forecasting results. This study focuses on the 12 teams that played in the Malaysian Super League (MSL) in season 2015. The paper uses Bayesian Expectation Maximization for the Generalized Bradley Terry Model to estimate the rankings of all the football teams. With the Generalized Bradley-Terry model it is possible to find the maximum likelihood (ML) estimate of the skill ratings λ using a simple iterative procedure. In order to maximize the likelihood function, we use Bayesian inference to obtain a posterior distribution that can be computed quickly. Each team's ability was estimated from the previous year's game results by calculating the probability of winning based on the final scores for each team. It was found that including tie scores does make a difference in how the model estimates a football team's ability to win the next match. However, a team with better results in the previous year still has a better chance of scoring in the next game.
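
    For context, the simple iterative procedure referred to above is, in the non-Bayesian case, Hunter's minorization-maximization (MM) update for Bradley-Terry skill ratings. A minimal sketch ignoring ties; the wins matrix and toy data are illustrative, not the MSL data:

        import numpy as np

        def bradley_terry_mm(wins, n_iter=100, tol=1e-8):
            """Estimate Bradley-Terry skill ratings with the classic MM update.
            wins[i, j] = number of times team i beat team j. A generic sketch,
            not the authors' tie-aware Bayesian model."""
            n = wins.shape[0]
            lam = np.ones(n)
            for _ in range(n_iter):
                lam_new = np.empty(n)
                for i in range(n):
                    w_i = wins[i].sum()                    # total wins of team i
                    denom = 0.0
                    for j in range(n):
                        if j != i:
                            n_ij = wins[i, j] + wins[j, i] # games between i and j
                            denom += n_ij / (lam[i] + lam[j])
                    lam_new[i] = w_i / denom
                lam_new /= lam_new.sum()                   # fix the overall scale
                if np.max(np.abs(lam_new - lam)) < tol:
                    return lam_new
                lam = lam_new
            return lam

        # Toy example: 3 teams, team 0 strongest.
        wins = np.array([[0, 4, 5],
                         [1, 0, 3],
                         [0, 2, 0]])
        print(bradley_terry_mm(wins))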

  2. Deterministic quantum annealing expectation-maximization algorithm

    Science.gov (United States)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
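
    DQAEM itself uses quantum fluctuations, but the flavour of annealing an EM algorithm can be illustrated with the classical deterministic annealing EM of Ueda and Nakano, where an inverse temperature beta flattens the E-step posterior early on and is gradually raised to 1. A minimal 1-D Gaussian mixture sketch; this is the classical variant, not the authors' algorithm, and all data are illustrative:

        import numpy as np

        def daem_gmm_1d(x, k=2, betas=(0.2, 0.5, 1.0), iters_per_beta=50, seed=0):
            """Deterministic annealing EM for a 1-D Gaussian mixture. Tempering
            the E-step with beta < 1 reduces sensitivity to initialization."""
            rng = np.random.default_rng(seed)
            mu = rng.choice(x, k)
            var = np.full(k, x.var())
            pi = np.full(k, 1.0 / k)
            for beta in betas:                      # slowly cool towards beta = 1
                for _ in range(iters_per_beta):
                    # E-step: tempered responsibilities proportional to (pi_k N_k)^beta
                    log_p = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
                             - 0.5 * (x[:, None] - mu) ** 2 / var)
                    w = np.exp(beta * (log_p - log_p.max(axis=1, keepdims=True)))
                    r = w / w.sum(axis=1, keepdims=True)
                    # M-step: standard weighted updates
                    nk = r.sum(axis=0)
                    mu = (r * x[:, None]).sum(axis=0) / nk
                    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
                    pi = nk / len(x)
            return mu, var, pi

        x = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(3, 1, 300)])
        print(daem_gmm_1d(x))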

  3. Bilateral Cochlear Implants: Maximizing Expected Outcomes.

    Science.gov (United States)

    Wallis, Kate E; Blum, Nathan J; Waryasz, Stephanie A; Augustyn, Marilyn

    Sonia is a 4-year 1-month-old girl with Waardenburg syndrome and bilateral sensorineural hearing loss who received bilateral cochlear implants at 2 years 7 months of age. She is referred to Developmental-Behavioral Pediatrics by her speech/language pathologist because of concerns that her language skills are not progressing as expected after the cochlear implant. At the time of the implant, she communicated using approximately 20 signs and 1 spoken word (mama). At the time of the evaluation (18 months after the implant) she had approximately 70 spoken words (English and Spanish) and innumerable signs that she used to communicate. She could follow 1-step directions in English but had more difficulty with 2-step directions. Sonia was born in Puerto Rico at 40 weeks gestation after an uncomplicated pregnancy. She failed her newborn hearing test and was given hearing aids that did not seem to help. At age 2 years, Sonia, her mother, and younger sister moved to the United States, where she was diagnosed with bilateral severe-to-profound hearing loss. Genetic testing led to a diagnosis of Waardenburg syndrome (a group of genetic conditions that can cause hearing loss and changes in coloring [pigmentation] of the hair, skin, and eyes). She received bilateral cochlear implants 6 months later. Sonia's mother is primarily Spanish-speaking and mostly communicates with her in Spanish or with gestures but has recently begun to learn American Sign Language (ASL). In a preschool program at a specialized school for the deaf, Sonia is learning both English and ASL. Sonia seems to prefer to use ASL to communicate. Sonia receives speech and language therapy (SLT) 3 times per week (90 minutes total) individually in school and once per week within a group. She is also receiving outpatient SLT once per week. Therapy sessions are completed in English, with the aid of an ASL interpreter. Sonia's language scores remain low, with her receptive skills in the first percentile, and her

  4. Applications of expectation maximization algorithm for coherent optical communication

    DEFF Research Database (Denmark)

    Carvalho, L.; Oliveira, J.; Zibar, Darko

    2014-01-01

    In this invited paper, we present powerful statistical signal processing methods used by the machine learning community and link them to current problems in optical communication. In particular, we will look into iterative maximum likelihood parameter estimation based on the expectation maximization algorithm...

  5. Expectation-Maximization Binary Clustering for Behavioural Annotation.

    Science.gov (United States)

    Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic

    2016-01-01

    The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need for supervision, (ii) reduce computational costs, (iii) minimize the need for prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis.
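
    For context, the maximum likelihood machinery underlying EMC is the standard EM algorithm for a Gaussian mixture, whose E and M steps both have closed forms. A minimal sketch with diagonal covariances; EMbC itself adds binary, semantically bounded clusters and ships as an R package, so this is only the generic building block:

        import numpy as np

        def em_gmm(X, k, n_iter=200, seed=0):
            """Plain EM for a Gaussian mixture with diagonal covariances."""
            rng = np.random.default_rng(seed)
            n, d = X.shape
            mu = X[rng.choice(n, k, replace=False)]
            var = np.tile(X.var(axis=0), (k, 1))
            pi = np.full(k, 1.0 / k)
            for _ in range(n_iter):
                # E-step: responsibilities r[i, j] = P(cluster j | point i)
                log_p = (np.log(pi)
                         - 0.5 * np.sum(np.log(2 * np.pi * var), axis=1)
                         - 0.5 * np.sum((X[:, None, :] - mu) ** 2 / var, axis=2))
                log_p -= log_p.max(axis=1, keepdims=True)
                r = np.exp(log_p)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: closed-form weighted updates
                nk = r.sum(axis=0)
                mu = (r.T @ X) / nk[:, None]
                var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2
                pi = nk / n
            return mu, var, pi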

  6. Expectation-Maximization Binary Clustering for Behavioural Annotation.

    Directory of Open Access Journals (Sweden)

    Joan Garriga

    The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need for supervision, (ii) reduce computational costs, (iii) minimize the need for prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis.

  7. Blood detection in wireless capsule endoscopy using expectation maximization clustering

    Science.gov (United States)

    Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.

    2006-03-01

    Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy could be used to visualize the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. During an approximately 8-hour course, over 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has been conducted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called the Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination. It was reported that the sensitivity and the specificity of SBI were about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves a sensitivity of 92% and a specificity of 98%.

  8. Expectation-Maximization Tensor Factorization for Practical Location Privacy Attacks

    Directory of Open Access Journals (Sweden)

    Murakami Takao

    2017-10-01

    Location privacy attacks based on a Markov chain model have been widely studied to de-anonymize or de-obfuscate mobility traces. An adversary can perform various kinds of location privacy attacks using a personalized transition matrix, which is trained for each target user. However, the amount of training data available to the adversary can be very small, since many users do not disclose much location information in their daily lives. In addition, many locations can be missing from the training traces, since many users do not disclose their locations continuously but rather sporadically. In this paper, we show that the Markov chain model can be a threat even in this realistic situation. Specifically, we focus on a training phase (i.e. mobility profile building phase) and propose Expectation-Maximization Tensor Factorization (EMTF), which alternates between computing a distribution of missing locations (E-step) and computing personalized transition matrices via tensor factorization (M-step). Since the time complexity of EMTF is exponential in the number of missing locations, we propose two approximate learning methods, one of which uses the Viterbi algorithm while the other uses the Forward Filtering Backward Sampling (FFBS) algorithm. We apply our learning methods to a de-anonymization attack and a localization attack, and evaluate them using three real datasets. The results show that our learning methods significantly outperform a random guess, even when there is only one training trace composed of 10 locations per user, and each location is missing with probability 80% (i.e. even when users hardly disclose two temporally-continuous locations).

  9. Expectation-Maximization-Maximization: A Feasible MLE Algorithm for the Three-Parameter Logistic Model Based on a Mixture Modeling Reformulation.

    Science.gov (United States)

    Zheng, Chanjin; Meng, Xiangbin; Guo, Shaoyang; Liu, Zhengguang

    2017-01-01

    Stable maximum likelihood estimation (MLE) of item parameters in the 3PLM with a modest sample size remains a challenge. The current study presents a mixture-modeling reformulation of the 3PLM, based on which a feasible Expectation-Maximization-Maximization (EMM) MLE algorithm is proposed. The simulation study indicates that EMM is comparable to the Bayesian EM in terms of bias and RMSE. EMM also produces smaller standard errors (SEs) than MMLE/EM. To further demonstrate feasibility, the method has also been applied to two real-world data sets. The point estimates from EMM are close to those from the commercial programs BILOG-MG and flexMIRT, but the SEs are smaller.
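
    For reference, the three-parameter logistic model gives the probability of a correct response from ability theta and item parameters (discrimination a, difficulty b, guessing c); the mixture reformulation treats a correct answer as coming either from guessing or from an ability-governed component. A minimal sketch of the response function, with illustrative parameter values:

        import numpy as np

        def p_correct_3pl(theta, a, b, c):
            """Three-parameter logistic (3PL) item response function:
            P(correct | theta) = c + (1 - c) / (1 + exp(-a (theta - b)))."""
            return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

        # Example: an item with moderate discrimination and a 20% guessing floor.
        print(p_correct_3pl(theta=0.0, a=1.2, b=-0.5, c=0.2))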

  10. Expectation-Maximization-Maximization: A Feasible MLE Algorithm for the Three-Parameter Logistic Model Based on a Mixture Modeling Reformulation

    Directory of Open Access Journals (Sweden)

    Chanjin Zheng

    2018-01-01

    Stable maximum likelihood estimation (MLE) of item parameters in the 3PLM with a modest sample size remains a challenge. The current study presents a mixture-modeling reformulation of the 3PLM, based on which a feasible Expectation-Maximization-Maximization (EMM) MLE algorithm is proposed. The simulation study indicates that EMM is comparable to the Bayesian EM in terms of bias and RMSE. EMM also produces smaller standard errors (SEs) than MMLE/EM. To further demonstrate feasibility, the method has also been applied to two real-world data sets. The point estimates from EMM are close to those from the commercial programs BILOG-MG and flexMIRT, but the SEs are smaller.

  11. The future of life expectancy and life expectancy inequalities in England and Wales: Bayesian spatiotemporal forecasting.

    Science.gov (United States)

    Bennett, James E; Li, Guangquan; Foreman, Kyle; Best, Nicky; Kontis, Vasilis; Pearson, Clare; Hambly, Peter; Ezzati, Majid

    2015-07-11

    To plan for pensions and health and social services, future mortality and life expectancy need to be forecast. Consistent forecasts for all subnational units within a country are very rare. Our aim was to forecast mortality and life expectancy for England and Wales' districts. We developed Bayesian spatiotemporal models for forecasting of age-specific mortality and life expectancy at a local, small-area level. The models included components that accounted for mortality in relation to age, birth cohort, time, and space. We used geocoded mortality and population data between 1981 and 2012 from the Office for National Statistics together with the model with the smallest error to forecast age-specific death rates and life expectancy to 2030 for 375 of England and Wales' 376 districts. We measured model performance by withholding recent data and comparing forecasts with this withheld data. Life expectancy at birth in England and Wales was 79·5 years (95% credible interval 79·5-79·6) for men and 83·3 years (83·3-83·4) for women in 2012. District life expectancies ranged between 75·2 years (74·9-75·6) and 83·4 years (82·1-84·8) for men and between 80·2 years (79·8-80·5) and 87·3 years (86·0-88·8) for women. Between 1981 and 2012, life expectancy increased by 8·2 years for men and 6·0 years for women, closing the female-male gap from 6·0 to 3·8 years. National life expectancy in 2030 is expected to reach 85·7 (84·2-87·4) years for men and 87·6 (86·7-88·9) years for women, further reducing the female advantage to 1·9 years. Life expectancy will reach or surpass 81·4 years for men and reach or surpass 84·5 years for women in every district by 2030. Longevity inequality across districts, measured as the difference between the 1st and 99th percentiles of district life expectancies, has risen since 1981, and is forecast to rise steadily to 8·3 years (6·8-9·7) for men and 8·3 years (7·1-9·4) for women by 2030. Present forecasts underestimate
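
    As background to the quantity being forecast, period life expectancy follows from age-specific death rates via a standard abridged life table. A simplified sketch under a constant-hazard-within-interval assumption; the age grid and rate schedule are illustrative, not the England and Wales data:

        import numpy as np

        def life_expectancy_at_birth(ages, mx):
            """Period life expectancy from age-specific death rates m(x).
            `ages` gives interval start points; `mx` the rates per interval."""
            widths = np.diff(np.append(ages, ages[-1] + 5))  # interval lengths
            qx = 1.0 - np.exp(-widths * mx)                  # death probability per interval
            qx[-1] = 1.0                                     # closed-out final interval
            lx = np.cumprod(np.append(1.0, 1.0 - qx[:-1]))   # survivors at interval start
            dx = lx * qx                                     # deaths within interval
            # Person-years: full width for survivors, half width for deaths
            Lx = widths * (lx - dx) + dx * (widths / 2.0)
            Lx[-1] = lx[-1] / mx[-1]                         # open-ended last interval
            return Lx.sum()

        ages = np.arange(0, 90, 5)
        mx = 0.0002 * np.exp(0.09 * ages)                    # toy Gompertz-style schedule
        print(life_expectancy_at_birth(ages, mx))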

  12. Nonlinear Impairment Compensation Using Expectation Maximization for PDM 16-QAM Systems

    DEFF Research Database (Denmark)

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo

    2012-01-01

    We show experimentally that by using a nonlinear signal-processing algorithm, expectation maximization, the nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth....

  13. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  14. Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors.

    Science.gov (United States)

    Boluki, Shahin; Esfahani, Mohammad Shahrokh; Qian, Xiaoning; Dougherty, Edward R

    2017-12-28

    Phenotypic classification is problematic because small samples are ubiquitous; and, for these, use of prior knowledge is critical. If knowledge concerning the feature-label distribution - for instance, genetic pathways - is available, then it can be used in learning. Optimal Bayesian classification provides optimal classification under model uncertainty. It differs from classical Bayesian methods in which a classification model is assumed and prior distributions are placed on model parameters. With optimal Bayesian classification, uncertainty is treated directly on the feature-label distribution, which assures full utilization of prior knowledge and is guaranteed to outperform classical methods. The salient problem confronting optimal Bayesian classification is prior construction. In this paper, we propose a new prior construction methodology based on a general framework of constraints in the form of conditional probability statements. We call this prior the maximal knowledge-driven information prior (MKDIP). The new constraint framework is more flexible than our previous methods as it naturally handles the potential inconsistency in archived regulatory relationships and conditioning can be augmented by other knowledge, such as population statistics. We also extend the application of prior construction to a multinomial mixture model when labels are unknown, which often occurs in practice. The performance of the proposed methods is examined on two important pathway families, the mammalian cell-cycle and a set of p53-related pathways, and also on a publicly available gene expression dataset of non-small cell lung cancer when combined with the existing prior knowledge on relevant signaling pathways. The new proposed general prior construction framework extends the prior construction methodology to a more flexible framework that results in better inference when proper prior knowledge exists. Moreover, the extension of optimal Bayesian classification to multinomial

  15. Parameter estimation via conditional expectation: a Bayesian inversion

    KAUST Repository

    Matthies, Hermann G.

    2016-08-11

    When a mathematical or computational model is used to analyse some system, it is usual that some parameters, functions, or fields in the model are not known, and hence uncertain. These parametric quantities are then identified by actual observations of the response of the real system. In a probabilistic setting, Bayes’s theory is the proper mathematical background for this identification process. The possibility of being able to compute a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and briefly discuss various numerical approximations.

  16. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    Directory of Open Access Journals (Sweden)

    Kanokmon Rujirakul

    2014-01-01

    Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in the reduction of the stages’ complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times over PCA and Parallel PCA, respectively.

  17. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    Science.gov (United States)

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in the reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times over PCA and Parallel PCA, respectively.
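
    The EM route to PCA avoids forming the covariance matrix and its eigendecomposition altogether. A minimal serial sketch in the style of Roweis' EM-PCA; this is the generic algorithm, not the parallel architecture described above, and the demo data are illustrative:

        import numpy as np

        def em_pca(X, k, n_iter=100, seed=0):
            """EM algorithm for PCA (Roweis, 1998): alternates a linear E-step
            and M-step instead of eigendecomposing the covariance matrix.
            X is (d, n) with centered samples as columns."""
            rng = np.random.default_rng(seed)
            d, n = X.shape
            W = rng.standard_normal((d, k))               # initial subspace guess
            for _ in range(n_iter):
                Y = np.linalg.solve(W.T @ W, W.T @ X)     # E-step: project data
                W = X @ Y.T @ np.linalg.inv(Y @ Y.T)      # M-step: re-fit subspace
            Q, _ = np.linalg.qr(W)                        # orthonormal directions
            return Q

        X = np.random.randn(50, 500)
        X -= X.mean(axis=1, keepdims=True)
        print(em_pca(X, k=5).shape)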

  18. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    Energy Technology Data Exchange (ETDEWEB)

    Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.

  19. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    OpenAIRE

    Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong

    2014-01-01

    To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, which jointly restores multiframe adaptive optics images based on expectation-maximization theory. Firstly, we make a mathematical model for the degraded multiframe adaptive optics images. The function model is deduced for the points that spread with time based on phase error. The AO images are denoised using the image power spectral density and support constraint...

  20. Bayesian projection of life expectancy accounting for the HIV/AIDS epidemic

    Directory of Open Access Journals (Sweden)

    Jessica Godwin

    2017-11-01

    Background: While probabilistic projection methods for projecting life expectancy exist, few account for covariates related to life expectancy. Generalized HIV/AIDS epidemics have a large, immediate negative impact on the life expectancy in a country, but this impact can be mitigated by widespread use of antiretroviral therapy (ART). Thus, projection methods for countries with generalized HIV/AIDS epidemics could be improved by accounting for HIV prevalence, the future course of the epidemic, and ART coverage. Methods: We extend the current Bayesian probabilistic life expectancy projection methods of Raftery et al. (2013) to account for HIV prevalence and adult ART coverage in countries with generalized HIV/AIDS epidemics. Results: We evaluate our method using out-of-sample validation. We find that the proposed method performs better than the method that does not account for HIV prevalence or ART coverage for projections of life expectancy in countries with a generalized epidemic, while projections for countries without an epidemic remain essentially unchanged. Conclusions: In general, our projections show rapid recovery to pre-epidemic life expectancy levels in the presence of widespread ART coverage. After the initial life expectancy recovery, we project a steady rise in life expectancy until the end of the century. Contribution: We develop a simple Bayesian hierarchical model for long-term projections of life expectancy while accounting for HIV/AIDS prevalence and coverage of ART. The method produces well-calibrated projections for countries with generalized HIV/AIDS epidemics up to 2100 while having limited data demands.

  1. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

    A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for Positron Emission Tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high resolution (256 x 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward and back projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. The EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. The projected cost of the system is estimated to be small in comparison to the cost of current PET scanners.
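
    The per-iteration work being mapped to hardware here is the standard MLEM update of Shepp and Vardi: a forward projection followed by a back projection of count ratios. A minimal dense-matrix sketch with toy data; real scanners use sparse, on-the-fly projectors:

        import numpy as np

        def mlem(A, y, n_iter=20):
            """Maximum-likelihood EM for emission tomography:
            x <- x / (A^T 1) * A^T (y / (A x)).
            A is the (detectors x voxels) system matrix, y the measured counts."""
            x = np.ones(A.shape[1])                  # flat initial image
            sens = A.sum(axis=0)                     # sensitivity image A^T 1
            for _ in range(n_iter):
                proj = A @ x                         # forward projection
                ratio = y / np.maximum(proj, 1e-12)  # compare with measured counts
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # back project, rescale
            return x

        A = np.random.rand(100, 25)                  # toy system matrix
        y = np.random.poisson(A @ np.random.rand(25) * 40)
        print(mlem(A, y).round(2))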

  2. Data Assimilation in Air Contaminant Dispersion Using a Particle Filter and Expectation-Maximization Algorithm

    Directory of Open Access Journals (Sweden)

    Rongxiao Wang

    2017-09-01

    The accurate prediction of air contaminant dispersion is essential to air quality monitoring and the emergency management of contaminant gas leakage incidents in chemical industry parks. Conventional atmospheric dispersion models can seldom give accurate predictions due to inaccurate input parameters. In order to improve the prediction accuracy of dispersion models, two data assimilation methods (i.e., the typical particle filter, and the combination of a particle filter and the expectation-maximization algorithm) are proposed to assimilate the virtual Unmanned Aerial Vehicle (UAV) observations with measurement error into the atmospheric dispersion model. Two emission cases with different dimensions of state parameters are considered. To test the performances of the proposed methods, two numerical experiments corresponding to the two emission cases are designed and implemented. The results show that the particle filter can effectively estimate the model parameters and improve the accuracy of model predictions when the dimension of state parameters is relatively low. In contrast, when the dimension of state parameters becomes higher, the method of particle filter combining the expectation-maximization algorithm performs better in terms of the parameter estimation accuracy. Therefore, the proposed data assimilation methods are able to effectively support air quality monitoring and emergency management in chemical industry parks.

  3. Cosmic microwave background power spectrum estimation and map reconstruction with the expectation-maximization algorithm

    Science.gov (United States)

    Martínez-González, E.; Diego, J. M.; Vielva, P.; Silk, J.

    2003-11-01

    We apply the iterative expectation-maximization algorithm (EM) to estimate the power spectrum of the cosmic microwave background (CMB) from multifrequency microwave maps. In addition, we are also able to provide a reconstruction of the CMB map. By assuming that the combined emission of the foregrounds plus the instrumental noise is Gaussian distributed in Fourier space, we have simplified the EM procedure, finding an analytical expression for the maximization step. By using the simplified expression, the CPU time can be greatly reduced. We test the stability of our power spectrum estimator with realistic simulations of Planck data, including point sources and allowing for spatial variation of the frequency dependence of the Galactic emissions. Without prior information about any of the components, our new estimator can recover the CMB power spectrum up to scales l~ 1500 with less than 10 per cent error. This result is significantly improved if the brightest point sources are removed before applying our estimator. In this way, the CMB power spectrum can be recovered up to l~ 1700 with 10 per cent error and up to l~ 2100 with 50 per cent error. This result is very close to the one that would be obtained in the ideal case of only CMB plus white noise, for which all our assumptions are satisfied. Moreover, the EM algorithm also provides a straightforward mechanism for reconstructing the CMB map. The recovered cosmological signal shows a high degree of correlation (r= 0.98) with the input map and low residuals.

  4. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    International Nuclear Information System (INIS)

    Fujimoto, Kazufumi; Nagai, Hideo; Runggaldier, Wolfgang J.

    2013-01-01

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  5. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for the determination of the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way for a comprehensive method for detecting fraud calls in future work.
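
    The order-selection step described above compares fitted mixtures of different sizes by penalized likelihood. A minimal sketch using AIC with scikit-learn's GaussianMixture as a stand-in for the authors' kernel-initialized EM; the demo data are illustrative:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def select_gmm_order(X, max_k=8, seed=0):
            """Pick the number of Gaussian components by minimizing AIC,
            mirroring the order-selection step described above."""
            best_k, best_aic, best_model = None, np.inf, None
            for k in range(1, max_k + 1):
                gm = GaussianMixture(n_components=k, random_state=seed).fit(X)
                aic = gm.aic(X)   # AIC = 2 * n_params - 2 * log-likelihood
                if aic < best_aic:
                    best_k, best_aic, best_model = k, aic, gm
            return best_k, best_model

        X = np.vstack([np.random.randn(200, 2) - 3, np.random.randn(200, 2) + 3])
        print(select_gmm_order(X)[0])   # likely 2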

  6. An expectation/maximization nuclear vector replacement algorithm for automated NMR resonance assignments

    International Nuclear Information System (INIS)

    Langmead, Christopher James; Donald, Bruce Randall

    2004-01-01

    We report an automated procedure for high-throughput NMR resonance assignment for a protein of known structure, or of an homologous structure. Our algorithm performs Nuclear Vector Replacement (NVR) by Expectation/Maximization (EM) to compute assignments. NVR correlates experimentally-measured NH residual dipolar couplings (RDCs) and chemical shifts to a given a priori whole-protein 3D structural model. The algorithm requires only uniform 15N-labelling of the protein, and processes unassigned HN-15N HSQC spectra, HN-15N RDCs, and sparse HN-HN NOEs (dNNs). NVR runs in minutes and efficiently assigns the (HN, 15N) backbone resonances as well as the sparse dNNs from the 3D 15N-NOESY spectrum, in O(n^3) time. The algorithm is demonstrated on NMR data from a 76-residue protein, human ubiquitin, matched to four structures, including one mutant (homolog), determined either by X-ray crystallography or by different NMR experiments (without RDCs). NVR achieves an average assignment accuracy of over 99%. We further demonstrate the feasibility of our algorithm for different and larger proteins, using different combinations of real and simulated NMR data for hen lysozyme (129 residues) and streptococcal protein G (56 residues), matched to a variety of 3D structural models. Abbreviations: NMR, nuclear magnetic resonance; NVR, nuclear vector replacement; RDC, residual dipolar coupling; 3D, three-dimensional; HSQC, heteronuclear single-quantum coherence; HN, amide proton; NOE, nuclear Overhauser effect; NOESY, nuclear Overhauser effect spectroscopy; dNN, nuclear Overhauser effect between two amide protons; MR, molecular replacement; SAR, structure activity relation; DOF, degrees of freedom; nt., nucleotides; SPG, Streptococcal protein G; SO(3), special orthogonal (rotation) group in 3D; EM, Expectation/Maximization; SVD, singular value decomposition

  7. AREM: Aligning Short Reads from ChIP-Sequencing by Expectation Maximization

    Science.gov (United States)

    Newkirk, Daniel; Biesinger, Jacob; Chon, Alvin; Yokomori, Kyoko; Xie, Xiaohui

    High-throughput sequencing coupled to chromatin immunoprecipitation (ChIP-Seq) is widely used in characterizing genome-wide binding patterns of transcription factors, cofactors, chromatin modifiers, and other DNA binding proteins. A key step in ChIP-Seq data analysis is to map short reads from high-throughput sequencing to a reference genome and identify peak regions enriched with short reads. Although several methods have been proposed for ChIP-Seq analysis, most existing methods only consider reads that can be uniquely placed in the reference genome, and therefore have low power for detecting peaks located within repeat sequences. Here we introduce a probabilistic approach for ChIP-Seq data analysis which utilizes all reads, providing a truly genome-wide view of binding patterns. Reads are modeled using a mixture model corresponding to K enriched regions and a null genomic background. We use maximum likelihood to estimate the locations of the enriched regions, and implement an expectation-maximization (E-M) algorithm, called AREM (aligning reads by expectation maximization), to update the alignment probabilities of each read to different genomic locations. We apply the algorithm to identify genome-wide binding events of two proteins: Rad21, a component of cohesin and a key factor involved in chromatid cohesion, and Srebp-1, a transcription factor important for lipid/cholesterol homeostasis. Using AREM, we were able to identify 19,935 Rad21 peaks and 1,748 Srebp-1 peaks in the mouse genome with high confidence, including 1,517 (7.6%) Rad21 peaks and 227 (13%) Srebp-1 peaks that were missed using only uniquely mapped reads. The open source implementation of our algorithm is available at http://sourceforge.net/projects/arem
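
    The heart of the approach is the E-step that fractionally re-assigns each multiply mapped read among its candidate locations in proportion to current enrichment estimates. A toy sketch of that idea, much simplified and not the AREM implementation; all names and data are illustrative:

        import numpy as np

        def align_reads_em(candidates, n_regions, n_iter=50):
            """Toy EM for multi-mapping reads: region weights and read-to-region
            assignment probabilities are alternately re-estimated."""
            w = np.full(n_regions, 1.0 / n_regions)   # region enrichment weights
            for _ in range(n_iter):
                counts = np.zeros(n_regions)
                for cand in candidates:               # E-step: fractional assignment
                    p = w[cand] / w[cand].sum()
                    counts[cand] += p                 # accumulate expected counts
                w = counts / counts.sum()             # M-step: re-estimate weights
            return w

        # Three regions; reads 0-2 map uniquely, reads 3-4 map ambiguously.
        candidates = [np.array([0]), np.array([0]), np.array([1]),
                      np.array([0, 2]), np.array([1, 2])]
        print(align_reads_em(candidates, n_regions=3))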

  8. An efficient forward-reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Vilanova, Pedro

    2016-01-07

    In this work, we present an extension of the forward-reverse representation introduced in 'Simulation of forward-reverse stochastic representations for conditional diffusions', a 2014 paper by Bayer and Schoenmakers, to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditional on their values in the extremes of given time-intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the Expectation-Maximization algorithm to the phase I output. By selecting a set of over-dispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

  9. An efficient forward–reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Bayer, Christian

    2016-02-20

    In this work, we present an extension of the forward–reverse representation introduced by Bayer and Schoenmakers (Annals of Applied Probability, 24(5):1994–2032, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, that is, SRNs conditional on their values in the extremes of given time intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the expectation-maximization algorithm to the phase I output. By selecting a set of overdispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

  10. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim

    2018-02-19

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized for a specified error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a single-loop Monte Carlo method that uses the Laplace approximation of the return value of the inner loop. The first demonstration example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
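
    For orientation, the classical double-loop Monte Carlo estimator that the paper accelerates computes the expected information gain E_y[log p(y|theta) - log p(y)], with the evidence p(y) estimated by an inner loop over fresh prior samples. A minimal sketch for a toy linear-Gaussian model; the model and all names are illustrative, and the Laplace-based importance sampling would replace the naive inner loop:

        import numpy as np

        def eig_double_loop(prior_sample, simulate, loglik, n_outer=500, n_inner=500, seed=0):
            """Classical double-loop Monte Carlo estimate of expected
            information gain; the inner loop is the costly part."""
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_outer):
                theta = prior_sample(rng)
                y = simulate(theta, rng)
                inner = np.array([loglik(y, prior_sample(rng)) for _ in range(n_inner)])
                log_evidence = np.logaddexp.reduce(inner) - np.log(n_inner)
                total += loglik(y, theta) - log_evidence
            return total / n_outer

        # Toy model: theta ~ N(0, 1), y | theta ~ N(theta, 1).
        prior_sample = lambda rng: rng.standard_normal()
        simulate = lambda t, rng: t + rng.standard_normal()
        loglik = lambda y, t: -0.5 * (y - t) ** 2 - 0.5 * np.log(2 * np.pi)
        print(eig_double_loop(prior_sample, simulate, loglik))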

  11. Classification of Ultrasonic NDE Signals Using the Expectation Maximization (EM) and Least Mean Square (LMS) Algorithms

    International Nuclear Information System (INIS)

    Kim, Dae Won

    2005-01-01

    Ultrasonic inspection methods are widely used for detecting flaws in materials. The signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular methods involves the extraction of an appropriate set of features followed by the use of a neural network for the classification of the signals in the feature space. This paper describes an alternative approach which uses the least mean square (LMS) method and expectation maximization (EM) algorithm with model-based deconvolution, employed for classifying nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different, yet they must be discriminated to prevent a disaster such as contamination of water or explosion. A model-based deconvolution has been described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximization (SAGE) algorithm in conjunction with the Newton-Raphson method, which uses the Hessian parameter resulting in fast convergence, to estimate the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and show reasonable performance.

  12. Two Time Point MS Lesion Segmentation in Brain MRI: An Expectation-Maximization Framework.

    Science.gov (United States)

    Jain, Saurabh; Ribbens, Annemie; Sima, Diana M; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk

    2016-01-01

    Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume in a single or multiple time points is time-consuming and suffers from intra- and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint expectation-maximization (EM) framework for two time point white matter (WM) lesion segmentation. MSmetrix-long takes as input a 3D T1-weighted and a 3D FLAIR MR image and segments lesions in three steps: (1) cross-sectional lesion segmentation of the two time points; (2) creation of a difference image, which is used to model the lesion evolution; (3) a joint EM lesion segmentation framework that uses the output of step (1) and step (2) to provide the final lesion segmentation. The accuracy (Dice score) and reproducibility (absolute lesion volume difference) of MSmetrix-long are evaluated using two datasets. Results: On the first dataset, the median Dice score between MSmetrix-long and expert lesion segmentation was 0.63 and the Pearson correlation coefficient (PCC) was equal to 0.96. On the second dataset, the median absolute volume difference was 0.11 ml. Conclusions: MSmetrix-long is accurate and consistent in segmenting MS lesions. Also, MSmetrix-long compares favorably with the publicly available longitudinal MS lesion segmentation algorithm of the Lesion Segmentation Toolbox.
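
    The accuracy metric used here, the Dice score, measures the overlap of two binary segmentations as 2|A ∩ B| / (|A| + |B|). A minimal sketch with toy masks:

        import numpy as np

        def dice_score(seg_a, seg_b):
            """Dice similarity coefficient between two binary masks:
            1.0 for perfect overlap, 0.0 for none."""
            a, b = seg_a.astype(bool), seg_b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        a = np.zeros((4, 4)); a[1:3, 1:3] = 1
        b = np.zeros((4, 4)); b[1:3, 1:4] = 1
        print(dice_score(a, b))   # 2 * 4 / (4 + 6) = 0.8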

  13. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, which jointly restores multiframe adaptive optics images based on expectation-maximization theory. Firstly, we make a mathematical model for the degraded multiframe adaptive optics images. The function model is deduced for the points that spread with time based on phase error. The AO images are denoised using the image power spectral density and support constraint. Secondly, the EM algorithm is improved by combining the AO imaging system parameters and regularization technique. A cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for its parameter estimation is built. Lastly, image-restoration experiments on both simulated images and real AO images are performed to verify the recovery effect of our algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, our method reduces the number of iterations by 14.3% and improves the estimation accuracy. The model distinguishes the PSF of the AO images and recovers the observed target images clearly.

  14. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-01-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method introduces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)

  15. Cerebrospinal fluid image segmentation using spatial fuzzy clustering method with improved evolutionary Expectation Maximization.

    Science.gov (United States)

    Abdullah, Afnizanfaizal; Hirayama, Akihiro; Yatsushiro, Satoshi; Matsumae, Mitsunori; Kuroda, Kagayaki

    2013-01-01

    Visualization of cerebrospinal fluid (CSF), which flows in the brain and spinal cord, plays an important role in detecting neurodegenerative diseases such as Alzheimer's disease. This is performed by measuring the substantial changes in the CSF flow dynamics, volume, and/or pressure gradient. The magnetic resonance imaging (MRI) technique has become a prominent tool to quantitatively measure these changes, and image segmentation methods have been widely used to distinguish the CSF flows from the brain tissues. However, this is often hampered by the presence of the partial volume effect in the images. In this paper, a new hybrid evolutionary spatial fuzzy clustering method is introduced to overcome the partial volume effect in MRI images. The proposed method incorporates the Expectation Maximization (EM) method, which is improved by the evolutionary operations of the Genetic Algorithm (GA), to differentiate the CSF from the brain tissues. The proposed improvement is incorporated into a spatial-based fuzzy clustering (SFCM) method to improve segmentation of the boundary curve between the CSF and the brain tissues. The proposed method was validated using MRI images of an Alzheimer's disease patient. The results show that the proposed method is capable of filtering the CSF regions from the brain tissues more effectively than the standard EM, FCM, and SFCM methods.

  16. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible for linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically

  17. Subset construction strategy for ordered-subset expectation-maximization reconstruction in compton camera imaging system

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Soo Jin

    2007-01-01

    In this study, the expectation maximization (EM) and ordered subset EM (OSEM) reconstruction algorithms have been applied to Compton projection data. For OSEM, we propose several methods for constructing subsets and compare the impact of each method on the reconstructed images in order to choose the proper subset construction method. The Compton camera consisted of three pairs of scatterer and absorber detectors which were parallel to each other. The detector pairs were positioned along the x-, y-, and z-axes at a radial offset of 10 cm. The 3-directional projection data of a 5-cylinder software phantom (64x64x64 array, 1.56 mm) were obtained from a Compton projector. In this study, we used iterative reconstruction algorithms such as EM and OSEM. For application of the OSEM algorithm to the Compton camera, we proposed three strategies for constructing the exclusive subsets: scattering angle-based subsets (OSEM-SA), detector-position-based subsets (OSEM-DP), and both scattering angle- and detector-position-based subsets (OSEM-AP). The OSEM with 16, 64, and 128 subsets was performed through 16, 4, and 2 iterations, respectively. The OSEM with 16 subsets and 4 iterations was equivalent to the EM with 64 iterations, but the computation time was reduced by approximately 14 times. All three schemes for choosing the subsets in the OSEM algorithm yielded similar results in computation time, but the percent error for OSEM-SA was slightly larger than the others. No significant change of percent error was observed as the subset number increased up to 128. The simulation results showed that the EM reconstruction algorithm is applicable to Compton camera data with sufficient counting statistics. OSEM significantly improved the computational efficiency and maintained the image quality of the standard EM reconstruction. The OSEM algorithms which included subsets based on the detection positions (OSEM-DP and OSEM-AP) provided slightly better results than OSEM-SA.
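
    Ordered-subsets EM applies one MLEM-style update per subset of projections, cycling through the subsets within each iteration, which is where the subset construction strategies above enter. A minimal dense-matrix sketch with the subsets passed in as row-index arrays; all data are illustrative:

        import numpy as np

        def osem(A, y, subsets, n_iter=4):
            """Ordered-subsets EM: one multiplicative update per subset of
            projection rows. `subsets` is a list of index arrays that
            partition the rows of the system matrix A."""
            x = np.ones(A.shape[1])
            for _ in range(n_iter):
                for s in subsets:
                    As, ys = A[s], y[s]
                    proj = As @ x
                    ratio = ys / np.maximum(proj, 1e-12)
                    x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
            return x

        A = np.random.rand(60, 16)
        y = np.random.poisson(A @ np.random.rand(16) * 50)
        subsets = np.array_split(np.random.permutation(60), 4)
        print(osem(A, y, subsets).round(2))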

  18. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions

  19. Expecting the unexpected: applying the Develop-Distort Dilemma to maximize positive market impacts in health.

    Science.gov (United States)

    Peters, David H; Paina, Ligia; Bennett, Sara

    2012-10-01

    Although health interventions start with good intentions to develop services for disadvantaged populations, they often distort the health market, making the delivery or financing of services difficult once the intervention is over: a condition called the 'Develop-Distort Dilemma' (DDD). In this paper, we describe how to examine whether a proposed intervention may develop or distort the health market. Our goal is to produce a tool that facilitates meaningful and systematic dialogue for practitioners and researchers to ensure that well-intentioned health interventions lead to productive health systems while reducing the undesirable distortions of such efforts. We apply the DDD tool to plan for development rather than distortions in health markets, using intervention research being conducted under the Future Health Systems consortium in Bangladesh, China and Uganda. Through a review of research proposals and interviews with principal investigators, we use the DDD tool to systematically understand how a project fits within the broader health market system, and to identify gaps in planning for sustainability. We found that while current stakeholders and funding sources for activities were easily identified, future ones were not. The implication is that the projects could raise community expectations that future services will be available and paid for, despite this actually being uncertain. Each project addressed the 'rules' of the health market system differently. The China research assesses changes in the formal financing rules, whereas Bangladesh and Uganda's projects involve influencing community level providers, where informal rules are more important. In each case, we recognize the importance of building trust between providers, communities and government officials. Each project could both develop and distort local health markets. Anyone intervening in the health market must recognize the main market perturbations, whether positive or negative, and manage them so

  20. Inversion and uncertainty of highly parameterized models in a Bayesian framework by sampling the maximal conditional posterior distribution of parameters

    Science.gov (United States)

    Mara, Thierry A.; Fajraoui, Noura; Younes, Anis; Delay, Frederick

    2015-02-01

    We introduce the concept of maximal conditional posterior distribution (MCPD) to assess the uncertainty of model parameters in a Bayesian framework. Although Markov chain Monte Carlo (MCMC) methods are particularly suited for this task, they become challenging with highly parameterized nonlinear models. The MCPD represents the conditional probability distribution function of a given parameter, knowing that the other parameters maximize the conditional posterior density function. Unlike MCMC, which accepts or rejects solutions sampled in the parameter space, the MCPD is calculated through several optimization processes. Model inversion using the MCPD algorithm is particularly useful for highly parameterized problems because the calculations are independent and can therefore be evaluated simultaneously on a multi-core computer. In the present work, the MCPD approach is applied to invert a 2D stochastic groundwater flow problem where the log-transmissivity field of the medium is inferred from scarce and noisy data. For this purpose, the stochastic field is expanded onto a set of orthogonal functions using a Karhunen-Loève (KL) transformation. Although the prior guess on the stochastic structure (covariance) of the transmissivity field is erroneous, the MCPD inference of the KL coefficients is able to extract relevant inverse solutions.
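
    A minimal sketch of the MCPD idea under a toy two-parameter Gaussian posterior (an assumption for illustration): for each grid value of one parameter, the posterior is maximized over the remaining parameter. Because each grid point is an independent optimization, the loop parallelizes trivially across cores, as the abstract notes.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_post(theta):
        # Toy correlated Gaussian posterior (unnormalized).
        cov = np.array([[1.0, 0.8], [0.8, 1.0]])
        return 0.5 * theta @ np.linalg.solve(cov, theta)

    def mcpd_profile(grid, idx=0):
        profile = []
        for v in grid:  # each iteration is an independent optimization
            res = minimize(lambda t: neg_log_post(np.insert(t, idx, v)), x0=[0.0])
            profile.append(np.exp(-neg_log_post(np.insert(res.x, idx, v))))
        return np.array(profile)

    densities = mcpd_profile(np.linspace(-3.0, 3.0, 61))
    ```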

  1. Recursive expectation-maximization clustering: A method for identifying buffering mechanisms composed of phenomic modules

    Science.gov (United States)

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance
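
    A hedged sketch of the recursive pattern behind REMc: fit one- and two-component Gaussian mixtures by EM, split the data when the two-component model wins on a model-selection score (BIC here, an assumption), and recurse on each part. This illustrates only the recursion, not the authors' actual REMc criteria.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def remc(X, depth=0, max_depth=3):
        """Recursively split X (n_samples, n_features) into cluster leaves."""
        if len(X) < 10 or depth >= max_depth:
            return [X]
        gm1 = GaussianMixture(1, random_state=0).fit(X)
        gm2 = GaussianMixture(2, random_state=0).fit(X)
        if gm2.bic(X) >= gm1.bic(X):  # no gain from splitting: stop
            return [X]
        labels = gm2.predict(X)
        return (remc(X[labels == 0], depth + 1, max_depth) +
                remc(X[labels == 1], depth + 1, max_depth))

    clusters = remc(np.random.randn(200, 4))
    ```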

  2. Bayesian Filtering for Phase Noise Characterization and Carrier Synchronization of up to 192 Gb/s PDM 64-QAM

    DEFF Research Database (Denmark)

    Zibar, Darko; Carvalho, L.; Piels, Molly

    2014-01-01

    We show that phase noise estimation based on Bayesian filtering outperforms conventional time-domain approaches in the presence of moderate measurement noise. Additionally, carrier synchronization based on Bayesian filtering, in combination with expectation maximization, is demonstrated for the f...

  3. Very slow search and reach: failure to maximize expected gain in an eye-hand coordination task.

    Directory of Open Access Journals (Sweden)

    Hang Zhang

    Full Text Available We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt.

  4. Maximizing Expected Achievable Rates for Block-Fading Buffer-Aided Relay Channels

    KAUST Repository

    Shaqfeh, Mohammad

    2016-05-25

    In this paper, the long-term average achievable rate over block-fading buffer-aided relay channels is maximized using a hybrid scheme that combines three essential transmission strategies, which are decode-and-forward, compress-and-forward, and direct transmission. The proposed hybrid scheme is dynamically adapted based on the channel state information. The integration and optimization of these three strategies provide a more generic and fundamental solution and give better achievable rates than the known schemes in the literature. Despite the large number of optimization variables, the proposed hybrid scheme can be optimized using simple closed-form formulas that are easy to apply in practical relay systems. This includes adjusting the transmission rate and compression when compress-and-forward is the selected strategy based on the channel conditions. Furthermore, in this paper, the hybrid scheme is applied to three different models of the Gaussian block-fading buffer-aided relay channels, depending on whether the relay is half or full duplex and whether the source and the relay have orthogonal or non-orthogonal channel access. Several numerical examples are provided to demonstrate the achievable rate results and compare them to the upper bounds of the ergodic capacity for each one of the three channel models under consideration.

  5. Trend analysis of the power law process using Expectation-Maximization algorithm for data censored by inspection intervals

    International Nuclear Information System (INIS)

    Taghipour, Sharareh; Banjevic, Dragan

    2011-01-01

    Trend analysis is a common statistical method used to investigate the operation and changes of a repairable system over time. This method takes historical failure data of a system or a group of similar systems and determines whether the recurrent failures exhibit an increasing or decreasing trend. Most trend analysis methods proposed in the literature assume that the failure times are known, so the failure data are statistically complete; however, in many situations, such as hidden failures, failure times are subject to censoring. In this paper we assume that the failure process of a group of similar independent repairable units follows a non-homogeneous Poisson process with a power law intensity function. Moreover, the failure data are subject to left, interval and right censoring. The paper proposes using the likelihood ratio test to check for trends in the failure data. It uses the Expectation-Maximization (EM) algorithm to find the parameters that maximize the data likelihood under the null and alternative hypotheses. A recursive procedure is used to solve the main technical problem of calculating the expected values in the Expectation step. The proposed method is applied to a hospital's maintenance data for trend analysis of the components of a general infusion pump.
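
    For intuition, the likelihood ratio test for a power-law trend has a closed form when all failure times are exactly observed; the paper's EM machinery is needed precisely because censoring destroys this. A sketch under that complete-data simplification, with the scale parameter profiled out:

    ```python
    import numpy as np
    from scipy.stats import chi2

    def power_law_trend_test(times, T):
        """times: exact failure times of one unit observed on (0, T]."""
        t = np.asarray(times, dtype=float)
        n = len(t)
        beta_hat = n / np.sum(np.log(T / t))  # power-law shape MLE
        # Likelihood ratio against beta = 1 (homogeneous Poisson process).
        lr = 2.0 * n * (np.log(beta_hat) + 1.0 / beta_hat - 1.0)
        return beta_hat, chi2.sf(lr, df=1)

    # beta_hat > 1 suggests deterioration, beta_hat < 1 improvement.
    print(power_law_trend_test([5.0, 12.0, 17.0, 21.0, 24.0], 25.0))
    ```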

  6. Estimation of expected number of accidents and workforce unavailability through Bayesian population variability analysis and Markov-based model

    International Nuclear Information System (INIS)

    Chagas Moura, Márcio das; Azevedo, Rafael Valença; Droguett, Enrique López; Chaves, Leandro Rego; Lins, Isis Didier

    2016-01-01

    Occupational accidents pose several negative consequences to employees, employers, the environment and people surrounding the locale where the accident takes place. Some types of accidents correspond to low frequency-high consequence (long sick leaves) events, and classical statistical approaches are ineffective in these cases because the available dataset is generally sparse and contains censored records. In this context, we propose a Bayesian population variability method for the estimation of the distributions of the rates of accident and recovery. Given these distributions, a Markov-based model is used to estimate the uncertainty over the expected number of accidents and the work time loss. Thus, the use of Bayesian analysis along with the Markov approach aims at investigating future trends regarding occupational accidents in a workplace as well as enabling better management of the labor force and prevention efforts. An application example is presented in order to validate the proposed approach; this case uses data gathered from a hydropower company in Brazil. - Highlights: • This paper proposes a Bayesian method to estimate rates of accident and recovery. • The model requires simple data likely to be available in the company database. • The results show the proposed model is not too sensitive to the prior estimates.

  7. Application of an expectation maximization method to the reconstruction of X-ray-tube spectra from transmission data

    Energy Technology Data Exchange (ETDEWEB)

    Endrizzi, M., E-mail: m.endrizzi@ucl.ac.uk [Dipartimento di Fisica, Università di Siena, Via Roma 56, 53100 Siena (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Delogu, P. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Dipartimento di Fisica “E. Fermi”, Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Oliva, P. [Dipartimento di Chimica e Farmacia, Università di Sassari, via Vienna 2, 07100 Sassari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, s.p. per Monserrato-Sestu Km 0.700, 09042 Monserrato (Italy)

    2014-12-01

    An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7–40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself and the possibility to add this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated. - Highlights: • An expectation maximization method was used together with the discrepancy principle. • The discrepancy principle is a suitable criterion for stopping the iteration. • The method can be applied to a variety of detectors/experimental conditions. • The minimum information required is the amount of noise that affects the data. • Improved results are achieved by inserting more information when available.
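
    A minimal sketch of the iterate-until-discrepancy scheme the abstract describes: a multiplicative EM update is repeated until the data misfit falls to the estimated noise level. The linear transmission model T and the noise estimate are placeholder assumptions.

    ```python
    import numpy as np

    def em_discrepancy(T, y, noise_level, max_iter=5000):
        """T: (n_meas, n_bins) transmission model; y: (n_meas,) measurements."""
        n_meas, n_bins = T.shape
        s = np.full(n_bins, y.mean() / max(T.sum(axis=1).mean(), 1e-12))
        sens = np.maximum(T.sum(axis=0), 1e-12)
        for k in range(max_iter):
            s *= (T.T @ (y / np.maximum(T @ s, 1e-12))) / sens
            # Discrepancy principle: stop once the residual matches the noise.
            if np.linalg.norm(T @ s - y) <= noise_level:
                return s, k + 1
        return s, max_iter
    ```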

  8. Future life expectancy in 35 industrialised countries: projections with a Bayesian model ensemble.

    Science.gov (United States)

    Kontis, Vasilis; Bennett, James E; Mathers, Colin D; Li, Guangquan; Foreman, Kyle; Ezzati, Majid

    2017-04-01

    Projections of future mortality and life expectancy are needed to plan for health and social services and pensions. Our aim was to forecast national age-specific mortality and life expectancy using an approach that takes into account the uncertainty related to the choice of forecasting model. We developed an ensemble of 21 forecasting models, all of which probabilistically contributed towards the final projections. We applied this approach to project age-specific mortality to 2030 in 35 industrialised countries with high-quality vital statistics data. We used age-specific death rates to calculate life expectancy at birth and at age 65 years, and probability of dying before age 70 years, with life table methods. Life expectancy is projected to increase in all 35 countries with a probability of at least 65% for women and 85% for men. There is a 90% probability that life expectancy at birth among South Korean women in 2030 will be higher than 86·7 years, the same as the highest worldwide life expectancy in 2012, and a 57% probability that it will be higher than 90 years. Projected female life expectancy in South Korea is followed by those in France, Spain, and Japan. There is a greater than 95% probability that life expectancy at birth among men in South Korea, Australia, and Switzerland will surpass 80 years in 2030, and a greater than 27% probability that it will surpass 85 years. Of the countries studied, the USA, Japan, Sweden, Greece, Macedonia, and Serbia have some of the lowest projected life expectancy gains for both men and women. The female life expectancy advantage over men is likely to shrink by 2030 in every country except Mexico, where female life expectancy is predicted to increase more than male life expectancy, and in Chile, France, and Greece where the two sexes will see similar gains. More than half of the projected gains in life expectancy at birth in women will be due to enhanced longevity above age 65 years. There is more than a 50% probability

  9. Spatial Fuzzy C Means and Expectation Maximization Algorithms with Bias Correction for Segmentation of MR Brain Images.

    Science.gov (United States)

    Meena Prakash, R; Shantha Selva Kumari, R

    2017-01-01

    The Fuzzy C Means (FCM) and Expectation Maximization (EM) algorithms are the most prevalent methods for automatic segmentation of MR brain images into three classes: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). The major difficulties associated with these conventional methods for MR brain image segmentation are Intensity Non-uniformity (INU) and noise. In this paper, EM and FCM with spatial information and bias correction are proposed to overcome these effects. The spatial information is incorporated by convolving the posterior probability with a mean filter during the E-step of the EM algorithm. A method of pixel re-labeling is also included to improve the segmentation accuracy. The proposed method is validated by extensive experiments on both simulated and real brain images from a standard database. Quantitative and qualitative results show that the method is superior to the conventional methods by around 25% and to the state-of-the-art method by 8%.
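
    The spatial E-step modification described above can be sketched as follows: compute per-pixel class posteriors, smooth each posterior map with a mean filter, renormalize, then run the usual M-step. The Gaussian intensity model and the 3x3 filter are assumptions, and the paper's bias-field correction and pixel re-labeling steps are omitted for brevity.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter
    from scipy.stats import norm

    def spatial_em(img, n_classes=3, n_iter=20):
        """img: 2D intensity array; returns per-pixel class labels."""
        mu = np.quantile(img, np.linspace(0.2, 0.8, n_classes))
        sigma = np.full(n_classes, img.std())
        pi = np.full(n_classes, 1.0 / n_classes)
        for _ in range(n_iter):
            # E-step: class posteriors, then spatial mean filtering.
            post = np.stack([pi[k] * norm.pdf(img, mu[k], sigma[k])
                             for k in range(n_classes)])
            post /= np.maximum(post.sum(axis=0), 1e-12)
            post = np.stack([uniform_filter(p, size=3) for p in post])
            post /= np.maximum(post.sum(axis=0), 1e-12)
            # M-step: weighted parameter updates.
            for k in range(n_classes):
                w = post[k]
                mu[k] = (w * img).sum() / w.sum()
                sigma[k] = np.sqrt((w * (img - mu[k]) ** 2).sum() / w.sum()) + 1e-6
                pi[k] = w.mean()
        return post.argmax(axis=0)
    ```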

  10. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    Science.gov (United States)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied for estimating the source parameters, which effectively accelerates the process of convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.

  11. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    Science.gov (United States)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

    Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering the limited financial resources available for a data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. For finding the best among a set of design candidates, an objective function is commonly evaluated which measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility with respect to the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) the expected conditional variance and (ii) the expected relative entropy of a given prediction goal.

  12. A hybrid expectation maximisation and MCMC sampling algorithm to implement Bayesian mixture model based genomic prediction and QTL mapping.

    Science.gov (United States)

    Wang, Tingting; Chen, Yi-Ping Phoebe; Bowman, Phil J; Goddard, Michael E; Hayes, Ben J

    2016-09-21

    Bayesian mixture models in which the effects of SNPs are assumed to come from normal distributions with different variances are attractive for simultaneous genomic prediction and QTL mapping. These models are usually implemented with Markov chain Monte Carlo (MCMC) sampling, which requires long compute times with large genomic data sets. Here, we present an efficient approach (termed HyB_BR), which is a hybrid of an Expectation-Maximisation algorithm followed by a limited number of MCMC iterations, without the requirement for burn-in. To test prediction accuracy from HyB_BR, dairy cattle and human disease trait data were used. In the dairy cattle data, there were four quantitative traits (milk volume, protein kg, fat% in milk and fertility) measured in 16,214 cattle from two breeds genotyped for 632,002 SNPs. Validation of genomic predictions was in a subset of cattle either from the reference set or in animals from a third breed that was not in the reference set. In all cases, HyB_BR gave almost identical accuracies to Bayesian mixture models implemented with full MCMC, while the computational time was reduced to as little as 1/17 of that required by full MCMC. The SNPs with high posterior probability of a non-zero effect were also very similar between full MCMC and HyB_BR, with several known genes affecting milk production in this category, as well as some novel genes. HyB_BR was also applied to seven human diseases with 4890 individuals genotyped for around 300 K SNPs in a case/control design, from the Wellcome Trust Case Control Consortium (WTCCC). In this data set, the results demonstrated again that HyB_BR performed as well as Bayesian mixture models with full MCMC for genomic prediction and genetic architecture inference, while reducing the computational time from 45 h with full MCMC to 3 h with HyB_BR. The results for quantitative traits in cattle and disease in humans demonstrate that HyB_BR can perform equally well as Bayesian mixture models implemented with full MCMC

  13. Fast Estimation of Expected Information Gain for Bayesian Experimental Design Based on Laplace Approximation

    KAUST Repository

    Long, Quan

    2014-01-06

    Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in complex problems, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several nonlinear numerical examples, including the design of a single parameter in a one-dimensional cubic polynomial function and the current pattern for impedance tomography.

  14. Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations

    KAUST Repository

    Long, Quan

    2013-06-01

    Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subject to uncertainty. The estimation of such a gain, however, relies on a double-loop integration, and its numerical evaluation in multi-dimensional cases, e.g., with Monte Carlo sampling methods, is computationally too expensive for realistic physical models, especially those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the design of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain.
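
    In standard notation (a restatement, not copied from the paper), the double-loop structure and the Laplace shortcut are as follows; the inner evidence integral p(y) is what the Laplace approximation collapses into closed form:

    ```latex
    \[
      I \;=\; \int_{\Theta}\!\int_{\mathcal{Y}}
        \log\frac{p(y\mid\theta)}{p(y)}\; p(y\mid\theta)\,p(\theta)\,dy\,d\theta,
      \qquad
      p(y) \;=\; \int_{\Theta} p(y\mid\theta')\,p(\theta')\,d\theta'.
    \]
    % Laplace approximation around the posterior mode \hat\theta, with
    % H(\hat\theta) the negative Hessian of the log-posterior and d = dim(\theta):
    \[
      p(y) \;\approx\; p(y\mid\hat\theta)\,p(\hat\theta)\,
      (2\pi)^{d/2}\,\bigl|H(\hat\theta)\bigr|^{-1/2}.
    \]
    ```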

  15. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm

    Science.gov (United States)

    Papaconstadopoulos, P.; Levesque, I. R.; Maglieri, R.; Seuntjens, J.

    2016-02-01

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size (0.5 × 0.5 cm2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the jaw positioning presenting the dominant effect.

  16. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    Science.gov (United States)

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component of model-based drug development, is both time- and labor-intensive. Graphics processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of the parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with dual Xeon 6-Core E5690 CPUs and an NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data for assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimates and model computation times. A speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation times than the MCPEMCPU and can offer more than a 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core for the next generation of modeling software for population PK/PD analysis.

  17. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youngrok [Iowa State Univ., Ames, IA (United States)

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well in selecting the best proposed algorithm for each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.

  18. Hybrid metaheuristic approaches to the expectation maximization for estimation of the hidden Markov model for signal modeling.

    Science.gov (United States)

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2014-10-01

    The expectation maximization (EM) is the standard training algorithm for hidden Markov model (HMM). However, EM faces a local convergence problem in HMM estimation. This paper attempts to overcome this problem of EM and proposes hybrid metaheuristic approaches to EM for HMM. In our earlier research, a hybrid of a constraint-based evolutionary learning approach to EM (CEL-EM) improved HMM estimation. In this paper, we propose a hybrid simulated annealing stochastic version of EM (SASEM) that combines simulated annealing (SA) with EM. The novelty of our approach is that we develop a mathematical reformulation of HMM estimation by introducing a stochastic step between the EM steps and combine SA with EM to provide better control over the acceptance of stochastic and EM steps for better HMM estimation. We also extend our earlier work and propose a second hybrid which is a combination of an EA and the proposed SASEM, (EA-SASEM). The proposed EA-SASEM uses the best constraint-based EA strategies from CEL-EM and stochastic reformulation of HMM. The complementary properties of EA and SA and stochastic reformulation of HMM of SASEM provide EA-SASEM with sufficient potential to find better estimation for HMM. To the best of our knowledge, this type of hybridization and mathematical reformulation have not been explored in the context of EM and HMM training. The proposed approaches have been evaluated through comprehensive experiments to justify their effectiveness in signal modeling using the speech corpus: TIMIT. Experimental results show that proposed approaches obtain higher recognition accuracies than the EM algorithm and CEL-EM as well.
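
    The SA-between-EM-steps idea reduces to a few lines: after a deterministic EM update, propose a stochastic perturbation and accept it under a Metropolis criterion at the current temperature. Everything below (the proposal scale and the callables) is a placeholder sketch, not the paper's HMM-specific formulation.

    ```python
    import numpy as np

    def sasem_step(theta, em_update, log_lik, temp, rng):
        """One hybrid step: EM update, then an annealed stochastic move."""
        theta_em = em_update(theta)  # deterministic EM step
        proposal = theta_em + rng.normal(0.0, 0.1, size=np.shape(theta_em))
        delta = log_lik(proposal) - log_lik(theta_em)
        if delta > 0 or rng.random() < np.exp(delta / temp):
            return proposal  # accept the stochastic step
        return theta_em  # otherwise keep the plain EM update
    ```

    Cooling temp towards zero makes the acceptance rule increasingly greedy, so the iteration gradually reverts to plain EM once the search has escaped poor local optima.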

  19. Expectations

    DEFF Research Database (Denmark)

    depend on the reader’s own experiences, individual feelings, personal associations or on conventions of reading, interpretive communities and cultural conditions? This volume brings together narrative theory, fictionality theory and speech act theory to address such questions of expectations...... and intentionality in relation to narrative and fiction....

  20. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) were then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory

  1. Comparison of different types of commercial filtered backprojection and ordered-subset expectation maximization SPECT reconstruction software.

    Science.gov (United States)

    Seret, Alain; Forthomme, Julien

    2009-09-01

    The aim of this study was to compare the performance of filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM) reconstruction algorithms available in several types of commercial SPECT software. Numeric simulations of SPECT acquisitions of 2 phantoms were used: the National Electrical Manufacturers Association line phantom used for the assessment of SPECT resolution and a phantom with uniform, hot-rod, and cold-rod compartments. For FBP, no filtering and filtering of the projections with either a Butterworth filter (order 3 or 6) or a Hanning filter at various cutoff frequencies were considered. For OSEM, the number of subsets was 1, 4, 8, or 16, and the number of iterations was chosen to obtain a product number of iterations times the number of subsets equal to 16, 32, 48, or 64. The line phantom enabled us to obtain the reconstructed central, radial, and tangential full width at half maximum. The uniform compartment of the second phantom delivered the reconstructed mean pixel counts and SDs from which the coefficients of variation were calculated. Hot contrast and cold contrast were obtained from its rod compartments. For FBP, the full width at half maximum, mean pixel count, coefficient of variation, and contrast were almost software independent. The only exceptions were a smaller (by 0.5 mm) full width at half maximum for one of the software types, higher mean pixel counts for 2 of the software types, and better contrast for 2 of the software types under some filtering conditions. For OSEM, the full width at half maximum differed by 0.1-2.5 mm with the different types of software but was almost independent of the number of subsets or iterations. There was a marked dependence of the mean pixel count on the type of software used, and there was a moderate dependence of the coefficient of variation. Contrast was almost software independent. The mean pixel count varied greatly with the number of iterations for 2 of the software types, and

  2. Deriving the largest expected number of elementary particles in the standard model from the maximal compact subgroup H of the exceptional Lie group E7(-5)

    International Nuclear Information System (INIS)

    El Naschie, M.S.

    2008-01-01

    The maximal number of elementary particles which could be expected to be found within a modestly extended energy scale of the standard model was found, using various methods, to be N = 69. In particular, using E-infinity theory the present author found the exact transfinite expectation value to be <N> = ᾱ₀/2 ≅ 69, where ᾱ₀ = 137.082039325 is the exact inverse fine structure constant. In the present work we show, among other things, how to derive the exact integer value 69 from the exceptional Lie symmetry groups hierarchy. It is found that the relevant number is given by dim H = 69, where H is the maximal compact subgroup of E7(-5), so that N = dim H = 69 while dim E7 = 133

  3. Clinical evaluation of iterative reconstruction (ordered-subset expectation maximization) in dynamic positron emission tomography: quantitative effects on kinetic modeling with N-13 ammonia in healthy subjects

    DEFF Research Database (Denmark)

    Hove, Jens Dahlgaard; Rasmussen, R.; Freiberg, J.

    2008-01-01

    BACKGROUND: The purpose of this study was to investigate the quantitative properties of ordered-subset expectation maximization (OSEM) on kinetic modeling with nitrogen 13 ammonia compared with filtered backprojection (FBP) in healthy subjects. METHODS AND RESULTS: Cardiac N-13 ammonia positron...... and OSEM flow values were observed with a flow underestimation of 45% (rest/dipyridamole) in the septum and of 5% (rest) and 15% (dipyridamole) in the lateral myocardial wall. CONCLUSIONS: OSEM reconstruction of myocardial perfusion images with N-13 ammonia and PET produces high-quality images for visual...

  4. Impact of prior assumptions on Bayesian estimates of inflation parameters and the expected gravitational waves signal from inflation

    International Nuclear Information System (INIS)

    Valkenburg, Wessel; Hamann, Jan; Krauss, Lawrence M.

    2008-01-01

    There has been much recent discussion, and some confusion, regarding the use of existing observational data to estimate the likelihood that next-generation cosmic microwave background (CMB) polarization experiments might detect a nonzero tensor signal, possibly associated with inflation. We examine this issue in detail here in two different ways: (1) first we explore the effect of choice of different parameter priors on the estimation of the tensor-to-scalar ratio r and other parameters describing inflation, and (2) we examine the Bayesian complexity in order to determine how effectively existing data can constrain inflationary parameters. We demonstrate that existing data are not strong enough to render full inflationary parameter estimates in a parametrization- and prior-independent way and that the predicted tensor signal is particularly sensitive to different priors. For parametrizations where the Bayesian complexity is comparable to the number of free parameters we find that a flat prior on the scale of inflation (which is to be distinguished from a flat prior on the tensor-to-scalar ratio) leads us to infer a larger, and in fact slightly nonzero tensor contribution at 68% confidence level. However, no detection is claimed. Our results demonstrate that all that is statistically relevant at the current time is the (slightly enhanced) upper bound on r, and we stress that the data remain consistent with r=0.

  5. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed for this purpose.

  6. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    Science.gov (United States)

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has seen an increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization (SAEM) algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
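
    A generic SAEM skeleton, hedged: simulate the missing quantities given the current parameters, blend the resulting complete-data sufficient statistics into a running average with decreasing step sizes, and re-maximize. The callables and the step-size schedule are illustrative assumptions.

    ```python
    def saem(simulate_missing, suff_stats, update_params, theta0, n_iter=100):
        """Stochastic approximation EM with a burn-in then 1/k step sizes;
        sufficient statistics are assumed to support + and * arithmetic."""
        theta, s = theta0, None
        for k in range(1, n_iter + 1):
            z = simulate_missing(theta)  # S-step (e.g. an MCMC draw)
            s_new = suff_stats(z)
            gamma = 1.0 if k <= 20 else 1.0 / (k - 20)
            s = s_new if s is None else s + gamma * (s_new - s)
            theta = update_params(s)  # M-step from the averaged statistics
        return theta
    ```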

  7. Energy distribution of the neutron flux measurements at the Chilean Reactor RECH-1 using multi-foil neutron activation and the Expectation Maximization unfolding algorithm.

    Science.gov (United States)

    Molina, F; Aguilera, P; Romero-Barrientos, J; Arellano, H F; Agramunt, J; Medel, J; Morales, J R; Zambra, M

    2017-11-01

    We present a methodology to obtain the energy distribution of the neutron flux of an experimental nuclear reactor, using multi-foil activation measurements and the Expectation Maximization unfolding algorithm, which is presented as an alternative to well-known unfolding methods such as GRAVEL. Self-shielding flux corrections for the energy bin groups were obtained using MCNP6 Monte Carlo simulations. We performed measurements at the Dry Tube of RECH-1, obtaining fluxes of 1.5(4)×10^13 cm^-2 s^-1 for the thermal neutron energy region, 1.9(5)×10^12 cm^-2 s^-1 for the epithermal neutron energy region, and 4.3(11)×10^11 cm^-2 s^-1 for the fast neutron energy region.

  8. Bayesian estimation of Weibull distribution parameters

    International Nuclear Information System (INIS)

    Bacha, M.; Celeux, G.; Idee, E.; Lannoy, A.; Vasseur, D.

    1994-11-01

    In this paper, we present the SEM (Stochastic Expectation Maximization) and WLB-SIR (Weighted Likelihood Bootstrap - Sampling Importance Re-sampling) methods, which are used to estimate Weibull distribution parameters when data are heavily censored. The second method is based on Bayesian inference and allows available prior information on the parameters to be taken into account. An application of this method to real data from nuclear power plant operating feedback analysis is presented. (authors). 8 refs., 2 figs., 2 tabs
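
    A small sketch of the SEM idea for right-censored Weibull data: impute each censored lifetime by drawing from the conditional Weibull tail beyond its censoring time, then refit by complete-data maximum likelihood. The parameterization and fitting call are illustrative assumptions; the Bayesian WLB-SIR variant is not shown.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    def sem_weibull(times, censored, n_iter=50, seed=0):
        """times: event or censoring times; censored: boolean mask."""
        t = np.asarray(times, dtype=float)
        c = np.asarray(censored, dtype=bool)
        rng = np.random.default_rng(seed)
        shape, scale = 1.0, t.mean()
        for _ in range(n_iter):
            # S-step: draw T | T > t_cens via the inverse survival function.
            u = rng.uniform(size=c.sum())
            tail = weibull_min.sf(t[c], shape, scale=scale)
            t_full = t.copy()
            t_full[c] = weibull_min.isf(u * tail, shape, scale=scale)
            # M-step: complete-data maximum likelihood fit.
            shape, _, scale = weibull_min.fit(t_full, floc=0)
        return shape, scale
    ```

    In practice one would average the estimates over the post-burn-in iterations rather than report the final draw.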

  9. A Gap-Filling Procedure for Hydrologic Data Based on Kalman Filtering and Expectation Maximization: Application to Data from the Wireless Sensor Networks of the Sierra Nevada

    Science.gov (United States)

    Coogan, A.; Avanzi, F.; Akella, R.; Conklin, M. H.; Bales, R. C.; Glaser, S. D.

    2017-12-01

    Automatic meteorological and snow stations provide large amounts of information at dense temporal resolution, but data quality is often compromised by noise and missing values. We present a new gap-filling and cleaning procedure for networks of these stations based on Kalman filtering and expectation maximization. Our method utilizes a multi-sensor, regime-switching Kalman filter to learn a latent process that captures dependencies between nearby stations and handles sharp changes in snowfall rate. Since the latent process is inferred using observations across the working stations in the network, it can be used to fill in large data gaps for a malfunctioning station. The procedure was tested on meteorological and snow data from Wireless Sensor Networks (WSN) in the American River basin of the Sierra Nevada. Data include air temperature, relative humidity, and snow depth from dense networks of 10 to 12 stations within 1 km2 swaths. Both wet and dry water years have similar data issues. Data with artificially created gaps were used to quantify the method's performance. Our multi-sensor approach performs better than a single-sensor one, especially with large data gaps, as it learns and exploits the dominant underlying processes in the snowpack at each site.
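
    A hedged scalar sketch of the filter-plus-EM loop: a random-walk state model, a Kalman forward pass that simply skips missing observations, and a crude M-step re-estimating the two noise variances. A full EM would use the Kalman smoother, and the paper's model is multi-sensor and regime-switching; this shows only the core mechanic.

    ```python
    import numpy as np

    def kalman_em_fill(y, n_em=20):
        """y: 1D float array with np.nan marking gaps; returns a filled copy."""
        q, r = 1.0, 1.0  # process / measurement noise variances
        n = len(y)
        m, p = np.zeros(n), np.zeros(n)
        for _ in range(n_em):
            mp, pp = 0.0, 10.0  # diffuse-ish initial state
            for t in range(n):
                if t > 0:
                    mp, pp = m[t - 1], p[t - 1] + q  # predict
                if np.isnan(y[t]):
                    m[t], p[t] = mp, pp  # gap: carry the prediction forward
                else:
                    k = pp / (pp + r)  # Kalman gain
                    m[t], p[t] = mp + k * (y[t] - mp), (1 - k) * pp
            # Simplified M-step: update variances from residuals.
            obs = ~np.isnan(y)
            r = max(float(np.mean((y[obs] - m[obs]) ** 2)), 1e-6)
            q = max(float(np.mean(np.diff(m) ** 2)), 1e-6)
        return np.where(np.isnan(y), m, y)
    ```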

  10. Evaluation of tomographic image quality of extended and conventional parallel hole collimators using maximum likelihood expectation maximization algorithm by Monte Carlo simulations.

    Science.gov (United States)

    Moslemi, Vahid; Ashoor, Mansour

    2017-10-01

    One of the major problems associated with parallel hole collimators (PCs) is the trade-off between their resolution and sensitivity. To address this problem, a novel PC - namely, the extended parallel hole collimator (EPC) - was proposed, in which trapezoidal denticles were added to the septa on the detector side. In this study, an EPC was designed and its performance was compared with that of two PCs, PC35 and PC41, with a hole size of 1.5 mm and hole lengths of 35 and 41 mm, respectively. The Monte Carlo method was used to calculate important parameters such as resolution, sensitivity, scattering, and penetration ratio. A Jaszczak phantom was also simulated to evaluate the resolution and contrast of tomographic images, which were produced by the EPC6, PC35, and PC41 using the Monte Carlo N-particle version 5 code, and the tomographic images were reconstructed using the maximum likelihood expectation maximization algorithm. The sensitivity of the EPC6 was increased by 20.3% in comparison with that of the PC41 at identical spatial resolution and full-width at tenth maximum. Moreover, the penetration and scattering ratio of the EPC6 was 1.2% less than that of the PC41. The simulated phantom images show that the EPC6 increases contrast-resolution and contrast-to-noise ratio compared with those of the PC41 and PC35. When compared with the PC41 and PC35, the EPC6 improved the trade-off between resolution and sensitivity, reduced penetration and scattering ratios, and produced images of higher quality. The EPC6 can be used to increase the detectability of finer details in nuclear medicine images.

  11. Is there a role for expectation maximization imputation in addressing missing data in research using WOMAC questionnaire? Comparison to the standard mean approach and a tutorial

    Directory of Open Access Journals (Sweden)

    Rutledge John

    2011-05-01

    Full Text Available Abstract Background Standard mean imputation for missing values in the Western Ontario and McMaster (WOMAC) Osteoarthritis Index limits the use of collected data and may lead to bias. Probability model-based imputation methods overcome such limitations but were never before applied to the WOMAC. In this study, we compare imputation results for the Expectation Maximization (EM) method and the mean imputation method for the WOMAC in a cohort of total hip replacement patients. Methods WOMAC data on a consecutive cohort of 2062 patients scheduled for surgery were analyzed. Rates of missing values in each of the WOMAC items from this large cohort were used to create missing patterns in the subset of patients with complete data. EM and the WOMAC's method of imputation were then applied to fill the missing values. Summary score statistics for both methods are described through box-plots and contrasted with the complete case (CC) analysis and the true score (TS). This process was repeated using a smaller sample of 200 randomly drawn patients with a higher missing rate (5 times the rates of missing values observed in the 2062 patients, capped at 45%). Results The rate of missing values per item ranged from 2.9% to 14.5%, and 1339 patients had complete data. The probability model-based EM imputed a score for all subjects while the WOMAC's imputation method did not. Mean subscale scores were very similar for both imputation methods and were similar to the true score; however, the EM method results were more consistent with the TS after simulation. This difference became more pronounced as the number of items in a subscale increased and the sample size decreased. Conclusions The EM method provides a better alternative to the WOMAC imputation method. The EM method is more accurate and imputes data to create a complete data set. These features are very valuable for patient-reported outcomes research in which resources are limited and the WOMAC score is used in a multivariate
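
    For the mechanics, here is a minimal EM-style imputation sketch under a multivariate normal model over questionnaire items (an assumption; the model used for the WOMAC in the paper may differ): each missing entry is replaced by its conditional mean given the observed items, and the moments are re-estimated.

    ```python
    import numpy as np

    def em_impute(X, n_iter=50):
        """X: (n_subjects, n_items) with np.nan for missing entries.
        Assumes every row has at least one observed item."""
        X = X.astype(float)
        miss = np.isnan(X)
        mu = np.nanmean(X, axis=0)
        Xf = np.where(miss, mu, X)
        for _ in range(n_iter):
            cov = np.cov(Xf, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            for i in np.where(miss.any(axis=1))[0]:
                m, o = miss[i], ~miss[i]
                # E-step: conditional mean of missing given observed items.
                Xf[i, m] = mu[m] + cov[np.ix_(m, o)] @ np.linalg.solve(
                    cov[np.ix_(o, o)], Xf[i, o] - mu[o])
            mu = Xf.mean(axis=0)  # M-step
        return Xf
    ```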

  12. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...

  13. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...

  14. Chasing the deal with the money: Measuring the required risk premium and expected abnormal returns of private equity funds to maximize their internal rate of return

    Directory of Open Access Journals (Sweden)

    Fernando Scarpati

    2013-09-01

    Full Text Available A number of scholars of private equity (“PE”) have attempted to assess the ex-post returns, or performance, of PEs by adopting an ex-post perspective of asset pricing. In doing so, a set of phenomena has been recognized that is thought to be specific to the PE sector, such as the “money-chasing deal phenomenon” (Gompers and Lerner, 2000) and “performance persistence” (Lerner and Schoar, 2005). However, based on their continuing use of an ex-post perspective, few scholars have paid attention to the possible extent to which these and other PE phenomena may affect expected returns from PE investments. To address this problem, this article draws on an ex-ante perspective of investment decision-making in suggesting how a number of drivers and factors of PE phenomena may produce “abnormal returns”, and that each of those drivers and factors should therefore be considered in accurately assessing the required risk premium and expected abnormal returns of PE investments. In making these contributions we examined a private equity investment of a regional PE in Italy and administered a telephone questionnaire to 40 PEs in Italy and the UK, and found principally that while size is the most important driver in producing abnormal returns, illiquidity alone cannot explain the expected returns of PE investments (cf. Franzoni et al., 2012). Based on our findings we developed a predictive model of PE decision-making that draws on an ex-ante perspective of asset pricing and takes into account PE phenomena and abnormal returns. This model extends the work of Franzoni et al. (2012), Jegadeesh et al. (2009), and Korteweg and Sorensen (2010), who did not consider the possible influence of PE phenomena in decision-making, and will also help PE managers make better-informed decisions.

  15. Expectant Mothers Maximizing Opportunities: Maternal Characteristics Moderate Multifactorial Prenatal Stress in the Prediction of Birth Weight in a Sample of Children Adopted at Birth.

    Directory of Open Access Journals (Sweden)

    Line Brotnow

    Full Text Available Mothers' stress in pregnancy is considered an environmental risk factor in child development. Multiple stressors may combine to increase risk, and maternal personal characteristics may offset the effects of stress. This study aimed to test the effect of 1) multifactorial prenatal stress, integrating objective "stressors" and subjective "distress", and 2) the moderating effects of maternal characteristics (perceived social support, self-esteem and specific personality traits) on infant birthweight. Hierarchical regression modeling was used to examine cross-sectional data on 403 birth mothers and their newborns from an adoption study. Distress during pregnancy showed a statistically significant association with birthweight (R2 = 0.032, F(2, 398) = 6.782, p = .001). The hierarchical regression model revealed an almost two-fold increase in variance of birthweight predicted by stressors as compared with distress measures (R2Δ = 0.049, F(4, 394) = 5.339, p < .001). Further, maternal characteristics moderated this association (R2Δ = 0.031, F(4, 389) = 3.413, p = .009). Specifically, the expected benefit to birthweight as a function of higher SES was observed only for mothers with lower levels of harm-avoidance and higher levels of perceived social support. Importantly, the results were not better explained by prematurity, pregnancy complications, or exposure to drugs, alcohol or environmental toxins. The findings support multidimensional theoretical models of prenatal stress. Although both objective stressors and subjectively measured distress predict birthweight, they should be considered distinct and cumulative components of stress. This study further highlights that jointly considering risk factors and protective factors in pregnancy improves the ability to predict birthweight.

  16. Expectant Mothers Maximizing Opportunities: Maternal Characteristics Moderate Multifactorial Prenatal Stress in the Prediction of Birth Weight in a Sample of Children Adopted at Birth.

    Science.gov (United States)

    Brotnow, Line; Reiss, David; Stover, Carla S; Ganiban, Jody; Leve, Leslie D; Neiderhiser, Jenae M; Shaw, Daniel S; Stevens, Hanna E

    2015-01-01

    Mothers' stress in pregnancy is considered an environmental risk factor in child development. Multiple stressors may combine to increase risk, and maternal personal characteristics may offset the effects of stress. This study aimed to test the effect of 1) multifactorial prenatal stress, integrating objective "stressors" and subjective "distress" and 2) the moderating effects of maternal characteristics (perceived social support, self-esteem and specific personality traits) on infant birthweight. Hierarchical regression modeling was used to examine cross-sectional data on 403 birth mothers and their newborns from an adoption study. Distress during pregnancy showed a statistically significant association with birthweight (R2 = 0.032, F(2, 398) = 6.782, p = .001). The hierarchical regression model revealed an almost two-fold increase in variance of birthweight predicted by stressors as compared with distress measures (R2Δ = 0.049, F(4, 394) = 5.339, p < .001). Further, maternal characteristics moderated this association (R2Δ = 0.031, F(4, 389) = 3.413, p = .009). Specifically, the expected benefit to birthweight as a function of higher SES was observed only for mothers with lower levels of harm-avoidance and higher levels of perceived social support. Importantly, the results were not better explained by prematurity, pregnancy complications, exposure to drugs, alcohol or environmental toxins. The findings support multidimensional theoretical models of prenatal stress. Although both objective stressors and subjectively measured distress predict birthweight, they should be considered distinct and cumulative components of stress. This study further highlights that jointly considering risk factors and protective factors in pregnancy improves the ability to predict birthweight.
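
    The R2Δ statistics in this abstract come from comparing nested regression models. A minimal sketch of that workflow with statsmodels is shown below; the column names and the synthetic data are placeholders, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 403
df = pd.DataFrame({
    "distress": rng.normal(size=n),
    "stressors": rng.normal(size=n),
    "support": rng.normal(size=n),
})
df["birthweight"] = (3400 + 40 * df["stressors"] + 25 * df["distress"]
                     + 30 * df["stressors"] * df["support"]
                     + rng.normal(0, 400, n))

# Hierarchical (nested) models: distress only, + stressors, + moderation term.
m1 = smf.ols("birthweight ~ distress", data=df).fit()
m2 = smf.ols("birthweight ~ distress + stressors", data=df).fit()
m3 = smf.ols("birthweight ~ distress + stressors * support", data=df).fit()

print("R2 change (stressors):", m2.rsquared - m1.rsquared)
print("R2 change (moderation):", m3.rsquared - m2.rsquared)
print("F-test for the moderation step:", m3.compare_f_test(m2))
```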

  17. Maximal γ-regularity

    NARCIS (Netherlands)

    Van Neerven, J.M.A.M.; Veraar, M.C.; Weis, L.

    2015-01-01

    In this paper, we prove maximal regularity estimates in “square function spaces” which are commonly used in harmonic analysis, spectral theory, and stochastic analysis. In particular, they lead to a new class of maximal regularity results for both deterministic and stochastic equations in L^p spaces.

  18. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    International Nuclear Information System (INIS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-01-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.

  19. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Energy Technology Data Exchange (ETDEWEB)

    Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.

  20. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Science.gov (United States)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.
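
    For reference, the Expectation Maximization algorithm compared in these three records is, in emission tomography, the classical MLEM update. The sketch below is a generic textbook implementation for a given system matrix, not the authors' code.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation-maximization (MLEM) reconstruction.

    A : (n_rays, n_pixels) system matrix mapping image to projections
    y : measured projection counts
    Update rule: x <- x / (A^T 1) * A^T (y / (A x)), elementwise.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])               # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                               # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y, dtype=float),
                          where=proj > 0)          # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```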

  1. Entropy maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...

  2. Maximal balleans

    Directory of Open Access Journals (Sweden)

    Olga I. Protasova

    2006-10-01

    Full Text Available A ballean is a set X endowed with some family of subsets of X which are called the balls. We postulate the properties of the family of balls in such a way that the balleans with the appropriate morphisms can be considered as the asymptotic counterparts of the uniform topological spaces. The purpose of the paper is to find and study the asymptotic counterparts for maximal topological spaces and maximal topological groups.

  3. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...

  4. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...
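
    In cleaned-up notation, the variational problem behind these three records and its textbook Lagrange-multiplier solution can be sketched as follows (our notation, not the paper's full argument):

```latex
\max_{f}\; -\int f \log f \, d\mu
\quad \text{subject to} \quad
\int f h_i \, d\mu = \lambda_i,\ i = 1,\dots,k,
\qquad \int f \, d\mu = 1.
```

    Setting the functional derivative of the Lagrangian to zero gives log f_0 = -1 + c_0 + ∑ c_i h_i, i.e. f_0 ∝ exp(∑ c_i h_i), with the constants c_i chosen so that the moment constraints are satisfied.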

  5. Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.

    Science.gov (United States)

    Mollica, Cristina; Tardella, Luca

    2017-06-01

    The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
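
    As background for this record, the Plackett-Luce model assigns a ranking probability by sequentially choosing items with probability proportional to their support (skill) parameters. A minimal sketch, with illustrative names, that also handles partial rankings:

```python
import numpy as np

def plackett_luce_logprob(ranking, w):
    """Log-probability of a (possibly partial) ranking under Plackett-Luce.

    ranking : item indices listed from most to least preferred
    w       : positive support parameters, one per item
    At each stage the next-ranked item is chosen from the items still
    available with probability w[i] / (sum of remaining weights).
    """
    available = set(range(len(w)))
    logp = 0.0
    for i in ranking:                  # a partial ranking simply stops early
        denom = sum(w[j] for j in available)
        logp += np.log(w[i]) - np.log(denom)
        available.remove(i)
    return logp

# Example: 4 items, item 2 strongest; only the top 3 are ranked (partial).
print(plackett_luce_logprob([2, 0, 3], w=[1.0, 0.5, 2.0, 0.8]))
```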

  6. Optimization of plasma diagnostics using Bayesian probability theory

    Science.gov (United States)

    Dreier, H.; Fischer, R.; Dinklage, A.; Hirsch, M.; Kornejew, P.

    2006-11-01

    The diagnostic set-up for Wendelstein 7-X, a magnetic fusion device presently under construction, is currently in the design process to optimize the outcome under given technical constraints. Compared to traditional design approaches, Bayesian Experimental Design (BED) allows optimization with respect to physically motivated design criteria. It aims to find the optimal design by maximizing an expected utility function that quantifies the goals of the experiment. The expectation marginalizes over the uncertain physical parameters and the possible values of future data. The approach presented here is based on the maximization of an information measure (Kullback-Leibler entropy). As an example, the optimization of an infrared multichannel interferometer is shown in detail. Design aspects like the impact of technical restrictions are discussed.
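
    The utility maximized in Bayesian Experimental Design is typically the expected Kullback-Leibler information gain. A generic nested Monte Carlo estimator is sketched below; `prior_sample`, `simulate` and `likelihood_pdf` are hypothetical problem-specific callables, not part of the W7-X work.

```python
import numpy as np

def expected_info_gain(design, prior_sample, simulate, likelihood_pdf,
                       n_outer=500, n_inner=500):
    """Nested Monte Carlo estimate of the expected KL information gain.

    EIG(d) = E_{theta, y | d}[ log p(y | theta, d) - log p(y | d) ],
    with the evidence p(y | d) approximated by an inner prior average.
    """
    outer = prior_sample(n_outer)
    inner = prior_sample(n_inner)
    total = 0.0
    for theta in outer:
        y = simulate(theta, design)                 # draw synthetic data
        evidence = np.mean([likelihood_pdf(y, t, design) for t in inner])
        total += np.log(likelihood_pdf(y, theta, design)) - np.log(evidence)
    return total / n_outer

# The optimal design maximizes this utility over the feasible design space,
# e.g. best = max(candidates, key=lambda d: expected_info_gain(d, ...)).
```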

  7. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  8. Sparse Bayesian Learning for DOA Estimation with Mutual Coupling

    Directory of Open Access Journals (Sweden)

    Jisheng Dai

    2015-10-01

    Full Text Available Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.

  9. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining it with the steepest descent method, and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
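
    A minimal sketch of the kind of procedure this record describes: Gaussian-process-based Bayesian optimization with an expected-improvement acquisition, applied to an expensive black-box objective such as a log-posterior evaluator. It uses scikit-learn's GP regressor and is illustrative only, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def bayes_opt(f, lo, hi, n_init=5, n_iter=25, seed=0):
    """Maximize a 1-D black-box objective f on [lo, hi] with a GP surrogate."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, size=(2000, 1))    # random candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        z = (mu - y.max()) / np.maximum(sd, 1e-12)
        ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    return X[np.argmax(y), 0], y.max()

# Example: locate the mode of an unnormalized log-density.
x_star, _ = bayes_opt(lambda x: -(x - 1.3) ** 2, lo=-5.0, hi=5.0)
```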

  10. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
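
    The flavor of this optimization is easy to sketch: average frequentist power over a prior on the effect size (the "assurance"), convert trial success into revenue, subtract trial costs, and pick the sample size that maximizes the expectation. All numbers below are invented placeholders, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

def expected_profit(n_per_arm, effect_draws, revenue=5e8,
                    cost_per_patient=5e4, sigma=1.0, alpha=0.025):
    """Expected profit of a two-arm trial with n_per_arm patients per arm.

    Success = rejecting H0 in a one-sided z-test at level alpha; its
    probability is averaged over prior draws of the true effect size.
    """
    se = sigma * np.sqrt(2.0 / n_per_arm)
    power = norm.cdf(effect_draws / se - norm.ppf(1 - alpha))
    return power.mean() * revenue - cost_per_patient * 2 * n_per_arm

prior_draws = np.random.default_rng(1).normal(0.2, 0.1, size=10_000)
sizes = np.arange(50, 2001, 50)
best_n = max(sizes, key=lambda n: expected_profit(n, prior_draws))
```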

  11. Bayesian natural language semantics and pragmatics

    CERN Document Server

    Zeevat, Henk

    2015-01-01

    The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation as in Grice's contributions to pragmatics or in interpretation by abduction.

  12. Bayesian Model Averaging for Propensity Score Analysis.

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.

  13. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic: While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data: Emphasizing probability as an alternative to Boolean

  14. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  15. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

    Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  16. A Bayesian Reflection on Surfaces

    Directory of Open Access Journals (Sweden)

    David R. Wolf

    1999-10-01

    Full Text Available Abstract: The topic of this paper is a novel Bayesian continuous-basis field representation and inference framework. Within this paper several problems are solved: the maximally informative inference of continuous-basis fields, that is, where the basis for the field is itself a continuous object and not representable in a finite manner; the tradeoff between accuracy of representation in terms of information learned, and memory or storage capacity in bits; the approximation of probability distributions so that a maximal amount of information about the object being inferred is preserved; and an information-theoretic justification for multigrid methodology. The maximally informative field inference framework is described in full generality and denoted the Generalized Kalman Filter. The Generalized Kalman Filter allows the update of field knowledge from previous knowledge at any scale, and new data, to new knowledge at any other scale. An example application, the inference of continuous surfaces from measurements (for example, camera image data), is presented.

  17. Bayesian networks: a combined tuning heuristic

    NARCIS (Netherlands)

    Bolt, J.H.

    2016-01-01

    One of the issues in tuning an output probability of a Bayesian network by changing multiple parameters is the relative amount of the individual parameter changes. In an existing heuristic, parameters are tied such that their changes locally induce a maximal change of the tuned probability. This

  18. Poles tracking of weakly nonlinear structures using a Bayesian smoothing method

    Science.gov (United States)

    Stephan, Cyrille; Festjens, Hugo; Renaud, Franck; Dion, Jean-Luc

    2017-02-01

    This paper describes a method for the identification and tracking of poles of a weakly nonlinear structure from its free responses. This method is based on a model of multichannel damped sines whose parameters evolve over time. Their variations are approximated in discrete time by a nonlinear state space model. States are estimated by an iterative process which couples a two-pass Bayesian smoother with an Expectation-Maximization (EM) algorithm. The method is applied to numerical and experimental cases. As a result, accurate frequency and damping estimates are obtained as a function of amplitude.
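
    For concreteness, the multichannel damped-sine model underlying this identification problem can be written, in our notation (the paper's exact parameterization may differ), as:

```latex
y(t) = \sum_{k=1}^{K} a_k \, e^{-\zeta_k \omega_k t}\,
       \sin\!\left(\omega_k \sqrt{1-\zeta_k^2}\, t + \varphi_k\right) + \varepsilon(t),
```

    where the modal amplitudes a_k, damping ratios ζ_k and natural frequencies ω_k are allowed to drift slowly over time; the two-pass Bayesian smoother coupled with EM tracks these drifting parameters from the measured free response.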

  19. BayesLCA: An R Package for Bayesian Latent Class Analysis

    Directory of Open Access Journals (Sweden)

    Arthur White

    2014-11-01

    Full Text Available The BayesLCA package for R provides tools for performing latent class analysis within a Bayesian setting. Three methods for fitting the model are provided, incorporating an expectation-maximization algorithm, Gibbs sampling and a variational Bayes approximation. The article briefly outlines the methodology behind each of these techniques and discusses some of the technical difficulties associated with them. Methods to remedy these problems are also described. Visualization methods for each of these techniques are included, as well as criteria to aid model selection.
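
    One of the three fitting methods mentioned, the EM algorithm for a latent class model with binary items, is short enough to sketch directly. Below is a generic NumPy version (a Bernoulli mixture), not the BayesLCA implementation.

```python
import numpy as np

def lca_em(X, k=2, n_iter=100, seed=0):
    """EM for a latent class model with binary items (Bernoulli mixture).

    X : (n, d) binary data matrix. Returns class weights pi (k,) and
    item-response probabilities theta (k, d).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)
    theta = rng.uniform(0.25, 0.75, size=(k, d))
    for _ in range(n_iter):
        # E-step: responsibilities r[i, c] = P(class c | observation i)
        logp = np.log(pi) + X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
        logp -= logp.max(axis=1, keepdims=True)    # for numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweighted class proportions and item probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta
```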

  20. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for Multivariate Normal mean vector; Bayesian inference for Multiple Linear Regression Model; and Computati...

  1. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    International Nuclear Information System (INIS)

    Barat, Eric; Dautremer, Thomas

    2007-01-01

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution--normalized emission intensity of the spatial Poisson process--is considered as a spatial probability density and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R k (for reconstruction in k dimensions) and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function, such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals like the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.

  2. Root Sparse Bayesian Learning for Off-Grid DOA Estimation

    Science.gov (United States)

    Dai, Jisheng; Bao, Xu; Xu, Weichao; Chang, Chunqi

    2017-01-01

    The performance of existing sparse Bayesian learning (SBL) methods for off-grid DOA estimation depends on the trade-off between accuracy and computational workload. To speed up the off-grid SBL method while retaining reasonable accuracy, this letter describes a computationally efficient root SBL method for off-grid DOA estimation, where a coarse refinable grid, whose sampled locations are viewed as adjustable parameters, is adopted. We utilize an expectation-maximization (EM) algorithm to iteratively refine this coarse grid, and illustrate that each updated grid point can be obtained simply as the root of a certain polynomial. Simulation results demonstrate that the computational complexity is significantly reduced and the modeling error can be almost eliminated.

  3. Optimal execution in high-frequency trading with Bayesian learning

    Science.gov (United States)

    Du, Bian; Zhu, Hongliang; Zhao, Jingdong

    2016-11-01

    We consider optimal trading strategies in which traders submit bid and ask quotes to maximize the expected quadratic utility of total terminal wealth in a limit order book. The trader's bid and ask quotes will be changed by the Poisson arrival of market orders. Meanwhile, the trader may update his estimate of other traders' target sizes and directions by Bayesian learning. The solution of optimal execution in the limit order book is a two-step procedure. First, we model inactive trading with no limit orders in the market. The dealer simply holds dollars and shares of stocks until terminal time. Second, he calibrates his bid and ask quotes to the limit order book. The optimal solutions are given by dynamic programming and are in fact globally optimal. The last part of the article also gives numerical simulations of the value function and optimal quotes.

  4. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2003-01-01

    As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate all of Bayesian net technology and learning Bayesian net technology and apply them both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.

  5. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente

  6. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry, but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms.
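
    The log-determinant divergence used as the dissimilarity measure in this record has a compact closed form; a direct NumPy transcription (illustrative only):

```python
import numpy as np

def logdet_divergence(X, Y):
    """Log-determinant (Burg) divergence between SPD matrices X and Y:
    D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - n.
    Nonnegative, and zero iff X = Y.
    """
    M = X @ np.linalg.inv(Y)
    sign, logdet = np.linalg.slogdet(M)   # sign is +1 for SPD inputs
    return float(np.trace(M) - logdet - X.shape[0])
```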

  7. Optimal Bayesian Experimental Design for Combustion Kinetics

    KAUST Repository

    Huan, Xun

    2011-01-04

    Experimental diagnostics play an essential role in the development and refinement of chemical kinetic models, whether for the combustion of common complex hydrocarbons or of emerging alternative fuels. Questions of experimental design—e.g., which variables or species to interrogate, at what resolution and under what conditions—are extremely important in this context, particularly when experimental resources are limited. This paper attempts to answer such questions in a rigorous and systematic way. We propose a Bayesian framework for optimal experimental design with nonlinear simulation-based models. While the framework is broadly applicable, we use it to infer rate parameters in a combustion system with detailed kinetics. The framework introduces a utility function that reflects the expected information gain from a particular experiment. Straightforward evaluation (and maximization) of this utility function requires Monte Carlo sampling, which is infeasible with computationally intensive models. Instead, we construct a polynomial surrogate for the dependence of experimental observables on model parameters and design conditions, with the help of dimension-adaptive sparse quadrature. Results demonstrate the efficiency and accuracy of the surrogate, as well as the considerable effectiveness of the experimental design framework in choosing informative experimental conditions.

  8. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  9. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price...

  10. Intensity-based bayesian framework for image reconstruction from sparse projection data

    International Nuclear Information System (INIS)

    Rashed, E.A.; Kudo, Hiroyuki

    2009-01-01

    This paper presents a Bayesian framework for iterative image reconstruction from projection data measured over a limited number of views. The classical Nyquist sampling rule yields the minimum number of projection views required for accurate reconstruction. However, challenges exist in many medical and industrial imaging applications in which the projection data is undersampled. Classical analytical reconstruction methods such as filtered backprojection (FBP) are not a good choice for use in such cases because the data undersampling in the angular range introduces aliasing and streak artifacts that degrade lesion detectability. In this paper, we propose a Bayesian framework for maximum likelihood-expectation maximization (ML-EM)-based iterative reconstruction methods that incorporates a priori knowledge obtained from expected intensity information. The proposed framework is based on the fact that, in tomographic imaging, it is often possible to expect a set of intensity values of the reconstructed object with relatively high accuracy. The image reconstruction cost function is modified to include the l1-norm distance to the a priori known information. The proposed method has the potential to regularize the solution to reduce artifacts without missing lesions that cannot be expected from the a priori information. Numerical studies showed a significant improvement in image quality and lesion detectability under the condition of highly undersampled projection data. (author)
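
    Schematically, the modified cost function described here augments the negative Poisson log-likelihood of ML-EM with an l1 distance to the a priori intensity values (our notation; β is a regularization weight):

```latex
\hat{x} \;=\; \arg\min_{x \,\ge\, 0}\;
\sum_{i} \left[ (Ax)_i - y_i \log (Ax)_i \right]
\;+\; \beta \left\lVert x - x^{\mathrm{prior}} \right\rVert_1,
```

    where A is the projection operator, y the measured projection data, and x^prior the expected intensity image.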

  11. The maximal acceleration group

    International Nuclear Information System (INIS)

    Brandt, H.E.

    1984-01-01

    On the basis of the maximal proper acceleration relative to the vacuum, a line element and differential geometry of eight-dimensional phase space are constructed. The maximal acceleration group is defined as the mathematical transformation group under which the eight dimensional line element is invariant. In the classical limit of vanishing Planck's constant, the maximal acceleration group and geometry reduce to those of general relativity

  12. Maximization, learning, and economic behavior.

    Science.gov (United States)

    Erev, Ido; Roth, Alvin E

    2014-07-22

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.

  13. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  14. Entropy, Information Theory, Information Geometry and Bayesian Inference in Data, Signal and Image Processing and Inverse Problems

    Directory of Open Access Journals (Sweden)

    Ali Mohammad-Djafari

    2015-06-01

    Full Text Available The main content of this review article is first to review the main inference tools using Bayes rule, the maximum entropy principle (MEP), information theory, relative entropy and the Kullback–Leibler (KL) divergence, Fisher information and its corresponding geometries. For each of these tools, the precise context of their use is described. The second part of the paper is focused on the ways these tools have been used in data, signal and image processing and in the inverse problems which arise in different physical sciences and engineering applications. A few examples of the applications are described: entropy in independent components analysis (ICA) and in blind source separation, Fisher information in data model selection, different maximum entropy-based methods in time series spectral estimation and in linear inverse problems and, finally, Bayesian inference for general inverse problems. Some original materials concerning approximate Bayesian computation (ABC) and, in particular, the variational Bayesian approximation (VBA) methods are also presented. VBA is used for proposing an alternative Bayesian computational tool to the classical Markov chain Monte Carlo (MCMC) methods. We will also see that VBA encompasses joint maximum a posteriori (MAP) estimation, as well as the different expectation-maximization (EM) algorithms, as particular cases.

  15. An introduction to using Bayesian linear regression with clinical data.

    Science.gov (United States)

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
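
    As a complement to the article's tutorial treatment, the simplest fully worked case, Bayesian linear regression with a Gaussian prior and known noise variance, has a closed-form posterior. A generic NumPy sketch (not the article's EEG models or its R code):

```python
import numpy as np

def bayes_linreg(X, y, prior_var=10.0, noise_var=1.0):
    """Posterior over weights for y = X w + eps, eps ~ N(0, noise_var),
    with prior w ~ N(0, prior_var * I).
    Returns the posterior mean and covariance of w.
    """
    d = X.shape[1]
    precision = X.T @ X / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / noise_var
    return mean, cov

# A 95% credible interval for each weight is then
# mean[j] +/- 1.96 * sqrt(cov[j, j]).
```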

  16. Joint Iterative Carrier Synchronization and Signal Detection Employing Expectation Maximization

    DEFF Research Database (Denmark)

    Zibar, Darko; de Carvalho, Luis Henrique Hecker; Estaran Tolosa, Jose Manuel

    2014-01-01

    The algorithm is tested in a mixed line rate optical transmission scenario employing a dual polarization 448 Gb/s 16-QAM signal surrounded by eight on-off keying channels in a 50 GHz grid. It is shown that joint carrier synchronization and data detection are more robust towards optical transmitter impairments...

  17. Concurrent Cognitive Mapping and Localization Using Expectation Maximization

    National Research Council Canada - National Science Library

    Laviers, Kennard

    2004-01-01

    ...) views the world as a series of connected spaces. These spaces are initially mapped as an occupancy grid in a room-by-room fashion using a modified version of the Histogram In Motion Mapping (HIMM...

  18. Two Expectation-Maximization Algorithms for Boolean Factor Analysis

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Polyakov, P.Y.

    2014-01-01

    Roč. 130, 23 April (2014), s. 83-97 ISSN 0925-2312 R&D Projects: GA ČR GAP202/10/0262 Grant - others:GA MŠk(CZ) ED1.1.00/02.0070; GA MŠk(CZ) EE.2.3.20.0073 Program:ED Institutional research plan: CEZ:AV0Z10300504 Keywords : Boolean Factor analysis * Binary Matrix factorization * Neural networks * Binary data model * Dimension reduction * Bars problem Subject RIV: IN - Informatics, Computer Science Impact factor: 2.083, year: 2014

  19. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  20. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  1. Unequal Expectations

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    the impact of educational tracking on expectation formation among high school students in the U.S., whereas Chapter III analyzes the role of the grade point average in the expectation formation process among socially disadvantaged high school students in the U.S. The empirical analyses in both chapters...

  2. Bayesian estimation of regularization and atlas building in diffeomorphic image registration.

    Science.gov (United States)

    Zhang, Miaomiao; Singh, Nikhil; Fletcher, P Thomas

    2013-01-01

    This paper presents a generative Bayesian model for diffeomorphic image registration and atlas building. We develop an atlas estimation procedure that simultaneously estimates the parameters controlling the smoothness of the diffeomorphic transformations. To achieve this, we introduce a Monte Carlo Expectation Maximization algorithm, where the expectation step is approximated via Hamiltonian Monte Carlo sampling on the manifold of diffeomorphisms. An added benefit of this stochastic approach is that it can successfully solve difficult registration problems involving large deformations, where direct geodesic optimization fails. Using synthetic data generated from the forward model with known parameters, we demonstrate the ability of our model to successfully recover the atlas and regularization parameters. We also demonstrate the effectiveness of the proposed method in the atlas estimation problem for 3D brain images.

  3. Optimal projection of observations in a Bayesian setting

    KAUST Repository

    Giraldi, Loic

    2018-03-18

    Optimal dimensionality reduction methods are proposed for the Bayesian inference of a Gaussian linear model with additive noise in presence of overabundant data. Three different optimal projections of the observations are proposed based on information theory: the projection that minimizes the Kullback–Leibler divergence between the posterior distributions of the original and the projected models, the one that minimizes the expected Kullback–Leibler divergence between the same distributions, and the one that maximizes the mutual information between the parameter of interest and the projected observations. The first two optimization problems are formulated as the determination of an optimal subspace and therefore the solution is computed using Riemannian optimization algorithms on the Grassmann manifold. Regarding the maximization of the mutual information, it is shown that there exists an optimal subspace that minimizes the entropy of the posterior distribution of the reduced model; a basis of the subspace can be computed as the solution to a generalized eigenvalue problem; an a priori error estimate on the mutual information is available for this particular solution; and that the dimensionality of the subspace to exactly conserve the mutual information between the input and the output of the models is less than the number of parameters to be inferred. Numerical applications to linear and nonlinear models are used to assess the efficiency of the proposed approaches, and to highlight their advantages compared to standard approaches based on the principal component analysis of the observations.

  4. Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration

    Science.gov (United States)

    Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan

    2017-12-01

    As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.

  5. Inferring the most probable maps of underground utilities using Bayesian mapping model

    Science.gov (United States)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model by integrating the knowledge extracted from the sensors' raw data with the available statutory records. The combination of statutory records with the hypotheses from sensors was used for initial estimation of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and an expectation-maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.

  6. Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration

    Directory of Open Access Journals (Sweden)

    Kevin B. Wynne

    2017-12-01

    Full Text Available As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier “footprint” of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam’s characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.

  7. Bayesian evaluation of inequality constrained hypotheses

    NARCIS (Netherlands)

    Gu, X.; Mulder, J.; Deković, M.; Hoijtink, H.

    2014-01-01

    Bayesian evaluation of inequality constrained hypotheses enables researchers to investigate their expectations with respect to the structure among model parameters. This article proposes an approximate Bayes procedure that can be used for the selection of the best of a set of inequality constrained

  8. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references

  9. Attention in a bayesian framework

    DEFF Research Database (Denmark)

    Whiteley, Louise Emma; Sahani, Maneesh

    2012-01-01

    , and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental...... settings, where cues shape expectations about a small number of upcoming stimuli and thus convey "prior" information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its...

  10. Evolutionary Expectations

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    , they are correlated among people who share environments because these individuals satisfice within their cognitive bounds by using cues in order of validity, as opposed to using cues arbitrarily. Any difference in expectations thereby arise from differences in cognitive ability, because two individuals with identical......The concept of evolutionary expectations descends from cue learning psychology, synthesizing ideas on rational expectations with ideas on bounded rationality, to provide support for these ideas simultaneously. Evolutionary expectations are rational, but within cognitive bounds. Moreover...... cognitive bounds will perceive business opportunities identically. In addition, because cues provide information about latent causal structures of the environment, changes in causality must be accompanied by changes in cognitive representations if adaptation is to be maintained. The concept of evolutionary...

  11. Unequal Expectations

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    In this dissertation I examine the relationship between subjective beliefs about the outcomes of educational choices and the generation of inequality of educational opportunity (IEO) in post-industrial society. Taking my departure in the rational action turn in the sociology of educational...... different educational choices according to their family background. IEO thus appears to be mediated by the expectations students hold for their futures. Taken together, this research agenda argues that both researchers and policy-makers need to consider the expectation-based origin of educational...... for their educational futures. Focusing on the causes rather than the consequences of educational expectations, I argue that students shape their expectations in response to the signals about their academic performance they receive from institutionalized performance indicators in schools. Chapter II considers...

  12. Spin and Maximal Acceleration

    Directory of Open Access Journals (Sweden)

    Giorgio Papini

    2017-12-01

    Full Text Available We study the spin current tensor of a Dirac particle at accelerations close to the upper limit introduced by Caianiello. Continual interchange between particle spin and angular momentum is possible only when the acceleration is time-dependent. This represents a stringent limit on the effect that maximal acceleration may have on spin physics in astrophysical applications. We also investigate some dynamical consequences of maximal acceleration.

  13. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  14. The Bayesian Score Statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.; Kleijn, R.; Paap, R.

    2000-01-01

    We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike

  15. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  16. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

    .... We have developed two new techniques. First, we have developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  17. Utility maximization under solvency constraints and unhedgeable risks

    NARCIS (Netherlands)

    Kleinow, T.; Pelsser, A.

    2008-01-01

    We consider the utility maximization problem for an investor who faces a solvency or risk constraint in addition to a budget constraint. The investor wishes to maximize her expected utility from terminal wealth subject to a bound on her expected solvency at maturity. We measure solvency using a

  18. From Wald to Savage: homo economicus becomes a Bayesian statistician.

    Science.gov (United States)

    Giocoli, Nicola

    2013-01-01

    Bayesian rationality is the paradigm of rational behavior in neoclassical economics. An economic agent is deemed rational when she maximizes her subjective expected utility and consistently revises her beliefs according to Bayes's rule. The paper raises the question of how, when and why this characterization of rationality came to be endorsed by mainstream economists. Though no definitive answer is provided, it is argued that the question is of great historiographic importance. The story begins with Abraham Wald's behaviorist approach to statistics and culminates with Leonard J. Savage's elaboration of subjective expected utility theory in his 1954 classic The Foundations of Statistics. Savage's acknowledged failure to achieve a reinterpretation of traditional inference techniques along subjectivist and behaviorist lines raises the puzzle of how a failed project in statistics could turn into such a big success in economics. Possible answers call into play the emphasis on consistency requirements in neoclassical theory and the impact of the postwar transformation of U.S. business schools. © 2012 Wiley Periodicals, Inc.

  19. Simulation-based optimal Bayesian experimental design for nonlinear systems

    KAUST Repository

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.
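
    The information-theoretic objective above reduces, in its simplest form, to a two-stage (nested) Monte Carlo estimate of expected information gain. The sketch below uses an assumed scalar toy model in place of combustion kinetics and omits the polynomial chaos and stochastic approximation layers; all names and values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def forward(theta, d):
          return np.sin(d * theta)                  # hypothetical forward model

      def expected_information_gain(d, n_out=400, n_in=400, sigma=0.05):
          th_out = rng.standard_normal(n_out)       # outer samples from the prior
          y = forward(th_out, d) + sigma * rng.standard_normal(n_out)
          th_in = rng.standard_normal(n_in)         # inner samples for the evidence
          log_lik = -0.5 * ((y - forward(th_out, d)) / sigma) ** 2
          diff = y[:, None] - forward(th_in[None, :], d)
          log_evid = np.log(np.mean(np.exp(-0.5 * (diff / sigma) ** 2), axis=1))
          return np.mean(log_lik - log_evid)        # shared Gaussian constants cancel

      designs = np.linspace(0.1, 3.0, 15)
      eig = [expected_information_gain(d) for d in designs]
      print("best design:", designs[int(np.argmax(eig))])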

  20. Great Expectations

    NARCIS (Netherlands)

    Dickens, Charles

    2005-01-01

    One of Dickens's most renowned and enjoyable novels, Great Expectations tells the story of Pip, an orphan boy who wishes to transcend his humble origins and finds himself unexpectedly given the opportunity to live a life of wealth and respectability. Over the course of the tale, in which Pip

  1. Bayesian analysis of MEG visual evoked responses

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, D.M.; George, J.S.; Wood, C.C.

    1999-04-01

    The authors developed a method for analyzing neural electromagnetic data that allows probabilistic inferences to be drawn about regions of activation. The method involves the generation of a large number of possible solutions which both fit the data and conform to prior expectations about the nature of probable solutions, made explicit by a Bayesian formalism. In addition, they have introduced a model for the current distributions that produce MEG (and EEG) data that allows extended regions of activity, and can easily incorporate prior information such as anatomical constraints from MRI. To evaluate the feasibility and utility of the Bayesian approach with actual data, they analyzed MEG data from a visual evoked response experiment. They compared Bayesian analyses of MEG responses to visual stimuli in the left and right visual fields, in order to examine the sensitivity of the method to detect known features of human visual cortex organization. They also examined the changing pattern of cortical activation as a function of time.

  2. Bayesian data analysis for newcomers.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.

  3. Experiments expectations

    OpenAIRE

    Gorini, B; Meschi, E

    2014-01-01

    This paper presents the expectations and the constraints of the experiments relative to the commissioning procedure and the running conditions for the 2015 data taking period. The views about the various beam parameters for the p-p period, like beam energy, maximum pileup, bunch spacing and luminosity limitation in IP2 and IP8, are discussed. The goals and the constraints of the 2015 physics program are also presented, including the heavy-ion period as well as the special...

  4. Bayesian methods for data analysis

    CERN Document Server

    Carlin, Bradley P.

    2009-01-01

    Contents: Approaches for statistical inference (Introduction; Motivating Vignettes; Defining the Approaches; The Bayes-Frequentist Controversy; Some Basic Bayesian Models). The Bayes approach (Introduction; Prior Distributions; Bayesian Inference; Hierarchical Modeling; Model Assessment; Nonparametric Methods). Bayesian computation (Introduction; Asymptotic Methods; Noniterative Monte Carlo Methods; Markov Chain Monte Carlo Methods). Model criticism and selection (Bayesian Modeling; Bayesian Robustness; Model Assessment; Bayes Factors via Marginal Density Estimation; Bayes Factors...

  5. Statistics: a Bayesian perspective

    National Research Council Canada - National Science Library

    Berry, Donald A

    1996-01-01

    ...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the scientific process, it develops ideas...

  6. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...

  7. Practical Bayesian tomography

    Science.gov (United States)

    Granade, Christopher; Combes, Joshua; Cory, D. G.

    2016-03-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  8. Variational Bayesian Filtering

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2008-01-01

    Roč. 56, č. 10 (2008), s. 5020-5030 ISSN 1053-587X R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian filtering * particle filtering * Variational Bayes Subject RIV: BC - Control Systems Theory Impact factor: 2.335, year: 2008 http://library.utia.cas.cz/separaty/2008/AS/smidl-variational bayesian filtering.pdf

  9. Bayesian Networks An Introduction

    CERN Document Server

    Koski, Timo

    2009-01-01

    Bayesian Networks: An Introduction provides a self-contained introduction to the theory and applications of Bayesian networks, a topic of interest and importance for statisticians, computer scientists and those involved in modelling complex data sets. The material has been extensively tested in classroom teaching and assumes a basic knowledge of probability, statistics and mathematics. All notions are carefully explained and feature exercises throughout. Features include:.: An introduction to Dirichlet Distribution, Exponential Families and their applications.; A detailed description of learni

  10. Inverse Problems in a Bayesian Setting

    KAUST Repository

    Matthies, Hermann G.

    2016-02-13

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ)—the propagation of uncertainty through a computational (forward) model—are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. We give a detailed account of this approach via conditional approximation, various approximations, and the construction of filters. Together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update in form of a filter is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. Finally, we compare the linear and nonlinear Bayesian update in form of a filter on some examples.
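
    The simplest member of the filter family described above, the sampling-free *linear* Bayesian update obtained from the conditional-expectation variational problem, fits in a few lines; the prior ensemble, observation map, and noise level below are assumptions made for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      N = 2000
      H = np.array([[1.0, 0.5]])                               # assumed observation map
      q = rng.multivariate_normal([0.0, 0.0],
                                  [[1.0, 0.3], [0.3, 0.5]], N) # prior samples of q
      y = q @ H.T + 0.1 * rng.standard_normal((N, 1))          # predicted observations

      # Best linear estimator of q from y: K = Cov(q, y) Cov(y)^{-1},
      # the minimizer of E||q - K y||^2 over all linear maps K.
      C = np.cov(np.hstack([q, y]).T)                          # joint 3x3 covariance
      K = C[:2, 2:] @ np.linalg.inv(C[2:, 2:])

      y_obs = np.array([0.8])                                  # the measured value
      q_post = q + (y_obs - y) @ K.T                           # updated ensemble
      print("posterior mean:", q_post.mean(axis=0).round(3))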

  11. A Bayesian Hybrid Adaptive Randomisation Design for Clinical Trials with Survival Outcomes.

    Science.gov (United States)

    Moatti, M; Chevret, S; Zohar, S; Rosenberger, W F

    2016-01-01

    Response-adaptive randomisation designs have been proposed to improve the efficiency of phase III randomised clinical trials and improve the outcomes of the clinical trial population. In the setting of failure time outcomes, Zhang and Rosenberger (2007) developed a response-adaptive randomisation approach that targets an optimal allocation, based on a fixed sample size. The aim of this research is to propose a response-adaptive randomisation procedure for survival trials with an interim monitoring plan, based on the following optimal criterion: for fixed variance of the estimated log hazard ratio, what allocation minimizes the expected hazard of failure? We demonstrate the utility of the design by redesigning a clinical trial on multiple myeloma. To handle continuous monitoring of data, we propose a Bayesian response-adaptive randomisation procedure, where the log hazard ratio is the effect measure of interest. Combining the prior with the normal likelihood, the posterior mean estimate of the log hazard ratio allows derivation of the optimal target allocation. We perform a simulation study to assess and compare the performance of this proposed Bayesian hybrid adaptive design to those of fixed, sequential or adaptive - either frequentist or fully Bayesian - designs. Noninformative normal priors for the log hazard ratio were used, as well as mixtures of enthusiastic and skeptical priors. Stopping rules based on the posterior distribution of the log hazard ratio were computed. The method is then illustrated by redesigning a phase III randomised clinical trial of chemotherapy in patients with multiple myeloma, with a mixture of normal priors elicited from experts. As expected, there was a reduction in the proportion of observed deaths in the adaptive vs. non-adaptive designs; this reduction was maximized using a Bayes mixture prior, with no clear-cut improvement by using a fully Bayesian procedure. The use of stopping rules allows a slight decrease in the observed
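
    The conjugate-normal step mentioned above ("combining the prior with the normal likelihood") is closed form. The sketch below shows only that update; the allocation rule at the end is a deliberately simplified stand-in, not the paper's optimal criterion, and all numbers are invented.

      import numpy as np

      def posterior_lhr(mu0, tau0_sq, theta_hat, se_sq):
          prec = 1.0 / tau0_sq + 1.0 / se_sq              # posterior precision
          mean = (mu0 / tau0_sq + theta_hat / se_sq) / prec
          return mean, 1.0 / prec

      mu0, tau0_sq = 0.0, 1.0        # prior on the log hazard ratio (assumed)
      theta_hat, se_sq = -0.3, 0.04  # interim estimate and its variance (assumed)

      mean, var = posterior_lhr(mu0, tau0_sq, theta_hat, se_sq)
      hr = np.exp(mean)              # plug-in posterior hazard ratio

      # Hypothetical response-adaptive rule: allocate more patients toward the
      # arm with the currently smaller hazard.
      p_treat = 1.0 / (1.0 + np.sqrt(hr))
      print(f"posterior mean LHR = {mean:.3f}, allocate {p_treat:.1%} to treatment")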

  12. Sparse Variational Bayesian SAGE Algorithm With Application to the Estimation of Multipath Wireless Channels

    DEFF Research Database (Denmark)

    Shutin, Dmitriy; Fleury, Bernard Henri

    2011-01-01

    In this paper, we develop a sparse variational Bayesian (VB) extension of the space-alternating generalized expectation-maximization (SAGE) algorithm for the high resolution estimation of the parameters of relevant multipath components in the response of frequency and spatially selective wireless ... parametric sparsity priors for the weights of the multipath components. We revisit the Gaussian sparsity priors within the sparse VB-SAGE framework and extend the results by considering Laplace priors. The structure of the VB-SAGE algorithm allows for an analytical stability analysis of the update expression ... -invariant channels. The algorithm is also applied to real measurement data in a multiple-input-multiple-output (MIMO) time-invariant context.

  13. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin

    2015-01-01

    In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models, this paper proposes a GSM model - the Bessel K model - that induces concave penalty functions for the estimation of complex sparse signals. The properties of the Bessel K model are analyzed when it is applied to Type I and Type II estimation. This analysis reveals that, by tuning the parameters of the mixing pdf, different penalty functions are invoked depending on the estimation type used, the value of the noise variance, and whether real or complex signals are estimated. Using the Bessel K model, we derive a sparse estimator based on a modification of the expectation-maximization algorithm formulated...

  14. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
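
    The explicit Gaussian posterior exploited above holds for any linearized model d = Gm + e with Gaussian prior and noise; the sketch below uses an assumed random G rather than the paper's convolutional AVO operator, and all covariances are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      n_data, n_model = 50, 3
      G = rng.standard_normal((n_data, n_model))      # stand-in forward operator
      m0, Cm = np.zeros(n_model), np.eye(n_model)     # Gaussian prior on the model
      Cd = 0.05 * np.eye(n_data)                      # noise covariance

      m_true = np.array([1.0, -0.5, 0.2])
      d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), Cd)

      # Standard conjugate formulas: posterior covariance and mean.
      Cpost = np.linalg.inv(np.linalg.inv(Cm) + G.T @ np.linalg.inv(Cd) @ G)
      mpost = Cpost @ (np.linalg.inv(Cm) @ m0 + G.T @ np.linalg.inv(Cd) @ d)

      # Exact prediction intervals follow directly from the Gaussian posterior.
      sd = np.sqrt(np.diag(Cpost))
      for j in range(n_model):
          print(f"m[{j}]: {mpost[j]:.3f} +/- {1.96 * sd[j]:.3f}")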

  15. Non-linear Bayesian update of PCE coefficients

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q), q), where u(q, ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derived a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. polynomial chaos expansion (PCE). New: We apply the Bayesian update to the PCE coefficients of the random coefficient q(ω) (not to the probability density function of q).

  16. Multisnapshot Sparse Bayesian Learning for DOA

    DEFF Research Database (Denmark)

    Gerstoft, Peter; Mecklenbrauker, Christoph F.; Xenaki, Angeliki

    2016-01-01

    The directions of arrival (DOA) of plane waves are estimated from multisnapshot sensor array data using sparse Bayesian learning (SBL). The prior for the source amplitudes is assumed independent zero-mean complex Gaussian distributed with hyperparameters, the unknown variances (i.e., the source...... powers). For a complex Gaussian likelihood with hyperparameter, the unknown noise variance, the corresponding Gaussian posterior distribution is derived. The hyperparameters are automatically selected by maximizing the evidence and promoting sparse DOA estimates. The SBL scheme for DOA estimation...
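
    A bare-bones, real-valued stand-in for the SBL iteration described above: the source variances (hyperparameters) are updated by an EM-style evidence-maximization fixed point that drives most of them toward zero, yielding sparse estimates. The "steering matrix" A and all sizes are illustrative assumptions, and the record's complex-valued, multisnapshot details are omitted.

      import numpy as np

      rng = np.random.default_rng(4)
      M, N = 20, 8                                  # sensors, candidate directions
      A = rng.standard_normal((M, N))
      x_true = np.zeros(N); x_true[[1, 5]] = [2.0, -1.5]
      sigma2 = 0.01
      y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(M)

      gamma = np.ones(N)                            # prior variances of the sources
      for it in range(100):
          Sy = sigma2 * np.eye(M) + A @ np.diag(gamma) @ A.T
          AtS = A.T @ np.linalg.inv(Sy)             # A^T Sigma_y^{-1}
          mu = gamma * (AtS @ y)                    # posterior means of x
          sig = gamma - gamma ** 2 * np.einsum('nm,mn->n', AtS, A)
          gamma = mu ** 2 + sig                     # EM update of the variances
      print("estimated source powers:", np.round(gamma, 3))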

  17. The R Package MitISEM: Efficient and Robust Simulation Procedures for Bayesian Inference

    Directory of Open Access Journals (Sweden)

    Nalan Baştürk

    2017-07-01

    Full Text Available This paper presents the R package MitISEM (mixture of t by importance sampling weighted expectation maximization), which provides an automatic and flexible two-stage method to approximate a non-elliptical target density kernel - typically a posterior density kernel - using an adaptive mixture of Student t densities as approximating density. In the first stage a mixture of Student t densities is fitted to the target using an expectation maximization algorithm where each step of the optimization procedure is weighted using importance sampling. In the second stage this mixture density is a candidate density for efficient and robust application of importance sampling or the Metropolis-Hastings (MH) method to estimate properties of the target distribution. The package enables Bayesian inference and prediction on model parameters and probabilities, in particular, for models where densities have multi-modal or other non-elliptical shapes like curved ridges. These shapes occur in research topics in several scientific fields, for instance the analysis of DNA data in bioinformatics, loan acquisition by heterogeneous groups in financial economics, and the analysis of education's effect on earned income in labor economics. The package MitISEM also provides an extended algorithm, 'sequential MitISEM', which substantially decreases computation time when the target density has to be approximated for increasing data samples. This occurs when the posterior or predictive density is updated with new observations and/or when one computes model probabilities using predictive likelihoods. We illustrate the MitISEM algorithm using three canonical statistical and econometric models that are characterized by several types of non-elliptical posterior shapes and that describe well-known data patterns in econometrics and finance. We show that MH using the candidate density obtained by MitISEM outperforms, in terms of numerical efficiency, MH using a simpler

  18. Social group utility maximization

    CERN Document Server

    Gong, Xiaowen; Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b

  19. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...... with 30% (v/v) ethanol or saline, respectively. Relative viscosity was used as one measure of physical properties of the emulsion. Higher degrees of sensitization (but not rates) were obtained at the 48 h challenge reading with the oil/propylene glycol and oil/saline + ethanol emulsions compared...

  20. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  1. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper (Mon Weather Rev 133:1155-1174, 2005), Raftery et al. recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
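
    The EM iteration for BMA referred to above is short enough to sketch directly; the synthetic forecasts below are placeholders. The E-step computes each ensemble member's responsibility for each observation, and the M-step re-estimates the weights and a common predictive variance.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      T, K = 500, 3
      f = rng.standard_normal((T, K)) + np.array([0.0, 0.2, -0.1])  # member forecasts
      y = f[:, 0] + 0.3 * rng.standard_normal(T)                    # member 0 is best

      w, s2 = np.full(K, 1.0 / K), 1.0
      for it in range(100):
          # E-step: responsibility of member k for observation t.
          dens = norm.pdf(y[:, None], loc=f, scale=np.sqrt(s2)) * w
          z = dens / dens.sum(axis=1, keepdims=True)
          # M-step: re-estimate weights and the common predictive variance.
          w = z.mean(axis=0)
          s2 = np.sum(z * (y[:, None] - f) ** 2) / T
      print("BMA weights:", np.round(w, 3), "variance:", round(s2, 3))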

  2. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    Science.gov (United States)

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  3. Bayesian Decision Support

    Science.gov (United States)

    Berliner, M.

    2017-12-01

    Bayesian statistical decision theory offers a natural framework for decision-policy making in the presence of uncertainty. Key advantages of the approach include efficient incorporation of information and observations. However, in complicated settings it is very difficult, perhaps essentially impossible, to formalize the mathematical inputs needed in the approach. Nevertheless, using the approach as a template is useful for decision support; that is, for organizing and communicating our analyses. Bayesian hierarchical modeling is valuable in quantifying and managing uncertainty in such cases. I review some aspects of the idea, emphasizing statistical model development and use in the context of sea-level rise.

  4. Bayesian Exploratory Factor Analysis

    Science.gov (United States)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517

  5. Thermodynamically consistent Bayesian analysis of closed biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-11-01

    Full Text Available Background: Estimating the rate constants of a biochemical reaction system with known stoichiometry from noisy time series measurements of molecular concentrations is an important step for building predictive models of cellular function. Inference techniques currently available in the literature may produce rate constant values that defy necessary constraints imposed by the fundamental laws of thermodynamics. As a result, these techniques may lead to biochemical reaction systems whose concentration dynamics could not possibly occur in nature. Therefore, development of a thermodynamically consistent approach for estimating the rate constants of a biochemical reaction system is highly desirable. Results: We introduce a Bayesian analysis approach for computing thermodynamically consistent estimates of the rate constants of a closed biochemical reaction system with known stoichiometry given experimental data. Our method employs an appropriately designed prior probability density function that effectively integrates fundamental biophysical and thermodynamic knowledge into the inference problem. Moreover, it takes into account experimental strategies for collecting informative observations of molecular concentrations through perturbations. The proposed method employs a maximization-expectation-maximization algorithm that provides thermodynamically feasible estimates of the rate constant values and computes appropriate measures of estimation accuracy. We demonstrate various aspects of the proposed method on synthetic data obtained by simulating a subset of a well-known model of the EGF/ERK signaling pathway, and examine its robustness under conditions that violate key assumptions. Software, coded in MATLAB®, which implements all Bayesian analysis techniques discussed in this paper, is available free of charge at http://www.cis.jhu.edu/~goutsias/CSS%20lab/software.html. Conclusions: Our approach provides an attractive statistical methodology for

  6. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful to learn the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize the too complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.

  7. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...

  8. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  9. Bayesian statistical inference

    Directory of Open Access Journals (Sweden)

    Bruno De Finetti

    2017-04-01

    Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian Statistical Inference is one of the last fundamental philosophical papers in which we can find the essentials of De Finetti's approach to statistical inference.

  10. Robust modelling of solubility in supercritical carbon dioxide using Bayesian methods.

    Science.gov (United States)

    Tarasova, Anna; Burden, Frank; Gasteiger, Johann; Winkler, David A

    2010-04-01

    Two sparse Bayesian methods were used to derive predictive models of solubility of organic dyes and polycyclic aromatic compounds in supercritical carbon dioxide (scCO2), over a wide range of temperatures (285.9-423.2 K) and pressures (60-1400 bar): a multiple linear regression employing an expectation maximization algorithm and a sparse prior (MLREM) method and a non-linear Bayesian Regularized Artificial Neural Network with a Laplacian Prior (BRANNLP). A randomly selected test set was used to estimate the predictive ability of the models. The MLREM method resulted in a model of similar predictivity to the less sparse MLR method, while the non-linear BRANNLP method created models of substantially better predictivity than either the MLREM or MLR based models. The BRANNLP method simultaneously generated context-relevant subsets of descriptors and a robust, non-linear quantitative structure-property relationship (QSPR) model for the compound solubility in scCO2. The differences between linear and non-linear descriptor selection methods are discussed. (c) 2009 Elsevier Inc. All rights reserved.

  11. Bayesian optimization for materials science

    CERN Document Server

    Packwood, Daniel

    2017-01-01

    This book provides a short and concise introduction to Bayesian optimization specifically for experimental and computational materials scientists. After explaining the basic idea behind Bayesian optimization and some applications to materials science in Chapter 1, the mathematical theory of Bayesian optimization is outlined in Chapter 2. Finally, Chapter 3 discusses an application of Bayesian optimization to a complicated structure optimization problem in computational surface science. Bayesian optimization is a promising global optimization technique that originates in the field of machine learning and is starting to gain attention in materials science. For the purpose of materials design, Bayesian optimization can be used to predict new materials with novel properties without extensive screening of candidate materials. For the purpose of computational materials science, Bayesian optimization can be incorporated into first-principles calculations to perform efficient, global structure optimizations. While re...

  12. MAXIMS VIOLATIONS IN LITERARY WORK

    Directory of Open Access Journals (Sweden)

    Widya Hanum Sari Pertiwi

    2015-12-01

    Full Text Available This study was qualitative action research that focuses on the flouting of Gricean maxims and the functions of that flouting in the tales included in the collection of children's literature entitled My Giant Treasury of Stories and Rhymes. The objective of the study is to identify violations of the maxims of quantity, quality, relevance, and manner in the data sources and to analyze the use of the flouting in the tales included in the book. A qualitative design using categorizing strategies, specifically a coding strategy, was applied. Thus, the researcher, as the instrument in this investigation, selected the tales, read them, and gathered every item reflecting a violation of the Gricean maxims based on the conditions for flouting maxims. On the basis of the data analysis, it was found that some utterances in the tales, both narration and conversation, flout the four maxims of conversation, namely the maxim of quality, the maxim of quantity, the maxim of relevance, and the maxim of manner. The researcher also found that the flouting of maxims has one basic function: to encourage the readers' imagination toward the tales. This basic function is developed by six other functions: (1) generating specific situations, (2) developing the plot, (3) enlivening the characters' utterances, (4) implicating messages, (5) indirectly characterizing characters, and (6) creating ambiguous settings. Keywords: children literature, tales, flouting maxims

  13. Can natural selection encode Bayesian priors?

    Science.gov (United States)

    Ramírez, Juan Camilo; Marshall, James A R

    2017-08-07

    The evolutionary success of many organisms depends on their ability to make decisions based on estimates of the state of their environment (e.g., predation risk) from uncertain information. These decision problems have optimal solutions and individuals in nature are expected to evolve the behavioural mechanisms to make decisions as if using the optimal solutions. Bayesian inference is the optimal method to produce estimates from uncertain data, thus natural selection is expected to favour individuals with the behavioural mechanisms to make decisions as if they were computing Bayesian estimates in typically-experienced environments, although this does not necessarily imply that favoured decision-makers do perform Bayesian computations exactly. Each individual should evolve to behave as if updating a prior estimate of the unknown environment variable to a posterior estimate as it collects evidence. The prior estimate represents the decision-maker's default belief regarding the environment variable, i.e., the individual's default 'worldview' of the environment. This default belief has been hypothesised to be shaped by natural selection and represent the environment experienced by the individual's ancestors. We present an evolutionary model to explore how accurately Bayesian prior estimates can be encoded genetically and shaped by natural selection when decision-makers learn from uncertain information. The model simulates the evolution of a population of individuals that are required to estimate the probability of an event. Every individual has a prior estimate of this probability and collects noisy cues from the environment in order to update its prior belief to a Bayesian posterior estimate with the evidence gained. The prior is inherited and passed on to offspring. Fitness increases with the accuracy of the posterior estimates produced. Simulations show that prior estimates become accurate over evolutionary time. In addition to these 'Bayesian' individuals, we also
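
    A toy version of the model described above, under strong simplifying assumptions: individuals inherit a Beta prior over an event probability, perform a conjugate Beta-Bernoulli update on noisy cues, and a crude stochastic hill-climb plays the role of natural selection on the inherited prior. All parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(6)

      def lifetime_error(a, b, p_true=0.7, n_cues=10, n_lives=2000):
          cues = rng.random((n_lives, n_cues)) < p_true     # Bernoulli evidence
          k = cues.sum(axis=1)
          post_mean = (a + k) / (a + b + n_cues)            # Beta-Bernoulli update
          return np.mean((post_mean - p_true) ** 2)         # inaccuracy = low fitness

      # "Natural selection" as a hill-climb over the inherited prior (a, b).
      a, b = 1.0, 1.0
      for gen in range(200):
          a2, b2 = np.maximum([a, b] + 0.1 * rng.standard_normal(2), 0.01)
          if lifetime_error(a2, b2) < lifetime_error(a, b):
              a, b = a2, b2
      print(f"evolved prior mean: {a / (a + b):.2f} (true p = 0.7)")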

  14. Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models

    DEFF Research Database (Denmark)

    Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville

    2016-01-01

    The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study the ...

  15. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine ... in a Matlab toolbox, is demonstrated for non-negative decompositions and compared with non-negative matrix factorization.

  16. Bayesian coronal seismology

    Science.gov (United States)

    Arregui, Iñigo

    2018-01-01

    In contrast to the situation in a laboratory, the study of the solar atmosphere has to be pursued without direct access to the physical conditions of interest. Information is therefore incomplete and uncertain, and inference methods need to be employed to diagnose the physical conditions and processes. One such method, solar atmospheric seismology, makes use of observed and theoretically predicted properties of waves to infer plasma and magnetic field properties. A recent development in solar atmospheric seismology consists in the use of inversion and model comparison methods based on Bayesian analysis. In this paper, the philosophy and methodology of Bayesian analysis are first explained. Then, we provide an account of what has been achieved so far from the application of these techniques to solar atmospheric seismology and a prospect of possible future extensions.

  17. Bayesian community detection.

    Science.gov (United States)

    Mørup, Morten; Schmidt, Mikkel N

    2012-09-01

    Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not enforce this intuitive notion of communities. We formulate a nonparametric Bayesian model for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled.

  18. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  19. Bayesian Hypothesis Testing

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, Stephen A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sigeti, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-15

    These are a set of slides about Bayesian hypothesis testing in which many hypotheses are tested. The conclusions are the following: the value of the Bayes factor obtained when using the median of the posterior marginal is almost the minimum value of the Bayes factor; the value of τ² that minimizes the Bayes factor is a reasonable choice for this parameter; and this allows a likelihood ratio to be computed which is least favorable to H0.

  20. Bayesian networks in reliability

    Energy Technology Data Exchange (ETDEWEB)

    Langseth, Helge [Department of Mathematical Sciences, Norwegian University of Science and Technology, N-7491 Trondheim (Norway)]. E-mail: helgel@math.ntnu.no; Portinale, Luigi [Department of Computer Science, University of Eastern Piedmont ' Amedeo Avogadro' , 15100 Alessandria (Italy)]. E-mail: portinal@di.unipmn.it

    2007-01-15

    Over the last decade, Bayesian networks (BNs) have become a popular tool for modelling many kinds of statistical problems. We have also seen a growing interest for using BNs in the reliability analysis community. In this paper we will discuss the properties of the modelling framework that make BNs particularly well suited for reliability applications, and point to ongoing research that is relevant for practitioners in reliability.

  1. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental...... economics, with careful controls for the confounding effects of risk aversion. Our results show that risk aversion significantly alters inferences on deviations from Bayes’ Rule....

  2. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  3. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  4. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  5. Learning Functions and Approximate Bayesian Computation Design: ABCD

    Directory of Open Access Journals (Sweden)

    Markus Hainy

    2014-08-01

    A general approach to Bayesian learning revisits some classical results on which functionals of a prior distribution are expected to increase, in a preposterior sense. The results are applied to information functionals of the Shannon type and to a class of functionals based on expected distance. A close connection is made between the latter and a metric embedding theory due to Schoenberg and others; for the Shannon type, there is a connection to majorization theory for distributions. A computational method is described to solve the generalized optimal experimental design problems arising from this learning framework, based on a version of the well-known approximate Bayesian computation (ABC) method, which carries out the Bayesian analysis by Monte Carlo simulation. Some simple examples are given.

  6. A mixture copula Bayesian network model for multimodal genomic data

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2017-04-01

    Gaussian Bayesian networks have become a widely used framework to estimate directed associations between joint Gaussian variables, where the network structure encodes the decomposition of the multivariate normal density into local terms. However, the resulting estimates can be inaccurate when the normality assumption is moderately or severely violated, making the framework unsuitable for recent genomic data such as the Cancer Genome Atlas data. In the present paper, we propose a mixture copula Bayesian network model which provides great flexibility in modeling non-Gaussian and multimodal data for causal inference. The parameters in the mixture copula functions can be efficiently estimated by a routine expectation-maximization algorithm. A heuristic search algorithm based on the Bayesian information criterion is developed to estimate the network structure, and prediction can be further improved by selecting the best-scoring network out of multiple runs from random initial values. Our method outperforms Gaussian Bayesian networks and regular copula Bayesian networks in terms of modeling flexibility and prediction accuracy, as demonstrated using a cell signaling data set. We apply the proposed methods to the Cancer Genome Atlas data to study the genetic and epigenetic pathways that underlie serous ovarian cancer.
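
    One building block of such a model is the copula density itself. The following sketch (illustrative only; the paper's mixture construction and EM updates are not reproduced here) evaluates a bivariate Gaussian copula log-density on uniform margins:

        import numpy as np
        from scipy.stats import norm

        def gaussian_copula_logpdf(u, R):
            """Log-density of a Gaussian copula with correlation matrix R, u in (0,1)^d."""
            z = norm.ppf(u)                        # map uniform margins to normal scores
            _, logdet = np.linalg.slogdet(R)
            M = np.linalg.inv(R) - np.eye(len(u))
            return -0.5 * logdet - 0.5 * z @ M @ z

        R = np.array([[1.0, 0.6], [0.6, 1.0]])     # assumed correlation structure
        print(gaussian_copula_logpdf(np.array([0.9, 0.8]), R))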

  7. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    S. M. Roy, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract: Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  8. Bayesian analysis in plant pathology.

    Science.gov (United States)

    Mila, A L; Carriquiry, A L

    2004-09-01

    Bayesian methods are currently much discussed and applied in several disciplines from molecular biology to engineering. Bayesian inference is the process of fitting a probability model to a set of data and summarizing the results via probability distributions on the parameters of the model and unobserved quantities such as predictions for new observations. In this paper, after a short introduction to Bayesian inference, we present the basic features of Bayesian methodology using examples from sequencing genomic fragments and analyzing microarray gene-expression levels, reconstructing disease maps, and designing experiments.

  9. The Bayesian Approach to Association

    Science.gov (United States)

    Arora, N. S.

    2017-12-01

    The Bayesian approach to association focuses mainly on quantifying the physics of the domain. In the case of seismic association, for instance, let X be the set of all significant events (above some threshold) and their attributes, such as location, time, and magnitude; Y1 the set of detections that are caused by significant events and their attributes, such as seismic phase, arrival time, amplitude, etc.; Y2 the set of detections that are not caused by significant events; and finally Y the set of observed detections. We then define the joint distribution P(X, Y1, Y2, Y) = P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2), where the last term simply states that Y1 and Y2 are a partitioning of Y. Given this joint distribution, the inference problem is simply to find the X, Y1, and Y2 that maximize the posterior probability P(X, Y1, Y2 | Y), which reduces to maximizing P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2). In this expression P(X) captures our prior belief about event locations; P(Y1 | X) captures notions of travel time and residual error distributions as well as detection and mis-detection probabilities; and P(Y2) captures the false detection rate of our seismic network. The elegance of this approach is that all of the assumptions are stated clearly in the models for P(X), P(Y1 | X) and P(Y2); the implementation of the inference is merely a by-product of this model. In contrast, some other methods, such as GA, hide a number of assumptions in the implementation details of the inference - such as the so-called "driver cells." The other important aspect of this approach is that all seismic knowledge, including knowledge from other domains such as infrasound and hydroacoustics, can be included in the same model. So we don't need to separately account for misdetections or merge seismic and infrasound events as a separate step. Finally, it should be noted that the objective of automatic association is to simplify the job of humans who are publishing seismic bulletins based on this
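
    A toy numerical version of this scoring (with invented distributions and parameters, purely to make the structure concrete) compares two candidate partitions of the detections:

        import numpy as np
        from scipy.stats import norm

        def log_score(event_time, assoc, false, rate=2.0, horizon=100.0):
            lp = -np.log(horizon)                              # log P(X): flat event-time prior
            # log P(Y1|X): Gaussian residuals around a nominal 10 s travel time
            lp += norm.logpdf(np.asarray(assoc) - event_time - 10.0, scale=1.5).sum()
            lp += -rate + len(false) * np.log(rate / horizon)  # log P(Y2): Poisson false alarms
            return lp

        detections = [12.1, 60.4]
        print(log_score(2.0, assoc=[12.1], false=[60.4]))   # event explains the first pick
        print(log_score(2.0, assoc=[], false=detections))   # everything treated as false alarms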

  10. A Primer on Bayesian Decision Analysis With an Application to a Personalized Kidney Transplant Decision

    Science.gov (United States)

    Neapolitan, Richard; Jiang, Xia; Ladner, Daniela P.; Kaplan, Bruce

    2016-01-01

    To provide personalized medicine, we not only must determine the treatments and other decisions most likely to be effective for a patient, but also consider the patient’s tradeoff between the possible benefits of therapy and the possible loss of quality of life. There are numerous studies indicating that various treatments can negatively affect quality of life. Even when all information is available for a given patient, it is an arduous task to amass that information and reach a decision that maximizes the utility of the decision to the patient. A clinical decision support system (CDSS) is a computer program designed to assist healthcare professionals with decision-making tasks. Emerging large datasets hold promise for developing CDSSs that can predict how treatments and other decisions will affect outcomes. However, we need to go beyond that; namely, our CDSS needs to account for the extent to which these decisions can affect quality of life. This manuscript provides an introduction to developing CDSSs using Bayesian networks and influence diagrams. Such CDSSs are able to recommend decisions that maximize the expected utility of the predicted outcomes to the patient. By way of comparison, we examine the benefit and challenges of the Kidney Donor Risk Index (KDRI) as a decision support tool, and we discuss several difficulties with this index. Most importantly, the KDRI does not provide a measure of the expected quality of life if the kidney is accepted versus the expected quality of life if the patient stays on dialysis. Finally, we develop a schema for an influence diagram that models the kidney transplant decision, and show how the influence diagram approach can resolve these difficulties and provide the clinician and the potential transplant recipient with a valuable decision support tool. PMID:26900809
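
    The computation at the core of such a CDSS is an expected-utility maximization over decision alternatives. A minimal sketch with invented probabilities and utilities (not the paper's numbers, and far simpler than its influence diagram):

        p_good = {"accept_kidney": 0.80, "stay_on_dialysis": 0.60}   # assumed outcome probabilities
        utility = {("accept_kidney", True): 0.9, ("accept_kidney", False): 0.2,
                   ("stay_on_dialysis", True): 0.6, ("stay_on_dialysis", False): 0.3}

        def expected_utility(decision):
            p = p_good[decision]
            return p * utility[(decision, True)] + (1 - p) * utility[(decision, False)]

        best = max(p_good, key=expected_utility)
        print({d: round(expected_utility(d), 3) for d in p_good}, "->", best)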

  11. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBUGS...

  12. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  13. Classification using Bayesian neural nets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)

    1995-01-01

    Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach such as overfitting. However, an implementation of the full Bayesian approach to

  14. Bayesian Data Analysis (lecture 1)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    ...framework, but we will also go into more detail and discuss, for example, the role of the prior. The second part of the lecture will cover further examples and applications that rely heavily on the Bayesian approach, as well as some computational tools needed to perform a Bayesian analysis.

  15. Bayesian Data Analysis (lecture 2)

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    ...framework, but we will also go into more detail and discuss, for example, the role of the prior. The second part of the lecture will cover further examples and applications that rely heavily on the Bayesian approach, as well as some computational tools needed to perform a Bayesian analysis.

  16. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size ( n ) is less than the dimension ( d ), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.

  17. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
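
    The simplest member of this class is rejection ABC. A minimal sketch for a toy model (data assumed to be 50 draws from N(θ, 1), sample mean as summary statistic, N(0, 10^2) prior):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        observed = rng.normal(1.3, 1.0, n)            # stand-in for field data
        s_obs = observed.mean()                       # summary statistic

        def abc_rejection(n_sims=100_000, eps=0.05):
            theta = rng.normal(0.0, 10.0, n_sims)     # draw parameters from the prior
            sims = rng.normal(theta[:, None], 1.0, (n_sims, n))   # simulate datasets
            keep = np.abs(sims.mean(axis=1) - s_obs) < eps        # no likelihood evaluated
            return theta[keep]

        post = abc_rejection()
        print(post.size, post.mean(), post.std())     # approximate posterior moments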

  18. Bayesian inference with ecological applications

    CERN Document Server

    Link, William A

    2009-01-01

    This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject Examples drawn from ecology and wildlife research An essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference Companion website with analyt...

  19. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration by a couple of simplified examples.

  20. A maximal atmospheric mixing from a maximal CP violating phase

    Energy Technology Data Exchange (ETDEWEB)

    Masina, Isabella [Centro Studi e Ricerche 'E. Fermi', Via Panisperna 89/A, Rome (Italy) and INFN, Sezione di Roma, P.le A. Moro 2, Rome (Italy)]. E-mail: isabella.masina@roma1.infn.it

    2006-02-02

    We point out an elegant mechanism to predict a maximal atmospheric angle, which is based on a maximal CP violating phase difference between the second and third lepton families in the flavour symmetry basis. In this framework, a discussion of the general formulas for θ12, |Ue3|, δ and their possible correlations in some limiting cases is provided. We also present an explicit realisation in terms of an SO(3) flavour symmetry model.

  1. Sparse Bayesian Learning Based Three-Dimensional Imaging Algorithm for Off-Grid Air Targets in MIMO Radar Array

    Directory of Open Access Journals (Sweden)

    Zekun Jiao

    2018-02-01

    In recent years, the development of compressed sensing (CS) and array signal processing has provided a broader perspective on 3D imaging. CS-based imaging algorithms perform better than traditional methods. In addition, a sparse array can overcome limitations on aperture size and number of antennas. Since the signal to be reconstructed is sparse for air targets, many CS-based imaging algorithms using a sparse array have been proposed. However, most of those algorithms assume that the scatterers are located exactly at the pre-discretized grid points, which does not hold in real scenes. Aiming at an accurate solution to off-grid target imaging, we propose an off-grid 3D imaging method based on improved sparse Bayesian learning (SBL). In addition, the Bayesian Cramér-Rao Bound (BCRB) for the off-grid bias estimator is provided. Different from previous algorithms, the proposed algorithm adopts a three-stage hierarchical sparse prior to introduce more degrees of freedom. A variational expectation-maximization method is then applied to solve the sparse recovery problem iteratively, and joint sparsity is exploited within each iteration to improve efficiency. Experimental results validate that the proposed method outperforms existing off-grid imaging methods in terms of accuracy and resolution; the root mean square error is also compared with the corresponding BCRB, demonstrating the effectiveness of the proposed method.
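
    For orientation, the classical on-grid SBL iteration that such methods extend is short: alternate a Gaussian posterior for the coefficients with an EM update of the per-coefficient prior variances. A sketch under assumed toy dimensions (the paper's three-stage prior and off-grid updates are not reproduced):

        import numpy as np

        rng = np.random.default_rng(1)
        m, n, k, sigma2 = 40, 100, 3, 0.01
        Phi = rng.normal(size=(m, n)) / np.sqrt(m)            # measurement matrix
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        y = Phi @ x_true + rng.normal(scale=np.sqrt(sigma2), size=m)

        gamma = np.ones(n)                                    # per-coefficient prior variances
        for _ in range(100):
            # E-step: Gaussian posterior of x given the current hyperparameters
            Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
            mu = Sigma @ Phi.T @ y / sigma2
            # M-step: update each prior variance from the posterior moments
            gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-10)

        print(np.flatnonzero(x_true), np.flatnonzero(np.abs(mu) > 0.1))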

  2. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.

  3. Bayesian supervised dimensionality reduction.

    Science.gov (United States)

    Gönen, Mehmet

    2013-12-01

    Dimensionality reduction is commonly used as a preprocessing step before training a supervised learner. However, coupled training of dimensionality reduction and supervised learning steps may improve the prediction performance. In this paper, we introduce a simple and novel Bayesian supervised dimensionality reduction method that combines linear dimensionality reduction and linear supervised learning in a principled way. We present both Gibbs sampling and variational approximation approaches to learn the proposed probabilistic model for multiclass classification. We also extend our formulation toward model selection using automatic relevance determination in order to find the intrinsic dimensionality. Classification experiments on three benchmark data sets show that the new model significantly outperforms seven baseline linear dimensionality reduction algorithms on very low dimensions in terms of generalization performance on test data. The proposed model also obtains the best results on an image recognition task in terms of classification and retrieval performances.

  4. Bayesian Geostatistical Design

    DEFF Research Database (Denmark)

    Diggle, Peter; Lophaven, Søren Nymand

    2006-01-01

    This paper describes the use of model-based geostatistics for choosing the set of sampling locations, collectively called the design, to be used in a geostatistical analysis. Two types of design situation are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model parameter values are unknown. The results show that in this situation a wide range of interpoint distances should be included in the design, and the widely used regular design is often not the best choice.

  5. A Bayesian approach to person perception.

    Science.gov (United States)

    Clifford, C W G; Mareschal, I; Otsuka, Y; Watson, T L

    2015-11-01

    Here we propose a Bayesian approach to person perception, outlining the theoretical position and a methodological framework for testing the predictions experimentally. We use the term person perception to refer not only to the perception of others' personal attributes such as age and sex but also to the perception of social signals such as direction of gaze and emotional expression. The Bayesian approach provides a formal description of the way in which our perception combines current sensory evidence with prior expectations about the structure of the environment. Such expectations can lead to unconscious biases in our perception that are particularly evident when sensory evidence is uncertain. We illustrate the ideas with reference to our recent studies on gaze perception which show that people have a bias to perceive the gaze of others as directed towards themselves. We also describe a potential application to the study of the perception of a person's sex, in which a bias towards perceiving males is typically observed. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Bayesian adaptive methods for clinical trials

    CERN Document Server

    Berry, Scott M; Muller, Peter

    2010-01-01

    Already popular in the analysis of medical device trials, adaptive Bayesian designs are increasingly being used in drug development for a wide variety of diseases and conditions, from Alzheimer's disease and multiple sclerosis to obesity, diabetes, hepatitis C, and HIV. Written by leading pioneers of Bayesian clinical trial designs, Bayesian Adaptive Methods for Clinical Trials explores the growing role of Bayesian thinking in the rapidly changing world of clinical trial analysis. The book first summarizes the current state of clinical trial design and analysis and introduces the main ideas and potential benefits of a Bayesian alternative. It then gives an overview of basic Bayesian methodological and computational tools needed for Bayesian clinical trials. With a focus on Bayesian designs that achieve good power and Type I error, the next chapters present Bayesian tools useful in early (Phase I) and middle (Phase II) clinical trials as well as two recent Bayesian adaptive Phase II studies: the BATTLE and ISP...

  7. Maximization

    OpenAIRE

    A. Garmroodi Asil; A. Shahsavand; Sh. Mirzaei

    2017-01-01

    Over 60% of Iranian natural gases are contaminated with hydrogen sulfide or other sulfur compounds. Khangiran refinery, which receives around 50 MMSCMD of sour gas with 3.35 mol% H2S as its GTU feed, produces around 45% of Iranian sulfur production. Three of the four existing sulfur recovery units (SRUs) were installed more than three decades ago. Such relatively old Claus units with no tail gas clean-up facility usually have sulfur recovery efficiencies as low as 90%, due to the low H2S co...

  8. Maximization

    Directory of Open Access Journals (Sweden)

    A. Garmroodi Asil

    2017-09-01

    To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher for acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.

  9. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on

  10. Bayesian Inference: with ecological applications

    Science.gov (United States)

    Link, William A.; Barker, Richard J.

    2010-01-01

    This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.

  11. Bayesian image restoration, using configurations

    OpenAIRE

    Thorarinsdottir, Thordis

    2006-01-01

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the re...

  12. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    so different from the classical trajectories that they are sometimes called surrealistic [4]! The answer to the first problem above is in the construction of maximally realistic quantum mechanics [5,6] which treats position and momentum symmetrically. We present in this paper the answer to the second problem in the form of a ...

  13. Principles of maximally classical and maximally realistic quantum

    Indian Academy of Sciences (India)

    Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: In 2N-dimensional phase space, a maximally realistic quantum mechanics can have quantum probabilities of no more than N + 1 complete commuting sets (CCS) of observables coexisting as marginals of one positive ...

  15. Variational Bayesian Inference of Line Spectra

    DEFF Research Database (Denmark)

    Badiu, Mihai Alin; Hansen, Thomas Lundgaard; Fleury, Bernard Henri

    2017-01-01

    The posterior probability density functions (pdfs) of the frequencies are computed and expectations are taken over them; thus, we additionally capture and operate with the uncertainty of the frequency estimates. Aiming to maximize the model evidence, variational optimization provides analytic approximations of the posterior pdfs and also gives estimates of the additional parameters. We propose an accurate representation of the pdfs of the frequencies by mixtures of von Mises pdfs, which yields closed-form expectations. We define the algorithm VALSE, in which the estimates of the pdfs and parameters are iteratively updated. VALSE is a gridless, convergent method, does...

  16. Bayesian image reconstruction for improving detection performance of muon tomography.

    Science.gov (United States)

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
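
    The shrink-the-ML-update structure is easy to see in a simplified setting. The sketch below uses the familiar soft-threshold rule for a Laplacian prior with a linear-Gaussian likelihood (i.e., plain ISTA), not the paper's inverse-quadratic or inverse-cubic shrinkage functions:

        import numpy as np

        def ista(A, y, lam=0.5, n_iter=200):
            """MAP estimate for y = A x + noise under a Laplacian prior on x."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = x - A.T @ (A @ x - y) / L        # unregularized (ML) gradient step...
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # ...then shrink
            return x

        rng = np.random.default_rng(2)
        A = rng.normal(size=(30, 60))
        x0 = np.zeros(60); x0[[5, 17]] = [1.0, -2.0]
        print(np.flatnonzero(np.abs(ista(A, A @ x0)) > 0.1))  # ideally recovers {5, 17}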

  17. Bayesian statistics and Monte Carlo methods

    Science.gov (United States)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; they are fixed quantities in traditional statistics, which is not founded on Bayes' theorem. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives from being computed, and errors of linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
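
    The error-propagation application is only a few lines of code: push samples of a Gaussian random vector through a nonlinear map and take sample moments, with no linearization. A sketch with an invented transformation:

        import numpy as np

        rng = np.random.default_rng(3)
        mean = np.array([1.0, 2.0])
        cov = np.array([[0.04, 0.01], [0.01, 0.09]])

        x = rng.multivariate_normal(mean, cov, size=200_000)
        f = np.column_stack([x[:, 0] * x[:, 1],              # nonlinear functions of the vector
                             np.sin(x[:, 0]) + x[:, 1] ** 2])

        print("E[f]   =", f.mean(axis=0))
        print("Cov[f] =", np.cov(f, rowvar=False))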

  18. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    We present an algorithm that finds all maximal quasiperiodic substrings of a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.

  19. A new prior for bayesian anomaly detection: application to biosurveillance.

    Science.gov (United States)

    Shen, Y; Cooper, G F

    2010-01-01

    Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived from the baseline prior, which is estimated using historical data. The evaluation reported here used semi-synthetic data to evaluate the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the

  20. Minimum mean square error estimation and approximation of the Bayesian update

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q); q), where u(q; ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g., a polynomial chaos expansion (PCE). New: we derive linear, quadratic, etc., approximations of the full Bayesian update.
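
    A minimal sketch of the linear (conditional-expectation) Bayesian update for a scalar parameter, with sample covariances standing in for a PCE surrogate and an invented forward model:

        import numpy as np

        rng = np.random.default_rng(4)
        q = rng.normal(2.0, 0.5, 10_000)              # prior ensemble of the parameter
        y = q**2 + rng.normal(0.0, 0.2, q.size)       # predicted measurements incl. noise

        c = np.cov(q, y)                              # joint sample covariance
        K = c[0, 1] / c[1, 1]                         # scalar gain C_qy C_yy^-1
        y_obs = 4.4                                   # hypothetical observed value
        q_post = q + K * (y_obs - y)                  # linearly updated ensemble

        print(q_post.mean(), q_post.std())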

  1. Bayesian Networks as a Decision Tool for O&M of Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Nielsen, Jannie Jessen; Sørensen, John Dalsgaard

    2010-01-01

    Costs of operation and maintenance (O&M) of offshore wind turbines are large. This paper presents how influence diagrams can be used to assist in rational decision making for O&M. An influence diagram is a graphical representation of a decision tree based on Bayesian Networks. Bayesian Networks offer efficient Bayesian updating of a damage model when imperfect information from inspections/monitoring is available. The extension to an influence diagram offers the calculation of expected utilities for decision alternatives, and can be used to find the optimal strategy among different alternatives...

  2. Bayesian estimation of isotopic age differences

    International Nuclear Information System (INIS)

    Curl, R.L.

    1988-01-01

    Isotopic dating is subject to uncertainties arising from counting statistics and experimental errors. These uncertainties are additive when an isotopic age difference is calculated. If large, they can lead to no significant age difference by classical statistics. In many cases, relative ages are known because of stratigraphic order or other clues. Such information can be used to establish a Bayes estimate of age difference which will include prior knowledge of age order. Age measurement errors are assumed to be log-normal and a noninformative but constrained bivariate prior for two true ages in known order is adopted. True-age ratio is distributed as a truncated log-normal variate. Its expected value gives an age-ratio estimate, and its variance provides credible intervals. Bayesian estimates of ages are different and in correct order even if measured ages are identical or reversed in order. For example, age measurements on two samples might both yield 100 ka with coefficients of variation of 0.2. Bayesian estimates are 22.7 ka for age difference with a 75% credible interval of [4.4, 43.7] ka
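
    The worked example in the abstract is easy to reproduce by simulation, assuming log-normal measurement errors and a flat, order-constrained prior on the log true ages:

        import numpy as np

        rng = np.random.default_rng(5)
        m1 = m2 = 100.0                               # both measured ages (ka)
        cv = 0.2                                      # coefficient of variation

        n = 1_000_000
        t1 = m1 * np.exp(rng.normal(0.0, cv, n))      # unconstrained posterior draws
        t2 = m2 * np.exp(rng.normal(0.0, cv, n))
        keep = t1 < t2                                # impose the known age order
        diff = t2[keep] - t1[keep]

        print(diff.mean())                            # roughly the quoted ~23 ka
        print(np.percentile(diff, [12.5, 87.5]))      # ~75% credible interval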

  3. Bayesian microsaccade detection

    Science.gov (United States)

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
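
    The underlying generative model is a two-state hidden Markov model over eye velocity. The sketch below computes smoothed state probabilities with forward-backward under invented parameters; BMD itself samples full posterior state sequences rather than just these marginals:

        import numpy as np
        from scipy.stats import norm

        A = np.array([[0.99, 0.01], [0.10, 0.90]])    # drift <-> microsaccade transitions
        scales = np.array([0.5, 5.0])                 # velocity spread in each state
        pi0 = np.array([0.99, 0.01])

        def smoothed_probs(v):
            like = norm.pdf(v[:, None], 0.0, scales)  # per-state emission likelihoods
            T = len(v)
            alpha = np.zeros((T, 2)); beta = np.ones((T, 2))
            alpha[0] = pi0 * like[0]; alpha[0] /= alpha[0].sum()
            for t in range(1, T):                     # normalized forward pass
                alpha[t] = (alpha[t - 1] @ A) * like[t]; alpha[t] /= alpha[t].sum()
            for t in range(T - 2, -1, -1):            # backward pass
                beta[t] = A @ (like[t + 1] * beta[t + 1]); beta[t] /= beta[t].sum()
            post = alpha * beta
            return post / post.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(6)
        v = np.concatenate([rng.normal(0, 0.5, 50), rng.normal(0, 5.0, 5),
                            rng.normal(0, 0.5, 50)])
        print(smoothed_probs(v)[45:60, 1].round(2))   # P(microsaccade) around the burst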

  4. Estimating Unobservable Inflation Expectations in the New Keynesian Phillips Curve

    Directory of Open Access Journals (Sweden)

    Francesca Rondina

    2018-02-01

    This paper uses an econometric model and Bayesian estimation to reverse engineer the path of inflation expectations implied by the New Keynesian Phillips Curve and the data. The estimated expectations roughly track the patterns of a number of common measures of expected inflation available from surveys or computed from financial data. In particular, they exhibit the strongest correlation with the inflation forecasts of the respondents in the University of Michigan Survey of Consumers. The estimated model also shows evidence of the anchoring of long run inflation expectations to a value that is in the range of the target inflation rate.

  5. Fast Bayesian optimal experimental design and its applications

    KAUST Repository

    Long, Quan

    2015-01-07

    We summarize our Laplace method and multilevel method for accelerating the computation of the expected information gain in Bayesian optimal experimental design (OED). The Laplace method is widely used to approximate integrals in statistics. We analyze this method in the context of optimal Bayesian experimental design and extend it from the classical scenario, where a single dominant mode of the parameters can be completely determined by the experiment, to scenarios where a non-informative parametric manifold exists. We show that by carrying out this approximation the estimation of the expected Kullback-Leibler divergence can be significantly accelerated. While the Laplace method requires a concentration of measure, the multilevel Monte Carlo method can be used to tackle the problem when there is a lack of measure concentration. We show some initial results on this approach. The developed methodologies have been applied to various sensor deployment problems, e.g., impedance tomography and seismic source inversion.
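
    For context, the brute-force estimator that these methods accelerate is a nested (double-loop) Monte Carlo average of the Kullback-Leibler divergence. A toy sketch with an assumed linear-Gaussian experiment y = θd + ε:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(9)

        def expected_information_gain(d, n_outer=2000, n_inner=2000, sigma=0.5):
            theta = rng.normal(size=n_outer)                       # prior draws
            y = theta * d + rng.normal(scale=sigma, size=n_outer)  # simulated data
            log_lik = norm.logpdf(y, theta * d, sigma)
            theta_in = rng.normal(size=n_inner)                    # inner loop: evidence p(y)
            log_ev = np.log(norm.pdf(y[:, None], theta_in[None, :] * d, sigma).mean(axis=1))
            return np.mean(log_lik - log_ev)                       # E[log p(y|theta) - log p(y)]

        for d in [0.1, 1.0, 5.0]:
            print(d, expected_information_gain(d))    # larger designs are more informative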

  6. Prior expectations facilitate metacognition for perceptual decision.

    Science.gov (United States)

    Sherman, M T; Seth, A K; Barrett, A B; Kanai, R

    2015-09-01

    The influential framework of 'predictive processing' suggests that prior probabilistic expectations influence, or even constitute, perceptual contents. This notion is evidenced by the facilitation of low-level perceptual processing by expectations. However, whether expectations can facilitate high-level components of perception remains unclear. We addressed this question by considering the influence of expectations on perceptual metacognition. To isolate the effects of expectation from those of attention we used a novel factorial design: expectation was manipulated by changing the probability that a Gabor target would be presented; attention was manipulated by instructing participants to perform or ignore a concurrent visual search task. We found that, independently of attention, metacognition improved when yes/no responses were congruent with expectations of target presence/absence. Results were modeled under a novel Bayesian signal detection theoretic framework which integrates bottom-up signal propagation with top-down influences, to provide a unified description of the mechanisms underlying perceptual decision and metacognition. Copyright © 2015 Elsevier Inc. All rights reserved.
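
    In the equal-variance signal detection setting, the effect of a prior expectation on the decision rule has a closed form: with noise N(0, 1) and signal N(d', 1), the posterior-optimal criterion shifts toward "yes" as P(present) grows. A small illustration with assumed numbers (not the paper's model, which also treats metacognitive judgments):

        import numpy as np

        d_prime = 1.5
        for p in [0.25, 0.50, 0.75]:
            c = d_prime / 2 + np.log((1 - p) / p) / d_prime   # respond "present" when x > c
            print(f"P(present)={p:.2f} -> optimal criterion c={c:+.3f}")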

  7. Case studies in Bayesian microbial risk assessments

    Directory of Open Access Journals (Sweden)

    Turner Joanne

    2009-12-01

    Background: The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. Methods: We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally, we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in two stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). Results: We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0, 11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0, 11). In the second

  8. Kernel Bayesian ART and ARTMAP.

    Science.gov (United States)

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach for high-dimensional data and large numbers of samples requires high computational cost, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional spaces. The simulation experiments show that KBA exhibits a better self-organizing capability than BA, and KBAM provides superior classification ability compared with BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Safety culture in Bayesian and legal contexts

    International Nuclear Information System (INIS)

    Krug, H.E.P. Jr.

    1992-01-01

    While contemplating the similarities between the law of torts and concepts of safety, the author realized that there was a close correspondence between the law of negligence and the way safety ought to be generally defined. This definition of safety is provided herein. A safety culture must have an adequate definition of safety in order to function most effectively. This paper provides a practical definition of safety that answers the question 'How safe is safe enough?' The development rests on two bases: the subjectivistic-Bayesian definition of probability and certain legal definitions, primarily from the tort law of negligence. The development also leads to the conclusion that one cannot generally expect greater specificity in determining how safe is safe enough than one finds in the legal definition of liability under the tort of negligence. It then follows that some of the public's aversion to complex technical undertakings is rooted in its typically intuitive and vague notions concerning safety

  10. Bayesian multitask classification with Gaussian process priors.

    Science.gov (United States)

    Skolidis, Grigorios; Sanguinetti, Guido

    2011-12-01

    We present a novel approach to multitask learning in classification problems based on Gaussian process (GP) classification. The method extends previous work on multitask GP regression, constraining the overall covariance (across tasks and data points) to factorize as a Kronecker product. Fully Bayesian inference is possible but time consuming using sampling techniques. We propose approximations based on the popular variational Bayes and expectation propagation frameworks, showing that they both achieve excellent accuracy when compared to Gibbs sampling, in a fraction of the time. We present results on a toy dataset and two real datasets, showing improved performance against the baseline results obtained by learning each task independently. We also compare with a recently proposed state-of-the-art approach based on support vector machines, obtaining comparable or better results.
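
    The structural assumption is easy to state in code: the covariance over all (task, input) pairs is the Kronecker product of a task-similarity matrix and an input kernel. A sketch with invented kernels (the paper's variational/EP inference is not reproduced):

        import numpy as np

        def rbf(X, ell=1.0):
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / ell**2)

        rng = np.random.default_rng(10)
        X = rng.uniform(-2, 2, size=(25, 1))          # inputs shared by both tasks
        K_x = rbf(X)
        K_f = np.array([[1.0, 0.8], [0.8, 1.0]])      # assumed inter-task correlation

        K = np.kron(K_f, K_x)                         # covariance across (task, point) pairs
        f = rng.multivariate_normal(np.zeros(50), K + 1e-8 * np.eye(50))
        f1, f2 = f[:25], f[25:]                       # one correlated draw per task
        print(np.corrcoef(f1, f2)[0, 1])              # typically strongly positive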

  11. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH_0 model, can be discarded from the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.

  12. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

    This study aims to identify the types of maxims flouted in the conversation in the famous comedy show Indonesia Lawak Klub. Likewise, it also tries to reveal the speakers' intentions in flouting the maxims in the conversation during the show. The writers use a descriptive qualitative method in conducting this research. The data are taken from the dialogue of Indonesia Lawak Klub and then analyzed based on Grice's cooperative principles. The researchers read the dialogue transcripts, identify the maxims, and interpret the data to find the speakers' intentions for flouting the maxims in the communication. The results show that there are four types of maxims flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  13. Bayesian modeling using WinBUGS

    CERN Document Server

    Ntzoufras, Ioannis

    2009-01-01

    A hands-on introduction to the principles of Bayesian modeling using WinBUGS. Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov chain Monte Carlo algorithms in Bayesian inference; generalized linear models; Bayesian hierarchical models; predictive distribution and model checking; and Bayesian model and variable evaluation. Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...

  14. Analysis of human and organizational factors that influence mining accidents based on Bayesian network.

    Science.gov (United States)

    Mirzaei Aliabadi, Mostafa; Aghaei, Hamed; Kalatpour, Omid; Soltanian, Ali Reza; Nikravesh, Asghar

    2018-03-21

    The present study aimed to analyze the human and organizational factors involved in mining accidents and to determine the relationships among these factors. In this study, the Human Factors Analysis and Classification System (HFACS) was combined with a Bayesian network (BN) in order to analyze the contributing factors in mining accidents. The BN was constructed based on the hierarchical structure of HFACS. The required data were collected from a total of 295 cases of Iranian mining accidents and analyzed using HFACS. Afterwards, the prior probabilities of the contributing factors were computed using the expectation-maximization algorithm. Sensitivity analysis was applied to determine which contributing factor had a higher influence on unsafe acts, in order to select the best intervention strategy. The analyses showed that skill-based errors, routine violations, environmental factors, and planned inappropriate operations had a higher relative importance in the accidents. Moreover, sensitivity analysis revealed that environmental factors, failure to correct known problems, and personnel factors had a higher influence on unsafe acts. The results of the present study could provide guidance to help safety and health management by adopting proper intervention strategies to reduce mining accidents.

  15. Semi-Supervised Bayesian Classification of Materials with Impact-Echo Signals

    Directory of Open Access Journals (Sweden)

    Jorge Igual

    2015-05-01

    The detection and identification of internal defects in a material require the use of some technology that translates the hidden interior damage into observable signals with different signature-defect correspondences. We apply impact-echo techniques for this purpose. The materials are classified according to their defective status (homogeneous, one defect, or multiple defects) and kind of defect (hole or crack, passing through or not). Every specimen is impacted by a hammer, and the spectrum of the propagated wave is recorded. This spectrum is the input data to a Bayesian classifier that is based on modeling the conditional probabilities with a mixture of Gaussians. The parameters of the Gaussian mixtures and the class probabilities are estimated using an extended expectation-maximization algorithm. The advantage of our proposal is that it is flexible, since it obtains good results for a wide range of models even under little supervision; e.g., it obtains a harmonic average of precision and recall of 92.38% given only a 10% supervision ratio. We test the method with real specimens made of aluminum alloy. The results show that the algorithm works very well. This technique could be applied to many industrial problems, such as the optimization of the marble cutting process.
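
    The core of such a classifier fits a Gaussian mixture by EM while clamping the responsibilities of the few labelled examples. A 1-D sketch with invented data (the paper uses spectra and an extended EM variant):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(11)
        x = np.concatenate([rng.normal(0, 1, 90), rng.normal(4, 1, 90)])
        labels = np.full(x.size, -1)
        labels[:9] = 0; labels[90:99] = 1             # ~10% supervision ratio

        w = np.array([0.5, 0.5]); mu = np.array([-1.0, 1.0]); sd = np.array([1.0, 1.0])
        for _ in range(50):
            r = w * norm.pdf(x[:, None], mu, sd)      # E-step: responsibilities
            r /= r.sum(axis=1, keepdims=True)
            for c in (0, 1):
                r[labels == c] = np.eye(2)[c]         # labelled points stay in their class
            n_c = r.sum(axis=0)                       # M-step: weighted parameter updates
            w, mu = n_c / x.size, (r * x[:, None]).sum(axis=0) / n_c
            sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_c)

        print(mu.round(2), (r.argmax(axis=1)[90:] == 1).mean())   # means, class-1 accuracy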

  16. Fuzzy CMAC With incremental Bayesian Ying-Yang learning and dynamic rule construction.

    Science.gov (United States)

    Nguyen, M N

    2010-04-01

    Inspired by the philosophy of ancient Chinese Taoism, Xu's Bayesian ying-yang (BYY) learning technique performs clustering by harmonizing the training data (yang) with the solution (ying). In our previous work, the BYY learning technique was applied to a fuzzy cerebellar model articulation controller (FCMAC) to find the optimal fuzzy sets; however, this is not suitable for time series data analysis. To address this problem, we propose an incremental BYY learning technique in this paper, incorporating a sliding window and a dynamic rule-structure algorithm. This research makes three contributions. First, an online expectation-maximization algorithm incorporating the sliding window is proposed for the fuzzification phase. Second, the memory requirement is greatly reduced, since the entire data set no longer needs to be retained during the prediction process. Third, the rule-structure dynamic algorithm, which dynamically initializes, recruits, and prunes rules, relieves the "curse of dimensionality" problem inherent in the FCMAC. Because of these features, experimental results on benchmark data sets (currency exchange rates and the Mackey-Glass time series) show that the proposed model is well suited to real-time streaming data analysis.
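
    The sliding-window flavor of online EM can be sketched as follows: only the most recent points are buffered, so the full data set never needs to be stored. This is an illustrative reduction, not the paper's FCMAC algorithm; all values are hypothetical.

```python
# Minimal sketch of sliding-window EM for a 1-D two-component Gaussian
# mixture on streaming data: memory stays bounded by the window size.
from collections import deque
import numpy as np

def em_step(x, mu, var, pi):
    """One EM iteration on the current window (constants cancel in r)."""
    r = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var)
    r /= r.sum(axis=1, keepdims=True)
    nk = r.sum(axis=0)
    return ((r * x[:, None]).sum(0) / nk,
            (r * (x[:, None] - mu) ** 2).sum(0) / nk + 1e-6,
            nk / nk.sum())

rng = np.random.default_rng(2)
window = 50
buf = deque(maxlen=window)
mu, var, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

for t in range(500):
    buf.append(rng.normal(0 if t < 250 else 3, 0.5))  # drifting stream
    if len(buf) == window:
        mu, var, pi = em_step(np.array(buf), mu, var, pi)

print("tracked means:", mu)
```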

  17. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  18. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis Linda

    2006-01-01

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed...

  19. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis

    In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed...

  20. Bayesian variable selection in regression

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, T.J.; Beauchamp, J.J.

    1987-01-01

    This paper is concerned with the selection of subsets of "predictor" variables in a linear regression model for the prediction of a "dependent" variable. We take a Bayesian approach and assign a probability distribution to the dependent variable through a specification of prior distributions for the unknown parameters in the regression model. The appropriate posterior probabilities are derived for each submodel, and methods are proposed for evaluating the family of prior distributions. Examples are given that show the application of the Bayesian methodology. 23 refs., 3 figs.

  1. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.

  2. Bayesian estimation of observation error covariance matrix in the equatorial Pacific

    Science.gov (United States)

    Ueno, G.

    2016-02-01

    We develop a Bayesian technique for estimating the parameters in the observation noise covariance matrix R_t for ensemble data assimilation. We design a posterior distribution by using the ensemble-approximated likelihood and a Wishart prior distribution and present an iterative algorithm for parameter estimation. The present algorithm is identified as the expectation-maximization (EM) algorithm for a Gaussian mixture model and can estimate a number of parameters in R_t. The algorithm is an extension of that by Ueno and Nakamura (2014) for maximum-likelihood estimation. An advantage of the proposed method is that R_t can be estimated online, and more importantly, the temporal smoothness of R_t can be controlled by adequately choosing two parameters of the prior distribution, the covariance matrix S and the number of degrees of freedom ν. The parameters S and ν may vary with the time at which R_t is estimated. The ν parameter can be objectively estimated by maximizing the marginal likelihood. The present formalism can handle cases in which the number of data points or the data positions varies with time, the former case of which is exemplified in the experiments. We present an application to a coupled atmosphere-ocean model under each of the following assumptions: R_t is a scalar multiple of a fixed matrix (R_t = α_t Σ, where α_t is the scalar parameter and Σ is the fixed matrix), R_t is a diagonal matrix, R_t has fixed eigenvectors, or R_t has no specific structure. We verify that the proposed algorithm works well and that only a limited number of iterations are necessary. When R_t has one of the structures mentioned above, by assuming the prior covariance matrix to be the previous estimate, namely S = \hat{R}_{t-1}, we obtain the Bayesian estimate of R_t that varies smoothly in time compared to the maximum-likelihood estimate at each time. When R_t has no specific structure, we need to regularize S = \hat{R}_{t-1} to maintain the positive-definiteness of S. Through twin experiments

  3. Bayesian methods for proteomic biomarker development

    Directory of Open Access Journals (Sweden)

    Belinda Hernández

    2015-12-01

    In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.

  4. Bayesian variable selection for multistate Markov models with interval-censored data in an ecological momentary assessment study of smoking cessation.

    Science.gov (United States)

    Koslovsky, Matthew D; Swartz, Michael D; Chan, Wenyaw; Leon-Novelo, Luis; Wilkinson, Anna V; Kendzor, Darla E; Businelle, Michael S

    2017-10-11

    The application of sophisticated analytical methods to intensive longitudinal data, collected with ecological momentary assessments (EMA), has helped researchers better understand smoking behaviors after a quit attempt. Unfortunately, the wealth of information captured with EMAs is typically underutilized in practice. Thus, novel methods are needed to extract this information in exploratory research studies. One of the main objectives of intensive longitudinal data analysis is identifying relations between risk factors and outcomes of interest. Our goal is to develop and apply expectation maximization variable selection for Bayesian multistate Markov models with interval-censored data to generate new insights into the relation between potential risk factors and transitions between smoking states. Through simulation, we demonstrate the effectiveness of our method in identifying associated risk factors and its ability to outperform the LASSO in a special case. Additionally, we use the expectation conditional-maximization algorithm to simplify estimation, a deterministic annealing variant to reduce the algorithm's dependence on starting values, and Louis's method to estimate unknown parameter uncertainty. We then apply our method to intensive longitudinal data collected with EMA to identify risk factors associated with transitions between smoking states after a quit attempt in a cohort of socioeconomically disadvantaged smokers who were interested in quitting. © 2017, The International Biometric Society.

  5. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  6. The humble Bayesian : Model checking from a fully Bayesian perspective

    NARCIS (Netherlands)

    Morey, Richard D.; Romeijn, Jan-Willem; Rouder, Jeffrey N.

    Gelman and Shalizi (2012) criticize what they call the usual story in Bayesian statistics: that the distribution over hypotheses or models is the sole means of statistical inference, thus excluding model checking and revision, and that inference is inductivist rather than deductivist. They present

  7. Best Practice Life Expectancy

    DEFF Research Database (Denmark)

    Medford, Anthony

    2017-01-01

    Background: Whereas the rise in human life expectancy has been extensively studied, the evolution of maximum life expectancies, i.e., the rise in best-practice life expectancy in a group of populations, has not been examined to the same extent. The linear rise in best-practice life expectancy has been reported previously by various authors. Though remarkable, this is simply an empirical observation. Objective: We examine best-practice life expectancy more formally by using extreme value theory. Methods: Extreme value distributions are fit to the time series (1900 to 2012) of maximum life ... probability estimates of best-practice life expectancy levels or make projections about future maximum life expectancy. Comments: Our findings may be useful for policymakers and insurance/pension analysts who would like to obtain estimates and probabilities of future maximum life expectancies.

  8. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  9. Bayesian models in cognitive neuroscience: A tutorial

    NARCIS (Netherlands)

    O'Reilly, J.X.; Mars, R.B.

    2015-01-01

    This chapter provides an introduction to Bayesian models and their application in cognitive neuroscience. The central feature of Bayesian models, as opposed to other classes of models, is that Bayesian models represent the beliefs of an observer as probability distributions, allowing them to

  10. A Bayesian framework for risk perception

    NARCIS (Netherlands)

    van Erp, H.R.N.

    2017-01-01

    We present here a Bayesian framework of risk perception. This framework encompasses plausibility judgments, decision making, and question asking. Plausibility judgments are modeled by way of Bayesian probability theory, decision making is modeled by way of a Bayesian decision theory, and relevancy

  11. [Manufactured baby food: safety expectations].

    Science.gov (United States)

    Davin, L; Van Egroo, L-D; Galesne, N

    2010-12-01

    Food safety is a concern for parents of infants, and healthcare professionals are often questioned by them about this topic. European regulation of baby food ensures high levels of safety and is more rigorous than regulation of common food. The maximum limit for pesticides in baby food illustrates this high level of requirements: the limit must be below the 10 ppb detection threshold, whatever the chemical used. Other contaminants, such as nitrates, are also subject to stricter expectations in baby food. Controlling food safety risks requires specific know-how, which baby food manufacturers have acquired and refined, in particular by working with producers of high-quality raw materials. Copyright © 2010 Elsevier Masson SAS. All rights reserved.

  12. Expected Net Present Value, Expected Net Future Value, and the Ramsey Rule

    OpenAIRE

    Gollier, Christian

    2009-01-01

    Weitzman (1998) showed that when future interest rates are uncertain, using the expected net present value implies a term structure of discount rates that is decreasing to the smallest possible interest rate. On the contrary, using the expected net future value criterion implies an increasing term structure of discount rates up to the largest possible interest rate. We reconcile the two approaches by introducing risk aversion and utility maximization. We show that if the aggregate consumption ...

  13. Bayesian policy reuse

    CSIR Research Space (South Africa)

    Rosman, Benjamin

    2016-02-01

    Full Text Available The return generated by a policy π in a task instance is the accumulated discounted reward, U^π = ∑_{i=0}^{k} γ^i r_i, with k being the length of the episode and r_i being the reward received at step i. We refer to the U^π generated from a policy π in a task instance simply as the policy's performance. Solving an MDP μ is to acquire an optimal policy π* = arg max_π U^π which maximises the total expected return of μ. For a reinforcement learning agent, T and R are typically unknown. We denote a collection of policies possessed by the agent by Π, and refer...
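
    A minimal sketch of the quantities above: the discounted return of an episode, and reuse of the stored policy with the best average observed performance. The policy names and reward sequences are hypothetical; the full Bayesian policy-reuse machinery is not reproduced here.

```python
# Minimal sketch: discounted return U^pi = sum_i gamma^i * r_i, and picking
# the stored policy with the best average performance on past instances.
gamma = 0.95

def discounted_return(rewards, gamma):
    return sum(gamma ** i * r for i, r in enumerate(rewards))

# Observed episode rewards per stored policy on similar past instances
history = {
    "pi_1": [[1, 0, 0, 5], [1, 1, 0, 4]],
    "pi_2": [[0, 0, 2, 2], [0, 1, 2, 1]],
}
perf = {name: sum(discounted_return(ep, gamma) for ep in eps) / len(eps)
        for name, eps in history.items()}
best = max(perf, key=perf.get)
print(perf, "-> reuse", best)
```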

  14. Differentiated Bayesian Conjoint Choice Designs

    NARCIS (Netherlands)

    Z. Sándor (Zsolt); M. Wedel (Michel)

    2003-01-01

    textabstractPrevious conjoint choice design construction procedures have produced a single design that is administered to all subjects. This paper proposes to construct a limited set of different designs. The designs are constructed in a Bayesian fashion, taking into account prior uncertainty about

  15. Bayesian networks in levee reliability

    NARCIS (Netherlands)

    Roscoe, K.; Hanea, A.

    2015-01-01

    We applied a Bayesian network to a system of levees for which the results of traditional reliability analysis showed high failure probabilities, which conflicted with the intuition and experience of those managing the levees. We made use of forty proven strength observations - high water levels with

  16. Bayesian Classification of Image Structures

    DEFF Research Database (Denmark)

    Goswami, Dibyendu; Kalkan, Sinan; Krüger, Norbert

    2009-01-01

    In this paper, we describe work on Bayesian classifiers for distinguishing between homogeneous structures, textures, edges and junctions. We build semi-local classifiers from hand-labeled images to distinguish between these four different kinds of structures based on the concept of intrinsic dimensionality. The built classifier is tested on standard and non-standard images...

  17. Computational Neuropsychology and Bayesian Inference.

    Science.gov (United States)

    Parr, Thomas; Rees, Geraint; Friston, Karl J

    2018-01-01

    Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology - optimal inference with suboptimal priors - and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.

  18. Bayesian Alternation During Tactile Augmentation

    Directory of Open Access Journals (Sweden)

    Caspar Mathias Goeke

    2016-10-01

    Full Text Available A large number of studies suggest that the integration of multisensory signals by humans is well described by Bayesian principles. However, there are very few reports about cue combination between a native and an augmented sense. In particular, we asked whether adult participants are able to integrate an augmented sensory cue with existing native sensory information. For the purpose of this study we built a tactile augmentation device and compared different hypotheses of how untrained adult participants combine information from a native and an augmented sense. In a two-interval forced choice (2IFC) task, while subjects were blindfolded and seated on a rotating platform, our sensory augmentation device translated information on whole-body yaw rotation into tactile stimulation. Three conditions were realized: tactile stimulation only (augmented condition), rotation only (native condition), and both augmented and native information (bimodal condition). Participants had to choose the one of two consecutive rotations with the larger rotation angle. For the analysis, we fitted the participants' responses with a probit model and calculated the just-noticeable difference (JND). We then compared several models for predicting bimodal from unimodal responses. An objective Bayesian alternation model yielded a better prediction (χ²_red = 1.67) than the Bayesian integration model (χ²_red = 4.34). A non-Bayesian winner-takes-all model, which used either only native or only augmented values per subject for prediction, showed slightly higher accuracy (χ²_red = 1.64). However, the performance of the Bayesian alternation model could be substantially improved (χ²_red = 1.09) by utilizing subjective weights obtained from a questionnaire. As a result, the subjective Bayesian alternation model predicted bimodal performance most accurately among all tested models. These results suggest that information from augmented and existing sensory modalities in
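
    For reference, the standard Bayesian (reliability-weighted) integration prediction that such studies compare against can be computed directly from the unimodal thresholds; the JND values below are hypothetical.

```python
# Minimal sketch: predicted bimodal discrimination threshold under optimal
# Bayesian cue integration, where variances combine as a harmonic sum.
import math

jnd_native, jnd_augmented = 4.0, 6.0   # hypothetical unimodal JNDs (degrees)
jnd_bimodal = math.sqrt((jnd_native**2 * jnd_augmented**2) /
                        (jnd_native**2 + jnd_augmented**2))
print(f"predicted bimodal JND: {jnd_bimodal:.2f}")  # < min(unimodal JNDs)
```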

  19. Maximal abelian sets of roots

    CERN Document Server

    Lawther, R

    2018-01-01

    In this work the author lets \\Phi be an irreducible root system, with Coxeter group W. He considers subsets of \\Phi which are abelian, meaning that no two roots in the set have sum in \\Phi \\cup \\{ 0 \\}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets we provide an explicit representative X, identify the (setwise) stabilizer W_X of X in W, and decompose X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...

  1. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  2. Bayesian exploration of recent Chilean earthquakes

    Science.gov (United States)

    Duputel, Zacharie; Jiang, Junle; Jolivet, Romain; Simons, Mark; Rivera, Luis; Ampuero, Jean-Paul; Liang, Cunren; Agram, Piyush; Owen, Susan; Ortega, Francisco; Minson, Sarah

    2016-04-01

    The South-American subduction zone is an exceptional natural laboratory for investigating the behavior of large faults over the earthquake cycle. It is also a playground for developing novel modeling techniques combining different datasets. Coastal Chile was impacted by two major earthquakes in the last two years: the 2015 M 8.3 Illapel earthquake in central Chile and the 2014 M 8.1 Iquique earthquake that ruptured the central portion of the 1877 seismic gap in northern Chile. To gain a better understanding of the distribution of co-seismic slip for these two earthquakes, we derive joint kinematic finite fault models using a combination of static GPS offsets, radar interferograms, tsunami measurements, high-rate GPS waveforms and strong motion data. Our modeling approach follows a Bayesian formulation devoid of a priori smoothing, thereby allowing us to maximize the spatial resolution of the inferred family of models. The adopted approach also attempts to account for major sources of uncertainty in the Green's functions. The results reveal different rupture behaviors for the 2014 Iquique and 2015 Illapel earthquakes. The 2014 Iquique earthquake involved a sharp slip zone and did not rupture to the trench. The 2015 Illapel earthquake nucleated close to the coast and propagated toward the trench, with significant slip apparently reaching the trench or at least very close to it. At the inherent resolution of our models, we also present the relationship of the co-seismic models to the spatial distribution of foreshocks, aftershocks and fault coupling models.

  3. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
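
    A minimal sketch of the rejection-sampling reinterpretation that BUS builds on, assuming a 1-D parameter with a Gaussian likelihood: prior samples are accepted with probability L(θ)/c for a constant c ≥ max L, and a rare-event probability can then be read off the accepted (posterior) samples. All numbers are hypothetical.

```python
# Minimal sketch of rejection-sampling-based Bayesian updating (the idea
# behind BUS), on a toy 1-D problem with hypothetical numbers.
import numpy as np

rng = np.random.default_rng(3)
obs = 1.2                                                  # one measurement
L = lambda th: np.exp(-0.5 * (obs - th) ** 2 / 0.5 ** 2)   # likelihood
c = 1.0                                                    # c >= max L

theta = rng.normal(0.0, 1.0, 100_000)                      # prior samples
accept = rng.uniform(0, 1, theta.size) < L(theta) / c
posterior = theta[accept]

# Rare-event probability P(theta > 2.5 | data) from the posterior samples
print("posterior mean:", posterior.mean(),
      "P(theta > 2.5 | data):", (posterior > 2.5).mean())
```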

  4. Polytomies and Bayesian phylogenetic inference.

    Science.gov (United States)

    Lewis, Paul O; Holder, Mark T; Holsinger, Kent E

    2005-04-01

    Bayesian phylogenetic analyses are now very popular in systematics and molecular evolution because they allow the use of much more realistic models than currently possible with maximum likelihood methods. There are, however, a growing number of examples in which large Bayesian posterior clade probabilities are associated with very short branch lengths and low values for non-Bayesian measures of support such as nonparametric bootstrapping. For the four-taxon case when the true tree is the star phylogeny, Bayesian analyses become increasingly unpredictable in their preference for one of the three possible resolved tree topologies as data set size increases. This leads to the prediction that hard (or near-hard) polytomies in nature will cause unpredictable behavior in Bayesian analyses, with arbitrary resolutions of the polytomy receiving very high posterior probabilities in some cases. We present a simple solution to this problem involving a reversible-jump Markov chain Monte Carlo (MCMC) algorithm that allows exploration of all of tree space, including unresolved tree topologies with one or more polytomies. The reversible-jump MCMC approach allows prior distributions to place some weight on less-resolved tree topologies, which eliminates misleadingly high posteriors associated with arbitrary resolutions of hard polytomies. Fortunately, assigning some prior probability to polytomous tree topologies does not appear to come with a significant cost in terms of the ability to assess the level of support for edges that do exist in the true tree. Methods are discussed for applying arbitrary prior distributions to tree topologies of varying resolution, and an empirical example showing evidence of polytomies is analyzed and discussed.

  5. A mathematical model for maximizing the value of phase 3 drug development portfolios incorporating budget constraints and risk.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa

    2013-05-10

    We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
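
    The flavor of the optimization can be sketched with a toy budget-constrained selection over candidate trial designs, solved here by brute-force enumeration rather than an IP solver; all costs and expected NPVs are hypothetical.

```python
# Minimal sketch: choose one design option per drug (option 0 = not funded)
# to maximize total expected NPV subject to a budget. Hypothetical figures.
from itertools import product

# (cost in $M, expected NPV in $M) per candidate phase 3 design
drugs = {
    "A": [(0, 0), (60, 150), (90, 190)],   # larger trial -> higher E[NPV]
    "B": [(0, 0), (40, 80), (70, 120)],
    "C": [(0, 0), (50, 95)],
}
budget = 150

best = max(
    (sum(opt[1] for opt in choice), choice)
    for choice in product(*drugs.values())
    if sum(opt[0] for opt in choice) <= budget
)
print("max expected NPV:", best[0], "selection:", best[1])
```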

  6. Maximizing carbon storage in the Appalachians: A method for considering the risk of disturbance events

    Science.gov (United States)

    Michael R. Vanderberg; Kevin Boston; John. Bailey

    2011-01-01

    Accounting for the probability of loss due to disturbance events can influence the prediction of carbon flux over a planning horizon, and can affect the determination of optimal silvicultural regimes to maximize terrestrial carbon storage. A preliminary model that includes forest disturbance-related carbon loss was developed to maximize expected values of carbon stocks...
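
    A minimal sketch of the underlying expected-value calculation, assuming a constant annual disturbance probability and a toy yield curve (post-disturbance regrowth is ignored); all figures are hypothetical.

```python
# Minimal sketch: expected carbon stock at a planning horizon when a stand
# survives each year with probability (1 - p). Hypothetical growth curve.
p = 0.01                                      # annual disturbance probability
carbon = lambda age: 200 * (1 - 0.97 ** age)  # tC/ha, toy yield curve

def expected_carbon(horizon, p):
    # survival to the horizon discounts the undisturbed stock
    return (1 - p) ** horizon * carbon(horizon)

for T in (20, 40, 60, 80):
    print(T, round(expected_carbon(T, p), 1))
```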

  7. Effects of feedback and educational Training on Maximization in choice Tasks: Experimental-game Evidence

    NARCIS (Netherlands)

    Antonides, G.; Maital, S.

    2002-01-01

    Compelling evidence exists that behavior is inconsistent with the assumptions of expected-utility maximization. However, if learning occurs, then maximization may take place asymptotically (albeit slowly). But a series of experiments by Herrnstein and his associates shows that under very general

  8. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...

  9. Constrained information maximization by free energy minimization

    Science.gov (United States)

    Kamimura, Ryotaro

    2011-10-01

    In this paper we introduce free energy-based methods to constrain mutual information maximization, developed to realize competitive learning. The new method simplifies the computation of mutual information, improves the fidelity of representation, and stabilizes learning. First, the free energy is effective in simplifying the computation of mutual information because we need not compute mutual information directly, which is computationally heavy, but only deal with partition functions. With partition functions, computational complexity is significantly reduced. Second, fidelity to input patterns can be improved because training errors between input patterns and connection weights are implicitly incorporated; that is, mutual information is maximized under the constraint of the errors between input patterns and connection weights. Finally, learning can be stabilized in our approach. One of the problems of the free energy approach is that learning processes must be carefully controlled to remain stable. The present paper shows that conventional computational techniques from the field of self-organizing maps are effective in controlling the processes. In particular, minimum information production learning can be used to stabilize learning further by decreasing the information obtained at each learning step as much as possible. Thus, we can expect our new method to increase mutual information between competitive units and input patterns while keeping the errors between input patterns and connection weights low and the learning process stable. We applied the free energy-based models to the well-known Iris problem and a student survey, and succeeded in improving performance in terms of classification rates. In addition, minimum information production learning turned out to be effective in stabilizing learning.
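
    The computational shortcut can be sketched directly: activations come from a softmax over (negative) energies, and the free energy is just -T log Z of the partition function, so mutual information never has to be evaluated explicitly. The energies and temperature below are hypothetical.

```python
# Minimal sketch: competitive activations and free energy from a partition
# function Z = sum_j exp(-E_j / T), with energies as squared distances to
# connection weights. Toy values, not the paper's full model.
import numpy as np

def free_energy_activations(x, weights, T=0.5):
    energies = ((x - weights) ** 2).sum(axis=1)   # error to each unit
    logits = -energies / T
    m = logits.max()
    z = np.exp(logits - m).sum()                  # stable partition function
    p = np.exp(logits - m) / z                    # competitive activations
    F = -T * (np.log(z) + m)                      # free energy -T log Z
    return p, F

x = np.array([0.2, 0.8])
W = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
print(free_energy_activations(x, W))
```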

  10. Determining health expectancies

    National Research Council Canada - National Science Library

    Robine, Jean-Marie

    2003-01-01

    Table of contents (extract): introduction by Jean-Marie Robine; Chapter 1, 'Increase in Life Expectancy and Concentration of Ages at Death' by France Meslé and Jacques Vallin; Chapter 2, 'Compression of Morbidity...'

  11. Humans expect generosity

    Science.gov (United States)

    Brañas-Garza, Pablo; Rodríguez-Lara, Ismael; Sánchez, Angel

    2017-02-01

    Mechanisms supporting human ultra-cooperativeness are very much subject to debate. One psychological feature likely to be relevant is the formation of expectations, particularly about receiving cooperative or generous behavior from others. Without such expectations, social life will be seriously impeded and, in turn, expectations leading to satisfactory interactions can become norms and institutionalize cooperation. In this paper, we assess people’s expectations of generosity in a series of controlled experiments using the dictator game. Despite differences in respective roles, involvement in the game, degree of social distance or variation of stakes, the results are conclusive: subjects seldom predict that dictators will behave selfishly (by choosing the Nash equilibrium action, namely giving nothing). The majority of subjects expect that dictators will choose the equal split. This implies that generous behavior is not only observed in the lab, but also expected by subjects. In addition, expectations are accurate, matching closely the donations observed and showing that as a society we have a good grasp of how we interact. Finally, correlation between expectations and actual behavior suggests that expectations can be an important ingredient of generous or cooperative behavior.

  12. The maximal D = 4 supergravities

    Energy Technology Data Exchange (ETDEWEB)

    Wit, Bernard de [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, Postbus 80.195, NL-3508 TD Utrecht (Netherlands); Samtleben, Henning [Laboratoire de Physique, ENS Lyon, 46 allee d' Italie, F-69364 Lyon CEDEX 07 (France); Trigiante, Mario [Dept. of Physics, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Turin (Italy)

    2007-06-15

    All maximal supergravities in four space-time dimensions are presented. The ungauged Lagrangians can be encoded in an E_{7(7)}\Sp(56;R)/GL(28) matrix associated with the freedom of performing electric/magnetic duality transformations. The gauging is defined in terms of an embedding tensor θ which encodes the subgroup of E_{7(7)} that is realized as a local invariance. This embedding tensor may imply the presence of magnetic charges which require corresponding dual gauge fields. The latter can be incorporated by using a recently proposed formulation that involves tensor gauge fields in the adjoint representation of E_{7(7)}. In this formulation the results take a universal form irrespective of the electric/magnetic duality basis. We present the general class of supersymmetric and gauge invariant Lagrangians and discuss a number of applications.

  13. Maximizing benefits from resource development

    International Nuclear Information System (INIS)

    Skjelbred, B.

    2002-01-01

    The main objectives of Norwegian petroleum policy are to maximize value creation for the country, develop a national oil and gas industry, and be at the environmental forefront of long-term resource management and coexistence with other industries. The paper presents a graph depicting production and net export of crude oil for countries around the world for 2002. Norway produced 3.41 mill b/d and exported 3.22 mill b/d. Norwegian petroleum policy measures include effective regulation and government ownership, research and technology development, and internationalisation. Research and development has focused on five priority areas: enhanced recovery, environmental protection, deep water recovery, small fields, and the gas value chain. The benefits of internationalisation include capitalizing on Norwegian competency, exploiting emerging markets and the assurance of long-term value creation and employment. 5 figs

  14. Automatic physical inference with information maximizing neural networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.

  15. Bayesian exploration for intelligent identification of textures

    Directory of Open Access Journals (Sweden)

    Jeremy A. Fishel

    2012-06-01

    Full Text Available In order to endow robots with humanlike abilities to characterize and identify objects, they must be provided with tactile sensors and intelligent algorithms to select, control and interpret data from useful exploratory movements. Humans make informed decisions on the sequence of exploratory movements that would yield the most information for the task, depending on what the object may be and prior knowledge of what to expect from possible exploratory movements. This study is focused on texture discrimination, a subset of a much larger group of exploratory movements and percepts that humans use to discriminate, characterize, and identify objects. Using a testbed equipped with a biologically inspired tactile sensor (the BioTac®), we produced sliding movements similar to those that humans make when exploring textures. Measurements of tactile vibrations and reaction forces while exploring textures were used to extract measures of textural properties inspired by the psychophysical literature (traction, roughness, and fineness). Different combinations of normal force and velocity were identified to be useful for each of these three properties. A total of 117 textures were explored with these three movements to create a database of prior experience to use for identifying these same textures in future encounters. When exploring a texture, the discrimination algorithm adaptively selects the optimal movement to make and property to measure based on previous experience to differentiate the texture from a set of plausible candidates, a process we call Bayesian exploration. Performance of 99.6% in correctly discriminating pairs of similar textures was found to exceed human capabilities. Absolute classification from the entire set of 117 textures generally required a small number of well-chosen exploratory movements (median = 5) and yielded a 95.4% success rate. The method of Bayesian exploration developed and tested in this paper may generalize well to other

  16. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Full Text Available A maxim is a principle that all participants must observe, textually and interpersonally, for communication to proceed smoothly. Conversational maxims are divided into four types, namely the maxims of quality, quantity, relevance, and manner. A maxim is violated in a conversation when the information the speaker has is not conveyed properly to the interlocutor, which leaves an awkward impression; examples of violations include information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate the maxims to create unique and controversial advertisements. This study examines the violation of maxims in the conversations of TV ads. The source of data in this research is food advertisements aired on TV. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with the informal method. The results show the interesting fact that maxim violations in the advertisements' conversations are precisely what makes the advertisements attractive and highly valued.

  17. A Bayesian perspective on age replacement with minimal repair

    International Nuclear Information System (INIS)

    Sheu, S.-H.; Yeh, R.H.; Lin, Y.-B.; Juang, M.-G.

    1999-01-01

    In this article, a Bayesian approach is developed for determining an optimal age replacement policy with minimal repair. By incorporating minimal repair, planned replacement, and unplanned replacement, the mathematical formulas of the expected cost per unit time are obtained for two cases - the infinite-horizon case and the one-replacement-cycle case. For each case, we show that there exists a unique and finite optimal age for replacement under some reasonable conditions. When the failure density is Weibull with uncertain parameters, a Bayesian approach is established to formally express and update the uncertain parameters for determining an optimal age replacement policy. Further, various special cases are discussed in detail. Finally, a numerical example is given
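
    A classic minimal-repair cost model of this kind is easy to sketch numerically for a Weibull failure law: minimal repairs at cost c_m accrue at the hazard rate, and planned replacement at age T costs c_r, giving an expected cost rate C(T) = (c_m * H(T) + c_r) / T with cumulative hazard H(T) = (T/η)^β. The parameter values below are hypothetical, and this is an illustration of the cost-rate idea, not the article's Bayesian updating scheme.

```python
# Minimal sketch: optimal replacement age under minimal repair with a
# Weibull failure law, found by brute-force minimization of the cost rate.
import numpy as np

beta, eta = 2.5, 10.0        # Weibull shape and scale (hypothetical)
c_m, c_r = 1.0, 8.0          # minimal repair and replacement costs

T = np.linspace(0.5, 40, 2000)
cost_rate = (c_m * (T / eta) ** beta + c_r) / T
T_opt = T[np.argmin(cost_rate)]
print(f"optimal replacement age: {T_opt:.2f}, cost rate: {cost_rate.min():.3f}")
```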

  18. Performance appraisal of expectations

    Directory of Open Access Journals (Sweden)

    Russkikh G.A.

    2016-11-01

    Full Text Available This article provides basic concepts that help teachers assess and meet planned student expectations. It describes the functions and elements of expectations, the nature of external and internal assessment, and techniques for evaluating results, and it gives recommendations on creating diagnostic assignments.

  19. Marijuana: College Students' Expectations.

    Science.gov (United States)

    Rumstein, Regina

    1980-01-01

    Focused on college students' expectations about marijuana. Undergraduates (N=210) expected marijuana to have sedating effects; they largely discounted psychological consequences. Students considered marijuana to be an educational issue and favored decriminalization of the drug. Users, occasional users, and nonusers differed significantly in…

  20. Query complexity in expectation

    NARCIS (Netherlands)

    Kaniewski, J.; Lee, T.; de Wolf, R.; Halldórsson, M.M.; Iwama, K.; Kobayashi, N.; Speckmann, B.

    2015-01-01

    We study the query complexity of computing a function f : {0,1}^n → R_+ in expectation. This requires the algorithm on input x to output a nonnegative random variable whose expectation equals f(x), using as few queries to the input x as possible. We exactly characterize both the randomized and the quantum

  1. Life Expectancy in 2040

    DEFF Research Database (Denmark)

    Canudas-Romo, Vladimir; DuGoff, Eva H; Wu, Albert W.

    2016-01-01

    Life expectancy at age 20 will increase by approximately one year per decade for females and males between now and 2040. According to the clinical experts, 70% of the improvement in life expectancy will occur in cardiovascular disease and cancer, while in the last 30 years most of the improvement has occurred...

  2. Zweifache Stichprobenpruefplaene fuer Qualitative und Quantitative Markmale mit Minimaler Maximaler ASN (Double Sample Plans for Qualitative and Quantitative Features with Minimal Maximal Average Sample Number (ASN))

    National Research Council Canada - National Science Library

    Mueller, Kai

    1998-01-01

    ... with minimal maximal Average Sample Number (ASN) in chapter three. Chapter four features single and double variable sample plans and, as expected, chapter five provides the determination of these plans with minimal maximal ASN...

  3. Pedestrian dynamics via Bayesian networks

    Science.gov (United States)

    Venkat, Ibrahim; Khader, Ahamad Tajudin; Subramanian, K. G.

    2014-06-01

    Studies on pedestrian dynamics have vital applications in crowd control management, relevant to organizing safer large-scale gatherings, including pilgrimages. Reasoning about pedestrian motion via computational intelligence techniques can be posed as a potential research problem within the realms of Artificial Intelligence. In this contribution, we propose a "Bayesian Network Model for Pedestrian Dynamics" (BNMPD) to reason about the vast uncertainty imposed by pedestrian motion. With reference to key findings from the literature, which include simulation studies, we systematically identify: what are the various factors that could contribute to the prediction of crowd flow status? The proposed model unifies these factors in a cohesive manner using Bayesian Networks (BNs) and serves as a sophisticated probabilistic tool to simulate vital cause and effect relationships entailed in the pedestrian domain.

  4. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples...

  5. BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS

    Directory of Open Access Journals (Sweden)

    Thordis Linda Thorarinsdottir

    2011-05-01

    Full Text Available In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed in detail for 3 × 3 and 5 × 5 configurations and examples of the performance of the procedure are given.

  6. Bayesian Inference on Proportional Elections

    Science.gov (United States)

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
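
    A minimal sketch of this kind of simulation, assuming a D'Hondt-style highest-averages allocation and a Dirichlet posterior over vote shares (the actual Brazilian seat-distribution rules are more involved); the poll counts and seat total are hypothetical.

```python
# Minimal sketch: Monte Carlo estimate of the probability that each party
# wins at least one seat, with posterior draws of vote shares fed into a
# highest-averages (D'Hondt-style) allocation. Hypothetical numbers.
import numpy as np

rng = np.random.default_rng(4)
poll_counts = np.array([420, 310, 180, 90])  # poll respondents per party
seats_total = 10

def dhondt(shares, seats):
    alloc = np.zeros(len(shares), dtype=int)
    for _ in range(seats):
        alloc[np.argmax(shares / (alloc + 1))] += 1
    return alloc

hits = np.zeros(len(poll_counts))
n_sim = 5000
for _ in range(n_sim):
    shares = rng.dirichlet(poll_counts + 1)   # posterior draw (uniform prior)
    hits += dhondt(shares, seats_total) >= 1
print("P(at least one seat):", hits / n_sim)
```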

  7. Bayesian analyses of cognitive architecture.

    Science.gov (United States)

    Houpt, Joseph W; Heathcote, Andrew; Eidels, Ami

    2017-06-01

    The question of cognitive architecture-how cognitive processes are temporally organized-has arisen in many areas of psychology. This question has proved difficult to answer, with many proposed solutions turning out to be spurious. Systems factorial technology (Townsend & Nozawa, 1995) provided the first rigorous empirical and analytical method of identifying cognitive architecture, using the survivor interaction contrast (SIC) to determine when people are using multiple sources of information in parallel or in series. Although the SIC is based on rigorous nonparametric mathematical modeling of response time distributions, for many years inference about cognitive architecture has relied solely on visual assessment. Houpt and Townsend (2012) recently introduced null hypothesis significance tests, and here we develop both parametric and nonparametric (encompassing prior) Bayesian inference. We show that the Bayesian approaches can have considerable advantages. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  9. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

    Full Text Available Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.

  10. Space Shuttle RTOS Bayesian Network

    Science.gov (United States)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

    With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and prior probabilities for the network are extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores

  11. Multiview Bayesian Correlated Component Analysis

    DEFF Research Database (Denmark)

    Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai

    2015-01-01

    are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which...... we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects....

  12. Reliability analysis with Bayesian networks

    OpenAIRE

    Zwirglmaier, Kilian Martin

    2017-01-01

    Bayesian networks (BNs) represent a probabilistic modeling tool with large potential for reliability engineering. While BNs have been successfully applied to reliability engineering, there are remaining issues, some of which are addressed in this work. Firstly a classification of BN elicitation approaches is proposed. Secondly two approximate inference approaches, one of which is based on discretization and the other one on sampling, are proposed. These approaches are applicable to hybrid/con...

  13. Interim Bayesian Persuasion: First Steps

    OpenAIRE

    Perez, Eduardo

    2015-01-01

    This paper makes a first attempt at building a theory of interim Bayesian persuasion. I work in a minimalist model where a low or high type sender seeks validation from a receiver who is willing to validate high types exclusively. After learning her type, the sender chooses a complete conditional information structure for the receiver from a possibly restricted feasible set. I suggest a solution to this game that takes into account the signaling potential of the sender's choice.

  14. Bayesian Sampling using Condition Indicators

    DEFF Research Database (Denmark)

    Faber, Michael H.; Sørensen, John Dalsgaard

    2002-01-01

    . This allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion....... It is shown how half-cell potential measurements may be utilized to update the probability of excessive repair after 50 years....

  15. Computational Neuropsychology and Bayesian Inference

    Directory of Open Access Journals (Sweden)

    Thomas Parr

    2018-02-01

    Full Text Available Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine ‘prior’ beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology – optimal inference with suboptimal priors – and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient’s behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.

  16. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    Science.gov (United States)

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for

  17. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    Directory of Open Access Journals (Sweden)

    Yoanna Arlina Kurnianingsih

    2015-05-01

    Full Text Available We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61 to 80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic

  18. Bayesian methods applied to GWAS.

    Science.gov (United States)

    Fernando, Rohan L; Garrick, Dorian

    2013-01-01

    Bayesian multiple-regression methods are being successfully used for genomic prediction and selection. These regression models simultaneously fit many more markers than the number of observations available for the analysis. Thus, Bayes' theorem is used to combine prior beliefs of marker effects, which are expressed in terms of prior distributions, with information from data for inference. Often, the analyses are too complex for closed-form solutions and Markov chain Monte Carlo (MCMC) sampling is used to draw inferences from posterior distributions. This chapter describes how these Bayesian multiple-regression analyses can be used for GWAS. In most GWAS, false positives are controlled by limiting the genome-wise error rate, which is the probability of one or more false-positive results, to a small value. As the number of tests in GWAS is very large, this results in very low power. Here we show how in Bayesian GWAS false positives can be controlled by limiting the proportion of false-positive results among all positives to some small value. The advantage of this approach is that the power of detecting associations is not inversely related to the number of markers.
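
    The decision rule sketched in the record (limit the proportion of false positives among all positives) has a simple mechanical form once posterior probabilities of association are available. The sketch below uses made-up posterior probabilities in place of real MCMC output:

        import numpy as np

        # Hypothetical posterior probabilities of association for ten markers,
        # e.g. posterior inclusion frequencies from a Bayesian multiple regression.
        post = np.array([0.99, 0.97, 0.90, 0.85, 0.60, 0.40, 0.20, 0.10, 0.05, 0.01])
        target = 0.05  # tolerated proportion of false positives among positives

        order = np.argsort(-post)
        # Expected false-discovery proportion when the top k markers are declared associated.
        efdr = np.cumsum(1 - post[order]) / np.arange(1, post.size + 1)
        k = int(np.max(np.where(efdr <= target)[0]) + 1) if np.any(efdr <= target) else 0
        print("markers declared associated:", order[:k].tolist())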

  19. Predicting Software Test Effort in Iterative Development Using a Dynamic Bayesian Network

    OpenAIRE

    Torkar, Richard; Awan, Nasir Majeed; Alvi, Adnan Khadem; Afzal, Wasif

    2010-01-01

    Projects following iterative software development methodologies must still be managed so as to maximize quality and minimize costs. However, there are indications that predicting test effort in iterative development is challenging and currently there seem to be no models for test effort prediction. This paper introduces and validates a dynamic Bayesian network for predicting test effort in iterative software development. The proposed model is validated by the use of data from two indu...

  20. A Maximally Supersymmetric Kondo Model

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC

    2012-02-17

    We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.

  1. Maximizing the optical network capacity.

    Science.gov (United States)

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-06

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. © 2016 The Authors.

  2. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  3. Expect Respect: Healthy Relationships

    Science.gov (United States)

    Signs of ... good about what happens when they are together. Respect: You ask each other what you want to ...

  4. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan

    2004-01-01

    We describe a system for exact inference with relational Bayesian networks as defined in the publicly available Primula tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...... and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose Primula-generated propositional instances have thousands of variables, and whose jointrees have clusters...
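
    The evaluate-and-differentiate idea can be seen on a network small enough to write by hand. The sketch below evaluates the network polynomial of a two-node network A -> B by naive enumeration (a compiled arithmetic circuit would perform the same computation in time linear in its size); the probability tables are invented for illustration:

        import itertools

        theta_a = {0: 0.3, 1: 0.7}                                      # P(A = a)
        theta_b = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.4, (1, 1): 0.6}  # P(B = b | A = a), keyed (b, a)

        def network_poly(lam_a, lam_b):
            """f(lambda) = sum_{a,b} lam_a[a] * lam_b[b] * theta_a[a] * theta_b[(b, a)]."""
            return sum(lam_a[a] * lam_b[b] * theta_a[a] * theta_b[(b, a)]
                       for a, b in itertools.product((0, 1), repeat=2))

        # Evidence B = 1: the polynomial's value is P(evidence).
        p_e = network_poly({0: 1, 1: 1}, {0: 0, 1: 1})
        # f is multilinear, so df/d(lam_a[1]), which equals P(A=1, B=1), is one more evaluation.
        p_a1_e = network_poly({0: 0, 1: 1}, {0: 0, 1: 1})
        print("P(B=1) =", p_e, " P(A=1 | B=1) =", p_a1_e / p_e)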

  5. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
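
    Rejection sampling of a posterior is compact enough to show in full. The sketch below estimates a binomial proportion under a uniform prior, proposing from the prior and accepting in proportion to the likelihood; the counts are invented and merely echo the case-control flavour of the record's example:

        import numpy as np

        rng = np.random.default_rng(1)
        y, n = 7, 20  # hypothetical: 7 exposed among 20 cases

        def likelihood(p):
            return p**y * (1 - p)**(n - y)

        # Propose from the Uniform(0,1) prior; accept with prob. likelihood / max likelihood.
        l_max = likelihood(y / n)
        proposals = rng.uniform(0, 1, 200_000)
        keep = rng.uniform(0, 1, proposals.size) < likelihood(proposals) / l_max
        draws = proposals[keep]
        print(draws.mean(), np.percentile(draws, [2.5, 97.5]))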

  6. 3rd Bayesian Young Statisticians Meeting

    CERN Document Server

    Lanzarone, Ettore; Villalobos, Isadora; Mattei, Alessandra

    2017-01-01

    This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).

  7. Learning dynamic Bayesian networks with mixed variables

    DEFF Research Database (Denmark)

    Bøttcher, Susanne Gammelgaard

    This paper considers dynamic Bayesian networks for discrete and continuous variables. We only treat the case where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learn....... An automated procedure for specifying prior distributions for the parameters in a dynamic Bayesian network is presented. It is a simple extension of the procedure for ordinary Bayesian networks. Finally, the Wölfer's sunspot numbers are analyzed....

  8. Expectation propagation for continuous time stochastic processes

    International Nuclear Information System (INIS)

    Cseke, Botond; Schnoerr, David; Sanguinetti, Guido; Opper, Manfred

    2016-01-01

    We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference, giving rise to an expectation propagation type algorithm. For non-linear diffusion processes, this is achieved by leveraging moment closure approximations. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems. (paper)

  9. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    . In a series of Monte Carlo experiments, evidence suggested four main conclusions: (a) efficiency increased when the true variance-covariance matrix became diagonal, (b) EM was more robust to the curse of dimensionality in regard to efficiency and estimation time, (c) EM did not recover the true scale...
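
    For readers unfamiliar with the EM iteration itself, the sketch below shows the E- and M-steps on a deliberately simpler model, a two-component Gaussian mixture, rather than the random-coefficients logit studied in this record:

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

        w, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
        normal_pdf = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        for _ in range(100):
            # E-step: posterior responsibility of component 0 for each observation.
            r0 = w * normal_pdf(mu[0], sd[0])
            r1 = (1 - w) * normal_pdf(mu[1], sd[1])
            g = r0 / (r0 + r1)
            # M-step: weighted maximum-likelihood updates of weight, means, and variances.
            w = g.mean()
            mu = np.array([(g * x).sum() / g.sum(), ((1 - g) * x).sum() / (1 - g).sum()])
            sd = np.sqrt(np.array([(g * (x - mu[0])**2).sum() / g.sum(),
                                   ((1 - g) * (x - mu[1])**2).sum() / (1 - g).sum()]))
        print(w, mu, sd)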

  10. Robot Mapping With Real-Time Incremental Localization Using Expectation Maximization

    National Research Council Canada - National Science Library

    Owens, Kevin L

    2005-01-01

    This research effort explores and develops a real-time sonar-based robot mapping and localization algorithm that provides pose correction within the context of a single room, to be combined with pre...

  11. Two Time Point MS Lesion Segmentation in Brain MRI : An Expectation-Maximization Framework

    NARCIS (Netherlands)

    Jain, Saurabh; Ribbens, Annemie; Sima, Diana M.; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H.; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk

    2016-01-01

    Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume at a single or multiple time points is time consuming and suffers from intra- and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint

  12. Nonlinear impairment compensation using expectation maximization for dispersion managed and unmanaged PDM 16-QAM transmission

    DEFF Research Database (Denmark)

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo

    2012-01-01

    that can be used to compensate for the impairments which have an imprint on a signal constellation, i.e. rotation and distortion of the constellation points. The EM is especially effective for combating non-linear phase noise (NLPN). This is because NLPN severely distorts the signal constellation...

  13. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

    Vianney eRozand

    2014-09-01

    Full Text Available Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  14. Collaborative autonomous sensing with Bayesians in the loop

    Science.gov (United States)

    Ahmed, Nisar

    2016-10-01

    There is a strong push to develop intelligent unmanned autonomy that complements human reasoning for applications as diverse as wilderness search and rescue, military surveillance, and robotic space exploration. More than just replacing humans for 'dull, dirty and dangerous' work, autonomous agents are expected to cope with a whole host of uncertainties while working closely together with humans in new situations. The robotics revolution firmly established the primacy of Bayesian algorithms for tackling challenging perception, learning and decision-making problems. Since the next frontier of autonomy demands the ability to gather information across stretches of time and space that are beyond the reach of a single autonomous agent, the next generation of Bayesian algorithms must capitalize on opportunities to draw upon the sensing and perception abilities of humans-in/on-the-loop. This work summarizes our recent research toward harnessing 'human sensors' for information gathering tasks. The basic idea is to allow human end users (i.e. non-experts in robotics, statistics, machine learning, etc.) to directly 'talk to' the information fusion engine and perceptual processes aboard any autonomous agent. Our approach is grounded in rigorous Bayesian modeling and fusion of flexible semantic information derived from user-friendly interfaces, such as natural language chat and locative hand-drawn sketches. This naturally enables 'plug and play' human sensing with existing probabilistic algorithms for planning and perception, and has been successfully demonstrated with human-robot teams in target localization applications.
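
    The fusion step described here reduces, in its simplest form, to a Bayes update with a "soft" likelihood for the human report. The sketch below is a one-dimensional toy with an invented landmark and softness parameter, not the semantic models of the paper:

        import numpy as np

        xs = np.linspace(0, 10, 101)
        prior = np.full(xs.size, 1 / xs.size)      # flat prior over target position

        # Human report "target is near x = 4" modeled as a soft Gaussian likelihood.
        landmark, softness = 4.0, 1.5
        like = np.exp(-0.5 * ((xs - landmark) / softness) ** 2)

        post = prior * like
        post /= post.sum()
        print("MAP position estimate:", xs[np.argmax(post)])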

  15. Classical-Equivalent Bayesian Portfolio Optimization for Electricity Generation Planning

    Directory of Open Access Journals (Sweden)

    Hellinton H. Takada

    2018-01-01

    Full Text Available There are several electricity generation technologies based on different sources such as wind, biomass, gas, coal, and so on. The consideration of the uncertainties associated with the future costs of such technologies is crucial for planning purposes. In the literature, the allocation of resources in the available technologies has been solved as a mean-variance optimization problem assuming knowledge of the expected values and the covariance matrix of the costs. However, in practice, they are not exactly known parameters. Consequently, the obtained optimal allocations from the mean-variance optimization are not robust to possible estimation errors of such parameters. Additionally, it is usual to have electricity generation technology specialists participating in the planning processes and, obviously, the consideration of useful prior information based on their previous experience is of utmost importance. The Bayesian models consider not only the uncertainty in the parameters, but also the prior information from the specialists. In this paper, we introduce the classical-equivalent Bayesian mean-variance optimization to solve the electricity generation planning problem using both improper and proper prior distributions for the parameters. In order to illustrate our approach, we present an application comparing the classical-equivalent Bayesian with the naive mean-variance optimal portfolios.

  16. Upper limit for Poisson variable incorporating systematic uncertainties by Bayesian approach

    International Nuclear Information System (INIS)

    Zhu, Yongsheng

    2007-01-01

    To calculate the upper limit for the Poisson observable at a given confidence level, with inclusion of systematic uncertainties in the background expectation and signal efficiency, formulations have been established along the lines of the Bayesian approach. A FORTRAN program, BPULE, has been developed to implement the upper limit calculation.
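
    The computation that a program such as BPULE performs can be sketched numerically: marginalize the Poisson likelihood over priors expressing the systematic uncertainties, then integrate the posterior up to the desired credibility. All numbers below are invented, and the truncated-Gaussian model for the systematics is one simple choice among several:

        import numpy as np

        rng = np.random.default_rng(3)
        n_obs = 3                      # observed event count
        b_mu, b_sd = 1.2, 0.3          # background expectation and its uncertainty
        e_mu, e_sd = 0.9, 0.05         # signal efficiency and its uncertainty

        s = np.linspace(0, 20, 401)    # grid of signal strengths, flat prior
        b = np.clip(rng.normal(b_mu, b_sd, 4000), 0, None)
        eff = np.clip(rng.normal(e_mu, e_sd, 4000), 0, None)

        mu = eff[None, :] * s[:, None] + b[None, :]
        post = (np.exp(-mu) * mu**n_obs).mean(axis=1)  # likelihood marginalized over systematics
        ds = s[1] - s[0]
        post /= post.sum() * ds

        cdf = np.cumsum(post) * ds
        print("90% CL upper limit on the signal:", s[np.searchsorted(cdf, 0.90)])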

  17. Inclusive Fitness Maximization:An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  18. Bayesian phylogeography finds its roots.

    Directory of Open Access Journals (Sweden)

    Philippe Lemey

    2009-09-01

    Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.

  19. Bayesian flood forecasting methods: A review

    Science.gov (United States)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been among the most common and most widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied to flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting, then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation; it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been

  20. Spiking the expectancy profiles

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Loui, Psyche; Vuust, Peter

    Melodic expectations are generated with different degrees of certainty. Given distributions of expectedness ratings for multiple continuations of each context, as obtained with the probe-tone paradigm, this certainty can be quantified in terms of Shannon entropy. Because expectations arise from...... statistical learning, causing comparatively sharper key profiles in musicians, we hypothesised that musical learning can be modelled as a process of entropy reduction through experience. Specifically, implicit learning of statistical regularities allows reduction in the relative entropy (i.e. symmetrised...... Kullback-Leibler or Jensen-Shannon Divergence) between listeners’ prior expectancy profiles and probability distributions of a musical style or of stimuli used in short-term experiments. Five previous probe-tone experiments with musicians and non-musicians were revisited. In Experiments 1-2 participants...
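
    The two quantities at the heart of this abstract, the Shannon entropy of an expectancy profile and the Jensen-Shannon divergence between two profiles, are easy to compute once ratings are normalized to probability distributions. The profiles below are invented stand-ins for probe-tone data:

        import numpy as np

        def entropy(p):
            p = p / p.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def jensen_shannon(p, q):
            p, q = p / p.sum(), q / q.sum()
            m = 0.5 * (p + q)
            return entropy(m) - 0.5 * entropy(p) - 0.5 * entropy(q)

        flat_prior = np.ones(12)                                   # maximal-entropy profile
        style = np.array([5, 1, 2, 1, 4, 2, 1, 5, 1, 3, 1, 4.0])   # hypothetical style profile
        print(entropy(flat_prior), entropy(style), jensen_shannon(flat_prior, style))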

  1. The bayesian probabilistic prediction of the next earthquake in the ometepec segment of the mexican subduction zone

    Science.gov (United States)

    Ferraes, Sergio G.

    1992-06-01

    A predictive equation to estimate the interoccurrence time (τ) of the next earthquake (M ≥ 6) in the Ometepec segment is presented, based on Bayes' theorem and the Gaussian process. Bayes' theorem is used to relate the Gaussian process to both a log-normal distribution of recurrence times (τ) and a log-normal distribution of magnitudes (M) (Nishenko and Buland, 1987; Lomnitz, 1964). We constructed two new random variables X = ln M and Y = ln τ with normal marginal densities, and based on the Gaussian process model we assume that their joint density is normal. Using this information, we determine the Bayesian conditional probability. Finally, a predictive equation is derived, based on the criterion of maximization of the Bayesian conditional probability. The model forecasts the next interoccurrence time, conditional on the magnitude of the last event. Realistic estimates of future damaging earthquakes are based on relocated historical earthquakes. However, at the present time there is a controversy between Nishenko-Singh and González-Ruiz-McNally concerning the rupturing process of the 1907 earthquake. We use our Bayesian analysis to examine and discuss this very important controversy. To clarify the full significance of the analysis, we put forward the results using two catalogues: (1) the Ometepec catalogue without the 1907 earthquake (González-Ruiz-McNally), and (2) the Ometepec catalogue including the 1907 earthquake (Nishenko-Singh). The comparison of the prediction errors reveals that for the Nishenko-Singh catalogue the errors are considerably smaller than the average error for the González-Ruiz-McNally catalogue of relocated events. Finally, using the Nishenko-Singh catalogue, which locates the 1907 event inside the Ometepec segment, we conclude that the next expected damaging earthquake (M ≥ 6.0) will occur approximately within the next time interval τ = 11.82 years from the last event (which occurred on July 2, 1984), or equivalently will
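
    Under the bivariate-normal assumption described above, maximizing the conditional density of Y = ln τ given X = ln M amounts to taking the conditional mean. A minimal sketch with invented catalogue moments (not the Ometepec estimates):

        import numpy as np

        # Hypothetical moments of X = ln M and Y = ln tau estimated from a catalogue.
        mu_x, mu_y = np.log(6.4), np.log(9.0)
        sd_x, sd_y, rho = 0.12, 0.55, 0.6

        # For a bivariate normal, Y | X = x is normal with mean
        # mu_y + rho * (sd_y / sd_x) * (x - mu_x); that mean maximizes the conditional density.
        m_last = 6.0
        y_hat = mu_y + rho * (sd_y / sd_x) * (np.log(m_last) - mu_x)
        print("predicted interoccurrence time (years):", np.exp(y_hat))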

  2. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
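
    Whichever of the two Bayesian approaches is used, an MCMC sampler repeatedly evaluates the Hawkes likelihood. The sketch below computes that log-likelihood for the common exponential kernel via the standard recursion for the conditional intensity; the event times and parameter values are invented:

        import numpy as np

        def hawkes_loglik(times, T, mu, alpha, beta):
            """Log-likelihood of events on [0, T] for the conditional intensity
            lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
            loglik, a, prev = 0.0, 0.0, None
            for t in times:
                a = np.exp(-beta * (t - prev)) * (1 + a) if prev is not None else 0.0
                loglik += np.log(mu + alpha * a)
                prev = t
            # Compensator: the integral of the intensity over [0, T].
            comp = mu * T + (alpha / beta) * (1 - np.exp(-beta * (T - np.asarray(times)))).sum()
            return loglik - comp

        print(hawkes_loglik([1.0, 1.3, 4.2, 4.5, 9.0], T=10.0, mu=0.3, alpha=0.5, beta=1.5))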

  3. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....

  4. Applying Bayesian Approach to Combinatorial Problem in Chemistry.

    Science.gov (United States)

    Okamoto, Yasuharu

    2017-05-04

    A Bayesian optimization procedure, in combination with density functional theory calculations, was applied to a combinatorial problem in chemistry. As a specific example, we examined the stable structures of lithium-graphite intercalation compounds (Li-GICs). We found that this approach efficiently identified the stable structure of stage-I and -II Li-GICs by calculating 4-6% of the full search space. We expect that this approach will be helpful in solving problems in chemistry that can be regarded as a kind of combinatorial problem.
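
    The procedure alternates between fitting a surrogate to the structures evaluated so far and picking the next structure with an acquisition rule. The sketch below runs that loop on a cheap one-dimensional stand-in for the DFT objective, with a hand-rolled Gaussian-process surrogate and an upper-confidence-bound rule; none of it reproduces the paper's actual setup:

        import numpy as np

        rng = np.random.default_rng(4)
        X = np.linspace(0, 1, 101)[:, None]                 # discrete candidate "structures"
        f = lambda x: np.sin(6 * x[:, 0]) + 0.5 * x[:, 0]   # stand-in objective to maximize

        def gp_posterior(Xtr, ytr, Xte, ls=0.1, noise=1e-6):
            k = lambda A, B: np.exp(-0.5 * ((A - B.T) / ls) ** 2)   # RBF kernel
            K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
            Ks = k(Xtr, Xte)
            sol = np.linalg.solve(K, Ks)
            mean = sol.T @ ytr
            # k(x, x) = 1 for the RBF kernel, so the predictive variance is 1 - diag(Ks^T K^-1 Ks).
            var = np.clip(1.0 - np.einsum('ij,ij->j', Ks, sol), 1e-12, None)
            return mean, np.sqrt(var)

        idx = list(rng.choice(len(X), 3, replace=False))    # initial evaluations
        for _ in range(10):
            mean, sd = gp_posterior(X[idx], f(X[idx]), X)
            ucb = mean + 2.0 * sd                           # upper-confidence-bound acquisition
            ucb[idx] = -np.inf                              # never re-evaluate a known point
            idx.append(int(np.argmax(ucb)))
        print("best candidate found:", X[idx][np.argmax(f(X[idx]))][0])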

  5. Bayesian Inference for Structured Spike and Slab Priors

    DEFF Research Database (Denmark)

    Andersen, Michael Riis; Winther, Ole; Hansen, Lars Kai

    2014-01-01

    Sparse signal recovery addresses the problem of solving underdetermined linear inverse problems subject to a sparsity constraint. We propose a novel prior formulation, the structured spike and slab prior, which allows one to incorporate a priori knowledge of the sparsity pattern by imposing a spatial...... Gaussian process on the spike and slab probabilities. Thus, prior information on the structure of the sparsity pattern can be encoded using generic covariance functions. Furthermore, we provide a Bayesian inference scheme for the proposed model based on the expectation propagation framework. Using...

  6. The subjectivity of scientists and the Bayesian approach

    CERN Document Server

    Press, James S

    2001-01-01

    Comparing and contrasting the reality of subjectivity in the work of history's great scientists and the modern Bayesian approach to statistical analysis. Scientists and researchers are taught to analyze their data from an objective point of view, allowing the data to speak for themselves rather than assigning them meaning based on expectations or opinions. But scientists have never behaved fully objectively. Throughout history, some of our greatest scientific minds have relied on intuition, hunches, and personal beliefs to make sense of empirical data, and these subjective influences have often a

  7. Bayesian and maximum likelihood estimation of genetic maps

    DEFF Research Database (Denmark)

    York, Thomas L.; Durrett, Richard T.; Tanksley, Steven

    2005-01-01

    that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances......, corresponding to the maximum likelihood estimator, performs better than estimators based on the posterior expectation. We also show that while the performance is similar between Mapmaker and the MCMC-based method in the absence of genotyping errors, the MCMC-based method has a distinct advantage in the presence...

  8. Performance expectation plan

    Energy Technology Data Exchange (ETDEWEB)

    Ray, P.E.

    1998-09-04

    This document outlines the significant accomplishments of fiscal year 1998 for the Tank Waste Remediation System (TWRS) Project Hanford Management Contract (PHMC) team. Opportunities for improvement to better meet some performance expectations have been identified. The PHMC has performed at an excellent level in administration of leadership, planning, and technical direction. The contractor has met and made notable improvement in attaining customer satisfaction in mission execution. This document includes the team's recommendation that the PHMC TWRS Performance Expectation Plan evaluation rating for fiscal year 1998 be an Excellent.

  9. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    2017-01-01

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  10. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions...... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  11. Bayesian approaches for Integrated Water Resources Management. A Mediterranean case study.

    Science.gov (United States)

    Gulliver, Zacarías; Herrero, Javier; José Polo, María

    2013-04-01

    This study presents the first steps of a short-term/mid-term analysis of the water resources in the Guadalfeo Basin, Spain. Within the basin, the recent construction of the Rules dam has required the development of specific management tools and structures for this water system. The climate variability and the high water demand for agricultural irrigation and tourism in this region may cause controversies in the water management planning process. During the first stages of the study, a rigorous analysis of the Water Framework Directive results was carried out in order to implement the legal requirements and the solutions for the gaps identified by the water authorities. In addition, the stakeholders and water experts identified the variables and geophysical processes for our specific water system. These particularities must be taken into account and reflected in the final computational tool. For mid-term decision making, a Bayesian network has been used to quantify uncertainty; it also provides a structured representation of probabilities, actions-decisions and utilities. On the one hand, these techniques make it possible to include decision rules, generating influence diagrams that provide clear and coherent semantics for the value of making an observation. On the other hand, the utility nodes encode the stakeholders' preferences, which are measured on a numerical scale, and the action chosen is the one that maximizes the expected utility (MEU). The graphical model also allows us to identify gaps and plan corrective measures, for example by formulating scenarios associated with different event hypotheses. In this sense, conditional probability distributions of the seasonal water demand and waste water have been obtained over the established intervals. This will give regional water managers useful information for future decision making. The final display is very visual and allows
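
    The maximum-expected-utility rule mentioned in the abstract can be written in a few lines once scenario probabilities and stakeholder utilities are on the table. The numbers below are invented placeholders, not the Guadalfeo model:

        # Forecast probabilities for seasonal inflow scenarios (hypothetical).
        p_inflow = {"wet": 0.25, "normal": 0.50, "dry": 0.25}
        utility = {  # stakeholder utility of (action, scenario), on an arbitrary scale
            ("release_high", "wet"): 90, ("release_high", "normal"): 70, ("release_high", "dry"): 10,
            ("release_low", "wet"): 60, ("release_low", "normal"): 65, ("release_low", "dry"): 55,
        }

        def expected_utility(action):
            return sum(p * utility[(action, s)] for s, p in p_inflow.items())

        # MEU: pick the action whose probability-weighted utility is largest.
        best = max(("release_high", "release_low"), key=expected_utility)
        print(best, expected_utility(best))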

  12. Expected Term Structures

    DEFF Research Database (Denmark)

    Buraschi, Andrea; Piatti, Ilaria; Whelan, Paul

    This paper studies the properties of bond risk premia in the cross-section of subjective expectations. We exploit an extensive dataset of yield curve forecasts from financial institutions and document a number of novel findings. First, contrary to evidence presented for stock markets but consiste...... of heterogeneous beliefs is taken into account....

  13. Great Expectations. [Lesson Plan].

    Science.gov (United States)

    Devine, Kelley

    Based on Charles Dickens' novel "Great Expectations," this lesson plan presents activities designed to help students understand the differences between totalitarianism and democracy; and a that a writer of a story considers theme, plot, characters, setting, and point of view. The main activity of the lesson involves students working in groups to…

  14. Parenting with High Expectations

    Science.gov (United States)

    Timperlake, Benna Hull; Sanders, Genelle Timperlake

    2014-01-01

    In some ways raising deaf or hard of hearing children is no different than raising hearing children; expectations must be established and periodically tweaked. Benna Hull Timperlake, who with husband Roger, raised two hearing children in addition to their deaf daughter, Genelle Timperlake Sanders, and Genelle, now a deaf professional, share their…

  15. Light Microscopy at Maximal Precision

    Science.gov (United States)

    Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.

    2017-10-01

    Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.

  16. Robust bayesian inference of generalized Pareto distribution ...

    African Journals Online (AJOL)

    Abstract. In this work, robust Bayesian estimation of the generalized Pareto distribution is proposed. The methodology is presented in terms of oscillation of posterior risks of the Bayesian estimators. By using a Monte Carlo simulation study, we show that, under a suitable generalized loss function, we can obtain a robust ...

  17. Bayesian Decision Theoretical Framework for Clustering

    Science.gov (United States)

    Chen, Mo

    2011-01-01

    In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…

  18. Using Bayesian belief networks in adaptive management.

    Science.gov (United States)

    J.B. Nyberg; B.G. Marcot; R. Sulyma

    2006-01-01

    Bayesian belief and decision networks are relatively new modeling methods that are especially well suited to adaptive-management applications, but they appear not to have been widely used in adaptive management to date. Bayesian belief networks (BBNs) can serve many purposes for practitioners of adaptive management, from illustrating system relations conceptually to...

  19. Calibration in a Bayesian modelling framework

    NARCIS (Netherlands)

    Jansen, M.J.W.; Hagenaars, T.H.J.

    2004-01-01

    Bayesian statistics may constitute the core of a consistent and comprehensive framework for the statistical aspects of modelling complex processes that involve many parameters whose values are derived from many sources. Bayesian statistics holds great promise for model calibration, provides the

  20. Particle identification in ALICE: a Bayesian approach

    NARCIS (Netherlands)

    Adam, J.; Adamova, D.; Aggarwal, M. M.; Rinella, G. Aglieri; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anticic, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshaeuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badala, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Balasubramanian, S.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnafoeldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Bathen, B.; Batigne, G.; Camejo, A. Batista; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Benacek, P.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielcik, J.; Bielcikova, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Boggild, H.; Boldizsar, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossu, F.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Cai, X.; Caines, H.; Diaz, L. Calero; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castellanos, J. Castillo; Castro, A. J.; Casula, E. A. R.; Sanchez, C. Ceballos; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Barroso, V. Chibante; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Balbastre, G. Conesa; del Valle, Z. Conesa; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Morales, Y. Corrales; Cortes Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Denes, E.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Corchero, M. A. Diaz; Dietel, T.; Dillenseger, P.; Divia, R.; Djuvsland, O.; Dobrin, A.; Gimenez, D. Domenicis; Doenigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernandez Tellez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Girard, M. Fusco; Gaardhoje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, A.; Gheata, M.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glaessel, P.; Gomez Coral, D. M.; Ramirez, A. Gomez; Gonzalez, A. S.; Gonzalez, V.; Gonzalez-Zamora, P.; Gorbunov, S.; Goerlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Haake, R.; Haaland, O.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbaer, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Horak, D.; Hosokawa, R.; Hristov, P.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Incani, E.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacazio, N.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Bustamante, R. T. Jimenez; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kaplin, V.; Kar, S.; Uysal, A. Karasu; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Khan, M. Mohisin; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, J. S.; Kim, M.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein-Boesing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kostarakis, P.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Meethaleveedu, G. Koyithatta; Kralik, I.; Kravcakova, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kucera, V.; Kuijer, P. G.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Ladron de Guevara, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; Monzon, I. Leon; Leon Vargas, H.; Leoncino, M.; Levai, P.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. 
I.; Loginov, V.; Loizides, C.; Lopez, X.; Torres, E. Lopez; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mares, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marin, A.; Markert, C.; Marquard, M.; Martin, N. A.; Blanco, J. Martin; Martinengo, P.; Martinez, M. I.; Garcia, G. Martinez; Pedreira, M. Martinez; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Perez, J. Mercado; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Miskowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montano Zetina, L.; Montes, E.; De Godoy, D. A. Moreira; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Muehlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, D.; Pagano, P.; Paic, G.; Pal, S. K.; Pan, J.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Da Costa, H. Pereira; Peresunko, D.; Lara, C. E. Perez; Lezama, E. Perez; Peskov, V.; Pestov, Y.; Petracek, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Ploskon, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Raesaenen, S. S.; Rascanu, B. T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Rodriguez Cahuantzi, M.; Manso, A. Rodriguez; Roed, K.; Rogochaya, E.; Rohr, D.; Roehrich, D.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Montero, A. J. Rubio; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Safarik, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. 
A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Sarkar, N.; Sarma, P.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Sefcik, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shahzad, M. I.; Shangaraev, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; de Souza, R. D.; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Sumbera, M.; Sumowidagdo, S.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Munoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thaeder, J.; Thakur, D.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Palomo, L. Valencia; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vyvre, P. Vande; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Vercellin, E.; Vergara Limon, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Baillie, O. Villalobos; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Voelkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrlakova, J.; Vulpescu, B.; Wagner, B.; Wagner, J.; Wang, H.; Watanabe, D.; Watanabe, Y.; Weiser, D. F.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yang, H.; Yano, S.; Yasin, Z.; Yokoyama, H.; Yoo, I. -K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Zavada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, C.; Zhao, C.; Zhigareva, N.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.; Collaboration, ALICE

    2016-01-01

    We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian
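
    The record is truncated here, but the underlying combination rule is standard Bayes. A minimal sketch of the general idea (not the ALICE implementation; the species list, priors, and likelihood values are invented for illustration):

```python
import numpy as np

# Hypothetical species priors (e.g. expected abundances) and per-detector
# likelihoods p(signal | species) for one track; values are illustrative only.
species = ["pion", "kaon", "proton"]
prior = np.array([0.70, 0.20, 0.10])

# Each row: one detector's likelihood over the three species hypotheses.
detector_likelihoods = np.array([
    [0.60, 0.30, 0.10],   # e.g. a time-of-flight measurement
    [0.50, 0.40, 0.10],   # e.g. a dE/dx measurement
])

# Combined likelihood assumes conditionally independent detector responses.
likelihood = detector_likelihoods.prod(axis=0)
posterior = prior * likelihood
posterior /= posterior.sum()          # normalize via Bayes' theorem

for s, p in zip(species, posterior):
    print(f"P({s} | signals) = {p:.3f}")
```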

  1. Bayesian Network for multiple hypothesis tracking

    NARCIS (Netherlands)

    Zajdel, W.P.; Kröse, B.J.A.; Blockeel, H.; Denecker, M.

    2002-01-01

    For a flexible camera-to-camera tracking of multiple objects we model the objects' behavior with a Bayesian network and combine it with the multiple hypothesis framework that associates observations with objects. Bayesian networks offer a possibility to factor complex, joint distributions into a

  2. Bayesian learning theory applied to human cognition.

    Science.gov (United States)

    Jacobs, Robert A; Kruschke, John K

    2011-01-01

    Probabilistic models based on Bayes' rule are an increasingly popular approach to understanding human cognition. Bayesian models allow immense representational latitude and complexity. Because they use normative Bayesian mathematics to process those representations, they define optimal performance on a given task. This article focuses on key mechanisms of Bayesian information processing, and provides numerous examples illustrating Bayesian approaches to the study of human cognition. We start by providing an overview of Bayesian modeling and Bayesian networks. We then describe three types of information processing operations-inference, parameter learning, and structure learning-in both Bayesian networks and human cognition. This is followed by a discussion of the important roles of prior knowledge and of active learning. We conclude by outlining some challenges for Bayesian models of human cognition that will need to be addressed by future research. WIREs Cogn Sci 2011 2 8-21 DOI: 10.1002/wcs.80 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  3. Properties of the Bayesian Knowledge Tracing Model

    Science.gov (United States)

    van de Sande, Brett

    2013-01-01

    Bayesian Knowledge Tracing is used very widely to model student learning. It comes in two different forms: The first form is the Bayesian Knowledge Tracing "hidden Markov model" which predicts the probability of correct application of a skill as a function of the number of previous opportunities to apply that skill and the model…

  4. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    and secondly, to gain efficiency during modification of an object oriented Bayesian network. To accomplish these two goals we have exploited a mechanism allowing local triangulation of instances to develop a method for updating the junction trees associated with object oriented Bayesian networks in highly...

  5. Using Bayesian Networks to Improve Knowledge Assessment

    Science.gov (United States)

    Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra

    2013-01-01

    In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…

  6. Bayesian models: A statistical primer for ecologists

    Science.gov (United States)

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. The book presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  7. Modeling Diagnostic Assessments with Bayesian Networks

    Science.gov (United States)

    Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego

    2007-01-01

    This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…

  8. Iterative sure independence screening EM-Bayesian LASSO algorithm for multi-locus genome-wide association studies

    Science.gov (United States)

    Tamba, Cox Lwaka; Ni, Yuan-Li; Zhang, Yuan-Ming

    2017-01-01

    Genome-wide association study (GWAS) entails examining a large number of single nucleotide polymorphisms (SNPs) in a limited sample with hundreds of individuals, implying a variable selection problem in the high dimensional dataset. Although many single-locus GWAS approaches under polygenic background and population structure controls have been widely used, some significant loci fail to be detected. In this study, we used an iterative modified-sure independence screening (ISIS) approach in reducing the number of SNPs to a moderate size. Expectation-Maximization (EM)-Bayesian least absolute shrinkage and selection operator (BLASSO) was used to estimate all the selected SNP effects for true quantitative trait nucleotide (QTN) detection. This method is referred to as ISIS EM-BLASSO algorithm. Monte Carlo simulation studies validated the new method, which has the highest empirical power in QTN detection and the highest accuracy in QTN effect estimation, and it is the fastest, as compared with efficient mixed-model association (EMMA), smoothly clipped absolute deviation (SCAD), fixed and random model circulating probability unification (FarmCPU), and multi-locus random-SNP-effect mixed linear model (mrMLM). To further demonstrate the new method, six flowering time traits in Arabidopsis thaliana were re-analyzed by four methods (New method, EMMA, FarmCPU, and mrMLM). As a result, the new method identified most previously reported genes. Therefore, the new method is a good alternative for multi-locus GWAS. PMID:28141824
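
    As a rough illustration of the two-stage idea, the sketch below screens predictors by marginal correlation and then runs an EM-style iteratively reweighted ridge, a common EM formulation of a Laplace (LASSO-type) prior; it is a toy stand-in, not the authors' ISIS EM-BLASSO code, and all data and settings are simulated:

```python
import numpy as np

def sis_screen(X, y, keep):
    """Sure independence screening: rank predictors by |marginal score|."""
    score = np.abs(X.T @ (y - y.mean()))
    return np.argsort(score)[::-1][:keep]

def em_blasso(X, y, lam=1.0, iters=100, eps=1e-6):
    """EM for a Laplace (LASSO-type) prior, written as an iteratively
    reweighted ridge regression; a toy stand-in for EM-BLASSO."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        w = lam / np.maximum(np.abs(beta), eps)                # E-step: prior weights
        beta = np.linalg.solve(X.T @ X + np.diag(w), X.T @ y)  # M-step: ridge
    beta[np.abs(beta) < 1e-4] = 0.0                            # prune negligible effects
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2000))      # 200 individuals, 2000 SNPs
true = np.zeros(2000)
true[[10, 500, 1500]] = [1.5, -2.0, 1.0]  # three true QTNs
y = X @ true + rng.standard_normal(200)

idx = sis_screen(X, y, keep=50)           # reduce SNPs to a moderate size
beta = em_blasso(X[:, idx], y)
print("detected SNPs:", sorted(idx[np.abs(beta) > 0]))
```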

  9. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    Energy Technology Data Exchange (ETDEWEB)

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
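
    A toy illustration of the preposterior reasoning described above, assuming a three-component series system with conjugate Beta priors and using expected posterior variance of system reliability as the Bayes-risk proxy (the report's actual models and costs are richer):

```python
import numpy as np

rng = np.random.default_rng(1)

# Current Beta(alpha, beta) states of knowledge for three components of a
# series system (illustrative numbers; system reliability = product).
alpha = np.array([8.0, 3.0, 15.0])
beta = np.array([2.0, 1.0, 1.0])

def system_var(a, b, draws=4000):
    """Monte Carlo variance of system reliability under Beta posteriors."""
    r = rng.beta(a, b, size=(draws, len(a))).prod(axis=1)
    return r.var()

def expected_posterior_var(i, n_tests=5, outer=300):
    """Preposterior analysis: average the posterior variance over outcomes
    simulated from the prior predictive for testing component i."""
    total = 0.0
    for _ in range(outer):
        p = rng.beta(alpha[i], beta[i])
        k = rng.binomial(n_tests, p)          # simulated test outcome
        a2, b2 = alpha.copy(), beta.copy()
        a2[i] += k
        b2[i] += n_tests - k                  # conjugate Beta update
        total += system_var(a2, b2)
    return total / outer

scores = [expected_posterior_var(i) for i in range(3)]
print("expected posterior variances:", np.round(scores, 5))
print("best component to test next:", int(np.argmin(scores)))
```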

  10. Flexible Bayesian Human Fecundity Models.

    Science.gov (United States)

    Kim, Sungduk; Sundaram, Rajeshwari; Buck Louis, Germaine M; Pyper, Cecilia

    2012-12-01

    Human fecundity is an issue of considerable interest for both epidemiological and clinical audiences, and is dependent upon a couple's biologic capacity for reproduction coupled with behaviors that place a couple at risk for pregnancy. Bayesian hierarchical models have been proposed to better model the conception probabilities by accounting for the acts of intercourse around the day of ovulation, i.e., during the fertile window. These models can be viewed in the framework of a generalized nonlinear model with an exponential link. However, a fixed choice of link function may not always provide the best fit, leading to potentially biased estimates for probability of conception. Motivated by this, we propose a general class of models for fecundity by relaxing the choice of the link function under the generalized nonlinear model framework. We use a sample from the Oxford Conception Study (OCS) to illustrate the utility and fit of this general class of models for estimating human conception. Our findings reinforce the need for attention to be paid to the choice of link function in modeling conception, as it may bias the estimation of conception probabilities. Various properties of the proposed models are examined and a Markov chain Monte Carlo sampling algorithm was developed for implementing the Bayesian computations. The deviance information criterion measure and logarithm of pseudo marginal likelihood are used for guiding the choice of links. The supplemental material section contains technical details of the proof of the theorem stated in the paper, and contains further simulation results and analysis.

  11. Bayesian Nonparametric Longitudinal Data Analysis.

    Science.gov (United States)

    Quintana, Fernando A; Johnson, Wesley O; Waetjen, Elaine; Gold, Ellen

    2016-01-01

    Practical Bayesian nonparametric methods have been developed across a wide variety of contexts. Here, we develop a novel statistical model that generalizes standard mixed models for longitudinal data that include flexible mean functions as well as combined compound symmetry (CS) and autoregressive (AR) covariance structures. AR structure is often specified through the use of a Gaussian process (GP) with covariance functions that allow longitudinal data to be more correlated if they are observed closer in time than if they are observed farther apart. We allow for AR structure by considering a broader class of models that incorporates a Dirichlet Process Mixture (DPM) over the covariance parameters of the GP. We are able to take advantage of modern Bayesian statistical methods in making full predictive inferences about characteristics of longitudinal profiles and their differences across covariate combinations. We also take advantage of the generality of our model, which provides for estimation of a variety of covariance structures. We observe that models that fail to incorporate CS or AR structure can result in very poor estimation of a covariance or correlation matrix. In our illustration using hormone data observed on women through the menopausal transition, biology dictates the use of a generalized family of sigmoid functions as a model for time trends across subpopulation categories.

  12. BELM: Bayesian extreme learning machine.

    Science.gov (United States)

    Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J

    2011-03-01

    The theory of extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; obtains the confidence intervals (CIs) without the need of applying methods that are computationally intensive, e.g., bootstrap; and presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM in several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. Achieved results show that the proposed approach produces a competitive accuracy with some additional advantages, namely, automatic production of CIs, reduction of probability of model overfitting, and use of a priori knowledge.
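
    A minimal sketch of the Bayesian-ELM idea, assuming a random tanh hidden layer and a conjugate Gaussian prior on the output weights with fixed hyperparameters (the paper's treatment is more complete); the posterior yields predictive confidence intervals without bootstrap:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    return np.tanh(X @ W + b)          # random hidden layer, as in ELM

# Toy 1-D regression data.
X = np.linspace(-3, 3, 80)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)

n_hidden = 30
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = elm_features(X, W, b)

# Bayesian linear regression on the hidden-layer outputs: Gaussian prior
# N(0, alpha^-1 I) on output weights, noise precision beta_n (both fixed
# here for simplicity; they could be estimated by evidence maximization).
alpha, beta_n = 1.0, 100.0
S = np.linalg.inv(alpha * np.eye(n_hidden) + beta_n * H.T @ H)
m = beta_n * S @ H.T @ y               # posterior mean of output weights

Xs = np.linspace(-3, 3, 5)[:, None]
Hs = elm_features(Xs, W, b)
mean = Hs @ m
var = 1.0 / beta_n + np.sum(Hs @ S * Hs, axis=1)   # predictive variance
for x, mu, sd in zip(Xs.ravel(), mean, np.sqrt(var)):
    print(f"x={x:+.1f}: {mu:+.3f} ± {1.96 * sd:.3f}")   # 95% CI
```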

  13. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  14. Crystal structure prediction accelerated by Bayesian optimization

    Science.gov (United States)

    Yamashita, Tomoki; Sato, Nobuya; Kino, Hiori; Miyake, Takashi; Tsuda, Koji; Oguchi, Tamio

    2018-01-01

    We propose a crystal structure prediction method based on Bayesian optimization. Our method is classified as a selection-type algorithm which is different from evolution-type algorithms such as an evolutionary algorithm and particle swarm optimization. Crystal structure prediction with Bayesian optimization can efficiently select the most stable structure from a large number of candidate structures with a lower number of searching trials using a machine learning technique. Crystal structure prediction using Bayesian optimization combined with random search is applied to known systems such as NaCl and Y2Co17 to discuss the efficiency of Bayesian optimization. These results demonstrate that Bayesian optimization can significantly reduce the number of searching trials required to find the global minimum structure by 30-40% in comparison with pure random search, which leads to much less computational cost.
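
    A compact sketch of selection-type Bayesian optimization over a fixed candidate pool, with a small NumPy Gaussian-process surrogate and a lower-confidence-bound rule; the energy function, descriptor, and kernel settings are invented stand-ins for the structure-relaxation calculations in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, ls=0.5):
    """Squared-exponential kernel between two sets of descriptors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def expensive_energy(x):
    """Stand-in for a first-principles energy evaluation."""
    return float(np.sin(3 * x[0]) + 0.5 * x[0] ** 2)

# Fixed pool of candidate structures, here described by a 1-D descriptor.
candidates = rng.uniform(-2, 2, size=(200, 1))
evaluated = [0]
energies = [expensive_energy(candidates[0])]

for _ in range(15):                          # selection loop
    Xe, ye = candidates[evaluated], np.array(energies)
    K = rbf(Xe, Xe) + 1e-8 * np.eye(len(Xe))
    Ks = rbf(candidates, Xe)
    mu = Ks @ np.linalg.solve(K, ye)         # GP posterior mean
    v = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * v.T, axis=1)     # GP posterior variance
    lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0.0))
    lcb[evaluated] = np.inf                  # never re-evaluate a structure
    nxt = int(np.argmin(lcb))                # most promising next candidate
    evaluated.append(nxt)
    energies.append(expensive_energy(candidates[nxt]))

best = evaluated[int(np.argmin(energies))]
print("lowest-energy candidate found:", candidates[best], min(energies))
```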

  15. DPpackage: Bayesian Semi- and Nonparametric Modeling in R

    Directory of Open Access Journals (Sweden)

    Alejandro Jara

    2011-04-01

    Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian nonparametric and semiparametric models in R, DPpackage. Currently, DPpackage includes models for marginal and conditional density estimation, receiver operating characteristic curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison and for eliciting the precision parameter of the Dirichlet process prior, and a general purpose Metropolis sampling algorithm. To maximize computational efficiency, the actual sampling for each model is carried out using compiled C, C++ or Fortran code.

  16. Financial Management Practices, Wealth Maximization Criterion and ...

    African Journals Online (AJOL)

    In the field of financial management, shareholders' wealth maximization is often seen as the desirable goal, not only from the shareholders' perspective but for society at large, with the firm's primary goal aimed mainly at maximizing the wealth of its shareholders. This study thus aimed at determining the impact of the core ...

  17. Maximally differential ideals in regular local rings

    Indian Academy of Sciences (India)

    may either be maximally differential under a set of derivations or a set of higher derivations, i.e., Hasse–Schmidt derivations. We also extend our result (Theorem 4 of [7]) about the structure of a maximally differential ideal in positive characteristic to the unequal characteristic case. 2. Results. By a ring we mean a commutative ...

  18. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

    We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.

  20. Optimal Experimental Design of Borehole Locations for Bayesian Inference of Past Ice Sheet Surface Temperatures

    Science.gov (United States)

    Davis, A. D.; Huan, X.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    Borehole data are essential for calibrating ice sheet models. However, field expeditions for acquiring borehole data are often time-consuming, expensive, and dangerous. It is thus essential to plan the best sampling locations that maximize the value of data while minimizing costs and risks. We present an uncertainty quantification (UQ) workflow based on a rigorous probabilistic framework to achieve these objectives. First, we employ an optimal experimental design (OED) procedure to compute borehole locations that yield the highest expected information gain. We take into account practical considerations of location accessibility (e.g., proximity to research sites, terrain, and ice velocity may affect feasibility of drilling) and robustness (e.g., real-time constraints such as weather may force researchers to drill at sub-optimal locations near those originally planned), by incorporating a penalty reflecting accessibility as well as sensitivity to deviations from the optimal locations. Next, we extract vertical temperature profiles from these boreholes and formulate a Bayesian inverse problem to reconstruct past surface temperatures. Using a model of temperature advection/diffusion, the top boundary condition (corresponding to surface temperatures) is calibrated via efficient Markov chain Monte Carlo (MCMC). The overall procedure can then be iterated to choose new optimal borehole locations for the next expeditions. Through this work, we demonstrate powerful UQ methods for designing experiments, calibrating models, making predictions, and assessing sensitivity--all performed under an uncertain environment. We develop a theoretical framework as well as practical software within an intuitive workflow, and illustrate their usefulness for combining data and models for environmental and climate research.
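
    The expected-information-gain criterion in the first step admits a standard nested Monte Carlo estimator. A toy sketch under a made-up scalar forward model and an invented accessibility penalty (the actual workflow uses an ice-sheet advection/diffusion model):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.3                                  # measurement noise std

def forward(theta, d):
    """Toy forward model: how strongly a borehole at location d still
    reflects the past surface temperature theta (made up for illustration)."""
    return theta * np.exp(-0.5 * d)

def log_lik(y, theta, d):
    # Additive constant -log(sigma*sqrt(2*pi)) omitted: it cancels in EIG.
    return -0.5 * ((y - forward(theta, d)) / sigma) ** 2

def expected_information_gain(d, N=2000, M=2000):
    """Nested Monte Carlo estimator of the expected information gain."""
    th = rng.standard_normal(N)              # prior draws of theta
    y = forward(th, d) + sigma * rng.standard_normal(N)
    th_in = rng.standard_normal(M)           # inner draws for the evidence
    ll = log_lik(y[:, None], th_in[None, :], d)
    m = ll.max(axis=1, keepdims=True)        # stabilized log-mean-exp
    log_evidence = np.log(np.exp(ll - m).mean(axis=1)) + m[:, 0]
    return np.mean(log_lik(y, th, d) - log_evidence)

def accessibility_penalty(d):                # invented: remote sites cost more
    return 0.05 * d

designs = np.linspace(0.0, 4.0, 9)           # candidate borehole locations
utility = [expected_information_gain(d) - accessibility_penalty(d)
           for d in designs]
print("penalized utilities:", np.round(utility, 3))
print("best location:", designs[int(np.argmax(utility))])
```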

  1. Chinese students' great expectations

    DEFF Research Database (Denmark)

    Thøgersen, Stig

    2013-01-01

    The article focuses on Chinese students' hopes and expectations before leaving to study abroad. The national political environment for their decision to go abroad is shaped by an official narrative of China's transition to a more creative and innovative economy. Students draw on this narrative to...... system, they think of themselves as having a role in the transformation of Chinese attitudes to education and parent-child relations.

  2. Spiking the expectancy profiles

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Loui, Psyche; Vuust, Peter

    Melodic expectations have long been quantified using expectedness ratings. Motivated by statistical learning and sharper key profiles in musicians, we model musical learning as a process of reducing the relative entropy between listeners' prior expectancy profiles and probability distributions...... of a given musical style or of stimuli used in short-term experiments. Five previous probe-tone experiments with musicians and non-musicians are revisited. Exp. 1-2 used jazz, classical and hymn melodies. Exp. 3-5 collected ratings before and after exposure to 5, 15 or 400 novel melodies generated from...... a finite-state grammar using the Bohlen-Pierce scale. We find group differences in entropy corresponding to degree and relevance of musical training and within-participant decreases after short-term exposure. Thus, whereas inexperienced listeners make high-entropy predictions by default, statistical...

  3. Life expectancy and education

    DEFF Research Database (Denmark)

    Hansen, Casper Worm; Strulik, Holger

    2017-01-01

    This paper exploits the unexpected decline in the death rate from cardiovascular diseases since the 1970s as a large positive health shock that affected predominantly old-age mortality; i.e. the fourth stage of the epidemiological transition. Using a difference-in-differences estimation strategy......, we find that US states with higher mortality rates from cardiovascular disease prior to the 1970s experienced greater increases in adult life expectancy and higher education enrollment. Our estimates suggest that a one-standard deviation higher treatment intensity is associated with an increase...... in adult life expectancy of 0.37 years and 0.07–0.15 more years of higher education....

  4. Reputation and Rational Expectations

    OpenAIRE

    Andersen, Torben; Risager, Ole

    1987-01-01

    The paper considers the importance of reputation in relation to disinflationary policies in a continuous time rational expectations model, where the private sector has incomplete information about the true preferences of the government. It is proved that there is a unique equilibrium with the important property that the costs of disinflation arise at the start of the game, where the policy has not yet gained credibility. Published in connection with a visit at the IIES.

  5. Prediction of road accidents: A Bayesian hierarchical approach.

    Science.gov (United States)

    Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H

    2013-03-01

    In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data of the Austrian rural motorway network. In the case study, on randomly selected road segments the methodology is used to produce a model to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any
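
    Step (1), gamma-updating of occurrence rates, has a compact conjugate form; a minimal sketch with invented prior settings and counts:

```python
import numpy as np

# Gamma(a, b) prior on the yearly injury-accident rate of a road segment
# (shape-rate parameterization); values are illustrative only.
a, b = 4.0, 2.0                      # prior mean a/b = 2 accidents/year

observed = [3, 1, 4, 2]              # accident counts over four years

# Conjugate gamma-Poisson updating, as in step (1) of the methodology:
a_post = a + sum(observed)
b_post = b + len(observed)

mean = a_post / b_post
sd = np.sqrt(a_post) / b_post
print(f"posterior rate: {mean:.2f} ± {sd:.2f} accidents/year")
```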

  6. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood) that is the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as by information criteria; the larger a model evidence the more support it receives among a collection of hypothesis as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models against the selection of over-fitted ones by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with
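
    The evidence-by-importance-sampling idea can be sketched in a few lines. The toy below replaces DREAM with a short Metropolis chain and the Gaussian mixture with a single Gaussian fitted to posterior samples, on a conjugate model where the exact evidence is known for checking; it illustrates the principle, not the GMIS estimator itself:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(1.0, 1.0, size=20)

def log_post_parts(th):
    """Log prior plus log likelihood for a conjugate toy model:
    theta ~ N(0,1), data_i ~ N(theta,1). Constants are kept so the
    evidence is absolute rather than relative."""
    lp = -0.5 * th**2 - 0.5 * np.log(2 * np.pi)
    ll = (-0.5 * ((data[None, :] - th[:, None]) ** 2).sum(axis=1)
          - 0.5 * data.size * np.log(2 * np.pi))
    return lp + ll

# Stand-in for DREAM: a short random-walk Metropolis chain.
chain, th = [], np.zeros(1)
lp = log_post_parts(th)[0]
for _ in range(5000):
    cand = th + 0.5 * rng.standard_normal(1)
    lp_new = log_post_parts(cand)[0]
    if np.log(rng.uniform()) < lp_new - lp:
        th, lp = cand, lp_new
    chain.append(th[0])
chain = np.array(chain[1000:])

# Fit a Gaussian importance distribution to the posterior samples (the
# paper fits a Gaussian *mixture*; one component suffices in 1-D here).
mu, sd = chain.mean(), chain.std()
z = rng.normal(mu, sd, size=50000)
log_q = -0.5 * ((z - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
log_w = log_post_parts(z) - log_q
log_Z = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

# Conjugacy gives the exact log evidence for checking the estimator.
n, s = data.size, data.sum()
exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
         - 0.5 * (data**2).sum() + 0.5 * s**2 / (n + 1))
print(f"importance-sampling estimate {log_Z:.4f} vs exact {exact:.4f}")
```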

  7. Bayesian Approach to Inverse Problems

    CERN Document Server

    2008-01-01

    Many scientific, medical or engineering problems raise the issue of recovering some physical quantities from indirect measurements; for instance, detecting or quantifying flaws or cracks within a material from acoustic or electromagnetic measurements at its surface is an essential problem of non-destructive evaluation. The concept of inverse problems precisely originates from the idea of inverting the laws of physics to recover a quantity of interest from measurable data. Unfortunately, most inverse problems are ill-posed, which means that precise and stable solutions are not easy to devise. Regularization is the key concept to solve inverse problems. The goal of this book is to deal with inverse problems and regularized solutions using the Bayesian statistical tools, with a particular view to signal and image estimation

  8. Bayesian modelling of fusion diagnostics

    Science.gov (United States)

    Fischer, R.; Dinklage, A.; Pasch, E.

    2003-07-01

    Integrated data analysis of fusion diagnostics is the combination of different, heterogeneous diagnostics in order to improve physics knowledge and reduce the uncertainties of results. One example is the validation of profiles of plasma quantities. Integration of different diagnostics requires systematic and formalized error analysis for all uncertainties involved. The Bayesian probability theory (BPT) allows a systematic combination of all information entering the measurement descriptive model that considers all uncertainties of the measured data, calibration measurements, physical model parameters and measurement nuisance parameters. A sensitivity analysis of model parameters allows crucial uncertainties to be found, which has an impact on both diagnostic improvement and design. The systematic statistical modelling within the BPT is used for reconstructing electron density and electron temperature profiles from Thomson scattering data from the Wendelstein 7-AS stellarator. The inclusion of different diagnostics and first-principle information is discussed in terms of improvements.

  9. Bayesian networks in educational assessment

    CERN Document Server

    Almond, Russell G; Steinberg, Linda S; Yan, Duanli; Williamson, David M

    2015-01-01

    Bayesian inference networks, a synthesis of statistics and expert systems, have advanced reasoning under uncertainty in medicine, business, and social sciences. This innovative volume is the first comprehensive treatment exploring how they can be applied to design and analyze innovative educational assessments. Part I develops Bayes nets’ foundations in assessment, statistics, and graph theory, and works through the real-time updating algorithm. Part II addresses parametric forms for use with assessment, model-checking techniques, and estimation with the EM algorithm and Markov chain Monte Carlo (MCMC). A unique feature is the volume’s grounding in Evidence-Centered Design (ECD) framework for assessment design. This “design forward” approach enables designers to take full advantage of Bayes nets’ modularity and ability to model complex evidentiary relationships that arise from performance in interactive, technology-rich assessments such as simulations. Part III describes ECD, situates Bayes nets as ...

  10. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new...... sections, in addition to fully-updated examples, tables, figures, and a revised appendix. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods nor does it discuss alternative technologies for reasoning...... under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his or her level of understanding. The techniques and methods presented on model construction and verification, modeling techniques and tricks, learning...

  11. On Bayesian System Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen Ringi, M.

    1995-05-01

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs.

  12. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  13. Bayesian Kernel Mixtures for Counts.

    Science.gov (United States)

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
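
    A quick sketch of the generative side of a rounded-kernel count model, assuming fixed thresholds a_j = j and invented mixture settings: a single narrow component yields variance below the mean (which a Poisson cannot), and two components yield multimodality:

```python
import numpy as np

rng = np.random.default_rng(5)

# Rounded mixture of Gaussians as a count distribution: draw a latent z
# from the mixture, then y = j if z falls in [j, j+1), truncated at 0.
# Mixture settings are invented; thresholds a_j = j are assumed fixed.
weights = np.array([0.6, 0.4])
means = np.array([2.2, 5.5])
sds = np.array([0.4, 0.8])

comp = rng.choice(2, size=100_000, p=weights)
z = rng.normal(means[comp], sds[comp])
y = np.clip(np.floor(z), 0, None).astype(int)

# The mixture is bimodal, which a single Poisson cannot capture ...
print("mixture: mean", y.mean().round(2), "variance", y.var().round(2))
# ... and a single narrow component is under-dispersed (variance < mean).
y0 = y[comp == 0]
print("component 0: mean", y0.mean().round(2), "variance", y0.var().round(2))
```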

  14. On Bayesian System Reliability Analysis

    International Nuclear Information System (INIS)

    Soerensen Ringi, M.

    1995-01-01

    The view taken in this thesis is that reliability, the probability that a system will perform a required function for a stated period of time, depends on a person's state of knowledge. Reliability changes as this state of knowledge changes, i.e. when new relevant information becomes available. Most existing models for system reliability prediction are developed in a classical framework of probability theory and they overlook some information that is always present. Probability is just an analytical tool to handle uncertainty, based on judgement and subjective opinions. It is argued that the Bayesian approach gives a much more comprehensive understanding of the foundations of probability than the so called frequentistic school. A new model for system reliability prediction is given in two papers. The model encloses the fact that component failures are dependent because of a shared operational environment. The suggested model also naturally permits learning from failure data of similar components in non identical environments. 85 refs

  15. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

    In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing...... behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain form of profit (and...

  16. Referral expectations of radiology

    International Nuclear Information System (INIS)

    Smith, W.L.; Altmaier, E.; Berberoglu, L.; Morris, K.

    1989-01-01

    The expectations of the referring physician are key to developing a successful practice in radiology. Structured interviews with 17 clinicians in both community care and academic practice documented that accuracy of the radiologic report was the single most important factor in clinician satisfaction. Data intercorrelation showed that accuracy of report correlated with frequency of referral (r = .49). Overall satisfaction of the referring physician with radiology correlated with accuracy (r = .69), patient satisfaction (r = .36), and efficiency in archiving (r = .42). These data may be weighted by departmental managers to allocate resources for improving referring physician satisfaction

  17. Bayesian multioutput feedforward neural networks comparison: a conjugate prior approach.

    Science.gov (United States)

    Rossi, Vivien; Vila, Jean-Pierre

    2006-01-01

    A Bayesian method for the comparison and selection of multioutput feedforward neural network topology, based on the predictive capability, is proposed. As a measure of the prediction fitness potential, an expected utility criterion is considered which is consistently estimated by a sample-reuse computation. As opposed to classic point-prediction-based cross-validation methods, this expected utility is defined from the logarithmic score of the neural model predictive probability density. It is shown how the advocated choice of a conjugate probability distribution as prior for the parameters of a competing network, allows a consistent approximation of the network posterior predictive density. A comparison of the performances of the proposed method with the performances of usual selection procedures based on classic cross-validation and information-theoretic criteria, is performed first on a simulated case study, and then on a well known food analysis dataset.

  18. Optimal Experimental Design for Large-Scale Bayesian Inverse Problems

    KAUST Repository

    Ghattas, Omar

    2014-01-06

    We develop a Bayesian framework for the optimal experimental design of the shock tube experiments which are being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental set-ups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and extending experimental design methodology to the cases where the control parameters are noisy.

  19. Robust Bayesian analysis of an autoregressive model with ...

    African Journals Online (AJOL)

    In this work, robust Bayesian analysis of the Bayesian estimation of an autoregressive model with exponential innovations is performed. Using a Bayesian robustness methodology, we show that, using a suitable generalized quadratic loss, we obtain optimal Bayesian estimators of the parameters corresponding to the ...

  20. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  1. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark

    2006-01-01

    We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference...... by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing successful compilation and efficient inference on relational Bayesian networks, whose PRIMULA--generated propositional instances have thousands of variables, and whose jointrees have clusters...

  2. A Bayesian approach to meta-analysis of plant pathology studies.

    Science.gov (United States)

    Mila, A L; Ngugi, H K

    2011-01-01

    Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allow for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.

  3. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    International Nuclear Information System (INIS)

    Chen, Peng; Schwab, Christoph

    2016-01-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data

  4. ATLAS: Exceeding all expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    “One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS.   The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...

  5. Probability via expectation

    CERN Document Server

    Whittle, Peter

    1992-01-01

    This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...

  6. Gender Roles and Expectations

    Directory of Open Access Journals (Sweden)

    Susana A. Eisenchlas

    2013-09-01

    One consequence of the advent of cyber communication is that increasing numbers of people go online to ask for, obtain, and presumably act upon advice dispensed by unknown peers. Just as advice seekers may not have access to information about the identities, ideologies, and other personal characteristics of advice givers, advice givers are equally ignorant about their interlocutors except for the bits of demographic information that the latter may offer freely. In the present study, that information concerns sex. As the sex of the advice seeker may be the only, or the predominant, contextual variable at hand, it is expected that that identifier will guide advice givers in formulating their advice. The aim of this project is to investigate whether and how the sex of advice givers and receivers affects the type of advice, through the empirical analysis of a corpus of web-based Spanish language forums on personal relationship difficulties. The data revealed that, in the absence of individuating information beyond that implicit in the advice request, internalized gender expectations along the lines of agency and communality are the sources from which advice givers draw to guide their counsel. This is despite the trend in discursive practices used in formulating advice, suggesting greater language convergence across sexes.

  7. Maximal supergravities and the E10 model

    International Nuclear Information System (INIS)

    Kleinschmidt, Axel; Nicolai, Hermann

    2006-01-01

    The maximal rank hyperbolic Kac-Moody algebra e10 has been conjectured to play a prominent role in the unification of duality symmetries in string and M theory. We review some recent developments supporting this conjecture

  8. Maximizing Function through Intelligent Robot Actuator Control

    Data.gov (United States)

    National Aeronautics and Space Administration — Successful missions to Mars and beyond will only be possible with the support of high-performance...

  9. Independent Component Analysis by Entropy Maximization (INFOMAX)

    National Research Council Canada - National Science Library

    Garvey, Jennie H

    2007-01-01

    ... (BSS). The Infomax method separates unknown source signals from a number of signal mixtures by maximizing the entropy of a transformed set of signal mixtures and is accomplished by performing gradient ascent in MATLAB...
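
    A compact NumPy sketch of the Bell-Sejnowski natural-gradient Infomax update (the record mentions a MATLAB implementation); the sources are drawn super-Gaussian (Laplace) to match the logistic nonlinearity, and the mixing matrix is invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two independent super-Gaussian (Laplace) sources and an invented mixing.
S = rng.laplace(size=(2, 5000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S                                    # observed signal mixtures

# Bell-Sejnowski Infomax, natural-gradient form:
#   W <- W + lr * (I + (1 - 2*g(U)) U^T / n) W,  U = W X,  g = sigmoid.
# Maximizing the entropy of g(U) drives the outputs toward independence.
W = np.eye(2)
lr, n = 0.01, X.shape[1]
for _ in range(500):
    U = W @ X
    g = 1.0 / (1.0 + np.exp(-U))
    W += lr * (np.eye(2) + (1.0 - 2.0 * g) @ U.T / n) @ W

# Success looks like a scaled permutation matrix here.
print("W @ A =\n", W @ A)
```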

  10. Bipartite Bell Inequality and Maximal Violation

    International Nuclear Information System (INIS)

    Li Ming; Fei Shaoming; Li-Jost Xian-Qing

    2011-01-01

    We present new Bell inequalities for arbitrary dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. The Bell inequality is capable of detecting quantum entanglement of both pure and mixed quantum states more effectively. (general)

  11. A definition of maximal CP-violation

    International Nuclear Information System (INIS)

    Roos, M.

    1985-01-01

    The unitary matrix of quark flavour mixing is parametrized in a general way, permitting a mathematically natural definition of maximal CP violation. Present data turn out to violate this definition by 2-3 standard deviations. (orig.)

  12. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise.

    Directory of Open Access Journals (Sweden)

    Johannes Burge

    2017-02-01

    Full Text Available Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA's compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA's unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of

  13. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise.

    Science.gov (United States)

    Burge, Johannes; Jaini, Priyank

    2017-02-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA's compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA's unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and
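
    AMA's actual objective is the Bayesian ideal-observer cost described above; as a deliberately simplified stand-in, the sketch below shows only the mechanics of stochastic gradient descent on a single linear filter that estimates a latent variable under a squared-error cost. The data, dimensions, and learning rate are invented for illustration.

```python
# Schematic SGD for filter learning (NOT the AMA objective): fit one
# linear filter to estimate a latent variable by squared-error loss.
import numpy as np

rng = np.random.default_rng(0)
n_stim, dim = 1000, 50
latent = rng.uniform(-1.0, 1.0, n_stim)          # latent variable to estimate
true_f = rng.standard_normal(dim)
true_f /= np.linalg.norm(true_f)                 # unit-norm generating filter
stimuli = np.outer(latent, true_f) + 0.1 * rng.standard_normal((n_stim, dim))

f = np.zeros(dim)                                # learned filter (receptive field)
lr = 0.1
for epoch in range(20):
    for i in rng.permutation(n_stim):            # stochastic (per-stimulus) updates
        err = f @ stimuli[i] - latent[i]         # estimation error for this stimulus
        f -= lr * err * stimuli[i]               # gradient of 0.5 * err**2
print(np.corrcoef(stimuli @ f, latent)[0, 1])    # should approach 1
```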

  14. Analysis of Climate Change on Hydrologic Components by using Bayesian Neural Networks

    Science.gov (United States)

    Kang, K.

    2012-12-01

    Representing hydrologic responses to climate change is a challenging task. Hydrologic outputs from regional climate models (RCMs) nested in general circulation models (GCMs) are difficult to represent because of the many uncertainties in the hydrologic impacts of climate change. To overcome this problem, this research presents practical options for analyzing hydrological climate change with a Bayesian Neural network approach to regional adaptation to climate change. Applying Bayesian Neural network analysis to hydrologic components of the climate system is a new frontier of research on expected climate change. A strong advantage of Bayesian Neural networks is in detecting time-series behavior of hydrologic components, which is complicated by uncertainty in data, parameters, and model hypotheses under climate change scenarios; the network structure is changed stepwise by removing and adding connections, combining the Bayesian concepts of parameter estimation, prediction, and updating. As an example study, the Mekong River Watershed, which is shared by four countries (Myanmar, Laos, Thailand, and Cambodia), is selected. Results will show the trends of hydrologic components in climate model simulations as understood through Bayesian Neural networks.

  15. Quantum Bayesian rule for weak measurements of qubits in superconducting circuit QED

    International Nuclear Information System (INIS)

    Wang, Peiyue; Qin, Lupei; Li, Xin-Qi

    2014-01-01

    Compared with the quantum trajectory equation (QTE), the quantum Bayesian approach has the advantage of being more efficient at inferring a quantum state under monitoring, based on the integrated output of measurements. For weak measurement of qubits in circuit quantum electrodynamics (cQED), properly accounting for the measurement backaction effects within the Bayesian framework is an important problem of current interest. Elegant work towards this task was carried out by Korotkov in 'bad-cavity' and weak-response limits (Korotkov 2011 Quantum Bayesian approach to circuit QED measurement (arXiv:1111.4016)). In the present work, based on insights from the cavity-field states (dynamics) and with the help of an effective QTE, we generalize the results of Korotkov to more general system parameters. The obtained Bayesian rule is in full agreement with Korotkov's result in limiting cases and also holds satisfactory accuracy in non-limiting cases in comparison with the QTE simulations. We expect the proposed Bayesian rule to be useful for future cQED measurement and control experiments. (paper)

  16. Bayesian estimation and modeling: Editorial to the second special issue on Bayesian data analysis.

    Science.gov (United States)

    Chow, Sy-Miin; Hoijtink, Herbert

    2017-12-01

    This editorial accompanies the second special issue on Bayesian data analysis published in this journal. The emphases of this issue are on Bayesian estimation and modeling. In this editorial, we outline the basics of current Bayesian estimation techniques and some notable developments in the statistical literature, as well as adaptations and extensions by psychological researchers to better tailor to the modeling applications in psychology. We end with a discussion on future outlooks of Bayesian data analysis in psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Collaborative decision-analytic framework to maximize resilience of tidal marshes to climate change

    Science.gov (United States)

    Thorne, Karen M.; Mattsson, Brady J.; Takekawa, John Y.; Cummings, Jonathan; Crouse, Debby; Block, Giselle; Bloom, Valary; Gerhart, Matt; Goldbeck, Steve; Huning, Beth; Sloop, Christina; Stewart, Mendel; Taylor, Karen; Valoppi, Laura

    2015-01-01

    Decision makers who are responsible for stewardship of natural resources face many challenges, which are complicated by uncertainty about impacts from climate change, expanding human development, and intensifying land uses. A systematic process for evaluating the social and ecological risks, trade-offs, and cobenefits associated with future changes is critical to maximize resilience and conserve ecosystem services. This is particularly true in coastal areas where human populations and landscape conversion are increasing, and where intensifying storms and sea-level rise pose unprecedented threats to coastal ecosystems. We applied collaborative decision analysis with a diverse team of stakeholders who preserve, manage, or restore tidal marshes across the San Francisco Bay estuary, California, USA, as a case study. Specifically, we followed a structured decision-making approach, and, using expert judgment, we developed alternative management strategies to increase the capacity and adaptability to manage tidal marsh resilience while considering uncertainties through 2050. Because sea-level rise projections are relatively certain through 2050, we focused on uncertainties regarding intensity and frequency of storms and funding. Elicitation methods allowed us to make predictions in the absence of fully compatible models and to assess short- and long-term trade-offs. Specifically we addressed two questions. (1) Can collaborative decision analysis lead to consensus among a diverse set of decision makers responsible for environmental stewardship and faced with uncertainties about climate change, funding, and stakeholder values? (2) What is an optimal strategy for the conservation of tidal marshes, and what strategy is robust to the aforementioned uncertainties? We found that when taking this approach, consensus was reached among the stakeholders about the best management strategies to maintain tidal marsh integrity. A Bayesian decision network revealed that a strategy

  18. Collaborative decision-analytic framework to maximize resilience of tidal marshes to climate change

    Directory of Open Access Journals (Sweden)

    Karen M. Thorne

    2015-03-01

    Full Text Available Decision makers who are responsible for stewardship of natural resources face many challenges, which are complicated by uncertainty about impacts from climate change, expanding human development, and intensifying land uses. A systematic process for evaluating the social and ecological risks, trade-offs, and cobenefits associated with future changes is critical to maximize resilience and conserve ecosystem services. This is particularly true in coastal areas where human populations and landscape conversion are increasing, and where intensifying storms and sea-level rise pose unprecedented threats to coastal ecosystems. We applied collaborative decision analysis with a diverse team of stakeholders who preserve, manage, or restore tidal marshes across the San Francisco Bay estuary, California, USA, as a case study. Specifically, we followed a structured decision-making approach, and, using expert judgment, we developed alternative management strategies to increase the capacity and adaptability to manage tidal marsh resilience while considering uncertainties through 2050. Because sea-level rise projections are relatively certain through 2050, we focused on uncertainties regarding intensity and frequency of storms and funding. Elicitation methods allowed us to make predictions in the absence of fully compatible models and to assess short- and long-term trade-offs. Specifically we addressed two questions. (1) Can collaborative decision analysis lead to consensus among a diverse set of decision makers responsible for environmental stewardship and faced with uncertainties about climate change, funding, and stakeholder values? (2) What is an optimal strategy for the conservation of tidal marshes, and what strategy is robust to the aforementioned uncertainties? We found that when taking this approach, consensus was reached among the stakeholders about the best management strategies to maintain tidal marsh integrity. A Bayesian decision network revealed that a

  19. Energy providers: customer expectations

    International Nuclear Information System (INIS)

    Pridham, N.F.

    1997-01-01

    The deregulation of the gas and electric power industries, and how it will impact on customer service and pricing rates was discussed. This paper described the present situation, reviewed core competencies, and outlined future expectations. The bottom line is that major energy consumers are very conscious of energy costs and go to great lengths to keep them under control. At the same time, solutions proposed to reduce energy costs must benefit all classes of consumers, be they industrial, commercial, institutional or residential. Deregulation and competition at an accelerated pace is the most likely answer. This may be forced by external forces such as foreign energy providers who are eager to enter the Canadian energy market. It is also likely that the competition and convergence between gas and electricity is just the beginning, and may well be overshadowed by other deregulated industries as they determine their core competencies

  20. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
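
    A one-screen toy illustration of the record's point, with invented numbers: when there are finitely many candidate models, model uncertainty is just a discrete parameter updated by Bayes' rule.

```python
# Posterior over a finite set of alternative models: model uncertainty
# reduces to ordinary parameter uncertainty over a discrete model index.
import numpy as np

priors = np.array([0.5, 0.3, 0.2])           # P(M_k): prior over three models
evidence = np.array([0.02, 0.10, 0.05])      # P(data | M_k), assumed precomputed
posterior = priors * evidence
posterior /= posterior.sum()                 # P(M_k | data) by Bayes' rule
print(posterior)                             # [0.2, 0.6, 0.2]
```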

  1. Bayesian analysis for the social sciences

    CERN Document Server

    Jackman, Simon

    2009-01-01

    Bayesian methods are increasingly being used in the social sciences, as the problems encountered lend themselves so naturally to the subjective qualities of Bayesian methodology. This book provides an accessible introduction to Bayesian methods, tailored specifically for social science students. It contains lots of real examples from political science, psychology, sociology, and economics, exercises in all chapters, and detailed descriptions of all the key concepts, without assuming any background in statistics beyond a first course. It features examples of how to implement the methods using WinBUGS - the most widely used Bayesian analysis software in the world - and R, an open-source statistical software environment. The book is supported by a website featuring WinBUGS and R code, and data sets.

  2. The Origins of Bayesian Statistics

    OpenAIRE

    新家, 健精

    1991-01-01


  3. An overview on Approximate Bayesian computation*

    Directory of Open Access Journals (Sweden)

    Baragatti Meïli

    2014-01-01

    Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since their introduction about ten years ago in population genetics.
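
    A minimal rejection-ABC sketch of the idea: simulate from the prior and keep parameter draws whose simulated summary statistic lands close to the observed one. The toy Gaussian example, summary statistic, and tolerance below are illustrative choices of mine, not from the overview.

```python
# Likelihood-free (ABC) rejection sampling for a toy normal-mean problem:
# we can simulate data under theta but pretend the likelihood is intractable.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(3.0, 1.0, 100)       # "real" data, true mean = 3
s_obs = observed.mean()                    # summary statistic

accepted = []
for _ in range(100_000):
    theta = rng.uniform(-10.0, 10.0)       # draw from the prior
    sim = rng.normal(theta, 1.0, 100)      # simulate a dataset under theta
    if abs(sim.mean() - s_obs) < 0.05:     # keep if summaries are close
        accepted.append(theta)

print(np.mean(accepted), np.std(accepted)) # approximate posterior mean and sd
```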

  4. Implementing the Bayesian paradigm in risk analysis

    International Nuclear Information System (INIS)

    Aven, T.; Kvaloey, J.T.

    2002-01-01

    The Bayesian paradigm comprises a unified and consistent framework for analyzing and expressing risk. Yet, we see rather few examples of applications where the full Bayesian setting has been adopted with specifications of priors of unknown parameters. In this paper, we discuss some of the practical challenges of implementing Bayesian thinking and methods in risk analysis, emphasizing the introduction of probability models and parameters and associated uncertainty assessments. We conclude that there is a need for a pragmatic view in order to 'successfully' apply the Bayesian approach, such that we can do the assignments of some of the probabilities without adopting the somewhat sophisticated procedure of specifying prior distributions of parameters. A simple risk analysis example is presented to illustrate ideas

  5. A Bayesian concept learning approach to crowdsourcing

    DEFF Research Database (Denmark)

    Viappiani, P.; Zilles, S.; Hamilton, H.J.

    2011-01-01

    We develop a Bayesian approach to concept learning for crowdsourcing applications. A probabilistic belief over possible concept definitions is maintained and updated according to (noisy) observations from experts, whose behaviors are modeled using discrete types. We propose recommendation...

  6. An Intuitive Dashboard for Bayesian Network Inference

    International Nuclear Information System (INIS)

    Reddy, Vikas; Farr, Anna Charisse; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K D V

    2014-01-01

    Current Bayesian network software packages provide a good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on their cause-and-effect relationships, making the user interaction more intuitive and friendly. In addition to performing various types of inferences, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the QT and SMILE libraries in C++

  7. A Bayesian Network Approach to Ontology Mapping

    National Research Council Canada - National Science Library

    Pan, Rong; Ding, Zhongli; Yu, Yang; Peng, Yun

    2005-01-01

    .... In this approach, the source and target ontologies are first translated into Bayesian networks (BN); the concept mapping between the two ontologies are treated as evidential reasoning between the two translated BNs...

  8. Learning Bayesian networks for discrete data

    KAUST Repository

    Liang, Faming

    2009-02-01

    Bayesian networks have received much attention in the recent literature. In this article, we propose an approach to learn Bayesian networks using the stochastic approximation Monte Carlo (SAMC) algorithm. Our approach has two nice features. Firstly, it possesses a self-adjusting mechanism and thus essentially avoids the local-trap problem suffered by conventional MCMC simulation-based approaches to learning Bayesian networks. Secondly, it falls into the class of dynamic importance sampling algorithms: the network features can be inferred by dynamically weighted averaging of the samples generated in the learning process, and the resulting estimates can have much lower variation than single-model-based estimates. The numerical results indicate that our approach can mix much faster over the space of Bayesian networks than conventional MCMC simulation-based approaches. © 2008 Elsevier B.V. All rights reserved.

  9. Body size and body esteem in women: the mediating role of possible self expectancy.

    Science.gov (United States)

    Dalley, Simon E; Pollet, Thomas V; Vidal, Jose

    2013-06-01

    We predicted that an expectancy of acquiring a feared fat self and an expectancy of acquiring a hoped-for thin self both mediate the impact of body size on women's body esteem. We also predicted that the mediating pathway through the feared fat self would be stronger than that through the hoped-for thin self. A community sample of 251 women reported their age, height, weight, and completed measures of body esteem and expectancy perceptions of acquiring the feared fat and hoped-for thin selves. Bayesian Structural Equation Modeling (SEM) demonstrated that expectancies about the feared fat self and about the hoped-for thin self mediated the relationship between body size and body esteem. Bayesian SEM also revealed that the pathway through the feared fat self was stronger than that through the hoped-for thin self. Implications for future research and the development of eating pathology are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Bayesian networks for management of industrial risk

    International Nuclear Information System (INIS)

    Munteanu, P.; Debache, G.; Duval, C.

    2008-01-01

    This article presents the outlines of Bayesian networks modelling and argues for their interest in the probabilistic studies of industrial risk and reliability. A practical case representative of this type of study is presented in support of the argumentation. The article concludes on some research tracks aiming at improving the performances of the methods relying on Bayesian networks and at widening their application area in risk management. (authors)

  11. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov Chain Monte Carlo methods (MCMC). The MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate the convergence.
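
    To make the record's main tool concrete, here is a minimal random-walk Metropolis-Hastings sampler for a toy one-parameter posterior; the target density, proposal width, and burn-in are illustrative choices, not the article's setup.

```python
# Random-walk Metropolis-Hastings on a toy unnormalized log-posterior:
# standard normal prior times a Gaussian likelihood for one observation.
import numpy as np

def log_target(theta):
    # prior N(0, 1) plus likelihood of y = 1.5 with variance 0.25
    return -0.5 * theta**2 - 0.5 * (1.5 - theta) ** 2 / 0.25

rng = np.random.default_rng(0)
theta, chain = 0.0, []
for _ in range(50_000):
    prop = theta + rng.normal(0.0, 0.5)   # symmetric proposal (width is a tuning choice)
    if np.log(rng.uniform()) < log_target(prop) - log_target(theta):
        theta = prop                      # accept the move
    chain.append(theta)

print(np.mean(chain[5_000:]))             # ~1.2, the analytic posterior mean
```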

  12. Fully probabilistic design of hierarchical Bayesian models

    Czech Academy of Sciences Publication Activity Database

    Quinn, A.; Kárný, Miroslav; Guy, Tatiana Valentine

    2016-01-01

    Vol. 369, No. 1 (2016), pp. 532-547. ISSN 0020-0255. R&D Projects: GA ČR GA13-13502S. Institutional support: RVO:67985556. Keywords: Fully probabilistic design * Ideal distribution * Minimum cross-entropy principle * Bayesian conditioning * Kullback-Leibler divergence * Bayesian nonparametric modelling. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 4.832, year: 2016. http://library.utia.cas.cz/separaty/2016/AS/karny-0463052.pdf

  13. Capturing Business Cycles from a Bayesian Viewpoint

    OpenAIRE

    大鋸, 崇

    2011-01-01

    This paper is a survey of empirical studies analyzing business cycles from the perspective of Bayesian econometrics. Kim and Nelson (1998) use a hybrid model: the dynamic factor model of Stock and Watson (1989) and the Markov switching model of Hamilton (1989). From this point of view, dealing with non-linear and non-Gaussian econometric models has recently become more important. Although the classical econometric approaches have difficulty with these models, Bayesian ones handle them easily. The fact leads heavy u...

  14. Variations on Bayesian Prediction and Inference

    Science.gov (United States)

    2016-05-09

    There are a number of statistical inference problems that are not generally formulated via a full probability model. ... For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.

  15. A Bayesian classifier for symbol recognition

    OpenAIRE

    Barrat , Sabine; Tabbone , Salvatore; Nourrissier , Patrick

    2007-01-01

    URL : http://www.buyans.com/POL/UploadedFile/134_9977.pdf; International audience; We present in this paper an original adaptation of Bayesian networks to the symbol recognition problem. More precisely, we present a descriptor combination method which significantly improves the recognition rate compared to the rates obtained by each descriptor alone. In this perspective, we use a simple Bayesian classifier, called naive Bayes. In fact, probabilistic graphical models, more spec...

  16. Bayesian Inference of Tumor Hypoxia

    Science.gov (United States)

    Gunawan, R.; Tenti, G.; Sivaloganathan, S.

    2009-12-01

    Tumor hypoxia is a state of oxygen deprivation in tumors. It has been associated with aggressive tumor phenotypes and with increased resistance to conventional cancer therapies. In this study, we report on the application of Bayesian sequential analysis in estimating the most probable value of tumor hypoxia quantification based on immunohistochemical assays of a biomarker. The `gold standard' of tumor hypoxia assessment is a direct measurement of pO2 in vivo by the Eppendorf polarographic electrode, which is an invasive technique restricted to accessible sites and living tissues. An attractive alternative is immunohistochemical staining to detect proteins expressed by cells during hypoxia. Carbonic anhydrase IX (CAIX) is an enzyme expressed on the cell membrane during hypoxia to balance the immediate extracellular microenvironment. CAIX is widely regarded as a surrogate marker of chronic hypoxia in various cancers. The study was conducted with two different experimental procedures. The first data set was a group of three patients with invasive cervical carcinomas, from which five biopsies were obtained. Each of the biopsies was fully sectioned and, from each section, the proportion of CAIX-positive cells was estimated. Measurements were made by image analysis of multiple deep sections cut through these biopsies, labeled for CAIX using both immunofluorescence and immunohistochemical techniques [1]. The second data set was a group of 24 patients, also with invasive cervical carcinomas, from which two biopsies were obtained. Bayesian parameter estimation was applied to obtain a reliable inference about the proportion of CAIX-positive cells within the carcinomas, based on the available biopsies. From the first data set, two to three biopsies were found to be sufficient to infer the overall CAIX percentage in the simple form: best estimate ± uncertainty. The second data set led to a similar result in 70% of the cases. In the remaining cases Bayes' theorem warned us
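
    The "best estimate ± uncertainty" summary falls out of conjugate sequential updating; a hedged sketch of the mechanics with invented cell counts (not the study's data) might look like this:

```python
# Sequential Beta-Binomial updating of a proportion of CAIX-positive
# cells, one biopsy at a time. All counts are invented for illustration.
import numpy as np

a, b = 1.0, 1.0                                  # flat Beta(1, 1) prior
biopsies = [(38, 120), (45, 130), (40, 110)]     # (positive cells, total cells)
for pos, total in biopsies:
    a += pos                                     # conjugate update with new biopsy
    b += total - pos
    mean = a / (a + b)                           # posterior mean
    sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # posterior std. dev.
    print(f"after this biopsy: {mean:.3f} +/- {sd:.3f}")
```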

  17. Philosophy and the practice of Bayesian statistics.

    Science.gov (United States)

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2013-02-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework. © 2012 The British Psychological Society.

  18. Philosophy and the practice of Bayesian statistics

    Science.gov (United States)

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2015-01-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework. PMID:22364575

  19. Who are you expecting? Biases in face perception reveal prior expectations for sex and age.

    Science.gov (United States)

    Watson, Tamara Lea; Otsuka, Yumiko; Clifford, Colin Walter Giles

    2016-01-01

    A person's appearance contains a wealth of information, including indicators of their sex and age. Because first impressions can set the tone of subsequent relationships, it is crucial we form an accurate initial impression. Yet prior expectation can bias our decisions: Studies have reported biases to respond "male" when asked to report a person's sex from an image of their face and to place their age closer to their own. Perceptual expectation effects and cognitive response biases may both contribute to these inaccuracies. The current research used a Bayesian modeling approach to establish the perceptual biases involved when estimating the sex and age of an individual from their face. We demonstrate a perceptual bias for male and older faces evident under conditions of uncertainty. This suggests the well-established male bias is perceptual in origin and may be impervious to cognitive control. In comparison, the own age anchor effect is not operationalized at the perceptual level: The perceptual expectation is for a face of advanced age. Thus, distinct biases in the estimation of age operate at the perceptual and cognitive levels.

  20. Challenges, fears and expectations

    Directory of Open Access Journals (Sweden)

    Jože Urbanija

    1997-01-01

    Full Text Available The author deals with developmental projections of the future. In the light of the accelerating succession of periods of civilization, we can speak of a post-information era, or at least of its characteristics, which are directly related to basic questions of life and society and to problems of survival concerning library and information science. Between globalization and individualization, users will turn from sheer quantity of information to selective, high-quality information. In content, this information will relate to the basic facts of life and society and the problems of survival (globalization), while on the other hand it will be more focused, more confined in content, and of the higher quality necessary for individual performance and survival (individualization). Against the background of these developments, even more fears than today will arise, related to loss of work, loss of professional identity, loss of a sense of meaning, and loss of economic and social standing. From the actual situation and from the above-mentioned fears, expectations will arise. Information and communication technologies will make it possible to master the quantity of information and to enable access to it, while human cognitive abilities will remain a bottleneck to its flow in the future as well. An important field of work for librarians in the future will be interdisciplinary connections with the cognitive sciences.

  1. Expectations from ethics

    International Nuclear Information System (INIS)

    Fleming, P.

    2008-01-01

    Prof. Patricia Fleming centred her presentation on ethical expectations in regulating safety for future generations. The challenge is to find a just solution, one that provides for a defensible approach to inter-generational equity. The question of equity is about whether we are permitted to treat generations differently and still meet the demands of justice. And the question must be asked regarding these differences: 'in what ways do they make a moral difference?' She asked what exactly is meant by the ethical principle 'Radioactive waste shall be managed in such a way that predicted impacts on the health of future generations will not be greater than relevant levels of impact that are acceptable today'. Some countries have proposed different standards for different time periods, either implicitly or explicitly. In doing so, have they preserved our standards of justice or have they abandoned them? Prof. Fleming identified six points to provide some moral maps which might be used to negotiate our way to a just solution to the disposal of nuclear waste. (author)

  2. Expectations and speech intelligibility.

    Science.gov (United States)

    Babel, Molly; Russell, Jamie

    2015-05-01

    Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs.

  3. EXONEST: The Bayesian Exoplanetary Explorer

    Directory of Open Access Journals (Sweden)

    Kevin H. Knuth

    2017-10-01

    Full Text Available The fields of astronomy and astrophysics are currently engaged in an unprecedented era of discovery as recent missions have revealed thousands of exoplanets orbiting other stars. While the Kepler Space Telescope mission has enabled most of these exoplanets to be detected by identifying transiting events, exoplanets often exhibit additional photometric effects that can be used to improve the characterization of exoplanets. The EXONEST Exoplanetary Explorer is a Bayesian exoplanet inference engine based on nested sampling and originally designed to analyze archived Kepler Space Telescope and CoRoT (Convection Rotation et Transits planétaires) exoplanet mission data. We discuss the EXONEST software package and describe how it accommodates plug-and-play models of exoplanet-associated photometric effects for the purpose of exoplanet detection, characterization and scientific hypothesis testing. The current suite of models allows for both circular and eccentric orbits in conjunction with photometric effects, such as the primary transit and secondary eclipse, reflected light, thermal emissions, ellipsoidal variations, Doppler beaming and superrotation. We discuss our new efforts to expand the capabilities of the software to include more subtle photometric effects involving reflected and refracted light. We discuss the EXONEST inference engine design and introduce our plans to port the current MATLAB-based EXONEST software package over to the next generation Exoplanetary Explorer, which will be a Python-based open source project with the capability to employ third-party plug-and-play models of exoplanet-related photometric effects.

  4. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  5. A Bayesian method for assessing multiscale species-habitat relationships

    Science.gov (United States)

    Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.

    2017-01-01

    Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information-theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex, or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces computation time of studies. We present a simulation study to validate our method and apply our method to a case study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and
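
    To illustrate the latent-indicator idea on a toy problem: put a discrete prior over candidate scales of one covariate and compute the posterior over that scale indicator. The brute-force enumeration with a BIC approximation below stands in for the paper's reversible-jump MCMC, and the data, scales, and smoothing scheme are all invented.

```python
# Toy "scale selection": the same covariate smoothed at several scales
# competes to explain a response; we compute a posterior over the scale
# indicator by brute force (BIC-approximated marginal likelihoods).
import numpy as np

rng = np.random.default_rng(0)
n, true_scale = 300, 3
raw = rng.standard_normal((n, 10))                  # fine-grained habitat values
scales = [1, 3, 5, 7]
X = {s: raw[:, :s].mean(axis=1) for s in scales}    # crude multi-scale smoothing
y = 2.0 * X[true_scale] + rng.normal(0.0, 0.5, n)   # abundance-like response

log_ml = []
for s in scales:
    coef = np.polyfit(X[s], y, 1)                   # OLS fit at this scale
    rss = np.sum((y - np.polyval(coef, X[s])) ** 2)
    log_ml.append(-0.5 * n * np.log(rss / n) - np.log(n))  # ~ -BIC/2, 2 params

post = np.exp(np.array(log_ml) - max(log_ml))
post /= post.sum()                                  # posterior over scales
print(dict(zip(scales, np.round(post, 3))))         # mass concentrates near scale 3
```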

  6. Bayesian quantification of thermodynamic uncertainties in dense gas flows

    International Nuclear Information System (INIS)

    Merle, X.; Cinnella, P.

    2015-01-01

    A Bayesian inference methodology is developed for calibrating complex equations of state used in numerical fluid flow solvers. Precisely, the input parameters of three equations of state commonly used for modeling the thermodynamic behavior of so-called dense gas flows – i.e. flows of gases characterized by high molecular weights and complex molecules, working in thermodynamic conditions close to the liquid–vapor saturation curve – are calibrated by means of Bayesian inference from reference aerodynamic data for a dense gas flow over a wing section. Flow thermodynamic conditions are such that the gas thermodynamic behavior strongly deviates from that of a perfect gas. With the aim of assessing the proposed methodology, synthetic calibration data – specifically, wall pressure data – are generated by running the numerical solver with a more complex and accurate thermodynamic model. The statistical model used to build the likelihood function includes a model-form inadequacy term, accounting for the gap between the model output associated to the best-fit parameters and the true phenomenon. Results show that, for all of the relatively simple models under investigation, calibrations lead to informative posterior probability density distributions of the input parameters and improve the predictive distribution significantly. Nevertheless, calibrated parameters strongly differ from their expected physical values. The relationship between this behavior and model-form inadequacy is discussed. - Highlights: • Development of a Bayesian inference procedure for calibrating dense-gas flow solvers. • Complex thermodynamic models calibrated by using aerodynamic data for the flow. • Preliminary Sobol analysis used to reduce parameter space. • Piecewise polynomial surrogate model constructed to reduce computational cost. • Calibration results show the crucial role played by model-form inadequacies

  7. Resources and energetics determined dinosaur maximal size.

    Science.gov (United States)

    McNab, Brian K

    2009-07-21

    Some dinosaurs reached masses that were approximately 8 times those of the largest, ecologically equivalent terrestrial mammals. The factors most responsible for setting the maximal body size of vertebrates are resource quality and quantity, as modified by the mobility of the consumer, and the vertebrate's rate of energy expenditure. If the food intake of the largest herbivorous mammals defines the maximal rate at which plant resources can be consumed in terrestrial environments and if that limit applied to dinosaurs, then the large size of sauropods occurred because they expended energy in the field at rates extrapolated from those of varanid lizards, which are approximately 22% of the rates in mammals and 3.6 times the rates of other lizards of equal size. Of 2 species having the same energy income, the species that uses the most energy for mass-independent maintenance of necessity has a smaller size. The larger mass found in some marine mammals reflects a greater resource abundance in marine environments. The presumptively low energy expenditures of dinosaurs potentially permitted Mesozoic communities to support dinosaur biomasses that were up to 5 times those found in mammalian herbivores in Africa today. The maximal size of predatory theropods was approximately 8 tons, which if it reflected the maximal capacity to consume vertebrates in terrestrial environments, corresponds in predatory mammals to a maximal mass less than a ton, which is what is observed. Some coelurosaurs may have evolved endothermy in association with the evolution of feathered insulation and a small mass.

  8. A Bayesian approach to solve proton stopping powers from noisy multi-energy CT data.

    Science.gov (United States)

    Lalonde, Arthur; Bär, Esther; Bouchard, Hugo

    2017-10-01

    SPR using up to five energy bins. In terms of range prediction, Bayesian ETD with four energy bins in realistic conditions reduces proton beam range uncertainties by a factor of up to 1.5 compared to ρe-Z. The Bayesian ETD is shown to be more robust against noise than similar methods and a promising approach to extract SPR from noisy DECT data. With the advent of commercially available multi-energy CT or photon-counting CT scanners, the Bayesian ETD is expected to allow extracting more information and improve the precision of proton therapy beyond DECT. © 2017 American Association of Physicists in Medicine.

  9. Expected years ever married

    Directory of Open Access Journals (Sweden)

    Ryohei Mogi

    2018-04-01

    Full Text Available Background: In the second half of the 20th century, remarkable marriage changes were seen: a great proportion of never married population, high average age at first marriage, and large variance in first marriage timing. Although it is theoretically possible to separate these three elements, disentangling them analytically remains a challenge. Objective: This study's goal is to answer the following questions: Which of the three effects, nonmarriage, delayed marriage, or expansion, has the most impact on nuptiality changes? How does the most influential factor differ by time periods, birth cohorts, and countries? Methods: To quantify nuptiality changes over time, we define the measure 'expected years ever married' (EYEM). We illustrate the use of EYEM, looking at time trends in 15 countries (six countries for cohort analysis) and decompose these trends into three components: scale (the changes in the proportion of never married - nonmarriage), location (the changes in timing of first marriage - delayed marriage), and variance (the changes in the standard deviation of first marriage age - expansion). We used population counts by sex, age, and marital status from national statistical offices and the United Nations database. Results: Results show that delayed marriage is the most influential factor on period EYEM's changes, while nonmarriage has recently begun to contribute to the change in North and West Europe and Canada. Period and cohort analysis complement each other. Conclusions: This study introduces a new index of nuptiality and decomposes its change into the contribution of three components: scale, location, and variance. The decomposition steps presented here offer an open possibility for more elaborate parametric marriage models.
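
    One plausible formalization of the index (a reading of the abstract, not necessarily the authors' exact definition): integrate the proportion ever married over the age window considered, so that scale, location, and variance of the first-marriage schedule each leave a distinct imprint.

```latex
% Hypothetical formalization of 'expected years ever married' (EYEM):
% m(a) is the proportion ever married by exact age a, and [alpha, beta]
% is the age window studied (e.g. 15 to 50). Definition assumed here,
% not taken verbatim from the paper.
\[
  \mathrm{EYEM} \;=\; \int_{\alpha}^{\beta} m(a)\,\mathrm{d}a .
\]
% Nonmarriage lowers the plateau of m(a) (scale), delayed marriage
% shifts m(a) rightward (location), and expansion stretches its rise
% (variance); each changes the integral in a characteristic way.
```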

  10. Expectations from Society

    International Nuclear Information System (INIS)

    Blowers, A.

    2008-01-01

    Prof. A. Blowers observed that the social context within which radioactive waste management is considered has evolved over time. The early period, where radioactive waste was a non-issue, was succeeded by a period of intense conflict over solutions. The contemporary context is more consensual, in which solutions are sought that are both technically sound and socially acceptable. Among the major issues is that of inter-generational equity, embraced in the question: how long can or should our responsibility to the future extend? He pointed out the differences in timescales. On the one hand, geo-scientific timescales are very long term, emphasizing the issue of how far into the future it is possible to make predictions about repository safety. By contrast, socio-cultural timescales are much shorter, focusing on the foreseeable future of one or two generations and raising the issue of how far into the future we should be concerned. He listed the primary expectations from society, which are: safety and security, to alleviate undue burdens on future generations, and flexibility, in order to enable future generations to have a stake in decision making. The need to reconcile the two had led to a contemporary emphasis on phased geological disposal incorporating retrievability. However, the long timescales for implementation of disposal provided for sufficient flexibility without the need for retrievability. Future generations would inevitably have some stake in decision making. Prof. A. Blowers pointed out that society is also concerned with participation in decision making for implementation. The key elements for success are: openness and transparency, a staged process, participation, partnership, benefits to enhance the well-being of communities, and a democratic framework for decision making, including the ratification of key decisions and the right for communities to withdraw from the process up to a predetermined point. This approach for decision making may also have

  11. Macro Expectations, Aggregate Uncertainty, and Expected Term Premia

    DEFF Research Database (Denmark)

    Dick, Christian D.; Schmeling, Maik; Schrimpf, Andreas

    2013-01-01

    Based on individual expectations from the Survey of Professional Forecasters, we construct a realtime proxy for expected term premium changes on long-term bonds. We empirically investigate the relation of these bond term premium expectations with expectations about key macroeconomic variables and inflation rates. Expectations about real macroeconomic variables seem to matter more than expectations about nominal factors. Additional findings on term structure factors suggest that the level and slope factor capture information related to uncertainty about real and nominal macroeconomic prospects

  12. Macro Expectations, Aggregate Uncertainty, and Expected Term Premia

    DEFF Research Database (Denmark)

    Dick, Christian D.; Schmeling, Maik; Schrimpf, Andreas

    Based on individual expectations from the Survey of Professional Forecasters, we construct a realtime proxy for expected term premium changes on long-term bonds. We empirically investigate the relation of these bond term premium expectations with expectations about key macroeconomic variables and inflation rates. Expectations about real macroeconomic variables seem to matter more than expectations about nominal factors. Additional findings on term structure factors suggest that the level and slope factor capture information related to uncertainty about real and nominal macroeconomic prospects

  13. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
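
    A toy sketch of the fixation rule described above: repeatedly fixate the location with the greatest remaining uncertainty, then shrink uncertainty around the fovea according to an acuity falloff. The 1-D stimulus, Gaussian falloff, and update factor are invented; the actual model reconstructs high-resolution images.

```python
# Greedy information-maximization fixation sequence on a 1-D "stimulus":
# information gain is largest at the fovea and falls off in the periphery.
import numpy as np

n_loc = 20
var = np.ones(n_loc)                          # prior uncertainty at each location
offsets = np.arange(-5, 6)
acuity = np.exp(-0.5 * (offsets / 2.0) ** 2)  # fovea-to-periphery falloff (assumed)

for step in range(5):
    fix = int(np.argmax(var))                 # fixate where uncertainty is largest
    for off, gain in zip(offsets, acuity):
        j = fix + off
        if 0 <= j < n_loc:
            var[j] *= 1.0 - 0.9 * gain        # information gained reduces variance
    print(f"fixation {step}: location {fix}, remaining uncertainty {var.sum():.2f}")
```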

  14. Singularity Structure of Maximally Supersymmetric Scattering Amplitudes

    DEFF Research Database (Denmark)

    Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy

    2014-01-01

    We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic ...... singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA)....

  15. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

    ... In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper- and lower-bounded interval in time O(n log n + z), where z is the number of reported pairs. If the upper bound is removed, the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well-known methods for finding tandem repeats.

  16. Social gradient in life expectancy and health expectancy in Denmark

    DEFF Research Database (Denmark)

    Brønnum-Hansen, Henrik; Andersen, Otto; Kjøller, Mette

    2004-01-01

    Health status of a population can be evaluated by health expectancy, expressed as average lifetime in various states of health. The purpose of the study was to compare health expectancy in population groups at high, medium, and low educational levels.

  17. Bayesian tomographic reconstruction of microsystems

    International Nuclear Information System (INIS)

    Salem, Sofia Fekih; Vabre, Alexandre; Mohammad-Djafari, Ali

    2007-01-01

    The microtomography by X-ray transmission plays an increasingly dominant role in the study and understanding of microsystems. Within this framework, an experimental setup for high-resolution X-ray microtomography was developed at CEA-List to quantify the physical parameters related to fluid flow in microsystems. Several difficulties arise from the nature of the experimental data collected on this setup: enhanced measurement errors due to various physical phenomena occurring during image formation (diffusion, beam hardening), and specificities of the setup (limited angle, partial view of the object, weak contrast). To reconstruct the object we must solve an inverse problem. This inverse problem is known to be ill-posed. It therefore needs to be regularized by introducing prior information. The main prior information we account for is that the object is composed of a finite known number of different materials distributed in compact regions. This a priori information is introduced via a Gauss-Markov field for the contrast distributions with a hidden Potts-Markov field for the class materials in the Bayesian estimation framework. The computations are done by using an appropriate Markov Chain Monte Carlo (MCMC) technique. In this paper, we present first the basic steps of the proposed algorithms. Then we focus on one of the main steps in any iterative reconstruction method, which is the computation of the forward and adjoint operators (projection and backprojection). A fast implementation of these two operators is crucial for the real application of the method. We give some details on the fast computation of these steps and show some preliminary results of simulations

  18. Bayesian noninferiority test for 2 binomial probabilities as the extension of Fisher exact test.

    Science.gov (United States)

    Doi, Masaaki; Takahashi, Fumihiro; Kawasaki, Yohei

    2017-12-30

    Noninferiority trials have recently gained importance for the clinical trials of drugs and medical devices. In these trials, most statistical methods have been used from a frequentist perspective, and historical data have been used only for the specification of the noninferiority margin Δ > 0. In contrast, Bayesian methods, which have been studied recently, are advantageous in that they can use historical data to specify prior distributions and are expected to enable more efficient decision making than frequentist methods by borrowing information from historical trials. In the case of noninferiority trials for response probabilities π1, π2, Bayesian methods evaluate the posterior probability of H1: π1 > π2 − Δ being true. To numerically calculate such posterior probability, the complicated Appell hypergeometric function or approximation methods are used. Further, the theoretical relationship between Bayesian and frequentist methods is unclear. In this work, we give the exact expression of the posterior probability of noninferiority under some mild conditions and propose a Bayesian noninferiority test framework which can flexibly incorporate historical data by using the conditional power prior. Further, we show the relationship between the Bayesian posterior probability and the P value of the Fisher exact test. From this relationship, our method can be interpreted as the Bayesian noninferiority extension of the Fisher exact test, and we can treat superiority and noninferiority in the same framework. Our method is illustrated through Monte Carlo simulations to evaluate the operating characteristics, an application to real HIV clinical trial data, and a sample size calculation using historical data. Copyright © 2017 John Wiley & Sons, Ltd.
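
    The paper derives this posterior probability exactly; for intuition, the same quantity can be approximated by simple Monte Carlo under independent Beta posteriors. The counts, flat priors, and margin below are invented for illustration.

```python
# Monte Carlo approximation of P(pi1 > pi2 - Delta | data) under
# independent Beta posteriors for two binomial response probabilities.
import numpy as np

rng = np.random.default_rng(0)
x1, n1 = 78, 100      # responses in the test arm (invented)
x2, n2 = 80, 100      # responses in the control arm (invented)
delta = 0.10          # noninferiority margin

p1 = rng.beta(1 + x1, 1 + n1 - x1, size=1_000_000)  # Beta(1, 1) priors
p2 = rng.beta(1 + x2, 1 + n2 - x2, size=1_000_000)
print((p1 > p2 - delta).mean())   # posterior probability of noninferiority
```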

  19. A Bayesian inference approach to unveil supply curves in electricity markets

    DEFF Research Database (Denmark)

    Mitridati, Lesia Marie-Jeanne Mariane; Pinson, Pierre

    2017-01-01

    With increased competition in wholesale electricity markets, the need for new decision-making tools for strategic producers has arisen. Optimal bidding strategies have traditionally been modeled as stochastic profit maximization problems. However, for producers with non-negligible market power...... of information can be used by a price-maker producer in order to devise an optimal bidding strategy....... in the literature on modeling this uncertainty. In this study we introduce a Bayesian inference approach to reveal the aggregate supply curve in a day-ahead electricity market. The proposed algorithm relies on Markov Chain Monte Carlo and Sequential Monte Carlo methods. The major appeal of this approach...

  20. An Adaptive Data Collection Algorithm Based on a Bayesian Compressed Sensing Framework

    Directory of Open Access Journals (Sweden)

    Zhi Liu

    2014-05-01

    Full Text Available For Wireless Sensor Networks (WSNs), energy efficiency is always a key consideration in system design. Compressed sensing is a new theory with promising prospects in WSNs; however, constructing a suitable sparse projection matrix remains a problem. In this paper, based on a Bayesian compressed sensing framework, a new adaptive algorithm that integrates routing and data collection is proposed. By introducing new target node selection metrics, embedding the routing structure, and maximizing the differential entropy for each collection round, an adaptive projection vector is constructed. Simulations show that, compared to reference algorithms, the proposed algorithm can decrease computational complexity and improve energy efficiency.
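
    A minimal linear-Gaussian sketch of the "maximize differential entropy per collection round" idea follows; the paper's routing structure and target-node selection metrics are omitted, and the candidate projection vectors, prior covariance, and noise variance are assumed given.

        import numpy as np

        def select_projection(Sigma, candidates):
            # The differential entropy of the predicted measurement y = phi^T x
            # grows with its variance phi^T Sigma phi, so pick the maximizer.
            scores = [phi @ Sigma @ phi for phi in candidates]
            return int(np.argmax(scores))

        def posterior_update(Sigma, phi, noise_var=1e-2):
            # Rank-one Gaussian covariance update after observing phi^T x + noise.
            s = Sigma @ phi
            return Sigma - np.outer(s, s) / (phi @ s + noise_var)

        # One collection round: choose a vector, then shrink the posterior.
        Sigma = np.eye(8)
        candidates = [np.eye(8)[i] for i in range(8)]
        k = select_projection(Sigma, candidates)
        Sigma = posterior_update(Sigma, candidates[k])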

  1. Dimensionality reduction in Bayesian estimation algorithms

    Directory of Open Access Journals (Sweden)

    G. W. Petty

    2013-09-01

    Full Text Available An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals – whether Bayesian or not – lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.

  2. Dimensionality reduction in Bayesian estimation algorithms

    Science.gov (United States)

    Petty, G. W.

    2013-09-01

    An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals - whether Bayesian or not - lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.
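
    A rough sketch of the two-stage construction described above: whiten with respect to the background (geophysical plus instrument) noise covariance, then keep the leading principal components of the whitened signal as the M pseudochannels. This is a plausible reading of the procedure, not the paper's exact code; regularization of near-singular covariances is ignored.

        import numpy as np

        def make_pseudochannels(X_signal, X_background, M=1):
            # Stage 1: whitening transform from the background covariance.
            Cb = np.cov(X_background, rowvar=False)
            evals, evecs = np.linalg.eigh(Cb)
            W = evecs / np.sqrt(evals)            # columns scaled by 1/sqrt(eval)
            # Stage 2: PCA of the whitened signal; keep the M leading directions.
            Cs = np.cov(X_signal @ W, rowvar=False)
            _, U = np.linalg.eigh(Cs)
            A = W @ U[:, ::-1][:, :M]
            return A                              # pseudochannels: X @ A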

  3. Classifying emotion in Twitter using Bayesian network

    Science.gov (United States)

    Surya Asriadie, Muhammad; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Language is used to express not only facts but also emotions. Emotions are noticeable in behavior and in the social media statuses written by a person. Analysis of emotions in text is done in a variety of media, such as Twitter. This paper studies the classification of emotions on Twitter using Bayesian networks because of their ability to model uncertainty and relationships between features. The result is two models based on Bayesian networks: the Full Bayesian Network (FBN) and the Bayesian Network with Mood Indicator (BNM). FBN is a massive Bayesian network in which each word is treated as a node. The study shows that the method used to train FBN is not very effective at producing the best model and performs worse than Naive Bayes: the F1-score for FBN is 53.71%, while that for Naive Bayes is 54.07%. BNM is proposed as an alternative method based on an improvement of Multinomial Naive Bayes, with much lower computational complexity than FBN. Even though it is not better than FBN, the resulting model successfully improves the performance of Multinomial Naive Bayes: the F1-score for the Multinomial Naive Bayes model is 51.49%, while for BNM it is 52.14%.
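
    For reference, the Multinomial Naive Bayes baseline that BNM improves on can be sketched in a few lines; Laplace smoothing and a dense document-term count matrix are assumed, and the paper's mood-indicator extension is not reproduced here.

        import numpy as np

        def train_mnb(X, y, alpha=1.0):
            # X: (n_docs, vocab) word counts; y: emotion labels.
            classes = np.unique(y)
            log_prior = np.log([(y == c).mean() for c in classes])
            counts = np.array([X[y == c].sum(axis=0) + alpha for c in classes])
            log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))
            return classes, log_prior, log_lik

        def predict_mnb(X, classes, log_prior, log_lik):
            # Posterior score per class: sum of count-weighted log likelihoods.
            return classes[np.argmax(X @ log_lik.T + log_prior, axis=1)]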

  4. How few? Bayesian statistics in injury biomechanics.

    Science.gov (United States)

    Cutcliffe, Hattie C; Schmidt, Allison L; Lucas, Joseph E; Bass, Cameron R

    2012-10-01

    In injury biomechanics, there are currently no general a priori estimates of how few specimens are necessary to obtain sufficiently accurate injury risk curves for a given underlying distribution. Further, several methods are available for constructing these curves, and recent methods include Bayesian survival analysis. This study used statistical simulations to evaluate the fidelity of different injury risk methods using limited sample sizes across four different underlying distributions. Five risk curve techniques were evaluated, including Bayesian techniques. For the Bayesian analyses, various prior distributions were assessed, each incorporating more accurate information. Simulated subject injury and biomechanical input values were randomly sampled from each underlying distribution, and injury status was determined by comparing these values. Injury risk curves were developed for these data using each technique for various small sample sizes; for each, analyses on 2000 simulated data sets were performed. Resulting median predicted risk values and confidence intervals were compared with the underlying distributions. Across conditions, the standard and Bayesian survival analyses better represented the underlying distributions included in this study, especially for extreme (1, 10, and 90%) risk. This study demonstrates that the value of the Bayesian analysis is the use of informed priors. As the mean of the prior approaches the actual value, the sample size necessary for good reproduction of the underlying distribution with small confidence intervals can be as small as 2. This study provides estimates of confidence intervals and number of samples to allow the selection of the most appropriate sample sizes given known information.

  5. A default Bayesian hypothesis test for mediation.

    Science.gov (United States)

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  6. Computationally efficient Bayesian inference for inverse problems.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.

  7. Maximizing band gaps in plate structures

    DEFF Research Database (Denmark)

    Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard

    2006-01-01

    Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite...

  8. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

    We study a robust maximization problem from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.

  9. Robust Utility Maximization Under Convex Portfolio Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Matoussi, Anis, E-mail: anis.matoussi@univ-lemans.fr [Université du Maine, Risk and Insurance institut of Le Mans Laboratoire Manceau de Mathématiques (France); Mezghani, Hanen, E-mail: hanen.mezghani@lamsin.rnu.tn; Mnif, Mohamed, E-mail: mohamed.mnif@enit.rnu.tn [University of Tunis El Manar, Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT (Tunisia)

    2015-04-15

    We study a robust maximization problem from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.

  10. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…

  11. Maximizing scientific knowledge from randomized clinical trials

    DEFF Research Database (Denmark)

    Gustafsson, Finn; Atar, Dan; Pitt, Bertram

    2010-01-01

    Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...

  12. Maximizing Learning Potential in the Communicative Classroom.

    Science.gov (United States)

    Kumaravadivelu, B.

    1993-01-01

    A classroom observational study is presented to assess whether a macrostrategies framework will help communicative language teaching teachers to maximize learner potential in the classroom. Analysis of two classroom episodes revealed that one episode was evidently more communicative than the other. (seven references) (VWL)

  13. Faculty Salaries and the Maximization of Prestige

    Science.gov (United States)

    Melguizo, Tatiana; Strober, Myra H.

    2007-01-01

    Through the lens of the emerging economic theory of higher education, we look at the relationship between salary and prestige. Starting from the premise that academic institutions seek to maximize prestige, we hypothesize that monetary rewards are higher for faculty activities that confer prestige. We use data from the 1999 National Study of…

  14. Ehrenfest's Lottery--Time and Entropy Maximization

    Science.gov (United States)

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…

  15. A Model of College Tuition Maximization

    Science.gov (United States)

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

    This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit-maximizing, price-discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  16. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

    Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k...
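
    Although the record is truncated, the prototypical example is worth recalling: Kolmogorov's maximal inequality, which for independent, mean-zero X_1, ..., X_n with finite variances states

        \[
          \Pr\Bigl(\max_{1 \le k \le n} |S_k| \ge t\Bigr)
          \;\le\; \frac{\operatorname{Var}(S_n)}{t^2},
          \qquad t > 0,
        \]

    and is the basic tool behind the laws of large numbers mentioned above.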

  17. Coverage maximization under resource constraints using ...

    Indian Academy of Sciences (India)

    2015-02-07

    Feb 7, 2015 ... Dissemination of information has been one of the prime needs in almost every kind of communication network. The existing algorithms for this service, try to maximize the coverage, i.e., the number of distinct nodes to which a given piece of information could be conveyed under the constraints of time and ...

  18. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  19. Implementation of upper limit calculation for a Poisson variable by Bayesian approach

    International Nuclear Information System (INIS)

    Zhu Yongsheng

    2008-01-01

    The calculation of Bayesian confidence upper limit for a Poisson variable including both signal and background with and without systematic uncertainties has been formulated. A Fortran 77 routine, BPULE, has been developed to implement the calculation. The routine can account for systematic uncertainties in the background expectation and signal efficiency. The systematic uncertainties may be separately parameterized by a Gaussian, Log-Gaussian or flat probability density function (pdf). Some technical details of BPULE have been discussed. (authors)
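
    The core of such a calculation, stripped of the systematic uncertainties that BPULE handles, is a one-dimensional integration of the posterior; here is a minimal numerical sketch with a flat prior on the signal and a known background (the routine name and defaults are illustrative, not BPULE's interface).

        import numpy as np
        from scipy.stats import poisson

        def bayes_upper_limit(n_obs, b, cl=0.90, s_max=50.0, n_grid=100_001):
            # Posterior for signal s given n_obs counts and background b:
            # flat prior on s >= 0, so the posterior shape is the likelihood.
            s = np.linspace(0.0, s_max, n_grid)
            post = poisson.pmf(n_obs, s + b)
            cdf = np.cumsum(post)
            cdf /= cdf[-1]
            return s[np.searchsorted(cdf, cl)]   # smallest s with CDF >= cl

        print(bayes_upper_limit(n_obs=3, b=1.2))   # 90% CL upper limit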

  20. Unified Bayesian situation assessment sensor management

    Science.gov (United States)

    El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Alford, M.

    2005-05-01

    Sensor management in support of situation assessment (SA) presents a daunting theoretical and practical challenge. We demonstrate new results using a foundational, joint control-theoretic approach to SA and SA sensor management that is based on three concepts: (1) a "dynamic situational significance map" that mathematically specifies the meaning of tactical significance for a given theater of interest at a given moment; (2) an intuitively meaningful and potentially computationally tractable objective function for SA, namely maximization of the expected number of targets of tactical interest; and (3) integration of these two concepts with approximate multitarget filters (specifically, first-order multitarget moment filters and multi-hypothesis correlator (MHC) engines). Under this approach, sensors are directed to preferentially collect observations from targets of actual or potential tactical significance, according to an adaptively modified definition of tactical significance. Results of testing this sensor management algorithm, with significance maps defined in terms of target location, speed, and heading, are presented. Testing is performed against simulated data, and different sensor management algorithms, including the proposed one, are compared.

  1. Run-time revenue maximization for composite web services with response time commitments

    NARCIS (Netherlands)

    Živković, M.; Bosman, J.W.; Berg, H. van den; Mei, R. van der; Meeuwissen, H.B.; Núñez-Queija, R.

    2012-01-01

    We investigate dynamic decision mechanisms for composite web services maximizing the expected revenue for the providers of composite services. A composite web service is represented by a (sequential) workflow, and for each task within this workflow, a number of service alternatives may be available.

  2. Run-time Revenue Maximization for Composite Web Services with Response Time Commitments

    NARCIS (Netherlands)

    Zivkovic, Miroslav; Bosman, J.W.; van den Berg, Hans Leo; van der Mei, R.D.; Meeuwissen, H.B.; Nunez Queija, R.

    We investigate dynamic decision mechanisms for composite web services maximizing the expected revenue for the providers of composite services. A composite web service is represented by a (sequential) workflow, and for each task within this workflow, a number of service alternatives may be available.

  3. Runtime revenue maximization for composite Web services with response-time commitments

    NARCIS (Netherlands)

    M. Zivkovic; J.W. Bosman (Joost); J.L. van den Berg (Hans); H.B. Meeuwissen; R.D. van der Mei (Rob); R. Núñez Queija (Rudesindo)

    2012-01-01

    We investigate dynamic decision mechanisms for composite web services maximizing the expected revenue for the providers of composite services. A composite web service is represented by a (sequential) workflow, and for each of the tasks within this workflow, a number of service

  4. Application of a Bayesian non-linear model hybrid scheme to sequence data for genomic prediction and QTL mapping.

    Science.gov (United States)

    Wang, Tingting; Chen, Yi-Ping Phoebe; MacLeod, Iona M; Pryce, Jennie E; Goddard, Michael E; Hayes, Ben J

    2017-08-15

    Using whole genome sequence data might improve genomic prediction accuracy, when compared with high-density SNP arrays, and could lead to identification of causal mutations affecting complex traits. For some traits, the most accurate genomic predictions are achieved with non-linear Bayesian methods. However, as the number of variants and the size of the reference population increase, the computational time required to implement these Bayesian methods (typically with Markov chain Monte Carlo sampling) becomes unfeasibly long. Here, we applied a new method, HyB_BR (for Hybrid BayesR), which implements a mixture model of normal distributions and hybridizes an Expectation-Maximization (EM) algorithm followed by Markov Chain Monte Carlo (MCMC) sampling, to genomic prediction in a large dairy cattle population with imputed whole genome sequence data. The imputed whole genome sequence data included 994,019 variant genotypes of 16,214 Holstein and Jersey bulls and cows. Traits included fat yield, milk volume, protein kg, fat% and protein% in milk, as well as fertility and heat tolerance. HyB_BR achieved genomic prediction accuracies as high as the full MCMC implementation of BayesR, both for predicting a validation set of Holstein and Jersey bulls (multi-breed prediction) and a validation set of Australian Red bulls (across-breed prediction). HyB_BR achieved a tenfold reduction in compute time compared with the MCMC implementation of BayesR (48 hours versus 594 hours). We also demonstrate that in many cases HyB_BR identified sequence variants with a high posterior probability of affecting the milk production or fertility traits that were similar to those identified in BayesR. For heat tolerance, both HyB_BR and BayesR found variants in or close to promising candidate genes associated with this trait and not detected by previous studies. The results demonstrate that HyB_BR is a feasible method for simultaneous genomic prediction and QTL mapping with whole genome sequence in
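
    The EM stage of HyB_BR fits a mixture of normal distributions to variant effects; as a reminder of the building block (a generic univariate Gaussian-mixture EM, not the HyB_BR software itself), the updates look like this:

        import numpy as np
        from scipy.stats import norm

        def em_gaussian_mixture(x, n_comp=4, n_iter=100, seed=0):
            rng = np.random.default_rng(seed)
            w = np.full(n_comp, 1.0 / n_comp)
            mu = rng.choice(x, n_comp, replace=False)
            var = np.full(n_comp, x.var())
            for _ in range(n_iter):
                # E-step: responsibility of each component for each point.
                r = w * norm.pdf(x[:, None], mu, np.sqrt(var))
                r /= r.sum(axis=1, keepdims=True)
                # M-step: reweight, re-center, re-scale each component.
                nk = r.sum(axis=0)
                w = nk / len(x)
                mu = (r * x[:, None]).sum(axis=0) / nk
                var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
            return w, mu, var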

  5. Web multimedia information retrieval using improved Bayesian algorithm.

    Science.gov (United States)

    Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

    2003-01-01

    The main thrust of this paper is the application of a novel data mining approach to the log of users' feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve the accuracy of the new information space model. It can remove clutter and irrelevant text information and help to eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weight. The authors propose an improved Bayesian algorithm for data mining. Experiments showed that the proposed algorithm was efficient.

  6. Bayesian genomic selection: the effect of haplotype lengths and priors

    DEFF Research Database (Denmark)

    Villumsen, Trine Michelle; Janss, Luc

    2009-01-01

    Breeding values for animals with marker data are estimated using a genomic selection approach where data are analyzed using Bayesian multi-marker association models. Fourteen model scenarios with varying haplotype lengths, hyperparameter and prior distributions were compared to find the scenario... expected to give the most correct genomic estimated breeding values for animals with marker information only. Five-fold cross validation was performed to assess the ability of models to estimate breeding values for animals in generation 3. In each of the five subsets, 20% of phenotypic records...... well. Correlations were 0.77-0.89 and predicted breeding values were biased. In addition the models seemed to over-fit the genomic part of the variation. The highest correlations and most unbiased results were obtained when SNP markers were joined into haplotypes. Especially the scenarios with 5-SNP...

  7. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example where the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered as a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability.

  8. Bayesian modeling of unknown diseases for biosurveillance.

    Science.gov (United States)

    Shen, Yanna; Cooper, Gregory F

    2009-11-14

    This paper investigates Bayesian modeling of unknown causes of events in the context of disease-outbreak detection. We introduce a Bayesian approach that models and detects both (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which support that this modeling method can improve the detection of new disease outbreaks in a population. A key contribution of this paper is that it introduces a Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has broad applicability in medical informatics, where the space of known causes of outcomes of interest is seldom complete.

  9. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    .... Exploring these new developments, Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology, Second Edition provides an up-to-date, cohesive account of the full range of Bayesian disease mapping methods and applications...

  10. Bayesian Inference in Polling Technique: 1992 Presidential Polls.

    Science.gov (United States)

    Satake, Eiki

    1994-01-01

    Explores the potential utility of Bayesian statistical methods in determining the predictability of multiple polls. Compares Bayesian techniques to the classical statistical method employed by pollsters. Considers these questions in the context of the 1992 presidential elections. (HB)

  11. Radiation Source Mapping with Bayesian Inverse Methods

    Science.gov (United States)

    Hykes, Joshua Michael

    We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods to analyze these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP) since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work by the Denovo code, are required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model to map the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true value. The chi-square value of the predicted source was within a factor of five from the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution.
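
    In its simplest Gaussian form, the linear model described above admits a closed-form posterior; the sketch below shows that core step, with the response matrix A standing in for the adjoint-flux model built with Denovo (the function and variable names are illustrative).

        import numpy as np

        def gaussian_posterior(A, y, noise_cov, prior_mean, prior_cov):
            # Posterior of source q given detector responses y = A q + noise.
            Ni = np.linalg.inv(noise_cov)
            Pi = np.linalg.inv(prior_cov)
            post_cov = np.linalg.inv(A.T @ Ni @ A + Pi)
            post_mean = post_cov @ (A.T @ Ni @ y + Pi @ prior_mean)
            return post_mean, post_cov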

  12. Comparison of Bayesian penalized likelihood reconstruction versus OS-EM for characterization of small pulmonary nodules in oncologic PET/CT.

    Science.gov (United States)

    Howard, Brandon A; Morgan, Rustain; Thorpe, Matthew P; Turkington, Timothy G; Oldan, Jorge; James, Olga G; Borges-Neto, Salvador

    2017-10-01

    To determine whether the recently introduced Bayesian penalized likelihood PET reconstruction (Q.Clear) increases the visual conspicuity and SUVmax of small pulmonary nodules near the PET resolution limit, relative to ordered subset expectation maximization (OS-EM). In this institutional review board-approved and HIPAA-compliant study, 29 FDG PET/CT scans performed on a five-ring GE Discovery IQ were retrospectively selected for pulmonary nodules described in the radiologist's report as "too small to characterize", or small lung nodules in patients at high risk for lung cancer. Thirty-two pulmonary nodules were assessed, with mean CT diameter of 8 mm (range 2-18). PET images were reconstructed with OS-EM and Q.Clear with noise penalty strength β values of 150, 250, and 350. Lesion visual conspicuity was scored by three readers on a 3-point scale, and lesion SUVmax and background liver and blood pool SUVmean and SUVstdev were recorded. Comparison was made by linear mixed model with modified Bonferroni post hoc testing; the significance cutoff was p < 0.05. Lesion visual conspicuity was significantly greater with Q.Clear than with OS-EM at β = 150, and SUVmax was significantly greater than with OS-EM at β = 150 and 250. Q.Clear thus increased the conspicuity and SUVmax of small pulmonary nodules relative to OS-EM reconstruction, but only with low noise penalization. Q.Clear with β = 150 may be advantageous when evaluation of small pulmonary nodules is of primary concern.
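
    For context, the OS-EM baseline referenced above cycles the classic ML-EM update over subsets of the projection data; one full ML-EM step (a sketch assuming a given system matrix, not the scanner vendor's implementation) is:

        import numpy as np

        def mlem_step(x, A, y, eps=1e-12):
            # One ML-EM update for emission tomography with y ~ Poisson(A x).
            proj = A @ x                              # forward projection
            ratio = y / np.maximum(proj, eps)         # measured / estimated
            sens = A.T @ np.ones_like(y)              # sensitivity image
            return x * (A.T @ ratio) / np.maximum(sens, eps)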

  13. Bayesian estimation and tracking a practical guide

    CERN Document Server

    Haug, Anton J

    2012-01-01

    A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noises. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation

  14. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    an infinite mixture model as running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models......Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using...

  15. Motion Learning Based on Bayesian Program Learning

    Directory of Open Access Journals (Sweden)

    Cheng Meng-Zhen

    2017-01-01

    Full Text Available The concept of the virtual human has been highly anticipated since the 1980s. Using computer technology, human motion simulation can generate convincing visual effects that fool the human eye. Bayesian program learning trains on one or a few motion examples and generates new motion data by decomposition and recombination, and the generated motion is more realistic and natural than traditionally synthesized motion. In this paper, motion learning based on Bayesian program learning allows us to quickly generate new motion data, reduce workload, improve work efficiency, reduce the cost of motion capture, and improve the reusability of data.

  16. Bayesian inference and the parametric bootstrap

    Science.gov (United States)

    Efron, Bradley

    2013-01-01

    The parametric bootstrap can be used for the efficient computation of Bayes posterior distributions. Importance sampling formulas take on an easy form relating to the deviance in exponential families, and are particularly simple starting from Jeffreys invariant prior. Because of the i.i.d. nature of bootstrap sampling, familiar formulas describe the computational accuracy of the Bayes estimates. Besides computational methods, the theory provides a connection between Bayesian and frequentist analysis. Efficient algorithms for the frequentist accuracy of Bayesian inferences are developed and demonstrated in a model selection example. PMID:23843930

  17. Length Scales in Bayesian Automatic Adaptive Quadrature

    Directory of Open Access Journals (Sweden)

    Adam Gh.

    2016-01-01

    Full Text Available Two conceptual developments in the Bayesian automatic adaptive quadrature approach to the numerical solution of one-dimensional Riemann integrals [Gh. Adam, S. Adam, Springer LNCS 7125, 1–16 (2012)] are reported. First, it is shown that the numerical quadrature which avoids the overcomputing and minimizes the hidden floating point loss of precision asks for the consideration of three classes of integration domain lengths endowed with specific quadrature sums: microscopic (trapezoidal rule), mesoscopic (Simpson rule), and macroscopic (quadrature sums of high algebraic degrees of precision). Second, sensitive diagnostic tools for the Bayesian inference on macroscopic ranges, coming from the use of Clenshaw-Curtis quadrature, are derived.

  18. Length Scales in Bayesian Automatic Adaptive Quadrature

    Science.gov (United States)

    Adam, Gh.; Adam, S.

    2016-02-01

    Two conceptual developments in the Bayesian automatic adaptive quadrature approach to the numerical solution of one-dimensional Riemann integrals [Gh. Adam, S. Adam, Springer LNCS 7125, 1-16 (2012)] are reported. First, it is shown that the numerical quadrature which avoids the overcomputing and minimizes the hidden floating point loss of precision asks for the consideration of three classes of integration domain lengths endowed with specific quadrature sums: microscopic (trapezoidal rule), mesoscopic (Simpson rule), and macroscopic (quadrature sums of high algebraic degrees of precision). Second, sensitive diagnostic tools for the Bayesian inference on macroscopic ranges, coming from the use of Clenshaw-Curtis quadrature, are derived.

  19. Theoretical maximal storage of hydrogen in zeolitic frameworks.

    Science.gov (United States)

    Vitillo, Jenny G; Ricchiardi, Gabriele; Spoto, Giuseppe; Zecchina, Adriano

    2005-12-07

    Physisorption and encapsulation of molecular hydrogen in tailored microporous materials are two of the options for hydrogen storage. Among these materials, zeolites have been widely investigated. In these materials, the attained storage capacities vary widely with structure and composition, leading to the expectation that materials with improved binding sites, together with lighter frameworks, may represent efficient storage materials. In this work, we address the problem of determining the maximum amount of molecular hydrogen which could, in principle, be stored in a given zeolitic framework, as limited by the size, structure and flexibility of its pore system. To this end, the progressive filling with H2 of 12 purely siliceous models of common zeolite frameworks has been simulated by means of classical molecular mechanics. By monitoring the variation of cell parameters upon progressive filling of the pores, conclusions are drawn regarding the maximum storage capacity of each framework and, more generally, framework flexibility. The flexible non-pentasils RHO, FAU, KFI, LTA and CHA display the highest maximal capacities, ranging from 2.65 to 2.86 mass%, well below the targets set for automotive applications but still in an interesting range. The predicted maximal storage capacities correlate well with experimental results obtained at low temperature. The technique is easily extendable to any other microporous structure, and it can provide a method for the screening of hypothetical new materials for hydrogen storage applications.

  20. Low-energy restoration of parity and maximal symmetry

    International Nuclear Information System (INIS)

    Raychaudhuri, A.; Sarkar, U.

    1982-01-01

    The maximal symmetry of fermions of one generation, SU(16), which includes the left-right-symmetric Pati-Salam group, SU(4)_c x SU(2)_L x SU(2)_R, as a subgroup, allows the possibility of a low-energy (M_R ≈ 100 GeV) breaking of the left-right symmetry. It is known that such a low-energy restoration of parity can be consistent with weak-interaction phenomenology. We examine different chains of descent of SU(16) that admit a low value of M_R and determine the other intermediate symmetry-breaking mass scales associated with each of these chains. These additional mass scales provide an alternative to the "great desert" expected in some grand unifying models. The contributions of the Higgs fields in the renormalization-group equations are retained and are found to be important.

  1. The price of anarchy is maximized at the percolation threshold

    Science.gov (United States)

    Skinner, Brian

    2015-03-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called "price of anarchy" (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly-placed "congestible" and "incongestible" links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.

  2. Price of anarchy is maximized at the percolation threshold

    Science.gov (United States)

    Skinner, Brian

    2015-05-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.

  3. Price of anarchy is maximized at the percolation threshold

    Energy Technology Data Exchange (ETDEWEB)

    Skinner, Brian

    2015-05-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called "price of anarchy" (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly-placed "congestible" and "incongestible" links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Further, for large networks the value of the POA appears to saturate at its theoretical maximum value.

  4. Price of anarchy is maximized at the percolation threshold.

    Science.gov (United States)

    Skinner, Brian

    2015-05-01

    When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.

  5. Teaching for more effective learning: Seven maxims for practice

    International Nuclear Information System (INIS)

    McMahon, Tim

    2006-01-01

    Starting from the assumption that deep learning, which seeks lasting mastery over a subject, is more desirable in professional education than shallow learning, which is merely designed to pass academic assessments, this paper suggests ways in which teachers in higher education can encourage the former. Noting that students tend to adopt either a shallow or deep approach in response to their experiences in the classroom and their understanding of what the assessment regime requires, it argues that, as a consequence, it ought to be possible to prompt more students to adopt deep learning approaches by manipulating teaching and assessment strategies. The literature on teaching and learning is explored in order to derive maxims of good practice which, if followed, can reasonably be expected to promote deep learning and discourage surface learning. It is argued that this will lead to more effective preparation for the real world of professional practice

  6. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-07

    Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high-dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures, than the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by the one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation and direct double-loop Monte Carlo.
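
    The nested integral in question is the expected information gain, EIG = E_y[log p(y | theta) - log p(y)]; the direct double-loop Monte Carlo estimator that the Multilevel Monte Carlo method accelerates can be sketched as follows (scalar data, a deterministic simulator, and Gaussian observation noise are assumed; normalization constants cancel between the two log terms).

        import numpy as np

        def eig_double_loop(simulate, sample_prior, noise_sd,
                            n_outer=500, n_inner=500, seed=0):
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_outer):
                theta = sample_prior(rng)
                y = simulate(theta) + rng.normal(0.0, noise_sd)
                log_lik = -0.5 * ((y - simulate(theta)) / noise_sd) ** 2
                # Inner loop: fresh prior draws estimate the evidence p(y).
                inner = np.array([simulate(sample_prior(rng))
                                  for _ in range(n_inner)])
                log_ev = np.log(np.mean(np.exp(-0.5 * ((y - inner) / noise_sd) ** 2)))
                total += log_lik - log_ev
            return total / n_outer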

  7. Prior approval: the growth of Bayesian methods in psychology.

    Science.gov (United States)

    Andrews, Mark; Baguley, Thom

    2013-02-01

    Within the last few years, Bayesian methods of data analysis in psychology have proliferated. In this paper, we briefly review the history of the Bayesian approach to statistics, and consider the implications that Bayesian methods have for the theory and practice of data analysis in psychology.

  8. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  9. A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research

    NARCIS (Netherlands)

    Van de Schoot, Rens|info:eu-repo/dai/nl/304833207; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G|info:eu-repo/dai/nl/081831218

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First,

  10. A gentle introduction to Bayesian analysis : Applications to developmental research

    NARCIS (Netherlands)

    van de Schoot, R.; Kaplan, D.; Denissen, J.J.A.; Asendorpf, J.B.; Neyer, F.J.; van Aken, M.A.G.

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First,

  11. Implementing and testing Bayesian and maximum-likelihood supertree methods in phylogenetics.

    Science.gov (United States)

    Akanni, Wasiu A; Wilkinson, Mark; Creevey, Christopher J; Foster, Peter G; Pisani, Davide

    2015-08-01

    Since their advent, supertrees have been increasingly used in large-scale evolutionary studies requiring a phylogenetic framework and substantial efforts have been devoted to developing a wide variety of supertree methods (SMs). Recent advances in supertree theory have allowed the implementation of maximum likelihood (ML) and Bayesian SMs, based on using an exponential distribution to model incongruence between input trees and the supertree. Such approaches are expected to have advantages over commonly used non-parametric SMs, e.g. matrix representation with parsimony (MRP). We investigated new implementations of ML and Bayesian SMs and compared these with some currently available alternative approaches. Comparisons include hypothetical examples previously used to investigate biases of SMs with respect to input tree shape and size, and empirical studies based either on trees harvested from the literature or on trees inferred from phylogenomic scale data. Our results provide no evidence of size or shape biases and demonstrate that the Bayesian method is a viable alternative to MRP and other non-parametric methods. Computation of input tree likelihoods allows the adoption of standard tests of tree topologies (e.g. the approximately unbiased test). The Bayesian approach is particularly useful in providing support values for supertree clades in the form of posterior probabilities.

  12. [Chemical constituents of Trichosanthes kirilowii Maxim].

    Science.gov (United States)

    Sun, Xiao-Ye; Wu, Hong-Hua; Fu, Ai-Zhen; Zhang, Peng

    2012-07-01

    To study the chemical constituents of Trichosanthes kirilowii Maxim., chromatographic methods such as D101 macroporous resin, silica gel column chromatography, Sephadex LH-20, octadecylsilyl (ODS) column chromatography and preparative HPLC were used, and nine compounds were isolated from a 95% (v/v) ethanol extract of the plant. By using spectroscopic techniques including 1H NMR, 13C NMR, 1H-1H COSY, HSQC and HMBC, these compounds were identified as 5-ethoxymethyl-1-carboxyl propyl-1H-pyrrole-2-carbaldehyde (1), 5-hydroxymethyl-2-furfural (2), chrysoeriol (3), 4'-hydroxyscutellarin (4), vanillic acid (5), alpha-spinasterol (6), beta-D-glucopyranosyl-alpha-spinasterol (7), stigmast-7-en-3beta-ol (8), and adenosine (9). Among them, compound 1 is a new compound, and compounds 3, 4 and 5 are isolated from Trichosanthes kirilowii Maxim. for the first time.

  13. Loop Amplitude Diagrams in Manifest, Maximal Supergravity

    Science.gov (United States)

    Karlsson, Anna

    The issue of the finiteness of maximal supergravity has been subject to research for quite some time. Here, we approach that question through an examination of how to describe amplitude diagrams in D = 11 maximal supergravity from a field theory point of view. The strength of the formulation is the presence of manifest supersymmetry through the use of pure spinors. An initial analysis of the resulting characteristics, carried out partly in lower dimensions through dimensional reduction, gives results that agree with previous work, pointing towards a first divergence for the 7-loop contribution to the 4-point amplitude in four dimensions. The text is mainly based on, and may be regarded as an introduction to, the main points presented there.

  14. Maximal frustration as an immunological principle.

    Science.gov (United States)

    de Abreu, F Vistulo; Mostardinha, P

    2009-03-06

    A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.

  15. Dynamical supersymmetry in maximally supersymmetric gauge theories

    Science.gov (United States)

    Belyaev, Dmitry V.

    2010-06-01

    Maximally supersymmetric theories can be described by a single scalar superfield in light-cone superspace. When they are also (super)conformally invariant, they are uniquely specified by the form of the dynamical supersymmetry. We present an explicit derivation of the light-cone superspace form of the dynamical supersymmetry in the cases of ten- and four-dimensional super-Yang-Mills, and the three-dimensional Bagger-Lambert-Gustavsson theory, starting from the covariant formulation of these theories.

  16. Definable maximal discrete sets in forcing extensions

    DEFF Research Database (Denmark)

    Törnquist, Asger Dag; Schrittesser, David

    2018-01-01

    Let R be a Σ^1_1 binary relation, and recall that a set A is R-discrete if no two elements of A are related by R. We show that in the Sacks and Miller forcing extensions of L there is a Δ^1_2 maximal R-discrete set. We use this to answer in the negative the main question posed in [5] by showing...

  17. Increasing Efficiency by Maximizing Electrical Output

    Science.gov (United States)

    2016-07-27

    turbines whose efficiency decreases with system size, driving up costs. The cost in $/W for our system is substantially less than other competitive...basis. That forecast of a $100,000 sales price is based on a detailed analysis of anticipated bill of materials costs when manufacturing and...

  18. Maximal Linear Embedding for Dimensionality Reduction.

    Science.gov (United States)

    Wang, Ruiping; Shan, Shiguang; Chen, Xilin; Chen, Jie; Gao, Wen

    2011-09-01

    Over the past few decades, dimensionality reduction has been widely exploited in computer vision and pattern analysis. This paper proposes a simple but effective nonlinear dimensionality reduction algorithm, named Maximal Linear Embedding (MLE). MLE learns a parametric mapping to recover a single global low-dimensional coordinate space and yields an isometric embedding of the manifold. Guided by geometric intuition, we introduce a natural definition of a locally linear patch, the Maximal Linear Patch (MLP), which seeks the largest local neighborhood in which linearity holds. The input data are first decomposed into a collection of local linear models, each depicting an MLP. These local models are then aligned into a global coordinate space by applying multidimensional scaling (MDS) to some randomly selected landmarks. The proposed alignment method, called Landmarks-based Global Alignment (LGA), efficiently produces a closed-form solution with no risk of local optima; it involves only some small-scale eigenvalue problems, whereas most previous alignment techniques rely on time-consuming iterative optimization. Compared with traditional methods such as ISOMAP and LLE, MLE yields an explicit model of the intrinsic variation modes of the observed data. Extensive experiments on both synthetic and real data demonstrate the effectiveness and efficiency of the proposed algorithm.
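
    The patch-growing step is easy to make concrete. The following is a minimal sketch, not the authors' implementation; the function name, the tolerance parameter, and the seed-based greedy strategy are assumptions made here. It grows a neighborhood around a seed point for as long as a d-dimensional PCA fit reconstructs the patch within a tolerance, which is the defining property of an MLP; the LGA step that aligns the patches via landmark MDS is omitted.

        # Sketch: grow a Maximal Linear Patch by adding nearest neighbors while
        # a d-dimensional linear (PCA) model still reconstructs the patch well.
        import numpy as np

        def grow_maximal_linear_patch(X, seed, d=2, tol=0.05):
            order = np.argsort(np.linalg.norm(X - X[seed], axis=1))
            patch = list(order[:d + 1])          # smallest patch spanning a d-plane
            for idx in order[d + 1:]:
                trial = X[patch + [idx]]
                centered = trial - trial.mean(axis=0)
                s = np.linalg.svd(centered, compute_uv=False)
                residual = np.sqrt(np.sum(s[d:] ** 2) / np.sum(s ** 2))
                if residual > tol:               # linearity no longer holds; stop
                    break
                patch.append(idx)
            return patch

        # Toy usage: points along a gently curved 1-D manifold embedded in 3-D.
        t = np.linspace(0.0, 3.0, 200)
        X = np.c_[t, np.sin(t), np.zeros_like(t)]
        print(len(grow_maximal_linear_patch(X, seed=0, d=1, tol=0.01)))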

  19. Performance of Optimally Merged Multisatellite Precipitation Products Using the Dynamic Bayesian Model Averaging Scheme Over the Tibetan Plateau

    Science.gov (United States)

    Ma, Yingzhao; Hong, Yang; Chen, Yang; Yang, Yuan; Tang, Guoqiang; Yao, Yunjun; Long, Di; Li, Changmin; Han, Zhongying; Liu, Ronghua

    2018-01-01

    Accurate estimation of precipitation from satellites at high spatiotemporal scales over the Tibetan Plateau (TP) remains a challenge. In this study, we propose a general framework for blending multiple satellite precipitation products using the dynamic Bayesian model averaging (BMA) algorithm. The blending experiment was performed at a daily 0.25° grid scale for 2007-2012 among Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42RT and 3B42V7, the Climate Prediction Center MORPHing technique (CMORPH), and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR). First, the BMA weights were optimized with the expectation-maximization (EM) method for each member on each day at 200 calibration sites and then interpolated to the entire plateau using the ordinary kriging (OK) approach. The merged data were then produced as weighted sums of the individual products over the plateau. The dynamic BMA approach performed better than the individual products at 15 validation sites, with a smaller root-mean-square error (RMSE) of 6.77 mm/day, a higher correlation coefficient of 0.592, and a closer Euclid value of 0.833. Moreover, BMA proved more robust with respect to seasonality, topography, and other factors than traditional ensemble methods, including simple model averaging (SMA) and one-outlier-removed (OOR). Error analysis against the state-of-the-art IMERG product for the summer of 2014 further showed that BMA is superior for multisatellite precipitation data merging. This study demonstrates that BMA provides a new solution for blending multiple satellite data sets in regions with limited gauges.
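
    The per-day weight estimation named above can be sketched compactly. The code below is a minimal sketch assuming Gaussian member densities with a shared variance, following the standard EM recipe for BMA (Raftery et al., 2005); the paper's exact likelihood model for daily precipitation and the subsequent ordinary-kriging interpolation of the weights are not reproduced, and all names are illustrative.

        # EM for BMA weights, assuming each member estimate f_k has density
        # N(y; f_k, sigma^2) with a shared variance (a simplifying assumption).
        import numpy as np

        def bma_em(y, F, n_iter=100, tol=1e-8):
            """y: (n,) gauge observations; F: (n, K) member estimates."""
            n, K = F.shape
            w = np.full(K, 1.0 / K)                  # start from equal weights
            sigma2 = np.var(y - F.mean(axis=1))      # crude initial variance
            ll_old = -np.inf
            for _ in range(n_iter):
                # E-step: responsibility of member k for observation i
                dens = np.exp(-0.5 * (y[:, None] - F) ** 2 / sigma2) \
                       / np.sqrt(2.0 * np.pi * sigma2)
                num = w * dens
                z = num / num.sum(axis=1, keepdims=True)
                # M-step: re-estimate weights and shared variance
                w = z.mean(axis=0)
                sigma2 = np.sum(z * (y[:, None] - F) ** 2) / n
                ll = np.sum(np.log(num.sum(axis=1)))
                if ll - ll_old < tol:                # log-likelihood converged
                    break
                ll_old = ll
            return w, sigma2

        # Toy usage: three synthetic "satellite" members of varying quality.
        rng = np.random.default_rng(0)
        truth = rng.gamma(2.0, 3.0, size=500)
        F = np.c_[truth + rng.normal(0, 1, 500),     # accurate member
                  truth + rng.normal(0, 3, 500),     # noisy member
                  truth + rng.normal(2, 3, 500)]     # biased member
        w, s2 = bma_em(truth, F)
        print(np.round(w, 3))                        # accurate member dominates

    The merged estimate at each grid cell is then the weighted sum of the member values, with the fitted weights interpolated across the plateau.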

  20. Bayesian penalized-likelihood reconstruction algorithm suppresses edge artifacts in PET reconstruction based on point-spread-function.

    Science.gov (United States)

    Yamaguchi, Shotaro; Wagatsuma, Kei; Miwa, Kenta; Ishii, Kenji; Inoue, Kazumasa; Fukushi, Masahiro

    2018-03-01

    The Bayesian penalized-likelihood reconstruction algorithm (BPL), Q.Clear, uses the relative difference penalty as a regularization function to control image noise and the degree of edge preservation in PET images. The present study aimed to determine how well a Q.Clear reconstruction suppresses the edge artifacts caused by point-spread-function (PSF) correction. The spheres of a cylindrical phantom were filled with [18F]FDG at sphere-to-background ratios (SBR) of 16, 8, 4 and 2 against a background of 5.3 kBq/mL; as a no-background condition, spheres containing 21.2 kBq/mL of [18F]FDG were surrounded by water. All data were acquired on a Discovery PET/CT 710 and reconstructed with three-dimensional ordered-subset expectation maximization with time-of-flight (TOF) and PSF correction (3D-OSEM), and with Q.Clear with TOF (BPL), for which we investigated β-values of 200-800. The PET images were analyzed by visual assessment and profile curves, and edge variability and contrast recovery coefficients were measured. The 38- and 27-mm spheres appeared surrounded by a rim of higher radioactivity concentration when reconstructed with 3D-OSEM, whereas BPL suppressed these edge artifacts. Images of 10-mm spheres showed sharper overshoot at high SBR and in the no-background condition when reconstructed with BPL. Although the contrast recovery coefficients of 10-mm spheres in BPL decreased with increasing β, a higher penalty parameter reduced the overshoot. BPL is a feasible method for suppressing the edge artifacts of PSF correction, although its effectiveness depends on SBR and sphere size. The overshoot associated with BPL caused overestimation in small spheres at high SBR; a higher penalty parameter can suppress this overshoot more effectively.
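
    The regularization function named above, the relative difference penalty, has a published form (Nuyts et al., 2002), which the sketch below applies to a 1-D image with nearest-neighbor cliques; Q.Clear's internals are proprietary, so the parameter values used here are assumptions for illustration. Comparing a sharp edge with a ramp of the same total intensity change shows why the penalty preserves edges while still suppressing noise, and increasing the global strength β scales the whole penalty, consistent with the study's observation that a higher penalty parameter damps overshoot.

        # The relative difference penalty (RDP) on a 1-D image,
        #   R(f) = beta * sum over neighbor pairs of
        #          (f_j - f_k)^2 / (f_j + f_k + gamma * |f_j - f_k|),
        # per Nuyts et al. (2002); beta/gamma values here are illustrative.
        import numpy as np

        def relative_difference_penalty(f, beta=400.0, gamma=2.0, eps=1e-12):
            d = f[1:] - f[:-1]                 # nearest-neighbor differences
            s = f[1:] + f[:-1]                 # nearest-neighbor sums
            return beta * np.sum(d ** 2 / (s + gamma * np.abs(d) + eps))

        def quadratic_penalty(f, beta=400.0):
            d = f[1:] - f[:-1]
            return beta * np.sum(d ** 2)

        # Same total intensity change, once as a sharp edge, once as a ramp.
        edge = np.r_[np.ones(10), 10.0 * np.ones(10)]
        ramp = np.linspace(1.0, 10.0, 20)

        # For large differences the RDP grows roughly linearly rather than
        # quadratically, so the edge-to-ramp cost ratio is far smaller than
        # under a quadratic prior -- the edge-preserving behavior.
        print(relative_difference_penalty(edge) / relative_difference_penalty(ramp))
        print(quadratic_penalty(edge) / quadratic_penalty(ramp))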