WorldWideScience

Sample records for bayesian partition method

  1. A hierarchical Bayesian model to incorporate uncertainty into methods for diversity partitioning.

    Science.gov (United States)

    Marion, Zachary H; Fordyce, James A; Fitzpatrick, Benjamin M

    2018-04-01

    Recently there have been major theoretical advances in the quantification and partitioning of diversity within and among communities, regions, and ecosystems. However, applying those advances to real data remains a challenge. Ecologists often end up describing their samples rather than estimating the diversity components of an underlying study system, and existing approaches do not easily provide statistical frameworks for testing ecological questions. Here we offer one avenue to do all of the above using a hierarchical Bayesian approach. We estimate posterior distributions of the underlying "true" relative abundances of each species within each unit sampled. These posterior estimates of relative abundance can then be used with existing formulae to estimate and partition diversity. The result is a posterior distribution of diversity metrics describing our knowledge (or beliefs) about the study system. This approach intuitively leads to statistical inferences addressing biologically motivated hypotheses via Bayesian model comparison. Using simulations, we demonstrate that our approach does as well or better at approximating the "true" diversity of a community relative to naïve or ad-hoc bias-corrected estimates. Moreover, model comparison correctly distinguishes between alternative hypotheses about the distribution of diversity within and among samples. Finally, we use an empirical ecological dataset to illustrate how the approach can be used to address questions about the makeup and diversities of assemblages at local and regional scales. © 2018 by the Ecological Society of America.
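
    A minimal numerical sketch of the core idea, assuming independent Dirichlet posteriors per sampling unit rather than the authors' full hierarchical model: draw posterior relative abundances from the counts, push each draw through standard Hill-number formulae, and read off posterior distributions of alpha, beta, and gamma diversity. The count matrix and Jeffreys-style prior are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    counts = np.array([[12, 7, 3, 0],     # site 1 (species counts)
                       [ 2, 9, 8, 5],     # site 2
                       [ 6, 6, 6, 6]])    # site 3

    n_draws, prior = 4000, 0.5            # Jeffreys-style Dirichlet prior
    # Posterior of each site's "true" relative abundances: Dirichlet(counts + prior)
    p = np.stack([rng.dirichlet(row + prior, size=n_draws) for row in counts],
                 axis=1)                  # shape (n_draws, n_sites, n_species)

    q = np.clip(p, 1e-12, None)
    H = -np.sum(q * np.log(q), axis=-1)   # Shannon entropy per draw and site

    alpha = np.exp(H.mean(axis=1))        # Hill q=1 alpha: exp of mean entropy
    pooled = p.mean(axis=1)               # equal-weight pooled community
    gamma = np.exp(-np.sum(pooled * np.log(pooled), axis=-1))
    beta = gamma / alpha                  # multiplicative diversity partition

    for name, d in (("alpha", alpha), ("beta", beta), ("gamma", gamma)):
        lo, hi = np.percentile(d, [2.5, 97.5])
        print(f"{name}: median {np.median(d):.2f}, 95% CrI ({lo:.2f}, {hi:.2f})")
    ```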

  2. Bayesian methods for data analysis

    CERN Document Server

    Carlin, Bradley P.

    2009-01-01

    Approaches for statistical inference: Introduction; Motivating Vignettes; Defining the Approaches; The Bayes-Frequentist Controversy; Some Basic Bayesian Models. The Bayes approach: Introduction; Prior Distributions; Bayesian Inference; Hierarchical Modeling; Model Assessment; Nonparametric Methods. Bayesian computation: Introduction; Asymptotic Methods; Noniterative Monte Carlo Methods; Markov Chain Monte Carlo Methods. Model criticism and selection: Bayesian Modeling; Bayesian Robustness; Model Assessment; Bayes Factors via Marginal Density Estimation; Bayes Factors...

  3. Development of partitioning method

    International Nuclear Information System (INIS)

    Kubota, Kazuo; Dojiri, Shigeru; Kubota, Masumitsu

    1988-10-01

    A literature survey was carried out on the natural abundance of the platinum group elements and technetium contained in spent fuel, their behavior in the reprocessing process, and methods for their separation and recovery. The essential results are described below. (1) The platinum group elements contained in spent fuel are limited in quantity compared with the total demand for them in Japan, and the estimated cost of separation and recovery is rather high. Even so, development of these techniques is considered very important because Japan depends almost entirely on foreign resources for these elements. (2) For recovery of these elements, studies of recovery from undissolved residue and from high-level liquid waste (HLLW) also seem to be required. (3) The following separation and recovery techniques are considered effective: lead extraction, liquid metal extraction, solvent extraction, ion exchange, adsorption, precipitation, distillation, electrolysis, or combinations of these. (4) Each of these methods, however, has both advantages and disadvantages, so development of such processes largely depends on future work. (author) 94 refs

  4. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making them inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice and freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC library and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...

  5. Isotope partitioning of soil respiration: A Bayesian solution to accommodate multiple sources of variability

    Science.gov (United States)

    Ogle, Kiona; Pendall, Elise

    2015-02-01

    Isotopic methods offer great potential for partitioning trace gas fluxes such as soil respiration into their different source contributions. Traditional partitioning methods face challenges due to variability introduced by different measurement methods, fractionation effects, and end-member uncertainty. To address these challenges, we describe a hierarchical Bayesian (HB) approach for isotopic partitioning of soil respiration that directly accommodates such variability. We apply our HB method to data from an experiment conducted in a shortgrass steppe ecosystem, where decomposition was previously shown to be stimulated by elevated CO2. Our approach simultaneously fits Keeling plot (KP) models to observations of soil or soil-respired δ13C and [CO2] obtained via chambers and gas wells, corrects the KP intercepts for apparent fractionation (Δ) due to isotope-specific diffusion rates and/or method artifacts, estimates method- and treatment-specific values for Δ, propagates end-member uncertainty, and calculates proportional contributions from two distinct respiration sources ("old" and "new" carbon). The chamber KP intercepts were estimated with greater confidence than the well intercepts. Compared to the theoretical value of 4.4‰, our results suggest that Δ varies between 2 and 5.2‰ depending on method (chambers versus wells) and CO2 treatment. Because elevated CO2 plots were fumigated with 13C-depleted CO2, the source contributions were tightly constrained, and new C accounted for 64% (range = 55-73%) of soil respiration. The contributions were less constrained for the ambient CO2 treatments, but new C accounted for significantly less (47%, range = 15-82%) of soil respiration. Our new HB partitioning approach contrasts with our original analysis (which found a higher contribution of old C under elevated CO2) because it uses additional data sources, accounts for end-member bias, and estimates apparent fractionation effects.
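
    A hedged numpy sketch of the two steps the abstract chains together, not the paper's hierarchical model: (1) a Keeling-plot intercept for the respired-CO2 signature, with parameter uncertainty from the regression, and (2) two end-member mixing. The data, end-member values, the Δ distribution, and the sign convention of the fractionation correction are all made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    co2  = np.array([420., 480., 560., 650., 780., 900.])          # [CO2], ppm
    d13c = np.array([-12.1, -13.5, -15.0, -16.2, -17.5, -18.3])    # δ13C, ‰

    # OLS Keeling fit: δ13C = intercept + slope * (1/[CO2])
    X = np.column_stack([np.ones_like(co2), 1.0 / co2])
    beta, res, *_ = np.linalg.lstsq(X, d13c, rcond=None)
    sigma2 = res[0] / (len(co2) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)

    n = 10000
    intercept = rng.normal(beta[0], np.sqrt(cov[0, 0]), n)  # KP intercept draws
    # Correct for apparent fractionation Δ (illustrative sign convention)
    delta_src = intercept - rng.normal(4.4, 0.5, n)
    d_old = rng.normal(-24.0, 0.5, n)    # "old" C end member, ‰ (invented)
    d_new = rng.normal(-40.0, 0.5, n)    # "new" (13C-depleted) end member

    f_new = (delta_src - d_old) / (d_new - d_old)            # two-source mixing
    lo, hi = np.percentile(f_new, [2.5, 97.5])
    print(f"f_new: median {np.median(f_new):.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
    ```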

  6. Bayesian adaptive methods for clinical trials

    CERN Document Server

    Berry, Scott M; Muller, Peter

    2010-01-01

    Already popular in the analysis of medical device trials, adaptive Bayesian designs are increasingly being used in drug development for a wide variety of diseases and conditions, from Alzheimer's disease and multiple sclerosis to obesity, diabetes, hepatitis C, and HIV. Written by leading pioneers of Bayesian clinical trial designs, Bayesian Adaptive Methods for Clinical Trials explores the growing role of Bayesian thinking in the rapidly changing world of clinical trial analysis. The book first summarizes the current state of clinical trial design and analysis and introduces the main ideas and potential benefits of a Bayesian alternative. It then gives an overview of basic Bayesian methodological and computational tools needed for Bayesian clinical trials. With a focus on Bayesian designs that achieve good power and Type I error, the next chapters present Bayesian tools useful in early (Phase I) and middle (Phase II) clinical trials as well as two recent Bayesian adaptive Phase II studies: the BATTLE and ISP...

  7. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    ... 1.2 Posterior Inference from Bayes Formula; 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...

  8. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    Full Text Available A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  9. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    Science.gov (United States)

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and of any NCA parameter or derivation based on it. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
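
    A simplified Bailer-flavored sketch of the idea, not the paper's method: under a flat prior, the mean concentration at each sacrifice time has a t-distributed posterior, so trapezoidal AUC per posterior draw yields a distribution for the tissue-to-plasma ratio. This covers only AUC from first to last sampling time (no extrapolation to infinity), and all summary statistics are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])       # sacrifice times (h)
    # Mean, SD and n of concentrations at each time (illustrative numbers)
    plasma = {"m": np.array([8.0, 6.5, 4.8, 2.9, 1.2, 0.3]),
              "s": np.array([1.1, 0.9, 0.8, 0.5, 0.3, 0.1]), "n": 4}
    brain  = {"m": np.array([1.2, 1.6, 1.9, 1.5, 0.8, 0.2]),
              "s": np.array([0.3, 0.3, 0.4, 0.3, 0.2, 0.1]), "n": 4}

    def auc_draws(d, B=20000):
        # Flat-prior posterior of each timepoint mean:
        # mu_j | data ~ t_{n-1}(xbar_j, s_j/sqrt(n)); then trapezoidal AUC.
        mu = d["m"] + (d["s"] / np.sqrt(d["n"])) * rng.standard_t(
            d["n"] - 1, size=(B, len(t)))
        return np.trapz(mu, t, axis=1)

    kp = auc_draws(brain) / auc_draws(plasma)           # tissue-to-plasma ratio
    lo, hi = np.percentile(kp, [2.5, 97.5])
    print(f"Kp: median {np.median(kp):.3f}, 95% CrI ({lo:.3f}, {hi:.3f})")
    ```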

  10. Bayesian methods for proteomic biomarker development

    Directory of Open Access Journals (Sweden)

    Belinda Hernández

    2015-12-01

    In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.

  11. Bayesian methods applied to GWAS.

    Science.gov (United States)

    Fernando, Rohan L; Garrick, Dorian

    2013-01-01

    Bayesian multiple-regression methods are being successfully used for genomic prediction and selection. These regression models simultaneously fit many more markers than the number of observations available for the analysis. Thus, Bayes' theorem is used to combine prior beliefs about marker effects, which are expressed in terms of prior distributions, with information from the data for inference. Often, the analyses are too complex for closed-form solutions, and Markov chain Monte Carlo (MCMC) sampling is used to draw inferences from posterior distributions. This chapter describes how these Bayesian multiple-regression analyses can be used for GWAS. In most GWAS, false positives are controlled by limiting the genome-wise error rate, which is the probability of one or more false-positive results, to a small value. As the number of tests in GWAS is very large, this results in very low power. Here we show how, in Bayesian GWAS, false positives can be controlled by limiting the proportion of false-positive results among all positives to some small value. The advantage of this approach is that the power of detecting associations is not inversely related to the number of markers.
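
    A short sketch of the selection rule the abstract describes, assuming the Bayesian analysis has already produced a posterior probability of association for each marker: rank markers by that probability and keep the largest set whose estimated proportion of false positives among positives stays below a target. The toy probabilities are invented.

    ```python
    import numpy as np

    def bayes_fdr_select(post_prob, fdr_target=0.05):
        """Keep markers so the expected share of false positives among all
        declared positives stays below fdr_target."""
        order = np.argsort(-post_prob)                 # most probable first
        est_fdr = (np.cumsum(1.0 - post_prob[order])
                   / np.arange(1, len(post_prob) + 1))
        k = np.max(np.where(est_fdr <= fdr_target)[0], initial=-1) + 1
        return order[:k], (est_fdr[k - 1] if k > 0 else None)

    post = np.array([0.99, 0.97, 0.90, 0.60, 0.20, 0.05])  # toy posteriors
    sel, fdr = bayes_fdr_select(post)
    print("selected markers:", sel, f"estimated FDR: {fdr:.3f}")
    ```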

  12. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...

  13. Maximum entropy and Bayesian methods

    International Nuclear Information System (INIS)

    Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.

    1992-01-01

    Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The workshops named in the title have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

  14. Bayesian flood forecasting methods: A review

    Science.gov (United States)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and most widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied to flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, and thus gives more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system and Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been...

  15. Spatially Partitioned Embedded Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2013-10-30

    We study spatially partitioned embedded Runge--Kutta (SPERK) schemes for partial differential equations (PDEs), in which each of the component schemes is applied over a different part of the spatial domain. Such methods may be convenient for problems in which the smoothness of the solution or the magnitudes of the PDE coefficients vary strongly in space. We focus on embedded partitioned methods as they offer greater efficiency and avoid the order reduction that may occur in nonembedded schemes. We demonstrate that the lack of conservation in partitioned schemes can lead to nonphysical effects and propose conservative additive schemes based on partitioning the fluxes rather than the ordinary differential equations. A variety of SPERK schemes are presented, including an embedded pair suitable for the time evolution of fifth-order weighted nonoscillatory spatial discretizations. Numerical experiments are provided to support the theory.

  16. Innovative Bayesian and Parsimony Phylogeny of Dung Beetles (Coleoptera, Scarabaeidae, Scarabaeinae) Enhanced by Ontology-Based Partitioning of Morphological Characters

    Science.gov (United States)

    Tarasov, Sergei; Génier, François

    2015-01-01

    Scarabaeine dung beetles are the dominant dung feeding group of insects and are widely used as model organisms in conservation, ecology and developmental biology. Due to the conflicts among 13 recently published phylogenies dealing with the higher-level relationships of dung beetles, the phylogeny of this lineage remains largely unresolved. In this study, we conduct rigorous phylogenetic analyses of dung beetles, based on an unprecedented taxon sample (110 taxa) and detailed investigation of morphology (205 characters). We provide the description of morphology and thoroughly illustrate the used characters. Along with parsimony, traditionally used in the analysis of morphological data, we also apply the Bayesian method with a novel approach that uses anatomy ontology for matrix partitioning. This approach allows for heterogeneity in evolutionary rates among characters from different anatomical regions. Anatomy ontology generates a number of parameter-partition schemes which we compare using Bayes factor. We also test the effect of inclusion of autapomorphies in the morphological analysis, which hitherto has not been examined. Generally, schemes with more parameters were favored in the Bayesian comparison suggesting that characters located on different body regions evolve at different rates and that partitioning of the data matrix using anatomy ontology is reasonable; however, trees from the parsimony and all the Bayesian analyses were quite consistent. The hypothesized phylogeny reveals many novel clades and provides additional support for some clades recovered in previous analyses. Our results provide a solid basis for a new classification of dung beetles, in which the taxonomic limits of the tribes Dichotomiini, Deltochilini and Coprini are restricted and many new tribes must be described. Based on the consistency of the phylogeny with biogeography, we speculate that dung beetles may have originated in the Mesozoic contrary to the traditional view pointing to a

  17. Conceptual methods for actinide partitioning

    International Nuclear Information System (INIS)

    Leuze, R.E.; Bond, W.D.; Tedder, D.W.

    1978-01-01

    The conceptual processing sequence under consideration is based on a combination of modified Purex processing and secondary processing of the high-level waste. In this concept, iodine will be removed from dissolver solution prior to extraction, and the Purex processing will be modified so that low- and intermediate-level wastes, all the way through final product purification, are recycled. A supplementary extraction is assumed to ensure adequate recovery of uranium, neptunium and possibly plutonium. Technetium may be removed from the high-level waste if a satisfactory method can be developed. Extraction into a quaternary amine is being evaluated for this removal. Methods that have been used in the past to recover americium and curium have some rather serious deficiencies, including inadequate recovery, solids formation and generation of large volumes of low- and intermediate-level wastes containing significant quantities of chemical reagents

  18. Phylogenetic systematics and biogeography of hummingbirds: Bayesian and maximum likelihood analyses of partitioned data and selection of an appropriate partitioning strategy.

    Science.gov (United States)

    McGuire, Jimmy A; Witt, Christopher C; Altshuler, Douglas L; Remsen, J V

    2007-10-01

    Hummingbirds are an important model system in avian biology, but to date the group has been the subject of remarkably few phylogenetic investigations. Here we present partitioned Bayesian and maximum likelihood phylogenetic analyses for 151 of approximately 330 species of hummingbirds and 12 outgroup taxa based on two protein-coding mitochondrial genes (ND2 and ND4), flanking tRNAs, and two nuclear introns (AK1 and BFib). We analyzed these data under several partitioning strategies ranging between unpartitioned and a maximum of nine partitions. In order to select a statistically justified partitioning strategy following partitioned Bayesian analysis, we considered four alternative criteria including Bayes factors, modified versions of the Akaike information criterion for small sample sizes (AIC(c)), Bayesian information criterion (BIC), and a decision-theoretic methodology (DT). Following partitioned maximum likelihood analyses, we selected a best-fitting strategy using hierarchical likelihood ratio tests (hLRTS), the conventional AICc, BIC, and DT, concluding that the most stringent criterion, the performance-based DT, was the most appropriate methodology for selecting amongst partitioning strategies. In the context of our well-resolved and well-supported phylogenetic estimate, we consider the historical biogeography of hummingbirds using ancestral state reconstructions of (1) primary geographic region of occurrence (i.e., South America, Central America, North America, Greater Antilles, Lesser Antilles), (2) Andean or non-Andean geographic distribution, and (3) minimum elevational occurrence. These analyses indicate that the basal hummingbird assemblages originated in the lowlands of South America, that most of the principle clades of hummingbirds (all but Mountain Gems and possibly Bees) originated on this continent, and that there have been many (at least 30) independent invasions of other primary landmasses, especially Central America.
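
    The information criteria named in the abstract are simple to compute once each partitioning strategy has been fitted. A small sketch with invented log-likelihoods and parameter counts, assuming n is the number of alignment sites:

    ```python
    import math

    def aicc(lnL, k, n):
        """Small-sample Akaike information criterion."""
        return -2 * lnL + 2 * k + 2 * k * (k + 1) / (n - k - 1)

    def bic(lnL, k, n):
        """Bayesian information criterion."""
        return -2 * lnL + k * math.log(n)

    n = 4500  # alignment length in sites (illustrative)
    # (strategy, maximized log-likelihood, free parameters) -- made-up numbers
    schemes = [("unpartitioned", -52110.4, 10),
               ("by gene (4 parts)", -51880.2, 40),
               ("by gene x codon (9 parts)", -51795.7, 90)]

    for name, lnL, k in schemes:
        print(f"{name:28s} AICc={aicc(lnL, k, n):10.1f}  "
              f"BIC={bic(lnL, k, n):10.1f}")
    ```

    Lower values are better under both criteria; the paper's point is that different criteria can favor different partitioning strategies, with the decision-theoretic approach being the most stringent.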

  19. Stochastic Graph Partition: Generalizing the Swendsen-Wang Method

    OpenAIRE

    Barbu, Adrian; Zhu, Song-Chun

    2003-01-01

    Vision tasks, such as segmentation, grouping, recognition, and learning, have a "what-goes-with-what" component. It can be formulated as partitioning an adjacency graph into a number of subgraphs, each being a "coherent" visual pattern in the sense of optimizing a Bayesian posterior probability or minimizing an energy functional. In this paper, we generalize Swendsen-Wang (1987), a celebrated algorithm in statistical mechanics, for general graph partition. Our objective is to design revers...

  20. Nested partitions method, theory and applications

    CERN Document Server

    Shi, Leyuan

    2009-01-01

    There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...

  1. Genomic prediction and genomic variance partitioning of daily and residual feed intake in pigs using Bayesian Power Lasso models

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, L. L. G.; Strathe, Anders Bjerring

    Improvement of feed efficiency is essential in pig breeding and selection for reduced residual feed intake (RFI) is an option. The study applied Bayesian Power LASSO (BPL) models with different power parameter to investigate genetic architecture, to predict genomic breeding values, and to partition...

  2. New parallel SOR method by domain partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Dexuan [Courant Inst. of Mathematical Sciences New York Univ., NY (United States)

    1996-12-31

    In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be "consistently ordered". So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is "consistently ordered". Finally, we demonstrate the parallel performance of the PSOR method on four different message passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
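
    For context, a serial numpy sketch of the red-black SOR variant the paper compares against, on the same 5-point Poisson problem; each color sweep is independent across points, which is what makes it parallelizable. The PSOR itself instead partitions the domain into strips with interprocessor communication, which this sketch does not show.

    ```python
    import numpy as np

    # Solve -laplace(u) = f on the unit square, u = 0 on the boundary,
    # 5-point stencil, red-black SOR.
    N = 64                                    # interior grid is N x N
    h = 1.0 / (N + 1)
    f = np.ones((N + 2, N + 2))               # RHS, array includes boundary
    u = np.zeros((N + 2, N + 2))
    omega = 2.0 / (1.0 + np.sin(np.pi * h))   # optimal SOR factor here

    i, j = np.meshgrid(np.arange(N + 2), np.arange(N + 2), indexing="ij")
    interior = (i > 0) & (i < N + 1) & (j > 0) & (j < N + 1)
    red   = interior & ((i + j) % 2 == 0)
    black = interior & ((i + j) % 2 == 1)

    for it in range(500):
        for mask in (red, black):             # each color updates in parallel
            # Gauss-Seidel value from the four neighbors, then SOR relaxation
            gs = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                         + np.roll(u, 1, 1) + np.roll(u, -1, 1) + h * h * f)
            u[mask] += omega * (gs[mask] - u[mask])

    print("u at center:", u[N // 2 + 1, N // 2 + 1])
    ```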

  3. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    ... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation ... and computational complexity. We also analyze the impact of transceiver filters on the sparseness of the channel response, and propose a dictionary design that permits the deployment of sparse inference methods in conditions of low bandwidth.

  4. Deep Learning and Bayesian Methods

    OpenAIRE

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve diffcult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such meth...

  5. Essays on portfolio choice with Bayesian methods

    OpenAIRE

    Kebabci, Deniz

    2007-01-01

    How investors should allocate assets to their portfolios in the presence of predictable components in asset returns is a question of great importance in finance. While early studies took the return generating process as given, recent studies have addressed issues such as parameter estimation and model uncertainty. My dissertation develops Bayesian methods for portfolio choice - and industry allocation in particular - under parameter and model uncertainty. The first chapter of my dissertation,...

  6. Bayesian Methods for Radiation Detection and Dosimetry

    International Nuclear Information System (INIS)

    Peter G. Groer

    2002-01-01

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high- and low-activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities, whose graphs show the uncertainty in pictorial form. We applied stochastic processes in a method to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate the radiation dose to an individual from radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities of compartmental activities for a two-compartment catenary model at different times. We also calculated the average activities and their standard deviations for a simple two-compartment model
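
    A grid-posterior sketch of the net-activity idea, assuming the standard counting model (gross counts Poisson in signal-plus-background, background counts Poisson in background alone) with flat priors; the counting data and grid bounds are invented, and this is not the report's own formulation.

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Hypothetical counting data
    n_g, t_g = 36, 60.0     # gross counts in 60 s
    n_b, t_b = 25, 120.0    # background counts in 120 s

    s = np.linspace(0.0, 1.5, 600)     # net count rate grid (1/s)
    b = np.linspace(0.01, 1.0, 500)    # background rate grid (1/s)
    S, B = np.meshgrid(s, b, indexing="ij")

    # Joint likelihood times flat prior, then marginalize the background rate
    like = poisson.pmf(n_g, (S + B) * t_g) * poisson.pmf(n_b, B * t_b)
    post_s = like.sum(axis=1)
    post_s /= np.trapz(post_s, s)

    mean = np.trapz(s * post_s, s)
    cdf = np.cumsum(post_s); cdf /= cdf[-1]
    lo, hi = np.interp([0.025, 0.975], cdf, s)
    print(f"net rate: mean {mean:.3f} 1/s, 95% interval ({lo:.3f}, {hi:.3f})")
    ```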

  7. Prior approval: the growth of Bayesian methods in psychology.

    Science.gov (United States)

    Andrews, Mark; Baguley, Thom

    2013-02-01

    Within the last few years, Bayesian methods of data analysis in psychology have proliferated. In this paper, we briefly review the history of the Bayesian approach to statistics, and consider the implications that Bayesian methods have for the theory and practice of data analysis in psychology.

  8. Partitions

    Directory of Open Access Journals (Sweden)

    Renata Biadacz

    2015-04-01

    Full Text Available The aim of the article is to present the scientific and didactic achievements of Polish accounting at the turn of the 19th century. The first part of the study addresses key issues in the development of accounting science and practice on Polish territory during the Partitions period. In the second part, attention is focused on the scientific achievements of J. Walicki, B. Wilmowski and P. Ciompa. The third part discusses the question of Polish accounting handbooks at the turn of the 19th century, as well as the state of accounting practice at that time and problems in the development of professional periodicals on Polish territory during the Partitions. The last part focuses on various perspectives on the nature of accounting in handbooks from the late 19th and early 20th century. The research method applied is a literature study based on selected textbooks and scientific papers published in Polish in the late 19th and early 20th centuries. The article provides a synthetic overview of the state of accounting knowledge and professional accounting periodicals on Polish soil during the Partitions, which had an impact on the further development of accounting in Poland.

  9. parallelMCMCcombine: an R package for bayesian methods for big data and analytics.

    Science.gov (United States)

    Miroshnikov, Alexey; Conlon, Erin M

    2014-01-01

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.
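
    The package itself is R; as a language-agnostic illustration, here is a numpy sketch of one combination rule of the kind it implements, consensus-style precision-weighted averaging of aligned subposterior draws. This is not the package's API, and the subposterior samples are faked.

    ```python
    import numpy as np

    def consensus_combine(subsamples):
        """Combine M subposterior sample arrays (each draws x dim) by
        precision-weighted averaging of the i-th draw from each machine."""
        weights = [np.linalg.inv(np.cov(s, rowvar=False)) for s in subsamples]
        w_sum_inv = np.linalg.inv(np.sum(weights, axis=0))
        combined = sum(w @ s.T for w, s in zip(weights, subsamples))
        return (w_sum_inv @ combined).T

    rng = np.random.default_rng(0)
    # Fake subposterior draws from three machines (2-d parameter)
    subs = [rng.multivariate_normal([1.0 + 0.1 * m, -2.0],
                                    np.eye(2) * (0.5 + 0.2 * m),
                                    size=5000) for m in range(3)]
    theta = consensus_combine(subs)
    print("combined posterior mean:", theta.mean(axis=0))
    ```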

  10. Handbook of Bayesian reliability estimation methods

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waller, R.A.

    1976-11-01

    Bayesian reliability estimation methods are summarized in a handbook format for convenient use by reliability practitioners. The methods given consider both attribute test data based on a binomial sampling distribution and a beta prior, as well as variables test data from an exponential sampling distribution and a gamma prior. Classical, Bayes, and empirical Bayes methods are all considered. In addition, the sample test data can arise from either an item-censored life test, either with or without the replacement of failed items as they occur, or from a time-truncated life test with replacement. Real-data examples using nuclear reactor component failure data are used to illustrate each of the methods presented
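
    The attribute-data case the handbook describes (binomial sampling with a beta prior) is conjugate, so the posterior is available in closed form. A minimal sketch with invented test results:

    ```python
    from scipy import stats

    a0, b0 = 1.0, 1.0          # Beta prior (uniform here; set from prior data)
    n, failures = 50, 2        # hypothetical attribute test results
    s = n - failures           # successes

    # Conjugate update: posterior is Beta(a0 + successes, b0 + failures)
    post = stats.beta(a0 + s, b0 + failures)
    print(f"posterior mean reliability: {post.mean():.4f}")
    print(f"90% credible interval: ({post.ppf(0.05):.4f}, {post.ppf(0.95):.4f})")
    ```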

  11. Bayesian statistics and Monte Carlo methods

    Science.gov (United States)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables, whereas they are fixed quantities in traditional statistics, which is not founded on Bayes' theorem. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable amount of derivatives to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
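
    A sketch of the error-propagation application described above, with invented measurement values: draw from the input distribution, apply the nonlinear transform directly, and estimate the expectation and covariance of the result without computing any derivatives.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Random input vector: a measured (distance, angle) pair with covariance
    mean = np.array([100.0, np.deg2rad(30.0)])
    cov = np.diag([0.5**2, np.deg2rad(0.1)**2])

    draws = rng.multivariate_normal(mean, cov, size=200000)
    # Nonlinear transform to (x, y) coordinates: no linearization needed
    x = draws[:, 0] * np.cos(draws[:, 1])
    y = draws[:, 0] * np.sin(draws[:, 1])
    out = np.column_stack([x, y])

    print("expectation:", out.mean(axis=0))
    print("covariance:\n", np.cov(out, rowvar=False))
    ```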

  12. Diet reconstruction and resource partitioning of a Caribbean marine mesopredator using stable isotope bayesian modelling.

    Directory of Open Access Journals (Sweden)

    Alexander Tilley

    Full Text Available The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of δ15N and δ13C to model the diet composition of wild southern stingrays Dasyatis americana and compare their trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinus perezi on Glovers Reef Atoll, Belize. Bayesian stable isotope mixing models were used to investigate prey choice as well as viable diet-tissue discrimination factors for use with stingrays. Stingray δ15N values showed the greatest variation and a positive relationship with size, with an isotopic niche width approximately twice that of the sympatric shark species. Shark species exhibited comparatively restricted δ15N values and greater δ13C variation, with very little overlap of stingray niche space. Mixing models suggest bivalves and annelids are proportionally more important prey in the stingray diet than crustaceans and teleosts at Glovers Reef, in contrast to all but one published diet study using stomach contents from other locations. Incorporating gut contents information from the literature, we suggest diet-tissue discrimination factor values of Δ15N ≈ 2.7‰ and Δ13C ≈ 0.9‰ for stingrays in the absence of validation experiments. The wide trophic niche and lower trophic level exhibited by stingrays compared to sympatric sharks support their putative role as important base stabilisers in benthic systems, with the potential to absorb trophic perturbations through numerous opportunistic prey interactions.

  13. A conjugate gradient method for the spectral partitioning of graphs

    NARCIS (Netherlands)

    Kruyt, Nicolaas P.

    1997-01-01

    The partitioning of graphs is a frequently occurring problem in science and engineering. The spectral graph partitioning method is a promising heuristic method for this class of problems. Its main disadvantage is the large computing time required to solve a special eigenproblem. Here a simple and
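
    A small sketch of the spectral bisection step the abstract refers to: partition by the sign pattern of the Fiedler vector, the eigenvector of the second-smallest Laplacian eigenvalue. The eigenproblem, solved here by scipy's ARPACK wrapper, is exactly the expensive step that the paper attacks with a conjugate gradient method; the toy graph is two cliques joined by one edge.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    # Toy graph: two 10-node cliques joined by a single edge
    n = 20
    A = np.zeros((n, n))
    A[:10, :10] = 1; A[10:, 10:] = 1
    np.fill_diagonal(A, 0)
    A[9, 10] = A[10, 9] = 1

    L = sp.csr_matrix(np.diag(A.sum(1)) - A)      # graph Laplacian L = D - A
    vals, vecs = eigsh(L, k=2, which="SM")        # two smallest eigenpairs
    fiedler = vecs[:, np.argsort(vals)[1]]        # second-smallest eigenvector
    part = fiedler > np.median(fiedler)           # median split for balance
    print("partition:", part.astype(int))
    ```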

  14. Bayesian non- and semi-parametric methods and applications

    CERN Document Server

    Rossi, Peter

    2014-01-01

    This book reviews and develops Bayesian non-parametric and semi-parametric methods for applications in microeconometrics and quantitative marketing. Most econometric models used in microeconomics and marketing applications involve arbitrary distributional assumptions. As more data becomes available, a natural desire to provide methods that relax these assumptions arises. Peter Rossi advocates a Bayesian approach in which specific distributional assumptions are replaced with more flexible distributions based on mixtures of normals. The Bayesian approach can use either a large but fixed number

  15. Numerical Methods for Bayesian Inverse Problems

    KAUST Repository

    Ernst, Oliver

    2014-01-06

    We present recent results on Bayesian inversion for a groundwater flow problem with an uncertain conductivity field. In particular, we show how direct and indirect measurements can be used to obtain a stochastic model for the unknown. The main tool here is Bayes’ theorem which merges the indirect data with the stochastic prior model for the conductivity field obtained by the direct measurements. Further, we demonstrate how the resulting posterior distribution of the quantity of interest, in this case travel times of radionuclide contaminants, can be obtained by Markov Chain Monte Carlo (MCMC) simulations. Moreover, we investigate new, promising MCMC methods which exploit geometrical features of the posterior and which are suited to infinite dimensions.

  16. Comparative Study of Inference Methods for Bayesian Nonnegative Matrix Factorisation

    DEFF Research Database (Denmark)

    Brouwer, Thomas; Frellsen, Jes; Liò, Pietro

    2017-01-01

    In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values, and for finding patterns in the data. In particular, we consider Bayesian nonnegative variants of matrix factorisation and tri...

  17. Gas/Aerosol partitioning: a simplified method for global modeling

    NARCIS (Netherlands)

    Metzger, S.M.

    2000-01-01

    The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures,

  18. Approximation methods for efficient learning of Bayesian networks

    CERN Document Server

    Riggelsen, C

    2008-01-01

    This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.

  19. Gait Partitioning Methods: A Systematic Review

    Science.gov (United States)

    Taborri, Juri; Palermo, Eduardo; Rossi, Stefano; Cappa, Paolo

    2016-01-01

    In the last years, gait phase partitioning has come to be a challenging research topic due to its impact on several applications related to gait technologies. A variety of sensors can be used to feed algorithms for gait phase partitioning, mainly classifiable as wearable or non-wearable. Among wearable sensors, footswitches or foot pressure insoles are generally considered as the gold standard; however, to overcome some inherent limitations of the former, inertial measurement units have become popular in recent decades. Valuable results have been achieved also though electromyography, electroneurography, and ultrasonic sensors. Non-wearable sensors, such as opto-electronic systems along with force platforms, remain the most accurate system to perform gait analysis in an indoor environment. In the present paper we identify, select, and categorize the available methodologies for gait phase detection, analyzing advantages and disadvantages of each solution. Finally, we comparatively examine the obtainable gait phase granularities, the usable computational methodologies and the optimal sensor placements on the targeted body segments. PMID:26751449

  20. Radiation Source Mapping with Bayesian Inverse Methods

    Science.gov (United States)

    Hykes, Joshua Michael

    We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods to analyze these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP), since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint-flux discrete ordinates solutions, obtained in this work with the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model that maps the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true value. The chi-square value of the predicted source was within a factor of five of the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution
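
    A sketch of the Gaussian linear-inverse algebra only: with a linear forward model, Gaussian prior and Gaussian noise, the posterior mean and covariance are available in closed form. In the thesis the forward matrix comes from adjoint discrete ordinates solutions; here it is a random stand-in, and all dimensions and values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Forward model y = A s: rows = detectors, cols = source voxels
    n_det, n_src = 12, 40
    A = rng.uniform(0.0, 1.0, (n_det, n_src)) / n_src

    s_true = np.zeros(n_src); s_true[[8, 29]] = 5.0   # two point-like sources
    noise_sd = 0.01
    y = A @ s_true + rng.normal(0, noise_sd, n_det)

    # Prior s ~ N(0, tau2 I), noise ~ N(0, sig2 I): closed-form posterior
    tau2, sig2 = 4.0, noise_sd**2
    post_cov = np.linalg.inv(A.T @ A / sig2 + np.eye(n_src) / tau2)
    post_mean = post_cov @ A.T @ y / sig2
    post_sd = np.sqrt(np.diag(post_cov))              # per-voxel uncertainty

    top = np.argsort(post_mean)[-2:]
    print("strongest voxels:", top, "intensity ~", post_mean[top].round(2))
    ```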

  1. Advanced Bayesian Methods for Lunar Surface Navigation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project is the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with an...

  2. Advanced Bayesian Methods for Lunar Surface Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project will be the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with...

  3. Approximation methods for the partition functions of anharmonic systems

    Energy Technology Data Exchange (ETDEWEB)

    Lew, P.; Ishida, T.

    1979-07-01

    Analytical approximations for the classical, quantum mechanical and reduced partition functions of a diatomic molecule oscillating internally under the influence of the Morse potential have been derived, and their convergence has been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms, and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on the 2-D and 3-D model systems agree well with numerically exact calculations.
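
    For orientation, a direct numerical sum over the bound Morse levels, compared with the harmonic reference; this is the quantity the paper approximates analytically. The spectroscopic constants below are roughly HCl-like and chosen for illustration only.

    ```python
    import numpy as np

    kB = 0.695034800          # Boltzmann constant in cm^-1 / K
    T = 1000.0                # temperature, K

    # Spectroscopic constants (illustrative, roughly HCl-like):
    we, wexe = 2990.0, 52.8   # harmonic frequency and anharmonicity, cm^-1

    # Bound Morse levels: E(n) = we(n+1/2) - wexe(n+1/2)^2, n = 0..n_max
    n_max = int(we / (2.0 * wexe) - 0.5)
    n = np.arange(n_max + 1)
    E = we * (n + 0.5) - wexe * (n + 0.5) ** 2

    q_morse = np.sum(np.exp(-(E - E[0]) / (kB * T)))   # zero at the ground state
    x = np.exp(-we / (kB * T))
    q_harm = 1.0 / (1.0 - x)                           # harmonic reference
    print(f"levels: {n_max + 1}, q_Morse = {q_morse:.4f}, "
          f"q_harmonic = {q_harm:.4f}")
    ```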

  4. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders ... on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame ... the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, which is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology...

  5. The bootstrap and Bayesian bootstrap method in assessing bioequivalence

    International Nuclear Information System (INIS)

    Wan Jianping; Zhang Kongsheng; Chen Hui

    2009-01-01

    Parametric methods for assessing individual bioequivalence (IBE) rely on the assumption that the PK responses are normally distributed; a nonparametric alternative is the bootstrap. In 2001, the United States Food and Drug Administration (FDA) proposed a draft guidance. The purpose of this article is to evaluate the IBE between a test drug and a reference drug by the bootstrap and Bayesian bootstrap methods. We study the power of the bootstrap test procedures and of the parametric test procedures in FDA (2001), and find that the Bayesian bootstrap method performs best.
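
    The Bayesian bootstrap replaces resampling with random Dirichlet(1,...,1) weights on the observed data. A minimal sketch of the mechanics on an average-bioequivalence-style comparison with made-up log(AUC) values; the paper's actual target, individual bioequivalence, needs replicated designs beyond this toy.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    # Hypothetical log(AUC) values for test and reference formulations
    test = np.array([4.61, 4.48, 4.72, 4.55, 4.69, 4.52, 4.66, 4.58])
    ref  = np.array([4.57, 4.50, 4.63, 4.49, 4.71, 4.54, 4.60, 4.62])

    B = 20000
    # Bayesian bootstrap: Dirichlet weights instead of multinomial resampling
    w_t = rng.dirichlet(np.ones(len(test)), B)
    w_r = rng.dirichlet(np.ones(len(ref)), B)
    diff = w_t @ test - w_r @ ref     # posterior draws of the mean log-ratio

    inside = np.mean((diff > np.log(0.8)) & (diff < np.log(1.25)))
    lo, hi = np.percentile(diff, [2.5, 97.5])
    print(f"fraction of draws inside (log 0.8, log 1.25): {inside:.3f}")
    print(f"95% interval for the mean log-difference: ({lo:.4f}, {hi:.4f})")
    ```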

  6. Rationalizing method of replacement intervals by using Bayesian statistics

    International Nuclear Information System (INIS)

    Kasai, Masao; Notoya, Junichi; Kusakari, Yoshiyuki

    2007-01-01

    This study presents formulations for rationalizing the replacement intervals of equipment and/or parts that take into account the probability density functions (PDFs) of the parameters of failure distribution functions (FDFs), and compares the intervals optimized by our formulations with those from conventional formulations, which use only representative values of the FDF parameters instead of their full PDFs. The failure data are generated by Monte Carlo simulations, since real failure data were not available to us. The PDFs of the FDF parameters are obtained by the Bayesian method, and the representative values by likelihood estimation and the Bayesian method. We found that the method using the PDFs obtained by the Bayesian method yields longer replacement intervals than the one using only representative values of the parameters. (author)

  7. Numerical methods for Bayesian inference in the face of aging

    International Nuclear Information System (INIS)

    Clarotti, C.A.; Villain, B.; Procaccia, H.

    1996-01-01

    In recent years, much attention has been paid to Bayesian methods for risk assessment. Until now, these methods have been studied from a theoretical point of view. Researchers have been mainly interested in studying the effectiveness of Bayesian methods in handling rare events, and in debating the problem of priors and other philosophical issues. An aspect central to the Bayesian approach is numerical computation, because any safety/reliability problem, in a Bayesian frame, ends with a problem of numerical integration. This aspect has been neglected until now because most risk studies assumed the exponential model as the basic probabilistic model, and the existence of conjugate priors makes numerical integration unnecessary in this case. If aging is to be taken into account, no conjugate family is available and the use of numerical integration becomes compulsory. EDF (the French national electricity board) and ENEA (the Italian national committee for energy, new technologies and the environment) jointly carried out a research program aimed at developing quadrature methods suitable for Bayesian inference with underlying Weibull or gamma distributions. The paper will illustrate the main results achieved during the above research program and will discuss, via some sample cases, the performance of the numerical algorithms on data concerning the appearance of stress corrosion cracking in the tubes of steam generators of French PWR power plants. (authors)

  8. A Bayesian method for construction of Markov models to describe dynamics on various time-scales.

    Science.gov (United States)

    Rains, Emily K; Andersen, Hans C

    2010-10-14

    The dynamics of many biological processes of interest, such as the folding of a protein, are slow and complicated enough that a single molecular dynamics simulation trajectory of the entire process is difficult to obtain in any reasonable amount of time. Moreover, one such simulation may not be sufficient to develop an understanding of the mechanism of the process, and multiple simulations may be necessary. One approach to circumvent this computational barrier is the use of Markov state models. These models are useful because they can be constructed using data from a large number of shorter simulations instead of a single long simulation. This paper presents a new Bayesian method for the construction of Markov models from simulation data. A Markov model is specified by (τ,P,T), where τ is the mesoscopic time step, P is a partition of configuration space into mesostates, and T is an N(P)×N(P) transition rate matrix for transitions between the mesostates in one mesoscopic time step, where N(P) is the number of mesostates in P. The method presented here is different from previous Bayesian methods in several ways. (1) The method uses Bayesian analysis to determine the partition as well as the transition probabilities. (2) The method allows the construction of a Markov model for any chosen mesoscopic time-scale τ. (3) It constructs Markov models for which the diagonal elements of T are all equal to or greater than 0.5. Such a model will be called a "consistent mesoscopic Markov model" (CMMM). Such models have important advantages for providing an understanding of the dynamics on a mesoscopic time-scale. The Bayesian method uses simulation data to find a posterior probability distribution for (P,T) for any chosen τ. This distribution can be regarded as the Bayesian probability that the kinetics observed in the atomistic simulation data on the mesoscopic time-scale τ was generated by the CMMM specified by (P,T). An optimization algorithm is used to find the most
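
    A reduced sketch of one ingredient of such models: fixing the partition P and the lag τ, the rows of the transition matrix T get independent Dirichlet posteriors from the observed transition counts. The paper's method also infers P itself, which this toy, with an invented three-state trajectory, does not attempt.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def count_transitions(traj, n_states, lag=1):
        """Count observed mesostate transitions at the chosen lag."""
        C = np.zeros((n_states, n_states))
        for a, b in zip(traj[:-lag], traj[lag:]):
            C[a, b] += 1
        return C

    # Toy discrete trajectory over 3 mesostates (stand-in for clustered frames)
    true_T = np.array([[0.90, 0.08, 0.02],
                       [0.10, 0.85, 0.05],
                       [0.05, 0.15, 0.80]])
    traj = [0]
    for _ in range(5000):
        traj.append(rng.choice(3, p=true_T[traj[-1]]))

    C = count_transitions(np.array(traj), 3)
    # Row-wise Dirichlet posterior: T[i] | counts ~ Dirichlet(C[i] + prior)
    prior = 1.0 / 3.0
    T_draws = np.stack([rng.dirichlet(C[i] + prior, 1000) for i in range(3)],
                       axis=1)          # shape (draws, row, col)
    print("posterior mean T:\n", T_draws.mean(axis=0).round(3))
    print("posterior sd  T:\n", T_draws.std(axis=0).round(3))
    ```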

  9. Stochastic back analysis of permeability coefficient using generalized Bayesian method

    Directory of Open Access Journals (Sweden)

    Zheng Guilan

    2008-09-01

    Full Text Available Because the conventional deterministic back analysis of the permeability coefficient cannot reflect the uncertainties of parameters, including the hydraulic head at the boundary, the permeability coefficient, and the measured hydraulic head, a stochastic back analysis taking these parameter uncertainties into account was performed using the generalized Bayesian method. Based on the stochastic finite element method (SFEM) for a seepage field, the variable metric algorithm, and the generalized Bayesian method, formulas for stochastic back analysis of the permeability coefficient were derived. A case study of seepage analysis of a sluice foundation is used to illustrate the proposed method. The results indicate that, with the generalized Bayesian method, which considers the uncertainties of the measured hydraulic head, the permeability coefficient, and the hydraulic head at the boundary, both the mean and the standard deviation of the permeability coefficient can be obtained, and the standard deviation is smaller than that obtained by the conventional Bayesian method. Therefore, the present method is valid and applicable.

  10. Bayesian methods to estimate urban growth potential

    Science.gov (United States)

    Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.

    2017-01-01

    Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.

  11. Application of Bayesian Methods for Detecting Fraudulent Behavior on Tests

    Science.gov (United States)

    Sinharay, Sandip

    2018-01-01

    Producers and consumers of test scores are increasingly concerned about fraudulent behavior before and during the test. There exist several statistical or psychometric methods for detecting fraudulent behavior on tests. This paper provides a review of the Bayesian approaches among them. Four hitherto-unpublished real data examples are provided to…

  12. Development of Bayesian stock assessment methods for Namibian ...

    African Journals Online (AJOL)

    The application of Bayesian stock assessment methods in the management of Namibian orange roughy Hoplosthethus atlanticus within the 200 mile EEZ of Namibia is reviewed. Time-series of relative abundance are short and their reliability in indicating abundance trends is uncertain. The development of informative prior ...

  13. Bayesian methods in clinical trials: a Bayesian analysis of ECOG trials E1684 and E1690

    Directory of Open Access Journals (Sweden)

    Ibrahim Joseph G

    2012-11-01

    Full Text Available Abstract Background E1684 was the pivotal adjuvant melanoma trial for the establishment of high-dose interferon (IFN) as effective therapy for high-risk melanoma patients. E1690 was an intriguing effort to corroborate E1684, and the differences between the outcomes of these trials have embroiled the field in controversy over the past several years. The analyses of E1684 and E1690 were carried out separately when the results were published, and there were no further attempts to perform a single analysis of the combined trials. Method In this paper, we consider such a joint analysis by carrying out a Bayesian analysis of these two trials, thus providing a consistent and coherent methodology for combining the results from the two trials. Results The Bayesian analysis using power priors provided a more coherent, flexible and potentially more accurate analysis than a separate analysis of these data or a frequentist analysis of these data. The methodology provides a consistent framework for carrying out a single unified analysis by combining data from two or more studies. Conclusions Such Bayesian analyses can be crucial in situations where the results from two theoretically identical trials yield somewhat conflicting or inconsistent results.

  14. The Bayesian Power Imaging (BPI) method for magnetic source imaging

    OpenAIRE

    Hasson, R.; Swithenby, S. J.

    2001-01-01

    In the biomagnetic inverse problem the main interest is the activation of a region of interest, i.e. the power dissipated in that region. The Bayesian power imaging method (BPI) provides a quantified probability that the activation of a region of interest is above a given threshold. This paper introduces the method and derives the equations used. The method is illustrated in this paper using both experimental and simulated data.

  15. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown
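
    As a concrete illustration of the set partitioning model itself (not of the thesis's solution methods), the sketch below solves a toy crew-scheduling instance: binary variables select duties so that every task is covered exactly once at minimum cost. The data are invented, and PuLP is used here only as a convenient open-source modelling layer.

```python
import pulp

# Toy crew-scheduling instance: each task must be covered exactly once
# by one chosen duty (column); each duty has a cost.
tasks = ["t1", "t2", "t3", "t4"]
duties = {            # duty -> (covered tasks, cost)
    "d1": ({"t1", "t2"}, 5),
    "d2": ({"t2", "t3"}, 4),
    "d3": ({"t3", "t4"}, 6),
    "d4": ({"t1"}, 2),
    "d5": ({"t4"}, 3),
}

prob = pulp.LpProblem("set_partitioning", pulp.LpMinimize)
x = {d: pulp.LpVariable(f"x_{d}", cat="Binary") for d in duties}
prob += pulp.lpSum(cost * x[d] for d, (_, cost) in duties.items())
for t in tasks:  # each task lies in exactly one selected duty
    prob += pulp.lpSum(x[d] for d, (cov, _) in duties.items() if t in cov) == 1
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([d for d in duties if x[d].value() == 1])  # -> ['d2', 'd4', 'd5'] (cost 9)
```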

  16. Application of an efficient Bayesian discretization method to biomedical data

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Vanathi

    2011-07-01

    Full Text Available Abstract Background Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD) method for optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components, namely, a Bayesian score to evaluate discretizations and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI) discretization method, which is commonly used for discretization. Results On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performances of the C4.5 classifier and the naïve Bayes classifier were statistically significantly better when the predictor variables were discretized using EBD over FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI and produced slightly more complex discretizations than FI. Conclusions On a range of biomedical datasets, a Bayesian discretization method (EBD) yielded better classification performance and stability but was less robust than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.

  17. A Bayesian statistical method for particle identification in shower counters

    International Nuclear Information System (INIS)

    Takashimizu, N.; Kimura, A.; Shibata, A.; Sasaki, T.

    2004-01-01

    We report an attempt at identifying particles using a Bayesian statistical method. We have developed the mathematical model and software for this purpose. We tried to identify electrons and charged pions in shower counters using this method. We designed an ideal shower counter and studied the identification efficiency using Monte Carlo simulation based on Geant4. Without any other information, e.g. the charges of the particles as given by tracking detectors, we achieved 95% identification of both particle types.
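
    A minimal illustration of the underlying idea (not the authors' model): assign class posteriors with Bayes' rule from assumed Gaussian likelihoods over a single shower observable. The feature, its distributions and the priors below are all hypothetical.

```python
from scipy.stats import norm

# Hypothetical shower feature: fraction of energy deposited in the first
# few radiation lengths, modelled as Gaussian for each particle class.
classes = {
    "electron": norm(loc=0.75, scale=0.08),
    "pion":     norm(loc=0.45, scale=0.12),
}
priors = {"electron": 0.5, "pion": 0.5}

def posterior(x):
    """Bayes' rule: P(class | x) is proportional to P(x | class) P(class)."""
    like = {c: d.pdf(x) * priors[c] for c, d in classes.items()}
    z = sum(like.values())
    return {c: v / z for c, v in like.items()}

print(posterior(0.70))  # dominated by the electron hypothesis
```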

  18. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    Science.gov (United States)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.

  19. Genomic prediction and genomic variance partitioning of daily and residual feed intake in pigs using Bayesian Power Lasso models

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, L. L. G.; Strathe, Anders Bjerring

    …of different power parameters had no effect on predictive ability. Partitioning of genomic variance showed that SNP groups either by position (intron, exon, downstream, upstream and 5’UTR) or by function (missense and protein-altering) had similar average explained variance per SNP, except that 3’UTR had … genomic variance for RFI and daily feed intake (DFI). A total of 1272 Duroc pigs had both genotypic and phenotypic records for these traits. Significant SNPs were detected on chromosome 1 (SSC 1) and SSC 14 for RFI and on SSC 1 for DFI. BPL models had similar accuracy and bias as the GBLUP method but use…

  20. Predicting land cover using GIS, Bayesian and evolutionary algorithm methods.

    Science.gov (United States)

    Aitkenhead, M J; Aalders, I H

    2009-01-01

    Modelling land cover change from existing land cover maps is a vital requirement for anyone wishing to understand how the landscape may change in the future. In order to test any land cover change model, existing data must be used. However, often it is not known which data should be applied to the problem, or whether relationships exist within and between complex datasets. Here we have developed and tested a model that applied evolutionary processes to Bayesian networks. The model was developed and tested on a dataset containing land cover information and environmental data, in order to show that decisions about which datasets should be used could be made automatically. Bayesian networks are amenable to evolutionary methods as they can be easily described using a binary string to which crossover and mutation operations can be applied. The method, developed to allow comparison with standard Bayesian network development software, was proved capable of carrying out a rapid and effective search of the space of possible networks in order to find an optimal or near-optimal solution for the selection of datasets that have causal links with one another. Comparison of land cover mapping in the North-East of Scotland was made with a commercial Bayesian software package, with the evolutionary method being shown to provide greater flexibility in its ability to adapt to incorporate/utilise available evidence/knowledge and develop effective and accurate network structures, at the cost of requiring additional computer programming skills. The dataset used to develop the models included GIS-based data taken from the Land Cover for Scotland 1988 (LCS88), Land Capability for Forestry (LCF), Land Capability for Agriculture (LCA), the soil map of Scotland and additional climatic variables.

  1. Binary recursive partitioning: background, methods, and application to psychology.

    Science.gov (United States)

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
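
    The paper supplies R code; for readers working in Python, a roughly equivalent minimal sketch with scikit-learn on invented data is shown below. The tree is grown to predict a binary response and printed as a rule list, and its quality is judged by predictive accuracy rather than significance tests, matching the BRP philosophy.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Toy psychology-style data: a binary response driven by two predictors
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)
print(export_text(tree, feature_names=["pred1", "pred2"]))  # the decision tree
print("training accuracy:", tree.score(X, y))
```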

  2. An Overview of Bayesian Methods for Neural Spike Train Analysis

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2013-01-01

    Full Text Available Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.

  3. A Bayesian method for assessing multiscale species-habitat relationships

    Science.gov (United States)

    Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.

    2017-01-01

    Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces computation time of studies. We present a simulation study to validate our method and apply our method to a case-study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and

  4. Partition wall structure in spent fuel storage pool and construction method for the partition wall

    International Nuclear Information System (INIS)

    Izawa, Masaaki

    1998-01-01

    A partitioning wall for forming cask pits as radiation shielding regions by partitioning inside of a spent fuel storage pool is prepared by covering both surface of a concrete body by shielding metal plates. The metal plate comprises opposed plate units integrated by welding while sandwiching a metal frame as a reinforcing material for the concrete body, the lower end of the units is connected to a floor of a pool by fastening members, and concrete is set while using the metal plate of the units as a frame to form the concrete body. The shielding metal plate has a double walled structure formed by welding a lining plate disposed on the outer surface of the partition wall and a shield plate disposed to the inner side. Then the term for construction can be shortened, and the capacity for storing spent fuels can be increased. (N.H.)

  5. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%) but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream

  6. Quantum mechanical fragment methods based on partitioning atoms or partitioning coordinates.

    Science.gov (United States)

    Wang, Bo; Yang, Ke R; Xu, Xuefei; Isegawa, Miho; Leverentz, Hannah R; Truhlar, Donald G

    2014-09-16

    atoms for capping dangling bonds, and we have shown that they can greatly improve the accuracy. Finally we present a new approach that goes beyond QM/MM by combining the convenience of molecular mechanics with the accuracy of fitting a potential function to electronic structure calculations on a specific system. To make the latter practical for systems with a large number of degrees of freedom, we developed a method to interpolate between local internal-coordinate fits to the potential energy. A key issue for the application to large systems is that rather than assigning the atoms or monomers to fragments, we assign the internal coordinates to reaction, secondary, and tertiary sets. Thus, we make a partition in coordinate space rather than atom space. Fits to the local dependence of the potential energy on tertiary coordinates are arrayed along a preselected reaction coordinate at a sequence of geometries called anchor points; the potential energy function is called an anchor points reactive potential. Electrostatically embedded fragment methods and the anchor points reactive potential, because they are based on treating an entire system by quantum mechanical electronic structure methods but are affordable for large and complex systems, have the potential to open new areas for accurate simulations where combined QM/MM methods are inadequate.

  7. ObStruct: A Method to Objectively Analyse Factors Driving Population Structure Using Bayesian Ancestry Profiles

    Science.gov (United States)

    Gayevskiy, Velimir; Klaere, Steffen; Knight, Sarah; Goddard, Matthew R.

    2014-01-01

    Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods are ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin) correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and crucially determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses. PMID:24416362

  8. ObStruct: a method to objectively analyse factors driving population structure using Bayesian ancestry profiles.

    Directory of Open Access Journals (Sweden)

    Velimir Gayevskiy

    Full Text Available Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods are ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin) correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and crucially determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.

  9. An approximate method for Bayesian entropy estimation for a discrete random variable.

    Science.gov (United States)

    Yokota, Yasunari

    2004-01-01

    This article proposes an approximate Bayesian entropy estimator for a discrete random variable. An entropy estimator that achieves least square error is obtained through Bayesian estimation of the occurrence probabilities of each value taken by the discrete random variable. This Bayesian entropy estimator requires a large amount of calculation if the random variable takes numerous sorts of values. Therefore, the present article proposes a practical method for calculating a Bayesian entropy estimate; the proposed method utilizes an approximation of the entropy function by a truncated Taylor series. Numerical experiments demonstrate that the proposed entropy estimation method improves the estimation precision of entropy remarkably in comparison to the conventional entropy estimation method.
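
    A small sketch of this general idea, under assumed details (a symmetric Dirichlet prior and a second-order truncation): the posterior-mean entropy is approximated by expanding each term -p log p around the posterior mean of p. This illustrates the truncated-Taylor approach, not the article's exact estimator.

```python
import numpy as np

def bayes_entropy(counts, alpha=1.0):
    """Approximate posterior-mean entropy (in nats) of a discrete variable
    under a symmetric Dirichlet(alpha) prior, with -p*log(p) expanded to
    second order around the posterior means."""
    counts = np.asarray(counts, dtype=float)
    K, n = counts.size, counts.sum()
    a0 = n + K * alpha                    # posterior concentration
    m = (counts + alpha) / a0             # posterior means of p_i
    var = m * (1 - m) / (a0 + 1)          # Dirichlet marginal variances
    # E[-p log p] ~ -m log m - var / (2m), since d^2(-p log p)/dp^2 = -1/p
    return float(np.sum(-m * np.log(m) - var / (2 * m)))

print(bayes_entropy([12, 7, 3, 1]))  # plug-in estimate with a small-sample correction
```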

  10. Simple optimization method for partitioning purification of hydrogen networks

    Directory of Open Access Journals (Sweden)

    W.M. Shehata

    2015-03-01

    Full Text Available The Egyptian petroleum fuel market is growing rapidly nowadays. Fuels must meet the standard specifications of the Egyptian General Petroleum Corporation (EGPC), which require lower-sulfur gasoline and diesel. The fuels must therefore be deeply hydrotreated, which increases hydrogen (H2) consumption. Along with the increased H2 consumption for deeper hydrotreating, additional H2 is needed for processing heavier and higher-sulfur crude slates, especially in the hydrocracking process, as well as in hydrotreating units, isomerization units and lubricant plants. Purification technology is used to increase the amount of recycled hydrogen: if the amount of recycled hydrogen is increased, the amount of hydrogen sent to the furnaces with the off-gas decreases. In this work, the optimization methods of El Halwagi et al. (2003) and El Halwagi (2012), which are used for recycle/reuse integration systems, have been extended to the partitioning purification of hydrogen networks to minimize hydrogen consumption and hydrogen discharge. An actual case study and two case studies from the literature are solved to illustrate the proposed method.
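
    A toy numerical illustration of the recycle/reuse idea (not El Halwagi's targeting procedure): minimize fresh hydrogen to a single sink that must receive a fixed flow at a minimum purity, blending fresh make-up with a limited, lower-purity recycle stream. All flows and purities below are invented.

```python
from scipy.optimize import linprog

# Decision variables: [fresh, recycle] flows to one hydrotreater sink.
# Minimise fresh-hydrogen consumption subject to flow and purity demands.
c = [1.0, 0.0]                       # objective: fresh flow only
A_eq = [[1.0, 1.0]]                  # total flow delivered to the sink
b_eq = [100.0]                       # sink demand (e.g. kmol/h)
A_ub = [[-0.99, -0.75]]              # purity blend: 0.99*f + 0.75*r >= 0.9*100
b_ub = [-90.0]
bounds = [(0, None), (0, 60.0)]      # recycle limited by off-gas availability
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)                         # fresh = 62.5, recycle = 37.5
```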

  11. A dynamic discretization method for reliability inference in Dynamic Bayesian Networks

    International Nuclear Information System (INIS)

    Zhu, Jiandao; Collette, Matthew

    2015-01-01

    The material and modeling parameters that drive structural reliability analysis for marine structures are subject to a significant uncertainty. This is especially true when time-dependent degradation mechanisms such as structural fatigue cracking are considered. Through inspection and monitoring, information such as crack location and size can be obtained to improve these parameters and the corresponding reliability estimates. Dynamic Bayesian Networks (DBNs) are a powerful and flexible tool to model dynamic system behavior and update reliability and uncertainty analysis with life cycle data for problems such as fatigue cracking. However, a central challenge in using DBNs is the need to discretize certain types of continuous random variables to perform network inference while still accurately tracking low-probability failure events. Most existing discretization methods focus on getting the overall shape of the distribution correct, with less emphasis on the tail region. Therefore, a novel scheme is presented specifically to estimate the likelihood of low-probability failure events. The scheme is an iterative algorithm which dynamically partitions the discretization intervals at each iteration. Through applications to two stochastic crack-growth example problems, the algorithm is shown to be robust and accurate. Comparisons are presented between the proposed approach and existing methods for the discretization problem. - Highlights: • A dynamic discretization method is developed for low-probability events in DBNs. • The method is compared to existing approaches on two crack growth problems. • The method is shown to improve on existing methods for low-probability events

  12. A Bayesian method for detecting pairwise associations in compositional data.

    Directory of Open Access Journals (Sweden)

    Emma Schwager

    2017-11-01

    Full Text Available Compositional data consist of vectors of proportions normalized to a constant sum from a basis of unobserved counts. The sum constraint makes inference on correlations between unconstrained features challenging due to the information loss from normalization. However, such correlations are of long-standing interest in fields including ecology. We propose a novel Bayesian framework (BAnOCC: Bayesian Analysis of Compositional Covariance to estimate a sparse precision matrix through a LASSO prior. The resulting posterior, generated by MCMC sampling, allows uncertainty quantification of any function of the precision matrix, including the correlation matrix. We also use a first-order Taylor expansion to approximate the transformation from the unobserved counts to the composition in order to investigate what characteristics of the unobserved counts can make the correlations more or less difficult to infer. On simulated datasets, we show that BAnOCC infers the true network as well as previous methods while offering the advantage of posterior inference. Larger and more realistic simulated datasets further showed that BAnOCC performs well as measured by type I and type II error rates. Finally, we apply BAnOCC to a microbial ecology dataset from the Human Microbiome Project, which in addition to reproducing established ecological results revealed unique, competition-based roles for Proteobacteria in multiple distinct habitats.

  13. A Bayesian method for detecting pairwise associations in compositional data.

    Science.gov (United States)

    Schwager, Emma; Mallick, Himel; Ventz, Steffen; Huttenhower, Curtis

    2017-11-01

    Compositional data consist of vectors of proportions normalized to a constant sum from a basis of unobserved counts. The sum constraint makes inference on correlations between unconstrained features challenging due to the information loss from normalization. However, such correlations are of long-standing interest in fields including ecology. We propose a novel Bayesian framework (BAnOCC: Bayesian Analysis of Compositional Covariance) to estimate a sparse precision matrix through a LASSO prior. The resulting posterior, generated by MCMC sampling, allows uncertainty quantification of any function of the precision matrix, including the correlation matrix. We also use a first-order Taylor expansion to approximate the transformation from the unobserved counts to the composition in order to investigate what characteristics of the unobserved counts can make the correlations more or less difficult to infer. On simulated datasets, we show that BAnOCC infers the true network as well as previous methods while offering the advantage of posterior inference. Larger and more realistic simulated datasets further showed that BAnOCC performs well as measured by type I and type II error rates. Finally, we apply BAnOCC to a microbial ecology dataset from the Human Microbiome Project, which in addition to reproducing established ecological results revealed unique, competition-based roles for Proteobacteria in multiple distinct habitats.

  14. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing to decision analysis. To this should be added the modern progress in IT, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.

  15. Bayesian methods in the search for MH370

    CERN Document Server

    Davey, Sam; Holland, Ian; Rutten, Mark; Williams, Jason

    2016-01-01

    This book demonstrates how nonlinear/non-Gaussian Bayesian time series estimation methods were used to produce a probability distribution of potential MH370 flight paths. It provides details of how the probabilistic models of aircraft flight dynamics, satellite communication system measurements, environmental effects and radar data were constructed and calibrated. The probability distribution was used to define the search zone in the southern Indian Ocean. The book describes particle-filter based numerical calculation of the aircraft flight-path probability distribution and validates the method using data from several of the involved aircraft’s previous flights. Finally it is shown how the Reunion Island flaperon debris find affects the search probability distribution.

  16. QUEST+: A general multidimensional Bayesian adaptive psychometric method.

    Science.gov (United States)

    Watson, Andrew B

    2017-03-01

    QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.
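
    The core loop of such a procedure is compact. The sketch below is a one-parameter, hedged illustration of the QUEST+ idea: maintain a gridded posterior over a psychometric parameter and choose each trial's stimulus to minimize the expected posterior entropy. It is not Watson's full multidimensional implementation; the psychometric function and all constants are assumptions.

```python
import numpy as np

thresholds = np.linspace(-2, 2, 81)      # candidate threshold values
stimuli = np.linspace(-2, 2, 41)         # available stimulus intensities
prior = np.ones_like(thresholds) / thresholds.size

def p_correct(stim, thr, slope=3.0, guess=0.5, lapse=0.02):
    """Assumed logistic psychometric function (2AFC-style)."""
    p = 1.0 / (1.0 + np.exp(-slope * (stim - thr)))
    return guess + (1.0 - guess - lapse) * p

def expected_entropy(s, prior):
    """Expected posterior entropy after presenting stimulus s."""
    p = p_correct(s, thresholds)
    h = 0.0
    for lik in (p, 1.0 - p):             # the two possible trial outcomes
        post = prior * lik
        pz = post.sum()                  # predictive probability of the outcome
        post = post / pz
        h += pz * -np.sum(post * np.log(post + 1e-12))
    return h

rng = np.random.default_rng(2)
true_thr = 0.3                           # simulated observer
for trial in range(40):
    s = min(stimuli, key=lambda s: expected_entropy(s, prior))
    correct = rng.random() < p_correct(s, true_thr)
    lik = p_correct(s, thresholds)
    prior = prior * (lik if correct else 1.0 - lik)   # Bayes update
    prior /= prior.sum()
print("threshold estimate:", thresholds[np.argmax(prior)])
```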

  17. THz-SAR Vibrating Target Imaging via the Bayesian Method

    Directory of Open Access Journals (Sweden)

    Bin Deng

    2017-01-01

    Full Text Available Target vibration bears important information for target recognition, and terahertz, due to significant micro-Doppler effects, has strong advantages for remotely sensing vibrations. In this paper, the imaging characteristics of vibrating targets with THz-SAR are first analyzed. An improved algorithm based on an excellent Bayesian approach, that is, the expansion-compression variance-component (ExCoV) method, has been proposed for reconstructing scattering coefficients of vibrating targets, which provides more robust and efficient initialization and overcomes the deficiencies of sidelobes as well as artifacts arising from the traditional correlation method. A real vibration measurement experiment of idle cars was performed to validate the range model. Simulated SAR data of vibrating targets and a tank model in a real background at 220 GHz show good performance at low SNR. Rapidly evolving high-power terahertz devices will offer viable THz-SAR application at a distance of several kilometers.

  18. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noise which possibly contains a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type is developed by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.

  19. Axiomatic method of partitions in the theory of Noebeling spaces. I. Improvement of partition connectivity

    International Nuclear Information System (INIS)

    Ageev, S M

    2007-01-01

    The Noebeling space N k 2k+1 , a k-dimensional analogue of the Hilbert space, is considered; this is a topologically complete separable (that is, Polish) k-dimensional absolute extensor in dimension k (that is, AE(k)) and a strongly k-universal space. The conjecture that the above-listed properties characterize the Noebeling space N k 2k+1 in an arbitrary finite dimension k is proved. In the first part of the paper a full axiom system of the Noebeling spaces is presented and the problem of the improvement of a partition connectivity is solved on its basis. Bibliography: 29 titles.

  20. The characterization of petroleum contamination in heterogeneous media using partitioning tracer method

    International Nuclear Information System (INIS)

    Kim, E.; Rhee, S.; Park, J.

    2009-01-01

    A partitioning tracer method for characterizing petroleum contamination in heterogeneous media was discussed. The average saturation level of nonaqueous phase liquids (NAPLs) was calculated by comparing the transport of the partitioning tracers to that of a conservative tracer. The NAPL saturation level represented a continuous value throughout the contaminated site. Experiments were conducted in a 2-D sandbox divided into 4 parts using different-sized sands. Soils were contaminated with a mixture of kerosene and diesel. Partitioning tracer tests were conducted both before and after contamination. A partitioning batch test was conducted to determine the partition coefficient (K) of the tracer between the NAPL and water. Breakthrough curves were obtained, and a retardation factor (R) was calculated. Results of the study showed that the calculated NAPL saturation was in good agreement with the determined values. It was concluded that the partitioning tracer test is an accurate method of locating and quantifying NAPLs.
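
    The core relation is simple enough to state in a few lines. Under the usual linear-partitioning assumption, the retardation factor R of a partitioning tracer relative to a conservative one relates to the average NAPL saturation S_N via R = 1 + K*S_N/(1 - S_N); the numbers below are hypothetical.

```python
def napl_saturation(retardation, k_partition):
    """Average NAPL saturation S_N from a partitioning tracer test:
    R = 1 + K * S_N / (1 - S_N)  =>  S_N = (R - 1) / (R - 1 + K)."""
    return (retardation - 1.0) / (retardation - 1.0 + k_partition)

# Retardation from the first temporal moments of the breakthrough curves:
# R = t_partitioning / t_conservative (hypothetical mean arrival times).
R = 1.8 / 1.2
print(napl_saturation(R, k_partition=5.0))  # ~0.091
```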

  1. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  2. Bayesian signal processing classical, modern, and particle filtering methods

    CERN Document Server

    Candy, James V

    2016-01-01

    This book aims to give readers a unified Bayesian treatment starting from the basics (Bayes' rule) to the more advanced (Monte Carlo sampling), evolving to the next-generation model-based techniques (sequential Monte Carlo sampling). This next edition incorporates a new chapter on "Sequential Bayesian Detection," a new section on "Ensemble Kalman Filters" as well as an expansion of Case Studies that detail Bayesian solutions for a variety of applications. These studies illustrate Bayesian approaches to real-world problems incorporating detailed particle filter designs, adaptive particle filters and sequential Bayesian detectors. In addition to these major developments a variety of sections are expanded to "fill in the gaps" of the first edition. Here metrics for particle filter (PF) designs with emphasis on classical "sanity testing" lead to ensemble techniques as a basic requirement for performance analysis. The expansion of information theory metrics and their application to PF designs is fully developed an...

  3. A general method to study equilibrium partitioning of macromolecules

    DEFF Research Database (Denmark)

    the equilibrium partition coefficient (pore-to-bulk concentration ratio) and the concentration profile inside the confining geometry. The algorithm involves two steps. First, certain characteristic structure properties of the studied macromolecule are obtained by sampling its configuration space, and second those...

  4. Approximate Bayesian computation (ABC) coupled with Bayesian model averaging method for estimating mean and standard deviation

    OpenAIRE

    Kwon, Deukwoo; Reis, Isildinha M.

    2016-01-01

    Background: We proposed approximate Bayesian computation with single distribution selection (ABC-SD) for estimating mean and standard deviation from other reported summary statistics. The ABC-SD generates pseudo data from a single parametric distribution thought to be the true distribution of underlying study data. This single distribution is either an educated guess, or it is selected via model selection using posterior probability criterion for testing two or more candidate distributions. F...
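
    A hedged sketch of the rejection-ABC ingredient of such an approach (the single-distribution selection step is omitted): propose (mu, sigma) from broad priors, simulate pseudo-data from an assumed normal model, and keep draws whose summary statistics match the reported ones. The observed summaries, priors and tolerance below are all invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Reported summaries of the (unseen) study data: sample median and range.
obs_median, obs_range, n = 10.2, 7.9, 50

def abc_normal(n_draws=100_000, tol=0.5):
    """Rejection ABC: accept (mu, sigma) whose pseudo-data reproduce the
    reported median and range to within `tol`."""
    kept = []
    for _ in range(n_draws):
        mu = rng.uniform(0, 20)
        sigma = rng.uniform(0.1, 10)
        x = rng.normal(mu, sigma, size=n)
        d = np.hypot(np.median(x) - obs_median, np.ptp(x) - obs_range)
        if d < tol:
            kept.append((mu, sigma))
    return np.array(kept)

post = abc_normal()
print(post.mean(axis=0))  # posterior-mean estimates of (mu, sigma)
```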

  5. Data partitions, Bayesian analysis and phylogeny of the zygomycetous fungal family Mortierellaceae, inferred from nuclear ribosomal DNA sequences.

    Directory of Open Access Journals (Sweden)

    Tamás Petkovits

    Full Text Available Although the fungal order Mortierellales constitutes one of the largest classical groups of Zygomycota, its phylogeny is poorly understood and no modern taxonomic revision is currently available. In the present study, 90 type and reference strains were used to infer a comprehensive phylogeny of Mortierellales from the sequence data of the complete ITS region and the LSU and SSU genes, with special attention to the monophyly of the genus Mortierella. Out of 15 alternative partitioning strategies compared on the basis of Bayes factors, the one with the highest number of partitions was found optimal (with mixture models yielding the best likelihood and tree length values), implying a higher complexity of evolutionary patterns in the ribosomal genes than generally recognized. Modeling the ITS1, 5.8S, and ITS2 loci separately improved model fit significantly as compared to treating all as one and the same partition. Further, within-partition mixture models suggest that not only do the SSU, LSU and ITS regions evolve under qualitatively and/or quantitatively different constraints, but that significant heterogeneity can be found within these loci also. The phylogenetic analysis indicated that the genus Mortierella is paraphyletic with respect to the genera Dissophora, Gamsiella and Lobosporangium, and the resulting phylogeny contradicts the previous, morphology-based sectional classification of Mortierella. Based on tree structure and phenotypic traits, we recognize 12 major clades, for which we attempt to summarize phenotypic similarities. M. longicollis is closely related to the outgroup taxon Rhizopus oryzae, suggesting that it belongs to the Mucorales. Our results demonstrate that traits used in previous classifications of the Mortierellales are highly homoplastic and that the Mortierellales is in need of a reclassification, where new, phylogenetically informative phenotypic traits should be identified, with molecular phylogenies playing a decisive role.

  6. CEO emotional bias and investment decision, Bayesian network method

    Directory of Open Access Journals (Sweden)

    Jarboui Anis

    2012-08-01

    Full Text Available This research examines the determinants of firms’ investment decisions, introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: investment decisions are influenced not only by fundamentals but also by other factors. One such factor is the CEO's bias toward the investment; this bias depends on cognition and emotions, because some leaders use them as heuristics for the investment decision instead of fundamentals. This paper shows how CEO emotional bias (optimism, loss aversion and overconfidence) affects investment decisions. The proposed model uses the Bayesian network method to examine this relationship. Emotional bias has been measured by means of a questionnaire comprising several items. The selected sample comprises some 100 Tunisian executives. Our results reveal that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts investment choices based on the ability to assess alternatives (optimism and overconfidence) and on risk perception (loss aversion), so as to create shareholder value and secure a place at the head of the management team.

  7. A Bayesian method for inferring quantitative information from FRET data

    Directory of Open Access Journals (Sweden)

    Lichten Catherine A

    2011-05-01

    Full Text Available Abstract Background Understanding biological networks requires identifying their elementary protein interactions and establishing the timing and strength of those interactions. Fluorescence microscopy and Förster resonance energy transfer (FRET) have the potential to reveal such information because they allow molecular interactions to be monitored in living cells, but it is unclear how best to analyze FRET data. Existing techniques differ in assumptions, manipulations of data and the quantities they derive. To address this variation, we have developed a versatile Bayesian analysis based on clear assumptions and systematic statistics. Results Our algorithm infers values of the FRET efficiency and dissociation constant, Kd, between a pair of fluorescently tagged proteins. It gives a posterior probability distribution for these parameters, conveying more extensive information than single-value estimates can. The width and shape of the distribution reflect the reliability of the estimate, and we used simulated data to determine how measurement noise, data quantity and fluorophore concentrations affect the inference. We are able to show why varying concentrations of donors and acceptors is necessary for estimating Kd. We further demonstrate that the inference improves if additional knowledge is available, for example of the FRET efficiency, which could be obtained from separate fluorescence lifetime measurements. Conclusions We present a general, systematic approach for extracting quantitative information on molecular interactions from FRET data. Our method yields both an estimate of the dissociation constant and the uncertainty associated with that estimate. The information produced by our algorithm can help design optimal experiments and is fundamental for developing mathematical models of biochemical networks.

  8. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

    .... We have developed two new techniques. First, we have developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  9. Bayesian specification analysis and estimation of simultaneous equation models using Monte Carlo methods

    NARCIS (Netherlands)

    A. Zellner (Arnold); L. Bauwens (Luc); H.K. van Dijk (Herman)

    1988-01-01

    Bayesian procedures for specification analysis or diagnostic checking of modeling assumptions for structural equations of econometric models are developed and applied using Monte Carlo numerical methods. Checks on the validity of identifying restrictions, exogeneity assumptions and other

  10. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology...

  11. An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers

    OpenAIRE

    Chen, S.; Hanzo, L.

    2000-01-01

    An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.
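
    The mechanics of IS with a biased simulation density can be shown on a generic rare-event toy problem (this is not the paper's DFE-specific design): estimate a small Gaussian tail probability, standing in for a low BER, by simulating from a mean-shifted density and reweighting with the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(4)

# Estimate P(X > t) for X ~ N(0,1): naive Monte Carlo vs importance
# sampling from a mean-shifted (biased) simulation density centred at t.
t, n = 5.0, 100_000
naive = (rng.normal(size=n) > t).mean()          # almost surely 0

y = rng.normal(loc=t, size=n)                    # draws from the biased density
w = np.exp(-t * y + t**2 / 2.0)                  # likelihood ratio phi(y)/phi(y - t)
is_est = (w * (y > t)).mean()

print(naive, is_est)                             # exact value ~2.87e-7
```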

  12. PREDICTION OF OCTANOL/WATER PARTITION COEFFICIENT OF SELECTED FERROCENE DERIVATIVES USING REKKER METHOD

    Directory of Open Access Journals (Sweden)

    R. Ahmedi

    2015-07-01

    Full Text Available In this work we present a theoretical approach for determining the octanol/water partition coefficient of selected ferrocenes bearing different substituents; the calculation is based on an adaptation of the Rekker method. The theoretical logP values obtained for all of the studied substituted ferrocenes were validated by comparison with known experimental values, taken mainly from the literature. The results show that the calculated partition coefficients are in good agreement with the experimental values. For the estimation of the octanol/water partition coefficients of the selected compounds, the average absolute error of logP is 0.13, and the correlation coefficient is R2 = 0.966.
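
    The fragmental arithmetic itself is a plain weighted sum, logP = sum over n of a_n * f_n, where f_n are hydrophobic fragmental constants and a_n their occurrence counts. The sketch below illustrates this bookkeeping only; the constants and the example fragment inventory are illustrative placeholders rather than authoritative Rekker values.

```python
# Illustrative fragmental constants (placeholders, not Rekker's published table)
FRAGMENTS = {"C6H5": 1.90, "CH2": 0.53, "CH3": 0.70, "OH": -1.44}

def rekker_logp(counts):
    """counts: dict mapping fragment -> number of occurrences in the molecule."""
    return sum(n * FRAGMENTS[f] for f, n in counts.items())

# e.g. a hypothetical phenyl/CH2/OH fragment inventory
print(rekker_logp({"C6H5": 1, "CH2": 2, "OH": 1}))  # 1.90 + 1.06 - 1.44 = 1.52
```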

  13. Bayesian Methods for Predicting the Shape of Chinese Yam in Terms of Key Diameters

    Directory of Open Access Journals (Sweden)

    Mitsunori Kayano

    2017-01-01

    Full Text Available This paper proposes Bayesian methods for the shape estimation of Chinese yam (Dioscorea opposita) using a few key diameters of the yam. Shape prediction of yam is applicable to determining the optimal cutoff positions of a yam for producing seed yams. Our Bayesian method, which is a combination of a Bayesian estimation model and a predictive model, enables automatic, rapid, and low-cost processing of yam. After the construction of the proposed models using a sample data set in Japan, the models provide whole-shape predictions of a yam based on only a few key diameters. The Bayesian method performed well on the shape prediction in terms of minimizing the mean squared error between the measured shape and the prediction. In particular, a multiple regression method with key diameters at two fixed positions attained the highest performance for shape prediction. We have developed automatic, rapid, and low-cost yam-processing machines based on the Bayesian estimation model and predictive model. Development of such shape prediction approaches, including our Bayesian method, can be a valuable aid in reducing the cost and time in food processing.

  14. Pixel partition method using Markov random field for measurements of closely spaced objects by optical sensors

    Science.gov (United States)

    Wang, Xueying; Li, Jun; Sheng, Weidong; An, Wei; Du, Qinfeng

    2015-10-01

    In space-based optical systems, tracking closely spaced objects (CSOs) with the traditional constant false alarm rate (CFAR) detection brings either more clutter measurements or a loss of target information. CSOs can be tracked as extended targets because of their features on the optical sensor's pixel plane. A pixel partition method under the framework of Markov random fields (MRF) is proposed. Simulation results indicate that the proposed method provides higher pixel partition performance than the traditional method, especially when the signal-to-noise ratio is poor.

  15. Review of Bayesian statistical analysis methods for cytogenetic radiation biodosimetry, with a practical example

    International Nuclear Information System (INIS)

    Ainsbury, Elizabeth A.; Lloyd, David C.; Rothkamm, Kai; Vinnikov, Volodymyr A.; Maznyk, Nataliya A.; Puig, Pedro; Higueras, Manuel

    2014-01-01

    Classical methods of assessing the uncertainty associated with radiation doses estimated using cytogenetic techniques are now extremely well defined. However, several authors have suggested that a Bayesian approach to uncertainty estimation may be more suitable for cytogenetic data, which are inherently stochastic in nature. The Bayesian analysis framework focuses on identification of probability distributions (for yield of aberrations or estimated dose), which also means that uncertainty is an intrinsic part of the analysis, rather than an 'afterthought'. In this paper Bayesian, as well as some more advanced classical, data analysis methods for radiation cytogenetics are reviewed that have been proposed in the literature. A practical overview of Bayesian cytogenetic dose estimation is also presented, with worked examples from the literature. (authors)

  16. Evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity.

    Science.gov (United States)

    Du, Yuanwei; Guo, Yubin

    2015-01-01

    The intrinsic mechanism of multimorbidity is difficult to recognize, and prediction and diagnosis are accordingly difficult to carry out. Bayesian networks can help to diagnose multimorbidity in health care, but it is difficult to obtain the conditional probability tables (CPTs) because of the lack of clinical statistical data. Today, expert knowledge and experience are increasingly used to train Bayesian networks in order to help predict or diagnose diseases, but the CPTs in such networks are often irrational or ineffective because they ignore realistic constraints, especially in multimorbidity. To solve these problems, an evidence reasoning (ER) approach is employed to extract and fuse inference data from experts using a belief distribution and a recursive ER algorithm, based on which an evidence reasoning method for constructing conditional probability tables in a Bayesian network of multimorbidity is presented step by step. A numerical multimorbidity example is used to demonstrate the method and prove its feasibility and applicability. The Bayesian network can be determined as long as the inference assessment is provided by each expert according to his or her knowledge or experience. Our method extracts expert inference data more accurately than existing methods and fuses them effectively for constructing CPTs in a Bayesian network of multimorbidity.

  17. An automated and objective method for age partitioning of reference intervals based on continuous centile curves.

    Science.gov (United States)

    Yang, Qian; Lew, Hwee Yeong; Peh, Raymond Hock Huat; Metz, Michael Patrick; Loh, Tze Ping

    2016-10-01

    Reference intervals are the most commonly used decision support tool when interpreting quantitative laboratory results. They may require partitioning to better describe subpopulations that display significantly different reference values. Partitioning by age is particularly important for the paediatric population, since there are marked physiological changes associated with growth and maturation. However, most partitioning methods are either technically complex or require prior knowledge of the underlying physiology/biological variation of the population. There is growing interest in the use of continuous centile curves, which provide seamless laboratory reference values as a child grows, as an alternative to rigidly defined fixed reference intervals. However, the mathematical functions that describe these curves can be complex and may not be easily implemented in laboratory information systems. Hence, the use of fixed reference intervals is expected to continue for the foreseeable future. We developed a method that objectively proposes optimised age partitions and reference intervals for quantitative laboratory data (http://research.sph.nus.edu.sg/pp/ppResult.aspx), based on the sum of gradients that best describes the underlying distribution of the continuous centile curves. It is hoped that this method may improve the selection of age intervals for partitioning, which is receiving increasing attention in paediatric laboratory medicine. Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.

  18. Efficient methods for finding transition states in chemical reactions: comparison of improved dimer method and partitioned rational function optimization method.

    Science.gov (United States)

    Heyden, Andreas; Bell, Alexis T; Keil, Frerich J

    2005-12-08

    A combination of interpolation methods and local saddle-point search algorithms is probably the most efficient way of finding transition states in chemical reactions. Interpolation methods such as the growing-string method and the nudged elastic band are able to find an approximation to the minimum-energy pathway, and thereby provide a good initial guess for a transition state and the imaginary mode connecting the reactant and product states. Since interpolation methods usually employ just a small number of configurations and converge slowly close to the minimum-energy pathway, local methods, such as partitioned rational function optimization using either exact or approximate Hessians, or minimum-mode-following methods such as the dimer or Lanczos method, have to be used to converge to the transition state. A modification to the original dimer method of Henkelman and Jónsson [J. Chem. Phys. 111, 7010 (1999)] is presented, which reduces the number of gradient calculations per cycle from six to four gradients, or three gradients and one energy, and significantly improves the overall performance of the algorithm on quantum-chemical potential-energy surfaces, where forces are subject to numerical noise. A comparison is made between the dimer methods and the well-established partitioned rational function optimization methods for finding transition states after the use of interpolation methods. Results for 24 different small- to medium-sized chemical reactions covering a wide range of structural types demonstrate that the improved dimer method is an efficient alternative saddle-point search algorithm on medium-sized to large systems, and is often able to find transition states even when partitioned rational function optimization methods fail to converge.

  19. Analyzing bioassay data using Bayesian methods-A primer

    International Nuclear Information System (INIS)

    Miller, G.; Inkret, W.C.; Schillaci, M.E.

    1997-01-01

    The classical statistics approach used in health physics for the interpretation of measurements is deficient in that it does not allow for the consideration of 'needle in a haystack' effects, where events that are rare in a population are being detected. This is often the case in health physics measurements, and the false positive fraction is often very large using the prescriptions of classical statistics. Bayesian statistics provides an objective methodology to ensure acceptably small false positive fractions. The authors present the basic methodology and a heuristic discussion. Examples are given using numerically generated and real bioassay data (tritium). Various analytical models are used to fit the prior probability distribution, in order to test the sensitivity to the choice of model. Parametric studies show that the normalized Bayesian decision level kα = Lc/σ0, where σ0 is the measurement uncertainty for zero true amount, is usually in the range from 3 to 5, depending on the true positive rate. Four times σ0, rather than approximately two times σ0 as in classical statistics, would therefore often be a better choice for the decision level.
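
    The 'needle in a haystack' effect can be reproduced numerically: with a small prior true-positive rate, the fraction of detections above a decision level k·σ0 that are false stays large until k is well above 2. A sketch under assumed, illustrative numbers:

    ```python
    import numpy as np
    from scipy.stats import norm

    # With a rare-event prior, a classical ~2-sigma decision level yields a
    # large false-positive fraction. All numbers below are illustrative.

    sigma0 = 1.0                     # measurement uncertainty at zero true amount
    true_rate = 0.01                 # prior fraction of truly positive samples
    true_amount = 5.0 * sigma0       # typical signal size for true positives

    for k in (2.0, 3.0, 4.0, 5.0):   # candidate decision levels Lc = k * sigma0
        p_fp = (1 - true_rate) * norm.sf(k)                      # nulls above Lc
        p_tp = true_rate * norm.sf(k - true_amount / sigma0)     # positives above Lc
        print(f"k = {k:.0f}: false-positive fraction = {p_fp / (p_fp + p_tp):.3f}")
    ```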

  20. Bayesian hypothesis testing for psychologists: a tutorial on the Savage-Dickey method.

    Science.gov (United States)

    Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul

    2010-05-01

    In the field of cognitive psychology, the p-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the p-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is conveyed by a Bayesian hypothesis test, which prefers the model with the highest average likelihood. One of the main problems with this Bayesian hypothesis test, however, is that it often requires relatively sophisticated numerical methods for its computation. Here we draw attention to the Savage-Dickey density ratio method, a method that can be used to compute the result of a Bayesian hypothesis test for nested models and under certain plausible restrictions on the parameter priors. Practical examples demonstrate the method's validity, generality, and flexibility. Copyright 2009 Elsevier Inc. All rights reserved.
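
    For a concrete nested case, the Savage-Dickey ratio for a binomial rate test reduces to the posterior density divided by the prior density at the null value. A minimal sketch; the construction is standard, but the data are made up.

    ```python
    from scipy.stats import beta

    # Savage-Dickey density ratio for H0: theta = 0.5 vs H1: theta ~ Beta(1, 1).
    # BF01 = posterior density at theta0 / prior density at theta0.

    a0, b0, theta0 = 1.0, 1.0, 0.5        # uniform prior, point null
    successes, trials = 62, 100           # hypothetical data

    post = beta(a0 + successes, b0 + trials - successes)
    bf01 = post.pdf(theta0) / beta(a0, b0).pdf(theta0)
    print(f"BF01 = {bf01:.3f}  (values < 1 favor the alternative)")
    ```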

  1. Bayesian data augmentation methods for the synthesis of qualitative and quantitative research findings

    Science.gov (United States)

    Crandell, Jamie L.; Voils, Corrine I.; Chang, YunKyung; Sandelowski, Margarete

    2010-01-01

    The possible utility of Bayesian methods for the synthesis of qualitative and quantitative research has been repeatedly suggested but insufficiently investigated. In this project, we developed and used a Bayesian method for synthesis, with the goal of identifying factors that influence adherence to HIV medication regimens. We investigated the effect of 10 factors on adherence. Recognizing that not all factors were examined in all studies, we considered standard methods for dealing with missing data and chose a Bayesian data augmentation method. We were able to summarize, rank, and compare the effects of each of the 10 factors on medication adherence. This is a promising methodological development in the synthesis of qualitative and quantitative research. PMID:21572970

  2. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Using data from the train driver schedule of the Danish passenger railway operator DSB S-tog A/S, a solution method to the Train Driver Recovery Problem (TDRP) is developed. The TDRP...... is formulated as a set partitioning problem. The LP relaxation of the set partitioning formulation of the TDRP possesses strong integer properties. The proposed model is therefore solved via the LP relaxation and Branch & Price. Starting with a small set of drivers and train tasks assigned to the drivers within...
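
    To make the set partitioning formulation concrete, the toy sketch below chooses a minimum-cost set of candidate driver duties covering every train task exactly once; duties, tasks, and costs are invented, and a real TDRP instance would be solved via the LP relaxation and Branch & Price rather than by enumeration.

    ```python
    from itertools import chain, combinations

    # Toy set-partitioning solver by exhaustive search: pick a min-cost subset
    # of candidate duties so every train task is covered exactly once.

    tasks = {"t1", "t2", "t3", "t4"}
    duties = {                               # candidate driver duties: (tasks, cost)
        "d1": ({"t1", "t2"}, 3.0),
        "d2": ({"t3"}, 1.5),
        "d3": ({"t2", "t3", "t4"}, 4.0),
        "d4": ({"t4"}, 1.0),
        "d5": ({"t1"}, 2.0),
    }

    best, names = None, list(duties)
    for subset in chain.from_iterable(combinations(names, r)
                                      for r in range(1, len(names) + 1)):
        covered = [t for d in subset for t in duties[d][0]]
        if sorted(covered) == sorted(tasks):  # every task covered exactly once
            cost = sum(duties[d][1] for d in subset)
            if best is None or cost < best[1]:
                best = (subset, cost)

    print(best)   # (('d1', 'd2', 'd4'), 5.5)
    ```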

  3. A passive dosing method to determine fugacity capacities and partitioning properties of leaves

    DEFF Research Database (Denmark)

    Bolinius, Damien Johann; Macleod, Matthew; McLachlan, Michael S.

    2016-01-01

    The capacity of leaves to take up chemicals from the atmosphere and water influences how contaminants are transferred into food webs and soil. We provide a proof of concept of a passive dosing method to measure leaf/polydimethylsiloxane partition ratios (Kleaf/PDMS) for intact leaves, using polychlorinated biphenyls (PCBs) as model chemicals. Rhododendron leaves held in contact with PCB-loaded PDMS reached between 76 and 99% of equilibrium within 4 days for PCBs 3, 4, 28, 52, 101, 118, 138 and 180. Equilibrium Kleaf/PDMS extrapolated from the uptake kinetics measured over 4 days ranged from 0.075 (PCB 180) to 0.371 (PCB 3). The Kleaf/PDMS data can readily be converted to fugacity capacities of leaves (Zleaf) and subsequently leaf/water or leaf/air partition ratios (Kleaf/water and Kleaf/air) using partitioning data from the literature. Results of our measurements are within the variability...

  4. Bayesian Regression and Neuro-Fuzzy Methods Reliability Assessment for Estimating Streamflow

    Directory of Open Access Journals (Sweden)

    Yaseen A. Hamaamin

    2016-07-01

    Full Text Available Accurate and efficient estimation of streamflow in a watershed's tributaries is a prerequisite for viable water resources management. This study couples process-driven and data-driven methods of streamflow forecasting as a more efficient and cost-effective approach to water resources planning and management. Two data-driven methods, Bayesian regression and the adaptive neuro-fuzzy inference system (ANFIS), were tested separately as a faster alternative to a calibrated and validated Soil and Water Assessment Tool (SWAT) model to predict streamflow in the Saginaw River Watershed of Michigan. For the data-driven modeling process, four structures were assumed and tested: general, temporal, spatial, and spatiotemporal. Results showed that both Bayesian regression and ANFIS can replicate global (watershed) and local (subbasin) results similar to a calibrated SWAT model. At the global level, Bayesian regression and ANFIS model performance was satisfactory based on Nash-Sutcliffe efficiencies of 0.99 and 0.97, respectively. At the subbasin level, the Bayesian regression and ANFIS models were satisfactory for 155 and 151 out of 155 subbasins, respectively. Overall, the most accurate method was a spatiotemporal Bayesian regression model that outperformed other models at global and local scales.
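
    The Nash-Sutcliffe efficiency used to judge the models is simple to compute; a minimal sketch with placeholder flow values:

    ```python
    import numpy as np

    # Nash-Sutcliffe efficiency: NSE = 1 is a perfect fit; NSE > ~0.5 is often
    # read as satisfactory in hydrology. Flow values below are placeholders.

    def nash_sutcliffe(observed, simulated):
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated)**2) / np.sum((observed - observed.mean())**2)

    obs = [12.1, 15.3, 30.2, 22.8, 18.4, 14.0]
    sim = [11.5, 16.0, 28.7, 24.1, 17.2, 14.6]
    print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
    ```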

  5. Safety assessment of infrastructures using a new Bayesian Monte Carlo method

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Demirbilek, Z.

    2011-01-01

    A recently developed Bayesian Monte Carlo (BMC) method and its application to safety assessment of structures are described in this paper. We use a one-dimensional BMC method that was proposed in 2009 by Rajabalinejad in order to develop a weighted logical dependence between successive Monte Carlo

  6. Validation of techniques for the prediction of carboplatin exposure: application of Bayesian methods

    NARCIS (Netherlands)

    Huitema, A. D.; Mathôt, R. A.; Tibben, M. M.; Schellens, J. H.; Rodenhuis, S.; Beijnen, J. H.

    2000-01-01

    Several methods have been developed for the prediction of carboplatin exposure to facilitate pharmacokinetic guided dosing. The aim of this study was to develop and validate sparse data Bayesian methods for the estimation of carboplatin exposure and to validate other commonly applied techniques,

  7. Method for chemical amplification based on fluid partitioning in an immiscible liquid

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.

    2017-02-28

    A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.

  8. Application of a data-mining method based on Bayesian networks to lesion-deficit analysis

    Science.gov (United States)

    Herskovits, Edward H.; Gerring, Joan P.

    2003-01-01

    Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.

  9. Tree Biomass Estimation of Chinese fir (Cunninghamia lanceolata) Based on Bayesian Method

    Science.gov (United States)

    Zhang, Jianguo

    2013-01-01

    Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In this study, an allometric equation was used to analyze the tree biomass of Chinese fir. The common methods for estimating allometric models have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that the parameters of the biomass model are better represented by probability distributions than by fixed values, as in the classical method. To deal with this problem, the Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature. The parameter distributions from the published literature were regarded as prior distributions in the Bayesian model for estimating Chinese fir biomass. The Bayesian method with informative priors performed better than the method with non-informative priors and the classical method, and it provides a reasonable approach for estimating Chinese fir biomass. PMID:24278198

  10. Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    Full Text Available Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production, with a huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In this study, the allometric equation W = a(D2H)^b was used to analyze the tree biomass of Chinese fir. The common methods for estimating allometric models have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into the Chinese fir biomass model, suggesting that the parameters of the biomass model are better represented by probability distributions than by fixed values, as in the classical method. To deal with this problem, the Bayesian method was used for estimating the Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature. The parameter distributions from the published literature were regarded as prior distributions in the Bayesian model for estimating Chinese fir biomass. The Bayesian method with informative priors performed better than the method with non-informative priors and the classical method, and it provides a reasonable approach for estimating Chinese fir biomass.
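
    A minimal sketch of the informative-prior idea on the log-linearized model ln W = ln a + b ln(D²H): a conjugate Gaussian update in which the prior moments stand in for the distributions harvested from published equations. All numbers are synthetic, not the paper's data.

    ```python
    import numpy as np

    # Conjugate Bayesian update for ln W = ln a + b ln(D^2 H) + eps with an
    # assumed known residual variance. Prior moments and data are illustrative.

    rng = np.random.default_rng(2)
    D, H = rng.uniform(8, 30, 40), rng.uniform(6, 20, 40)    # cm, m (synthetic)
    lnx = np.log(D**2 * H)
    lnW = -3.8 + 0.95 * lnx + rng.normal(0, 0.15, 40)        # synthetic biomass data

    X = np.column_stack([np.ones_like(lnx), lnx])
    prior_mean = np.array([-3.5, 0.9])                       # informative prior (illustrative)
    prior_cov = np.diag([0.5**2, 0.05**2])
    sigma2 = 0.15**2                                         # assumed residual variance

    post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + X.T @ X / sigma2)
    post_mean = post_cov @ (np.linalg.inv(prior_cov) @ prior_mean + X.T @ lnW / sigma2)
    print("posterior mean of (ln a, b):", post_mean)
    ```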

  11. A passive dosing method to determine fugacity capacities and partitioning properties of leaves.

    Science.gov (United States)

    Bolinius, Damien Johann; MacLeod, Matthew; McLachlan, Michael S; Mayer, Philipp; Jahnke, Annika

    2016-10-12

    The capacity of leaves to take up chemicals from the atmosphere and water influences how contaminants are transferred into food webs and soil. We provide a proof of concept of a passive dosing method to measure leaf/polydimethylsiloxane partition ratios (Kleaf/PDMS) for intact leaves, using polychlorinated biphenyls (PCBs) as model chemicals. Rhododendron leaves held in contact with PCB-loaded PDMS reached between 76 and 99% of equilibrium within 4 days for PCBs 3, 4, 28, 52, 101, 118, 138 and 180. Equilibrium Kleaf/PDMS extrapolated from the uptake kinetics measured over 4 days ranged from 0.075 (PCB 180) to 0.371 (PCB 3). The Kleaf/PDMS data can readily be converted to fugacity capacities of leaves (Zleaf) and subsequently leaf/water or leaf/air partition ratios (Kleaf/water and Kleaf/air) using partitioning data from the literature. Results of our measurements are within the variability observed for plant/air partition ratios (Kplant/air) found in the literature. Log Kleaf/air from this study ranged from 5.00 (PCB 3) to 8.30 (PCB 180), compared to log Kplant/air of 3.31 (PCB 3) to 8.88 (PCB 180) found in the literature. The method we describe could provide data to characterize the variability in sorptive capacities of leaves that would improve descriptions of uptake of chemicals by leaves in multimedia fate models.

  12. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance Voxel Machine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed...

  13. A Bayesian Method for Studying DIF: A Cautionary Tale Filled with Surprises and Delights

    Science.gov (United States)

    Wang, Xiaohui; Bradlow, Eric T.; Wainer, Howard; Muller, Eric S.

    2008-01-01

    In the course of screening a form of a medical licensing exam for items that function differentially (DIF) between men and women, the authors used the traditional Mantel-Haenszel (MH) statistic for initial screening and a Bayesian method for deeper analysis. For very easy items, the MH statistic unexpectedly often found DIF where there was none.…

  14. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. In particular, we consider semi-parametric Bayesian inference in connection with both inhomogeneous Markov point process models...... and pairwise interaction point processes....

  15. A Multilevel Bayesian Item Response Theory Method for Scaling Socioeconomic Status in International Studies of Education

    Science.gov (United States)

    May, Henry

    2006-01-01

    In this article, a new method is presented and implemented for deriving a scale of socioeconomic status (SES) from international survey data using a multilevel Bayesian item response theory (IRT) model. The proposed model incorporates both international anchor items and nation-specific items and is able to (a) produce student family SES scores…

  16. Bayesian and Frequentist Methods for Estimating Joint Uncertainty of Freundlich Adsorption Isotherm Fitting Parameters

    Science.gov (United States)

    In this paper, we present methods for estimating Freundlich isotherm fitting parameters (K and N) and their joint uncertainty, which have been implemented into the freeware software platforms R and WinBUGS. These estimates were determined by both Frequentist and Bayesian analyse...
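
    The frequentist half of such a comparison can be sketched with a nonlinear least-squares fit of q = K·C^N, reading the joint parameter uncertainty off the estimated covariance matrix; the data points below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit the Freundlich isotherm q = K * C**N by nonlinear least squares and
    # report joint parameter uncertainty. Concentration data are made up.

    def freundlich(C, K, N):
        return K * C**N

    C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # solution concentration
    q = np.array([1.2, 1.9, 2.8, 4.9, 7.2, 10.8])    # sorbed amount

    (K, N), cov = curve_fit(freundlich, C, q, p0=(1.0, 0.8))
    se = np.sqrt(np.diag(cov))
    print(f"K = {K:.3f} +/- {se[0]:.3f},  N = {N:.3f} +/- {se[1]:.3f}")
    print("K-N correlation:", cov[0, 1] / (se[0] * se[1]))
    ```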

  17. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming

    2010-01-01

    The modular neural network is a popular neural network model with many successful applications. In this paper, a sequential Bayesian learning (SBL) method is proposed for modular neural networks, aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight benchmark problems demonstrate that the proposed method can perform information aggregation efficiently in data modeling....

  18. Modelling access to renal transplantation waiting list in a French healthcare network using a Bayesian method.

    Science.gov (United States)

    Bayat, Sahar; Cuggia, Marc; Kessler, Michel; Briançon, Serge; Le Beux, Pierre; Frimat, Luc

    2008-01-01

    Evaluation of adult candidates for kidney transplantation diverges from one centre to another. Our purpose was to assess the suitability of a Bayesian method for describing the factors associated with registration on the waiting list in a French healthcare network. We have found no published paper using a Bayesian method in this domain. Eight hundred and nine patients starting renal replacement therapy were included in the analysis. The data were extracted from the information system of the healthcare network. We performed conventional statistical analysis and data mining analysis using mainly Bayesian networks. The Bayesian model showed that the probability of registration on the waiting list is associated with age, cardiovascular disease, diabetes, serum albumin level, respiratory disease, physical impairment, follow-up in the department performing transplantation, and a past history of malignancy. These results are similar to those of the conventional statistical method. The comparison between conventional analysis and data mining analysis showed us the contribution of the data mining method for sorting variables and obtaining a global view of the variables' associations. Moreover, these approaches constitute an essential step toward a decisional information system for healthcare networks.

  19. Comparison of Automated Continuous Flow Method With Shake-Flask Method in Determining Partition Coefficients of Bidentate Hydroxypyridinone Ligands

    Directory of Open Access Journals (Sweden)

    Lotfollah Saghaie

    2003-08-01

    Full Text Available The partition coefficients (Kpart) in the octanol/water system of a range of bidentate ligands containing the 3-hydroxypyridin-4-one moiety were determined using the shake-flask and automated continuous flow (filter probe) methods. The shake-flask method was used for extremely hydrophilic or hydrophobic compounds, with Kpart values greater than 100 or less than 0.01. For the other ligands, which possess moderate lipophilicity (Kpart values between 0.01 and 100), the filter probe method was used. The partition coefficients of four ligands with moderate lipophilicity were also determined by the shake-flask method in order to check the comparability of the two methods. While the shake-flask method was able to handle extremely hydrophilic or hydrophobic compounds efficiently, the filter probe method was unable to measure such Kpart values. Although determination of the Kpart values of all compounds is possible with the classical shake-flask method, the procedure is time consuming. In contrast, the filter probe method offers many advantages over the traditional shake-flask method in terms of speed, efficiency of separation, and degree of automation. The shake-flask method remains the method of choice for determining the partition coefficients of extremely hydrophilic and hydrophobic ligands.

  20. Implementing and testing Bayesian and maximum-likelihood supertree methods in phylogenetics.

    Science.gov (United States)

    Akanni, Wasiu A; Wilkinson, Mark; Creevey, Christopher J; Foster, Peter G; Pisani, Davide

    2015-08-01

    Since their advent, supertrees have been increasingly used in large-scale evolutionary studies requiring a phylogenetic framework and substantial efforts have been devoted to developing a wide variety of supertree methods (SMs). Recent advances in supertree theory have allowed the implementation of maximum likelihood (ML) and Bayesian SMs, based on using an exponential distribution to model incongruence between input trees and the supertree. Such approaches are expected to have advantages over commonly used non-parametric SMs, e.g. matrix representation with parsimony (MRP). We investigated new implementations of ML and Bayesian SMs and compared these with some currently available alternative approaches. Comparisons include hypothetical examples previously used to investigate biases of SMs with respect to input tree shape and size, and empirical studies based either on trees harvested from the literature or on trees inferred from phylogenomic scale data. Our results provide no evidence of size or shape biases and demonstrate that the Bayesian method is a viable alternative to MRP and other non-parametric methods. Computation of input tree likelihoods allows the adoption of standard tests of tree topologies (e.g. the approximately unbiased test). The Bayesian approach is particularly useful in providing support values for supertree clades in the form of posterior probabilities.

  1. Comparing methods for partitioning a decade of carbon dioxide and water vapor fluxes in a temperate forest

    Science.gov (United States)

    Benjamin N. Sulman; Daniel Tyler Roman; Todd M. Scanlon; Lixin Wang; Kimberly A. Novick

    2016-01-01

    The eddy covariance (EC) method is routinely used to measure net ecosystem fluxes of carbon dioxide (CO2) and evapotranspiration (ET) in terrestrial ecosystems. It is often desirable to partition CO2 flux into gross primary production (GPP) and ecosystem respiration (RE), and to partition ET into evaporation and...

  2. A novel method to augment extraction of mangiferin by application of microwave on three phase partitioning

    Directory of Open Access Journals (Sweden)

    Vrushali M. Kulkarni

    2015-06-01

    Full Text Available This work reports a novel approach in which three phase partitioning (TPP) was combined with microwave irradiation for the extraction of mangiferin from leaves of Mangifera indica. Soxhlet extraction was used as the reference method, which yielded 57 mg/g in 5 h. Under optimal conditions (microwave irradiation time 5 min, ammonium sulphate concentration 40% w/v, power 272 W, solute to solvent ratio 1:20, slurry to t-butanol ratio 1:1, soaking time 5 min, and duty cycle 50%), the mangiferin yield obtained was 54 mg/g by microwave-assisted three phase partitioning extraction (MTPP). The extraction method developed thus gave a comparable yield in a far shorter time, making it an interesting alternative prior to downstream processing.

  3. Research on Evaluation Method Based on Modified Buckley Decision Making and Bayesian Network

    Directory of Open Access Journals (Sweden)

    Neng-pu Yang

    2015-01-01

    Full Text Available This work presents a novel evaluation method that can be applied in the fields of risk assessment, project management, cause analysis, and so forth. Two core technologies are used in the method, namely, modified Buckley Decision Making and Bayesian networks. Based on the modified Buckley Decision Making, the fuzzy probabilities of element factors are calibrated. Through forward and backward calculation of the Bayesian network, the structure importance, probability importance, and criticality importance of each factor are calculated and discussed. A numerical example of risk evaluation for a dangerous goods transport process is given to verify the method. The results indicate that the method can efficiently identify the weakest element factor. In addition, the method improves the reliability and objectivity of the evaluation.

  4. Bayesian Computation Methods for Inferring Regulatory Network Models Using Biomedical Data.

    Science.gov (United States)

    Tian, Tianhai

    2016-01-01

    The rapid advancement of high-throughput technologies provides huge amounts of information on gene expression and protein activity at the genome-wide scale. The availability of genomics, transcriptomics, proteomics, and metabolomics datasets gives an unprecedented opportunity to study detailed molecular regulation, which is very important to precision medicine. However, it is still a significant challenge to design effective and efficient methods to infer the network structure and dynamic properties of regulatory networks. In recent years a number of computational methods have been designed to explore regulatory mechanisms as well as estimate unknown model parameters. Among them, the Bayesian inference method can combine both prior knowledge and experimental data to generate updated information regarding the regulatory mechanisms. This chapter gives a brief review of Bayesian statistical methods that are used to infer the network structure and estimate model parameters based on experimental data.

  5. A default method to specify skeletons for Bayesian model averaging continual reassessment method for phase I clinical trials.

    Science.gov (United States)

    Pan, Haitao; Yuan, Ying

    2017-01-30

    The Bayesian model averaging continual reassessment method (CRM) is a Bayesian dose-finding design. It improves the robustness and overall performance of the continual reassessment method (CRM) by specifying multiple skeletons (or models) and then using Bayesian model averaging to automatically favor the best-fitting model for better decision making. Specifying multiple skeletons, however, can be challenging for practitioners. In this paper, we propose a default way to specify skeletons for the Bayesian model averaging CRM. We show that skeletons that appear rather different may actually lead to equivalent models. Motivated by this, we define a nonequivalence measure to index the difference among skeletons. Using this measure, we extend the model calibration method of Lee and Cheung (2009) to choose the optimal skeletons that maximize the average percentage of correct selection of the maximum tolerated dose and ensure sufficient nonequivalence among the skeletons. Our simulation study shows that the proposed method has desirable operating characteristics. We provide software to implement the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Partition functions with spin in AdS2 via quasinormal mode methods

    International Nuclear Information System (INIS)

    Keeler, Cynthia; Lisbão, Pedro; Ng, Gim Seng

    2016-01-01

    We extend the results of http://dx.doi.org/10.1007/JHEP06(2014)099, computing one-loop partition functions for massive fields with spin half in AdS2 using the quasinormal mode method proposed by Denef, Hartnoll, and Sachdev http://dx.doi.org/10.1088/0264-9381/27/12/125001. We find the finite representations of SO(2,1) for spin zero and spin half, consisting of a highest weight state |h〉 and descendants with non-unitary values of h. These finite representations capture the poles and zeroes of the one-loop determinants. Together with the asymptotic behavior of the partition functions (which can be easily computed using a large mass heat kernel expansion), these are sufficient to determine the full answer for the one-loop determinants. We also discuss extensions to higher dimensional AdS2n and higher spins.

  7. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    International Nuclear Information System (INIS)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete; Connors, Alanna; Meng, Xiao-Li

    2014-01-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  8. Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

    Science.gov (United States)

    Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.

    2012-03-01

    In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which carries a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and greatly degrade CT perfusion maps if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low-dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simple prior spatial constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean squared error (MSE) of 40% at a low radiation dose of 43 mA.

  9. Estimating Steatosis Prevalence in Overweight and Obese Children: Comparison of Bayesian Small Area and Direct Methods

    Directory of Open Access Journals (Sweden)

    Hamid Reza Khalkhali

    2016-09-01

    Full Text Available Background: Often there is no access to a sufficient sample size to estimate prevalence using the direct estimator method in all areas. The aim of this study was to compare a small area Bayesian method with the direct method in estimating the prevalence of steatosis in obese and overweight children. Materials and Methods: This cross-sectional study was conducted on 150 overweight and obese children aged 2 to 15 years referred to the children's digestive clinic of Urmia University of Medical Sciences, Iran, in 2013. After body mass index (BMI) calculation, overweight and obese children were assessed with primary obesity screening tests. Children with steatosis confirmed by abdominal ultrasonography were then referred to the laboratory for further tests. Steatosis prevalence was estimated by the direct and Bayesian methods, and their efficiency was evaluated using the jackknife mean squared error method. The study data were analyzed using OpenBUGS 3.1.2 and R 2.15.2 software. Results: The findings indicated that estimates of steatosis prevalence in children using the Bayesian and direct methods were between 0.3098 and 0.493, and 0.355 and 0.560, respectively, in health districts; 0.3098 and 0.502, and 0.355 and 0.550 in education districts; 0.321 and 0.582, and 0.357 and 0.615 in age groups; and 0.313 and 0.429, and 0.383 and 0.536 in sex groups. In general, according to the results, the mean squared error of the Bayesian estimation was smaller than that of the direct estimation (P

  10. A Bayesian reliability evaluation method with integrated accelerated degradation testing and field information

    International Nuclear Information System (INIS)

    Wang, Lizhi; Pan, Rong; Li, Xiaoyang; Jiang, Tongmin

    2013-01-01

    Accelerated degradation testing (ADT) is a common approach in reliability prediction, especially for products with high reliability. However, the laboratory conditions of ADT often differ from field conditions; thus, to predict field failures, one needs to calibrate the predictions made using ADT data. In this paper a Bayesian evaluation method is proposed to integrate ADT data from the laboratory with failure data from the field. Calibration factors are introduced to account for the difference between the lab and field conditions so as to predict a product's actual field reliability more accurately. The information fusion and statistical inference procedures are carried out through a Bayesian approach and Markov chain Monte Carlo methods. The proposed method is demonstrated by two examples and a sensitivity analysis with respect to the prior distribution assumptions.

  11. Model Based Beamforming and Bayesian Inversion Signal Processing Methods for Seismic Localization of Underground Source

    DEFF Research Database (Denmark)

    Oh, Geok Lian

    properties such as the elastic wave speeds and soil densities. One processing method casts the estimation problem as an inverse problem to solve for the unknown material parameters. The forward models for the seismic signals used in the literature include ray tracing methods that consider only...... density values of the discretized ground medium, which leads to time-consuming computations and unstable behaviour of the inversion process. In addition, the geophysics inverse problem is generally ill-posed due to a non-exact forward model that introduces errors. The Bayesian inversion method, through...... the probability density function, permits the incorporation of a priori information about the parameters and also allows for the incorporation of theoretical errors. This opens up the possibility of applying the inverse paradigm to real-world geophysics inversion problems. In this PhD study, the Bayesian...

  12. Remarkable phylogenetic resolution of the most complex clade of Cyprinidae (Teleostei: Cypriniformes): a proof of concept of homology assessment and partitioning sequence data integrated with mixed model Bayesian analyses.

    Science.gov (United States)

    Tao, Wenjing; Mayden, Richard L; He, Shunping

    2013-03-01

    Despite many efforts to resolve evolutionary relationships among major clades of Cyprinidae, some nodes have been especially problematic and remain unresolved. In this study, we employ four nuclear gene fragments (3.3 kb) to infer interrelationships of the Cyprinidae. A reconstruction of the phylogenetic relationships within the family using maximum parsimony, maximum likelihood, and Bayesian analyses is presented. Among the taxa within the monophyletic Cyprinidae, Rasborinae is the basal-most lineage, and Cyprininae is sister to Leuciscinae. The monophyly of the subfamilies Gobioninae, Leuciscinae and Acheilognathinae was resolved with high nodal support. Although our results do not completely resolve relationships within Cyprinidae, this study presents novel and significant findings with major implications for a highly diverse and enigmatic clade of East-Asian cyprinids. Within this monophyletic group five closely related subgroups are identified. Tinca tinca, one of the most phylogenetically enigmatic genera in the family, is strongly supported as having evolutionary affinities with this East-Asian clade; an established yet remarkable association, given the natural variation in phenotypes and the generalized ecological niches occupied by these taxa. Our results clearly argue that the choice of partitioning strategy has significant impacts on the phylogenetic reconstruction, especially when multiple genes are being considered. The most highly partitioned model (partitioned by codon position within genes) extracts the strongest phylogenetic signal and performs better than any other partitioning scheme, as supported by the strongest 2Δln Bayes factor. Future studies should include higher levels of taxon sampling and partitioned, model-based analyses. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Evaluating Bayesian spatial methods for modelling species distributions with clumped and restricted occurrence data.

    Directory of Open Access Journals (Sweden)

    David W Redding

    Full Text Available Statistical approaches for inferring the spatial distribution of taxa (species distribution models, SDMs) commonly rely on available occurrence data, which are often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs is now widely available, but whether such approaches aid predictions compared to other methodologies is unknown. Here, within a simulated environment using 1000 generated species ranges, we compared the performance of two commonly used non-spatial SDM methods (maximum entropy modelling, MAXENT, and boosted regression trees, BRT) to a spatial Bayesian SDM method (fitted using R-INLA) when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how any recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate method, being among the top two most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy than the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4-8% better AUC score. Alternatively, when sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods, such as those made available by R-INLA, can be successfully used to account

  14. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    Science.gov (United States)

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals, or boundaries between clusters, using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially structured populations were simulated in which a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with both simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.

  15. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  16. Applying graph partitioning methods in measurement-based dynamic load balancing

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fourestier, Sebastien [Univ. of Bordeaux (France). Bordeaux Lab. for Research in Computer Science; Menon, Harshitha [Univ. of Illinois, Urbana-Champaign, IL (United States); Kale, Laxmikant V. [Univ. of Illinois, Urbana-Champaign, IL (United States); Pellegrini, Francois [Univ. of Bordeaux (France). Bordeaux Lab. for Research in Computer Science

    2011-09-26

    Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable-objects-based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance compared to the METIS partitioners in several cases, both in terms of application execution time and the number of objects migrated.

  17. Fission yield covariances for JEFF: A Bayesian Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Leray Olivier

    2017-01-01

    Full Text Available The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique, as all libraries face this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained by combining the information from the JEFF library (fission yields and uncertainties) and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides, together with their impacts on simple burn-up and decay heat calculations.

  18. Bayesian Method for Building Frequent Landsat-Like NDVI Datasets by Integrating MODIS and Landsat NDVI

    OpenAIRE

    Limin Liao; Jinling Song; Jindi Wang; Zhiqiang Xiao; Jian Wang

    2016-01-01

    Studies related to vegetation dynamics in heterogeneous landscapes often require Normalized Difference Vegetation Index (NDVI) datasets with both high spatial resolution and frequent coverage, which cannot be satisfied by a single sensor due to technical limitations. In this study, we propose a new method called NDVI-Bayesian Spatiotemporal Fusion Model (NDVI-BSFM) for accurately and effectively building frequent high spatial resolution Landsat-like NDVI datasets by integrating Moderate Resol...

  19. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    Energy Technology Data Exchange (ETDEWEB)

    Bhagwat, Nikhil V. [Iowa State Univ., Ames, IA (United States)

    2005-01-01

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Besides, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires the assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples could be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be significant for industries in which the cost associated with the creation of a new station is not the same. For such industries, the results obtained using the present approach will not be of much value. Labor costs, task incompletion costs, or a combination of these can be effectively used as alternative objective functions.

  20. Calculation of partition functions and free energies of a binary mixture using the energy partitioning method: application to carbon dioxide and methane.

    Science.gov (United States)

    Do, Hainam; Hirst, Jonathan D; Wheatley, Richard J

    2012-04-19

    It is challenging to compute the partition function (Q) for systems with enormous configurational spaces, such as fluids. Recently, we developed a Monte Carlo technique (an energy partitioning method) for computing Q [J. Chem. Phys. 2011, 135, 174105]. In this paper, we use this approach to compute the partition function of a binary fluid mixture (carbon dioxide + methane); this allows us to obtain the Helmholtz free energy (F) via F = -kBT ln Q and the Gibbs free energy (G) via G = F + pV. We then utilize G to obtain the coexisting mole fraction curves. The chemical potential of each species is also obtained. At the vapor-liquid equilibrium condition, the chemical potential of methane significantly increases, while that of carbon dioxide slightly decreases, as the pressure increases along an isotherm. Since Q is obtained from the density of states, which is independent of the temperature, equilibrium thermodynamic properties at any condition can be obtained by varying the total composition and volume of the system. Our methodology can be adapted to explore the free energies of other binary mixtures in general, and of those containing CO2 in particular. Since the method gives access to the free energy and chemical potentials, it will be useful in many other applications.
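
    The route from a density of states to the free energies is easy to illustrate on a toy system: Q(T) = Σ_E g(E) e^(-E/kBT), then F = -kBT ln Q and G = F + pV. The two-level system below is purely illustrative, not the paper's fluid model.

    ```python
    import numpy as np

    # Toy illustration: density of states -> partition function -> free energies.

    kB = 1.380649e-23                     # Boltzmann constant, J/K
    energies = np.array([0.0, 1.0e-21])   # J, two energy levels (illustrative)
    g = np.array([1.0, 3.0])              # degeneracies (density of states)

    def free_energies(T, p=1.0e5, V=1.0e-26):
        Q = np.sum(g * np.exp(-energies / (kB * T)))
        F = -kB * T * np.log(Q)           # Helmholtz free energy
        return F, F + p * V               # (F, Gibbs free energy G)

    for T in (100.0, 300.0):
        F, G = free_energies(T)
        print(f"T = {T:5.0f} K:  F = {F:.3e} J,  G = {G:.3e} J")
    ```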

  1. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts present only frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, the binomial proportion, the Poisson distribution, the normal mean, and simple linear regression. In addition, newly developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for the Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...

  2. A new method for E-government procurement using collaborative filtering and Bayesian approach.

    Science.gov (United States)

    Zhang, Shuai; Xi, Chengyu; Wang, Yan; Zhang, Wenyu; Chen, Yanhong

    2013-01-01

    Nowadays, as Internet services grow faster than ever before, government systems are being reinvented as E-government services. Government procurement sectors therefore face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select candidate services and obtain the top-M recommendations, so that the computational load involved can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain, static values but are easily represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.

  3. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    2010-01-01

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Based on data from the Danish passenger railway operator DSB S-tog A/S, a solution method to the train driver recovery problem (TDRP) is developed. The TDRP is formulated as a set partitioning problem. We define a disruption neighbourhood by identifying a small set of drivers and train tasks directly affected by the disruption. Based on the disruption neighbourhood, the TDRP model is formed and solved. If the TDRP solution provides a feasible recovery for the drivers within...

  4. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    Science.gov (United States)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L^2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate Bayesian inference. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.

  5. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures.

    Science.gov (United States)

    Puncher, M; Birchall, A; Bull, R K

    2012-08-01

    Estimating uncertainties on doses from bioassay data is of interest in epidemiological studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework for calculating these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented with the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent) but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes, and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and using the MCMC method. The distributions are in good agreement: posterior means and the 0.025 and 0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 minutes on a fast workstation, whereas the MCMC method took around 12-72 hours. The advantages and disadvantages of the method are discussed.

  6. Poles tracking of weakly nonlinear structures using a Bayesian smoothing method

    Science.gov (United States)

    Stephan, Cyrille; Festjens, Hugo; Renaud, Franck; Dion, Jean-Luc

    2017-02-01

    This paper describes a method for the identification and tracking of the poles of a weakly nonlinear structure from its free responses. The method is based on a model of multichannel damped sines whose parameters evolve over time. Their variations are approximated in discrete time by a nonlinear state space model. States are estimated by an iterative process that couples a two-pass Bayesian smoother with an Expectation-Maximization (EM) algorithm. The method is applied to numerical and experimental cases. As a result, accurate frequency and damping estimates are obtained as a function of amplitude.

  7. Implementing statistical learning methods through Bayesian networks. Part 1: a guide to Bayesian parameter estimation using forensic science data.

    Science.gov (United States)

    Biedermann, A; Taroni, F; Bozza, S

    2009-12-15

    As a structured combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means for studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction on how interested readers may use these analytical approaches, with the help of Bayesian networks, for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks while constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) for specifying graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).

  8. Probability-informed testing for reliability assurance through Bayesian hypothesis methods

    International Nuclear Information System (INIS)

    Smith, Curtis; Kelly, Dana; Dezfuli, Homayoon

    2010-01-01

    Bayesian inference techniques play a central role in modern risk and reliability evaluations of complex engineering systems. These techniques allow the system performance data and any relevant associated information to be used collectively to calculate the probabilities of various types of hypotheses that are formulated as part of reliability assurance activities. This paper proposes a methodology based on Bayesian hypothesis testing to determine the number of tests that would be required to demonstrate that a system-level reliability target is met with a specified probability level. Recognizing that full-scale testing of a complex system is often not practical, testing schemes are developed at the subsystem level to achieve the overall system reliability target. The approach uses network modeling techniques to transform the topology of the system into logic structures consisting of series and parallel subsystems. The paper addresses the consideration of cost in devising subsystem-level test schemes. The developed techniques are demonstrated using several examples. All analyses are carried out using the Bayesian analysis tool WinBUGS, which uses Markov chain Monte Carlo simulation methods to carry out inference over the network.
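
    For a single subsystem with a conjugate beta prior on its reliability, the required number of failure-free tests can be found by direct search; the following is a minimal Python sketch of this special case (the paper's network-level treatment in WinBUGS is far more general):

        from scipy.stats import beta

        def tests_required(R_target, assurance, a=1.0, b=1.0):
            """Smallest number n of failure-free tests such that
            P(R > R_target | n successes, 0 failures) >= assurance,
            with a Beta(a, b) prior on the subsystem reliability R."""
            n = 0
            while beta.sf(R_target, a + n, b) < assurance:
                n += 1
            return n

        # e.g. demonstrate R > 0.95 with 90% assurance, uniform prior
        print(tests_required(0.95, 0.90))  # -> 44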

  9. Use of indexed historical floods in flood frequency estimation with Fuzzy Bayesian methods

    Science.gov (United States)

    Salinas, Jose; Viglione, Alberto; Kiss, Andrea; Bloeschl, Guenter

    2015-04-01

    Efforts of the historical environmental extremes community during the last decades have produced long flood time series, for example in Central Europe and the Mediterranean region, which in some cases reach more than 500 years into the past. In most cases the flood time series are presented in terms of indices representing a combination of socio-economic indicators of flood impact, e.g. economic damage, flood duration and extension, ... In hydrological engineering, historical floods are very useful because they provide additional information that reduces the uncertainty in estimates of discharges with low annual exceedance probabilities, i.e. with high return periods. In order to use historical floods in formal flood frequency analysis, the precise values of the peak discharges would ideally be known, but, as noted, they are usually given in terms of indices. This work presents a novel method for obtaining a prior distribution for the parameters of the annual peak discharge distribution from indexed historical flood time series. The prior distribution is incorporated in the flood frequency estimation via Bayesian methods (see e.g. Viglione et al., 2013) in order to reduce the uncertainties in the design flood estimates. The historical data used are subject to a high degree of uncertainty and imprecision. A framework is therefore presented in which the discharge thresholds between flood indices are modeled as fuzzy numbers. These fuzzy thresholds define a fuzzy prior distribution, which requires applying Fuzzy Bayesian Inference (Viertl, 2008a,b) to obtain fuzzy credibility intervals for the design floods. Viertl, R. (2008a) Foundations of Fuzzy Bayesian Inference, Journal of Uncertain Systems, 2, 187-191. Viertl, R. (2008b) Fuzzy Bayesian Inference. In: Soft Methods For Handling Variability And Imprecision. Advances In Soft Computing. Vol. 48. Springer-Verlag Berlin, pp 10-15. Viglione, A., R. Merz

  10. Robust modelling of solubility in supercritical carbon dioxide using Bayesian methods.

    Science.gov (United States)

    Tarasova, Anna; Burden, Frank; Gasteiger, Johann; Winkler, David A

    2010-04-01

    Two sparse Bayesian methods were used to derive predictive models of the solubility of organic dyes and polycyclic aromatic compounds in supercritical carbon dioxide (scCO2) over a wide range of temperatures (285.9-423.2 K) and pressures (60-1400 bar): a multiple linear regression employing an expectation maximization algorithm and a sparse prior (MLREM), and a non-linear Bayesian Regularized Artificial Neural Network with a Laplacian Prior (BRANNLP). A randomly selected test set was used to estimate the predictive ability of the models. The MLREM method resulted in a model of similar predictivity to the less sparse MLR method, while the non-linear BRANNLP method created models of substantially better predictivity than either the MLREM- or MLR-based models. The BRANNLP method simultaneously generated context-relevant subsets of descriptors and a robust, non-linear quantitative structure-property relationship (QSPR) model for compound solubility in scCO2. The differences between linear and non-linear descriptor selection methods are discussed. (c) 2009 Elsevier Inc. All rights reserved.

  11. Bayesian inference for data assimilation using Least-Squares Finite Element methods

    International Nuclear Information System (INIS)

    Dwight, Richard P

    2010-01-01

    It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow. The approach was shown to be effective without regularization terms and can handle substantial noise in the experimental data without filtering. Of great practical importance is that, unlike other data assimilation techniques, it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse-problem framework, so that, for example, the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code does not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. The Bayesian interpretation also better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.

  12. Bayesian and maximum entropy methods for fusion diagnostic measurements with compact neutron spectrometers

    International Nuclear Information System (INIS)

    Reginatto, Marcel; Zimbal, Andreas

    2008-01-01

    In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements

  13. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, a certain understanding of the lithological composition of the subsurface is required. Because of practical constraints, only a limited amount of data can be acquired, so many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete and categorical; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. The limited hard data from cores and the soft data generated from geological dating data and virtual wells were used to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  14. Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning

    Science.gov (United States)

    Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri

    1991-01-01

    Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.

  15. Partition coefficients for alcohol tracers between nonaqueous-phase liquids and water from UNIFAC-solubility method

    Science.gov (United States)

    Wang, Peng; Dwarakanath, Varadarajan; Rouse, Bruce A.; Pope, Gary A.; Sepehrnoori, Kamy

    In this work, we have applied a group-contribution activity-coefficient model, UNIFAC, and the solubility of alcohols in water to estimate partition coefficients for alcohol tracers between water and nonaqueous-phase liquids (NAPLs). The effects of temperature and of mutual solubility between the NAPL and aqueous phases on the estimation of partition coefficients were also investigated. By comparing the estimated results with experimental partition coefficients for 30 alcohol tracers between 10 NAPLs and water, we found that: i) the UNIFAC-solubility method, in which the UNIFAC model in its infinite-dilution form is applied to the NAPL phase and the solubility of tracers in water is used to estimate the activity coefficient in the aqueous phase, works better than the UNIFAC model alone; ii) a linear relation between the logarithm of the partition coefficient and the logarithm of tracer solubility in water is observed for tracers with a similar chemical structure (i.e. the same number of branched methyl groups), which can serve as a useful tool for quick selection of tracers that exhibit the desired partition coefficients; iii) the effect of mutual solubility between the NAPL and aqueous phases can be neglected because such miscibility is very small, usually of the order of 10^-3 mole/mole; and iv) temperature variation between 15 and 25 °C does not significantly affect partition coefficients.

  16. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and these methods are not suited to direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, feature redundancy is used to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of the individual classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the other methods.
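
    A minimal sketch of the scheme, using a round-robin feature partition as a stand-in for the paper's redundancy-based partition, scikit-learn SVMs, and majority voting:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=60, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        # Partition the features into k disjoint subsets (round-robin here;
        # the paper groups features by a redundancy measure instead).
        k = 5
        subsets = [np.arange(i, X.shape[1], k) for i in range(k)]

        # Train one SVM per feature subset and combine by majority vote.
        clfs = [SVC().fit(Xtr[:, s], ytr) for s in subsets]
        votes = np.array([c.predict(Xte[:, s]) for c, s in zip(clfs, subsets)])
        pred = (votes.mean(axis=0) > 0.5).astype(int)
        print("accuracy:", (pred == yte).mean())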

  17. A nonparametric Bayesian method of translating machine learning scores to probabilities in clinical decision support.

    Science.gov (United States)

    Connolly, Brian; Cohen, K Bretonnel; Santel, Daniel; Bayram, Ulya; Pestian, John

    2017-08-07

    Probabilistic assessments of clinical care are essential for quality care. Yet machine learning, which supports this care process, has been limited to categorical results. To maximize its usefulness, it is important to find novel approaches that calibrate ML output on a likelihood scale. Current state-of-the-art calibration methods are generally accurate and applicable to many ML models, but improved granularity and accuracy of such methods would increase the information available for clinical decision making. This novel non-parametric Bayesian approach is demonstrated on a variety of data sets, including simulated classifier outputs, biomedical data sets from the University of California, Irvine (UCI) Machine Learning Repository, and a clinical data set built to determine suicide risk from the language of emergency department patients. The method is first demonstrated on support-vector machine (SVM) models, which generally produce well-behaved, well-understood scores. It produces calibrations comparable to the state-of-the-art Bayesian Binning in Quantiles (BBQ) method when the SVM models are able to separate cases and controls effectively. However, as the SVM models' ability to discriminate classes decreases, our approach yields more granular and dynamic calibrated probabilities compared with the BBQ method. Improvements in granularity and range are even more dramatic when the discrimination between the classes is artificially degraded by replacing the SVM model with an ad hoc k-means classifier. The method allows both clinicians and patients to have a more nuanced view of the output of an ML model, supporting better decision making. The method is demonstrated on simulated data, various biomedical data sets, and a clinical data set, to which diverse ML methods are applied. Trivially extending the method to (non-ML) clinical scores is also discussed.

  18. An Advanced Bayesian Method for Short-Term Probabilistic Forecasting of the Generation of Wind Power

    Directory of Open Access Journals (Sweden)

    Antonio Bracale

    2015-09-01

    Currently, among renewable distributed generation systems, wind generators are receiving a great deal of interest due to the substantial economic, technological, and environmental incentives they involve. However, the uncertainties due to the intermittent nature of wind energy make it difficult to operate electrical power systems optimally and to make decisions that satisfy the needs of all the stakeholders of the electricity energy market. Thus, there is increasing interest in determining how to forecast wind power production accurately. Most of the methods published in the relevant literature provide deterministic forecasts, even though interest has recently focused on probabilistic forecasting methods. In this paper, an advanced probabilistic method is proposed for short-term forecasting of wind power production. A mixture of two Weibull distributions was used as a probability function to model the uncertainties associated with wind speed. Then, a Bayesian inference approach with a particularly effective autoregressive integrated moving-average model was used to determine the parameters of the mixture Weibull distribution. Numerical applications are also presented to provide evidence of the forecasting performance of the Bayesian-based approach.
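
    The uncertainty model is a mixture of two Weibull distributions; a minimal sketch of its density and of sampling from it (parameter values are illustrative, not the paper's Bayesian/ARIMA estimates):

        import numpy as np
        from scipy.stats import weibull_min

        def mixture_pdf(v, w, k1, c1, k2, c2):
            """Density of a two-component Weibull mixture for wind speed v."""
            return (w * weibull_min.pdf(v, k1, scale=c1)
                    + (1 - w) * weibull_min.pdf(v, k2, scale=c2))

        def mixture_sample(n, w, k1, c1, k2, c2, rng=np.random.default_rng(0)):
            """Draw wind speeds by picking a component, then a Weibull draw."""
            comp = rng.random(n) < w
            s1 = c1 * rng.weibull(k1, size=n)  # scale * standard Weibull
            s2 = c2 * rng.weibull(k2, size=n)
            return np.where(comp, s1, s2)

        v = mixture_sample(10000, w=0.6, k1=2.0, c1=6.0, k2=3.5, c2=11.0)
        print(v.mean(), mixture_pdf(8.0, 0.6, 2.0, 6.0, 3.5, 11.0))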

  19. Lifetime modelling with a Weibull law: comparison of three Bayesian Methods

    International Nuclear Information System (INIS)

    Billy, F.; Remy, E.; Bousquet, N.; Celeux, G.

    2006-01-01

    For a nuclear power plant, being able to estimate the lifetime of important components is strategic, but data are usually insufficient to do so. It is therefore relevant to use expert knowledge together with data, in order to assess lifetime on the basis of both sources. The Bayesian framework, with a Weibull law modeling the random time to replacement, is well suited to this task and has been adopted in this article. Two indicators are computed: the mean lifetime of any component, and the mean residual lifetime of a given component after it has been inspected. Three different Bayesian methods are compared on three sets of data. The article shows that the three methods lead to coherent results and that uncertainties are strongly reduced. The method developed around PMC has two main advantages: it models a conditional dependence between the two parameters of the Weibull law, which enables more coherent prior results; and it has a parameter that weights the strength of the expertise. This last point is very important for lifetime assessment, because expertise is then used not so much to augment overly small samples as to perform a real extrapolation, far beyond what the data themselves say. (authors)

  20. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    Science.gov (United States)

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for the segmentation of nuclear arrangement in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate breast histology image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with a texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used to generate a graph, and the final segmentation is obtained with the normalized cuts method. Extensive experiments show that the proposed algorithm can segment nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation, a ground-truth database of 100 malignant and non-malignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed segmentation was performed, showing that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
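
    A rough analogue of the pipeline can be assembled from scikit-image building blocks (superpixels, a region adjacency graph, normalized cuts); the paper's similarity measure and texton maps are not reproduced here:

        import numpy as np
        from skimage import data, segmentation
        from skimage import graph  # scikit-image >= 0.20; older: skimage.future.graph

        img = data.immunohistochemistry()  # stand-in for an H&E-stained image
        labels = segmentation.slic(img, n_segments=400, compactness=30)
        rag = graph.rag_mean_color(img, labels, mode='similarity')
        seg = graph.cut_normalized(labels, rag)  # normalized-cuts partition
        print(len(np.unique(seg)), "regions")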

  1. Bayesian prediction of future ice sheet volume using local approximation Markov chain Monte Carlo methods

    Science.gov (United States)

    Davis, A. D.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice

  2. Photoacoustic discrimination of vascular and pigmented lesions using classical and Bayesian methods

    Science.gov (United States)

    Swearingen, Jennifer A.; Holan, Scott H.; Feldman, Mary M.; Viator, John A.

    2010-01-01

    Discrimination of pigmented and vascular lesions in skin can be difficult due to factors such as size, subungual location, and the nature of lesions containing both melanin and vascularity. Misdiagnosis may lead to precancerous or cancerous lesions not receiving proper medical care. To aid in the rapid and accurate diagnosis of such pathologies, we develop a photoacoustic system to determine the nature of skin lesions in vivo. By irradiating skin with two laser wavelengths, 422 and 530 nm, we induce photoacoustic responses, and the relative response at these two wavelengths indicates whether the lesion is pigmented or vascular. This response is due to the distinct absorption spectrum of melanin and hemoglobin. In particular, pigmented lesions have ratios of photoacoustic amplitudes of approximately 1.4 to 1 at the two wavelengths, while vascular lesions have ratios of about 4.0 to 1. Furthermore, we consider two statistical methods for conducting classification of lesions: standard multivariate analysis classification techniques and a Bayesian-model-based approach. We study 15 human subjects with eight vascular and seven pigmented lesions. Using the classical method, we achieve a perfect classification rate, while the Bayesian approach has an error rate of 20%.

  3. Estimated value of insurance premium due to Citarum River flood by using Bayesian method

    Science.gov (United States)

    Sukono; Aisah, I.; Tampubolon, Y. R. H.; Napitupulu, H.; Supian, S.; Subiyanto; Sidi, P.

    2018-03-01

    Flooding of the Citarum river in South Bandung, West Java, Indonesia, happens almost every year, causing property damage and economic loss. The risk of loss can be mitigated through a flood insurance program. In this paper, we discuss the estimation of insurance premiums for Citarum river floods using a Bayesian method. It is assumed that the flood-loss risk data follow a Pareto distribution with a heavy right tail. The distribution parameters are estimated by the Bayesian method. First, parameter estimation is carried out under the assumption that the prior comes from the Gamma distribution family, while the observed data follow a Pareto distribution. Second, flood-loss data are simulated based on the probability of damage in each flood-affected area. The analysis yields the following premium estimates under the pure premium principle: for a loss value of IDR 629.65 million, a premium of IDR 338.63 million; for a loss of IDR 584.30 million, a premium of IDR 314.24 million; and for a loss value of IDR 574.53 million, a premium of IDR 308.95 million. The premium estimator can serve as a reference for setting a reasonable premium, one that neither burdens the insured nor causes losses for the insurer.
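
    With a known Pareto scale x_m, the tail index alpha has a conjugate gamma prior; a minimal sketch of the posterior update and a pure-premium calculation (all numbers illustrative, and the predictive mean is approximated by plugging in the posterior mean of alpha):

        import numpy as np

        def posterior_alpha(losses, x_m, a=2.0, b=1.0):
            """Gamma(a, b) prior on the Pareto tail index alpha; with known
            scale x_m the posterior is Gamma(a + n, b + sum(log(x_i/x_m)))."""
            losses = np.asarray(losses, dtype=float)
            return a + len(losses), b + np.log(losses / x_m).sum()

        def pure_premium(a_post, b_post, x_m):
            """Pure premium = expected loss, E[X | alpha] = alpha*x_m/(alpha-1),
            evaluated at the posterior mean of alpha (a simplification of the
            full predictive mean; requires alpha > 1)."""
            alpha_hat = a_post / b_post
            return alpha_hat * x_m / (alpha_hat - 1.0)

        losses = [120.0, 85.0, 310.0, 95.0, 150.0]  # illustrative, IDR millions
        a_post, b_post = posterior_alpha(losses, x_m=80.0)
        print(pure_premium(a_post, b_post, x_m=80.0))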

  4. Distinguishing real from fake ivory products by elemental analyses: A Bayesian hybrid classification method.

    Science.gov (United States)

    Buddhachat, Kittisak; Brown, Janine L; Thitaram, Chatchote; Klinhom, Sarisa; Nganvongpanit, Korakot

    2017-03-01

    As laws tighten to limit commercial ivory trading and protect threatened species like whales and elephants, sales of fake ivory products have become widespread. This study describes a method, handheld X-ray fluorescence (XRF), as a noninvasive technique for elemental analysis to differentiate quickly between ivory (Asian and African elephant, mammoth) and non-ivory (bones, teeth, antler, horn, wood, synthetic resin, rock) materials. An equation consisting of 20 elements and light elements from a stepwise discriminant analysis was used to classify samples, followed by Bayesian binary regression to determine the probability of a sample being 'ivory', with complementary log-log analysis to identify the best-fitting model for this purpose. This Bayesian hybrid classification model was 93% accurate, with 92% precision, in discriminating ivory from non-ivory materials. The method was then validated by scanning additional ivory and non-ivory samples, correctly identifying bone as non-ivory with >95% accuracy, except for elephant bone (72%). It was less accurate for wood and rock (25-85%); however, a preliminary screening to determine if samples are not Ca-dominant could eliminate inorganic materials. In conclusion, elemental analysis by XRF can be used to identify several forms of fake ivory, which could have forensic application. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  5. Stress partitioning behavior in an fcc alloy evaluated by the in situ/ex situ EBSD-Wilkinson method

    International Nuclear Information System (INIS)

    Ojima, Mayumi; Adachi, Yoshitaka; Suzuki, Seiichi; Tomota, Yo

    2011-01-01

    Hierarchical stress partitioning behavior among grains in the elasto-plastic region of a polycrystalline material was studied by a combined technique of in situ/ex situ electron backscattering diffraction based on local strain measurements (the EBSD-Wilkinson method) and neutron diffraction measurements during tensile deformation. Elastic strains parallel to the tensile direction both during loading (e_11) and after unloading (e'_11) were measured. The volume-averaged stress partitioning among [hkl] family grains measured by the EBSD-Wilkinson method was in good agreement with that measured by neutron diffraction measurements, but a more complicated strain distribution occurred microscopically because of restriction from the surrounding grains.

  6. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.

    Science.gov (United States)

    Yin, Kedong; Yang, Benshuo; Li, Xuemei

    2018-01-24

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems in which decision makers represent their evaluations of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation, and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in a trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyse the impact of different parameters on the decision-making results.

  7. Partition functions in even dimensional AdS via quasinormal mode methods

    International Nuclear Information System (INIS)

    Keeler, Cynthia; Ng, Gim Seng

    2014-01-01

    In this note, we calculate the one-loop determinant for a massive scalar (with conformal dimension Δ) in even-dimensional AdS_{d+1} space, using the quasinormal mode method developed in http://dx.doi.org/10.1088/0264-9381/27/12/125001 by Denef, Hartnoll, and Sachdev. Working first in two dimensions on the related Euclidean hyperbolic plane H_2, we find a series of zero modes for negative real values of Δ whose presence indicates a series of poles in the one-loop partition function Z(Δ) in the complex Δ plane; these poles contribute temperature-independent terms to the thermal AdS partition function computed in http://dx.doi.org/10.1088/0264-9381/27/12/125001. Our results match those in a series of papers by Camporesi and Higuchi, as well as Gopakumar et al. http://dx.doi.org/10.1007/JHEP11(2011)010 and Banerjee et al. http://dx.doi.org/10.1007/JHEP03(2011)147. We additionally examine the meaning of these zero modes, finding that they Wick-rotate to quasinormal modes of the AdS_2 black hole. They are also interpretable as matrix elements of the discrete series representations of SO(2,1) in the space of smooth functions on S^1. We generalize our results to general even-dimensional AdS_{2n}, again finding a series of zero modes which are related to discrete series representations of SO(2n,1), the motion group of H_{2n}.

  8. Online probabilistic operational safety assessment of multi-mode engineering systems using Bayesian methods

    International Nuclear Information System (INIS)

    Lin, Yufei; Chen, Maoyin; Zhou, Donghua

    2013-01-01

    In the past decades, engineering systems have become more and more complex, and they generally work in several operational modes. Since incipient faults can lead to dangerous accidents, it is crucial to develop strategies for online operational safety assessment. However, existing online assessment methods for multi-mode engineering systems commonly assume that samples are independent, which does not hold in practical cases. This paper proposes a probabilistic framework for the online operational safety assessment of multi-mode engineering systems with sample dependency. To begin with, a Gaussian mixture model (GMM) is used to characterize the multiple operating modes. Then, based on the definition of the safety index (SI), the SI for each single mode is calculated. Finally, a Bayesian method is presented to calculate the posterior probabilities of belonging to each operating mode under sample dependency. The proposed assessment strategy is applied in two examples: one is an aircraft gas turbine, the other an industrial dryer. Both examples illustrate the efficiency of the proposed method.
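
    The dependency-aware posterior mode probabilities can be computed by a forward recursion; a minimal sketch for two Gaussian operating modes linked by a Markov prior (parameters illustrative, not the paper's):

        import numpy as np
        from scipy.stats import norm

        means, sds = np.array([0.0, 5.0]), np.array([1.0, 1.5])  # two modes
        P = np.array([[0.95, 0.05],   # Markov prior: modes tend to persist,
                      [0.05, 0.95]])  # encoding the sample dependency

        def mode_posteriors(x):
            """Forward recursion for p(mode_t | x_1..x_t), propagating the
            previous posterior through the Markov prior instead of treating
            samples as independent."""
            p = np.full(2, 0.5)
            out = []
            for xt in x:
                prior = P.T @ p                  # one-step mode prediction
                p = prior * norm.pdf(xt, means, sds)
                p /= p.sum()                     # Bayes normalization
                out.append(p.copy())
            return np.array(out)

        print(mode_posteriors([0.2, 0.1, 4.8, 5.3]).round(3))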

  9. A Default Method to Specify Skeletons for Bayesian Model Averaging Continual Reassessment Method for Phase I Clinical Trials

    Science.gov (United States)

    Pan, Haitao; Yuan, Ying

    2016-01-01

    The Bayesian model averaging continual reassessment method (BMA-CRM) is an extension of the continual reassessment method (CRM) for dose finding. The BMA-CRM improves the robustness and overall performance of the CRM by specifying multiple skeletons (or models) and then using Bayesian model averaging to automatically favor the best fitting model for robust decision making. Specifying multiple skeletons, however, can be challenging for practitioners. In this paper, we propose a default way to specify skeletons for the BMA-CRM. We show that skeletons that appear rather different may actually lead to equivalent models. Motivated by this, we define a nonequivalence measure to index the difference among skeletons. Using this measure, we extend the model calibration method of Lee and Cheung (2009) to choose the optimal skeletons that maximize the average percentage of correct selection of the maximum tolerated dose and ensure sufficient nonequivalence among the skeletons. Our simulation study shows that the proposed method has desirable operating characteristics. We provide software to implement the proposed method. PMID:26991076
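
    A minimal sketch of the model-averaging step, assuming the usual one-parameter power model p_d = skeleton_d^exp(a) with a normal prior on a; the skeleton and toxicity values below are illustrative:

        import numpy as np
        from scipy.integrate import quad
        from scipy.stats import norm

        skeletons = [np.array([0.05, 0.10, 0.20, 0.30, 0.50]),
                     np.array([0.10, 0.20, 0.30, 0.40, 0.50])]
        n_tox = np.array([0, 1, 2, 0, 0])  # toxicities observed per dose
        n_pat = np.array([3, 3, 3, 0, 0])  # patients treated per dose

        def marginal_likelihood(skel):
            """Integrate the binomial likelihood of p_d = skel_d**exp(a)
            over a N(0, 2) prior on a."""
            def integrand(a):
                p = skel ** np.exp(a)
                lik = np.prod(p**n_tox * (1 - p)**(n_pat - n_tox))
                return lik * norm.pdf(a, 0, 2)
            return quad(integrand, -10, 10)[0]

        ml = np.array([marginal_likelihood(s) for s in skeletons])
        print((ml / ml.sum()).round(3))  # posterior skeleton weights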

  10. Evolutionary Analysis of Dengue Serotype 2 Viruses Using Phylogenetic and Bayesian Methods from New Delhi, India.

    Directory of Open Access Journals (Sweden)

    Nazia Afreen

    2016-03-01

    Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported a predominance of dengue virus serotype 2 during 2011-2014 in Delhi. In the present study, we report the molecular characterization and evolutionary analysis of dengue serotype 2 viruses detected in 2011-2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages: Lineage I, II and III. Lineage III replaced Lineage I during the dengue fever outbreak of 2013. Further, a novel mutation, Thr404Ile, was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. The nucleotide substitution rate and the time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in the effective population size of Indian DENV-2 viruses was investigated through a Bayesian skyline plot. The study will be a vital road map for investigation of the epidemiology and evolutionary pattern of dengue viruses in India.

  11. Bayesian methods for addressing long-standing problems in associative learning: The case of PREE.

    Science.gov (United States)

    Blanco, Fernando; Moris, Joaquín

    2017-07-20

    Most associative models assume that learning can be understood as a gradual change in associative strength that captures the situation in one single parameter, or representational state. We will call this view single-state learning. However, there is ample evidence showing that under many circumstances different relationships that share features can be learned independently, and animals can quickly switch between expressing one or another. We will call this multiple-state learning. It is theoretically understudied because it needs a different data analysis approach from those usually employed. In this paper, we present a Bayesian model of the Partial Reinforcement Extinction Effect (PREE) that can test the predictions of the multiple-state view. This implies estimating the moment of change in the responses (from acquisition to extinction performance), at both the individual and the group levels. We used this model to analyze data from a PREE experiment with three levels of reinforcement during acquisition (100%, 75% and 50%). We found differences in the estimated moment of switch between states during extinction, such that it was delayed after leaner partial reinforcement schedules. This finding is compatible with the multiple-state view. To our knowledge, it is the first time that the predictions of the multiple-state view have been tested directly. The paper also aims to show the benefits that Bayesian methods can bring to the associative learning field.
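
    The switch-moment estimation can be illustrated with a discrete change-point model; a minimal sketch assuming Bernoulli responses with known pre- and post-switch response rates (the paper's model estimates these jointly):

        import numpy as np

        def switch_posterior(y, p_before=0.8, p_after=0.1):
            """Posterior over the trial index k at which behaviour switches
            from the acquisition state to the extinction state (uniform
            prior over k; responding occurs with rate p_before, then p_after)."""
            y = np.asarray(y)
            T = len(y)
            loglik = np.empty(T + 1)
            for k in range(T + 1):  # switch happens just before trial k
                p = np.where(np.arange(T) < k, p_before, p_after)
                loglik[k] = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
            post = np.exp(loglik - loglik.max())
            return post / post.sum()

        responses = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]  # illustrative extinction data
        print(switch_posterior(responses).round(3))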

  12. Evolutionary Analysis of Dengue Serotype 2 Viruses Using Phylogenetic and Bayesian Methods from New Delhi, India.

    Science.gov (United States)

    Afreen, Nazia; Naqvi, Irshad H; Broor, Shobha; Ahmed, Anwar; Kazim, Syed Naqui; Dohare, Ravins; Kumar, Manoj; Parveen, Shama

    2016-03-01

    Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported a predominance of dengue virus serotype 2 during 2011-2014 in Delhi. In the present study, we report the molecular characterization and evolutionary analysis of dengue serotype 2 viruses detected in 2011-2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages: Lineage I, II and III. Lineage III replaced Lineage I during the dengue fever outbreak of 2013. Further, a novel mutation, Thr404Ile, was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. The nucleotide substitution rate and the time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in the effective population size of Indian DENV-2 viruses was investigated through a Bayesian skyline plot. The study will be a vital road map for investigation of the epidemiology and evolutionary pattern of dengue viruses in India.

  13. bcrm: Bayesian Continual Reassessment Method Designs for Phase I Dose-Finding Trials

    Directory of Open Access Journals (Sweden)

    Michael Sweeting

    2013-09-01

    This paper presents the R package bcrm for conducting and assessing Bayesian continual reassessment method (CRM) designs in Phase I dose-escalation trials. CRM designs are a class of adaptive design that select the dose to be given to the next recruited patient based on accumulating toxicity data from patients already recruited into the trial, often using Bayesian methodology. Although the original CRM design was proposed in 1990, the methodology is still not widely implemented within oncology Phase I trials. The aim of this paper is to demonstrate, through the example of the bcrm package, how a variety of possible designs can be easily implemented within the R statistical software, and how properties of the designs can be communicated to trial investigators using simple textual and graphical output obtained from the package. This in turn should facilitate an iterative process that allows a design to be chosen that suits the needs of the investigator. Our bcrm package is the first to offer a large, comprehensive choice of CRM designs, priors, and escalation procedures, which can be easily compared and contrasted within the package through the assessment of operating characteristics.

  14. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. Calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable for calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, the residual analysis in this study justifies the assumptions of the error model used in the Bayesian calibration. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequently constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
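
    The key difference from AM is the proposal mechanism: DREAM-type samplers propose jumps along differences of other chains' states. A much-simplified sketch of that step (not the full DREAM algorithm, which adds subspace sampling, crossover adaptation, and outlier handling):

        import numpy as np

        rng = np.random.default_rng(1)

        def de_proposal(chains, i, eps=1e-6):
            """Differential-evolution proposal for chain i: jump along the
            difference between two other randomly chosen chains' states."""
            n, d = chains.shape
            gamma = 2.38 / np.sqrt(2 * d)  # standard DE-MC scaling factor
            r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
            return (chains[i] + gamma * (chains[r1] - chains[r2])
                    + eps * rng.standard_normal(d))

        def log_post(x):  # toy target: standard normal posterior
            return -0.5 * np.sum(x**2)

        chains = rng.standard_normal((8, 2))  # 8 chains, 2 parameters
        for _ in range(1000):
            for i in range(len(chains)):
                prop = de_proposal(chains, i)
                if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
                    chains[i] = prop
        print(chains.mean(axis=0).round(2))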

  15. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    D. Lu

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. Calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable for calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, the residual analysis in this study justifies the assumptions of the error model used in the Bayesian calibration. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequently constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.

  16. The Continual Reassessment Method for Multiple Toxicity Grades: A Bayesian Model Selection Approach

    Science.gov (United States)

    Yuan, Ying; Zhang, Shemin; Zhang, Wenhong; Li, Chanjuan; Wang, Ling; Xia, Jielai

    2014-01-01

    Grade information was considered by Yuan et al. (2007), who proposed the Quasi-CRM method to incorporate toxicity-grade information in Phase I trials. A potential problem with the Quasi-CRM model is that the choice of skeleton may dramatically change the performance of the CRM model, with similar consequences for the Quasi-CRM model. In this paper, we propose a new model that utilizes a Bayesian model selection approach, the Robust Quasi-CRM model, to tackle this pitfall of the Quasi-CRM model. The Robust Quasi-CRM model inherits the BMA-CRM approach proposed by Yin and Yuan (2009), considering a parallel set of skeletons for the Quasi-CRM. The superior performance of the Robust Quasi-CRM model is demonstrated by extensive simulation studies. We conclude that the proposed method can be freely used in real practice. PMID:24875783

  17. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, either distance-based or model-based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the number of clusters to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  18. Detection method of vegetable maturity based on neural network and bayesian information fusions

    Science.gov (United States)

    Liang, Fan; Chen, Hong-Dou; Cui, Shi-Gang; Yang, Li-Li; Wu, Xing-Li

    2015-12-01

    In order to better assess the maturity of vegetables, this paper proposes a method that makes full use of their external morphological characteristics, in particular extracting morphological features of the root and combining them with above-ground morphological features. First, vegetable images are processed by threshold segmentation and feature extraction using the Matlab image processing toolbox; in this way, values for leaf-crown projected area, plant height, root length, and root side area are obtained. Second, features of the above-ground and underground parts are used as training samples for corresponding neural-network maturity detection models. Finally, Bayesian theory is used to fuse the outputs of the neural networks. The results show that this method improves the accuracy of detection.
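
    The fusion step amounts to combining the two networks' class probabilities with Bayes' rule; a minimal sketch assuming calibrated per-class probabilities and conditional independence of the two feature sets:

        import numpy as np

        def bayes_fusion(p_ground, p_root, prior=None):
            """Fuse per-class probabilities from the above-ground and root
            networks assuming conditional independence of the two feature
            sets: posterior proportional to P(c|x1) * P(c|x2) / P(c)."""
            p_ground = np.asarray(p_ground, dtype=float)
            p_root = np.asarray(p_root, dtype=float)
            if prior is None:
                prior = np.full_like(p_ground, 1.0 / len(p_ground))
            post = p_ground * p_root / prior  # divide out one copy of the prior
            return post / post.sum()

        # Three maturity classes: immature / near-mature / mature (illustrative)
        print(bayes_fusion([0.2, 0.5, 0.3], [0.1, 0.3, 0.6]).round(3))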

  19. Estimation of octanol/water partition coefficient and aqueous solubility of environmental chemicals using molecular fingerprints and machine learning methods

    Science.gov (United States)

    Octanol/water partition coefficient (logP) and aqueous solubility (logS) are two important parameters in pharmacology and toxicology studies, and experimental measurements are usually time-consuming and expensive. In the present research, novel methods are presented for the estim...

  20. Repeatability of n-octanol/water partition coefficient values between liquid chromatography measurement methods.

    Science.gov (United States)

    Saranjampour, Parichehr; Armbrust, Kevin

    2018-03-20

    The n-octanol/water partition coefficient (K_OW) is a physical/chemical property that is extensively used for regulatory and environmental risk and exposure assessments. The K_OW value can be used to estimate various chemical properties, such as water solubility, bioavailability, and toxicity, via quantitative structure-activity relationships, which demands accurate knowledge of this property. The present investigation compares the outcomes of three commonly cited K_OW measurement methods for six hydrophobic chemicals: the pyrethroid insecticides cypermethrin and bifenthrin, and the highly volatile petroleum constituents dibenzothiophene (DBT) and three of its alkyl derivatives. The measurement has been difficult to obtain for the selected pyrethroids and is novel for the alkylated polycyclic aromatic sulfur heterocycles (i.e., all except DBT itself). The K_OW values were obtained using two liquid chromatographic methods, with isocratic and gradient programming, and the slow-stirring method, following OECD 117 and 123 guidelines, respectively. The mean log K_OW values of bifenthrin, cypermethrin, DBT, methyl-DBT, dimethyl-DBT, and diethyl-DBT were 8.4 ± 0.1, 6.0 ± 0.3, 4.8 ± 0.0, 5.4 ± 0.1, 6.0 ± 0.1, and 6.8 ± 0.0 using the HPLC method with gradient programming. The K_OW values were significantly reproducible within a method but not between methods. The results suggest that assessing a chemical's properties and environmental risk and exposure solely on the basis of its K_OW value should be practiced with caution.

  1. Solving many-body Schrödinger equations with kinetic energy partition method

    Science.gov (United States)

    Chen, Yu-Hsin; Chao, Sheng D.

    2018-01-01

    We present a general formulation of our previously developed kinetic energy partition (KEP) method for solving many-body Schrödinger equations. In atomic physics, as well as in general molecular and solid state physics, solving many-electron Schrödinger equations is a very challenging task, often called Dirac's challenge. The central problem is how to properly handle the electron-electron Coulomb repulsion interactions. Using the KEP solution scheme, in addition to dividing the kinetic energy into partial terms, the electron-electron Coulomb interaction is also separated into parts to be associated with a "negative mass" kinetic energy term. Therefore, the full Hamiltonian can be expressed as a simple sum of subsystem Hamiltonians, each representing an effective one-body problem. Using a Hartree-like product in constructing the wave function, we achieve fast convergence in the calculations of the ground state energies. First, the model Moshinsky atoms are used to illustrate the solution procedure. We then apply this new KEP method to harmonium atoms and obtain precise energies with an error of less than 5% using only two basis functions from each subsystem. It is thus very promising that this methodology, when further extended, can be useful for general many-body systems.

  2. Suspected pulmonary embolism and lung scan interpretation: Trial of a Bayesian reporting method

    International Nuclear Information System (INIS)

    Becker, D.M.; Philbrick, J.T.; Schoonover, F.W.; Teates, C.D.

    1990-01-01

    The objective of this research was to determine whether a Bayesian method of lung scan (LS) reporting could influence the management of patients with suspected pulmonary embolism (PE). The study comprised: (1) a descriptive study of the diagnostic process for suspected PE using the new reporting method; (2) a non-experimental evaluation of the reporting method comparing prospective patients with historical controls; and (3) a survey of physicians' reactions to the reporting innovation. Of 148 consecutive patients enrolled at the time of LS, 129 were completely evaluated; 75 patients scanned the previous year served as controls. The LS results of patients with suspected PE were reported as posttest probabilities of PE, calculated from physician-provided pretest probabilities and the likelihood ratios for PE of the LS interpretations. Despite the Bayesian intervention, the confirmation or exclusion of PE was often based on inconclusive evidence. PE was considered by the clinician to be ruled out in 98% of patients with posttest probabilities less than 25%, and ruled in for 95% of patients with posttest probabilities greater than 75%. Prospective patients and historical controls were similar in terms of tests ordered after the LS (e.g., pulmonary angiography). Patients with intermediate or indeterminate lung scan results had the highest proportion of subsequent testing. Most physicians (80%) found the reporting innovation helpful, either because it confirmed clinical judgement (94 cases) or because it led to additional testing (7 cases). Despite the probabilistic guidance provided by the study, the diagnosis of PE was often neither clearly established nor excluded. While physicians appreciated the innovation and were not confused by the terminology, their clinical decision making was not clearly enhanced.
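
    The reporting arithmetic described above is Bayes' theorem in odds form: posttest odds = pretest odds × likelihood ratio. A hedged sketch with hypothetical numbers (the abstract does not give the likelihood ratios used):

```python
# Bayes' theorem in odds form, as used in the Bayesian lung scan reporting
# described above. The example numbers are hypothetical.
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Posttest P(PE) from pretest P(PE) and the LR of the scan reading."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# A 30% pretest probability and a scan reading with LR = 15 give a posttest
# probability of about 0.87.
print(round(posttest_probability(0.30, 15.0), 2))
```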

  3. A computer program for uncertainty analysis integrating regression and Bayesian methods

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
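
    DREAM itself is a differential-evolution MCMC sampler; the sketch below uses a plain random-walk Metropolis sampler instead, to show in miniature how posterior samples yield the third type of interval listed above, an MCMC Bayesian credible interval. The model and data are stand-ins, not UCODE_2014 or DREAM.

```python
# Not DREAM: a plain random-walk Metropolis sketch showing how MCMC samples
# yield a Bayesian credible interval for one model parameter.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=1.0, size=50)     # synthetic observations

def log_posterior(theta: float) -> float:
    # Flat prior; Gaussian likelihood with known unit variance.
    return -0.5 * float(np.sum((data - theta) ** 2))

samples, theta = [], 0.0
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.3)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

burned = np.array(samples[5000:])                  # discard burn-in
lo, hi = np.percentile(burned, [2.5, 97.5])
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```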

  4. From "weight of evidence" to quantitative data integration using multicriteria decision analysis and Bayesian methods.

    Science.gov (United States)

    Linkov, Igor; Massey, Olivia; Keisler, Jeff; Rusyn, Ivan; Hartung, Thomas

    2015-01-01

    "Weighing" available evidence in the process of decision-making is unavoidable, yet it is one step that routinely raises suspicions: what evidence should be used, how much does it weigh, and whose thumb may be tipping the scales? This commentary aims to evaluate the current state and future roles of various types of evidence for hazard assessment as it applies to environmental health. In its recent evaluation of the US Environmental Protection Agency's Integrated Risk Information System assessment process, the National Research Council committee singled out the term "weight of evidence" (WoE) for critique, deeming the process too vague and detractive to the practice of evaluating human health risks of chemicals. Moving the methodology away from qualitative, vague and controversial methods towards generalizable, quantitative and transparent methods for appropriately managing diverse lines of evidence is paramount for both regulatory and public acceptance of the hazard assessments. The choice of terminology notwithstanding, a number of recent Bayesian WoE-based methods, the emergence of multi criteria decision analysis for WoE applications, as well as the general principles behind the foundational concepts of WoE, show promise in how to move forward and regain trust in the data integration step of the assessments. We offer our thoughts on the current state of WoE as a whole and while we acknowledge that many WoE applications have been largely qualitative and subjective in nature, we see this as an opportunity to turn WoE towards a quantitative direction that includes Bayesian and multi criteria decision analysis.

  5. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Directory of Open Access Journals (Sweden)

    Hayashi Takeshi

    2013-01-01

    Background: Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding values (GBVs). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and joint analysis that takes the correlation between traits into consideration, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This requires extending the single-trait GBV prediction model to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results: We describe a Bayesian regression model incorporating variable selection for jointly predicting the GBVs of multiple traits, and devise both an MCMC iteration and a variational approximation for Bayesian estimation of the parameters of this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation are referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses, as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits was comparable or lower with multi-trait analysis, depending on the setting for the prior probability that a SNP has zero
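
    As a greatly simplified, single-trait stand-in for the model described above, the sketch below fits scikit-learn's BayesianRidge (a conjugate Bayesian linear model, without the paper's variable selection or multi-trait structure) to simulated SNP genotypes and predicts breeding values for unphenotyped individuals. All simulation settings are ours.

```python
# Single-trait GBV prediction sketch with a simple Bayesian linear model.
# This is a stand-in, not the MCBayes/varBayes multi-trait model.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(2)
n, p = 200, 500
X = rng.integers(0, 3, size=(n, p)).astype(float)  # SNP genotypes coded 0/1/2
beta = np.zeros(p)
beta[:10] = rng.normal(0.0, 0.5, 10)               # ten causal SNPs
y = X @ beta + rng.normal(0.0, 1.0, n)             # phenotype

model = BayesianRidge().fit(X[:150], y[:150])      # "training" individuals
gbv_hat = model.predict(X[150:])                   # predicted breeding values
print("prediction accuracy (r):",
      round(float(np.corrcoef(gbv_hat, y[150:])[0, 1]), 2))
```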

  6. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e., an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing, with each radionuclide represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval discriminator on the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood-ratio test is then used to determine one of two threshold conditions signifying that the EMS either is identified as the target radionuclide or is not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
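
    The final step described above is a sequential likelihood-ratio decision, i.e., Wald's sequential probability ratio test (SPRT). A hedged sketch with stand-in energy distributions (the patent's physics-based channel models are far more detailed):

```python
# Wald SPRT sketch: accumulate a log-likelihood ratio photon by photon and
# stop at one of two thresholds. Distributions and thresholds are stand-ins.
import numpy as np
from scipy import stats

target = stats.norm(662.0, 5.0)       # stand-in for a Cs-137-like photopeak (keV)
background = stats.norm(600.0, 50.0)  # broad non-target alternative

alpha, beta = 0.01, 0.01              # error rates -> Wald's thresholds
upper = np.log((1 - beta) / alpha)
lower = np.log(beta / (1 - alpha))

events = target.rvs(size=100, random_state=3)      # simulated photon energies
llr, n_events = 0.0, 0
for energy in events:
    n_events += 1
    llr += target.logpdf(energy) - background.logpdf(energy)
    if llr >= upper or llr <= lower:
        break
decision = ("target" if llr >= upper
            else "not target" if llr <= lower else "undecided")
print(f"decision after {n_events} photons: {decision}")
```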

  7. The determination of nuclear charge distributions using a Bayesian maximum entropy method

    International Nuclear Information System (INIS)

    Macaulay, V.A.; Buck, B.

    1995-01-01

    We treat the inference of nuclear charge densities from measurements of elastic electron scattering cross sections. In order to get the most reliable information from expensively acquired, incomplete and noisy measurements, we use Bayesian probability theory. Very little prior information about the charge densities is assumed. We derive a prior probability distribution which is a generalization of a form used widely in image restoration based on the entropy of a physical density. From the posterior distribution of possible densities, we select the most probable one, and show how error bars can be evaluated. These have very reasonable properties, such as increasing without bound as hypotheses about finer scale structures are included in the hypothesis space. The methods are demonstrated by using data on the nuclei ⁴He and ¹²C. (orig.)

  8. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Thompson, Grahame

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context of multiple genetic markers measured in multiple studies, based on the analysis of individual participant data. First, for a single genetic marker in one study, we show that the usual ratio-of-coefficients approach can be reformulated as a regression with heterogeneous error in the explanatory variable. This can be implemented using a Bayesian approach, which is next extended to include multiple genetic markers. We then propose a hierarchical model for undertaking a meta-analysis of multiple studies, in which it is not necessary that the same genetic markers are measured in each study. This provides...

  9. Model estimation of claim risk and premium for motor vehicle insurance by using Bayesian method

    Science.gov (United States)

    Sukono; Riaman; Lesmana, E.; Wulandari, R.; Napitupulu, H.; Supian, S.

    2018-01-01

    Risk models need to be estimated by an insurance company in order to predict the magnitude of claims and determine the premiums charged to the insured; this is intended to prevent future losses. In this paper, we discuss the estimation of claim risk models and motor vehicle insurance premiums using a Bayesian approach. The claim frequency is assumed to follow a Poisson distribution, while the claim amounts are assumed to follow a Gamma distribution. The parameters of the claim-frequency and claim-amount distributions are estimated using Bayesian methods. The estimated frequency and claim-amount distributions are then used to estimate the aggregate risk model as well as its mean and variance. The estimated mean and variance of the aggregate risk are used to predict the premium to be charged to the insured. The analysis shows that the claim frequency follows a Poisson distribution with parameter λ = 5.827, while the claim amounts follow a Gamma distribution with parameters p = 7.922 and θ = 1.414. The resulting mean and variance of the aggregate claims are IDR 32,667,489.88 and IDR 38,453,900,000,000.00, respectively. The predicted pure premium to be charged to the insured is IDR 2,722,290.82. These claim and premium predictions can serve as a reference for the insurance company's decision-making in managing reserves and premiums for motor vehicle insurance.
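
    The aggregate moments quoted above follow from the standard compound Poisson-Gamma formulas E[S] = λE[X] and Var[S] = λE[X²]. The sketch below evaluates them with the parameter values reported in the abstract; because the monetary scaling of the claim-severity unit is not stated there, the printed numbers illustrate only the formulas, not the paper's IDR figures.

```python
# Compound Poisson-Gamma aggregate-risk moments with the abstract's
# parameter values; the severity unit's monetary scaling is unknown here.
lam = 5.827                  # Poisson claim-frequency parameter (lambda)
shape, scale = 7.922, 1.414  # Gamma claim-severity parameters (p, theta)

mean_x = shape * scale                      # E[X] of a Gamma(p, theta)
second_x = shape * (shape + 1) * scale**2   # E[X^2] of a Gamma(p, theta)

mean_s = lam * mean_x                       # E[S] for a compound Poisson sum
var_s = lam * second_x                      # Var[S] for a compound Poisson sum
print(f"E[S] = {mean_s:.2f}, Var[S] = {var_s:.2f} (in claim-severity units)")
```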

  10. A Comparison of ML, WLSMV, and Bayesian Methods for Multilevel Structural Equation Models in Small Samples: A Simulation Study.

    Science.gov (United States)

    Holtmann, Jana; Koch, Tobias; Lochner, Katharina; Eid, Michael

    2016-01-01

    Multilevel structural equation models are increasingly applied in psychological research. With increasing model complexity, estimation becomes computationally demanding, and small sample sizes pose further challenges for estimation methods relying on asymptotic theory. Recent developments in Bayesian estimation techniques may help to overcome the shortcomings of classical estimation techniques. The use of potentially inaccurate prior information may, however, have detrimental effects, especially in small samples. The present Monte Carlo simulation study compares the statistical performance of classical estimation techniques with Bayesian estimation using different prior specifications for a two-level SEM with either continuous or ordinal indicators. Using two software programs (Mplus and Stan), differential effects of between- and within-level sample sizes on estimation accuracy were investigated. Moreover, it was tested to which extent inaccurate priors have detrimental effects on parameter estimates in categorical indicator models. For continuous indicators, Bayesian estimation did not show performance advantages over ML. For categorical indicators, Bayesian estimation outperformed WLSMV only in the case of strongly informative, accurate priors. Weakly informative inaccurate priors did not deteriorate the performance of the Bayesian approach, while strongly informative inaccurate priors led to severely biased estimates, even with large sample sizes. With diffuse priors, Stan yielded better results than Mplus in terms of parameter estimates.

  11. A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.

    Directory of Open Access Journals (Sweden)

    Kai Zhang

    In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid) method, for history matching in reservoir models. History matching is a process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application combining the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively 'important' elements of the gradient by comparing the magnitudes of the components of the stochastic gradient; these elements are then substituted by values from the Finite Difference method, forming a new gradient with which we iterate. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations in this paper showing that the method is accurate and computationally efficient.
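
    The following is our own minimal reading of the hybrid idea, not the authors' implementation: form a cheap SPSA-style stochastic gradient, then recompute only its largest-magnitude ('important') components with central finite differences before taking a descent step. The objective function and all settings are stand-ins.

```python
# Hedged sketch of a hybrid stochastic/finite-difference gradient.
import numpy as np

rng = np.random.default_rng(4)

def objective(x: np.ndarray) -> float:
    return float(np.sum((x - 1.0) ** 2))     # stand-in misfit function

def hybrid_gradient(f, x, n_fd=3, eps=1e-6):
    # Simultaneous-perturbation (SPSA-style) stochastic gradient estimate.
    delta = rng.choice([-1.0, 1.0], size=x.size)
    g = (f(x + eps * delta) - f(x - eps * delta)) / (2 * eps) * delta
    # Recompute the n_fd largest-magnitude components by central differences.
    for i in np.argsort(np.abs(g))[-n_fd:]:
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.zeros(10)
for k in range(300):                          # descent with a decaying step
    x -= (2.0 / (k + 20)) * hybrid_gradient(objective, x)
print("distance to optimum:", round(float(np.linalg.norm(x - 1.0)), 3))
```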

  12. Efficient Methods for Bayesian Uncertainty Analysis and Global Optimization of Computationally Expensive Environmental Models

    Science.gov (United States)

    Shoemaker, Christine; Espinet, Antoine; Pang, Min

    2015-04-01

    Models of complex environmental systems can be computationally expensive in order to describe the dynamic interactions of the many components over a sizeable time period. Diagnostics of these systems can include forward simulations of calibrated models under uncertainty and analysis of alternatives for systems management. This discussion focuses on applications of new surrogate optimization and uncertainty analysis methods to environmental models, which can enhance our ability to extract information and understanding. For complex models, optimization and especially uncertainty analysis can require a large number of model simulations, which is not feasible for computationally expensive models. Surrogate response surfaces can be used in global optimization and uncertainty methods to obtain accurate answers with far fewer model evaluations, making the methods practical for computationally expensive models for which conventional methods are not feasible. In this paper we discuss the application of the SOARS surrogate method for estimating Bayesian posterior density functions of model parameters for a TOUGH2 model of geologic carbon sequestration. We also briefly discuss a new parallel surrogate global optimization algorithm, applied to two groundwater remediation sites, that was implemented on a supercomputer with up to 64 processors. The applications illustrate the use of these methods to predict the impact of monitoring and management on subsurface contaminants.

  13. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Background: We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationships among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results: In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions: We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it performs a smart search over the entire model space.
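
    A toy SSVS Gibbs sampler may make the ranking strategy concrete. The sketch below uses a spike-and-slab normal prior on ordinary regression coefficients (not the Haseman-Elston regression of the paper) and ranks markers by their marginal posterior inclusion probability; all prior settings and data are our own choices.

```python
# Toy SSVS: Gibbs sampling with a spike-and-slab prior, ranking markers by
# marginal posterior inclusion probability. Not the paper's GAW13 analysis.
import numpy as np

rng = np.random.default_rng(5)
n, p = 120, 20
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(size=n)  # markers 0 and 3 linked

tau2, c2, sigma2, prior_pi = 0.01, 100.0, 1.0, 0.2      # spike/slab settings
gamma = np.zeros(p, dtype=bool)
incl = np.zeros(p)
n_iter, burn = 3000, 1000
for it in range(n_iter):
    # beta | gamma, y is Gaussian; spike/slab prior variances on the diagonal.
    d_inv = 1.0 / np.where(gamma, c2 * tau2, tau2)
    A = X.T @ X / sigma2 + np.diag(d_inv)
    L = np.linalg.cholesky(A)
    mean = np.linalg.solve(A, X.T @ y / sigma2)
    beta = mean + np.linalg.solve(L.T, rng.normal(size=p))  # draw N(mean, A^-1)
    # gamma_j | beta_j is Bernoulli, comparing slab and spike densities.
    slab = np.exp(-0.5 * beta**2 / (c2 * tau2)) / np.sqrt(c2 * tau2)
    spike = np.exp(-0.5 * beta**2 / tau2) / np.sqrt(tau2)
    prob = prior_pi * slab / (prior_pi * slab + (1 - prior_pi) * spike)
    gamma = rng.uniform(size=p) < prob
    if it >= burn:
        incl += gamma
print("top-ranked markers:", np.argsort(incl)[::-1][:3], "(truth: 0 and 3)")
```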

  14. Bayesian Methods for Analyzing Structural Equation Models with Covariates, Interaction, and Quadratic Latent Variables

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng

    2007-01-01

    The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same statistical optimal properties as a…

  15. The continual reassessment method: comparison of Bayesian stopping rules for dose-ranging studies.

    Science.gov (United States)

    Zohar, S; Chevret, S

    2001-10-15

    The continual reassessment method (CRM) provides a Bayesian estimate of the maximum tolerated dose (MTD) in phase I clinical trials and is also used to estimate the minimal efficacy dose (MED) in phase II clinical trials. In this paper we propose Bayesian stopping rules for the CRM, based on either posterior or predictive probability distributions, that can be applied sequentially during the trial. These rules aim at early detection of either a mis-choice of dose range or a prefixed gain in the point estimate or accuracy of the estimated probability of response associated with the MTD (or MED). They were compared through a simulation study under six situations that could represent the underlying unknown dose-response (either toxicity or failure) relationship, in terms of sample size, probability of correct selection, and bias of the response probability associated with the MTD (or MED). Our results show that the stopping rules act correctly, with early stopping by the first two rules, based on the posterior distribution, when the actual underlying dose-response relationship is far from that initially supposed, while the rules based on predictive gain functions stop inclusions after 20 patients on average whatever the actual dose-response curve, that is, depending mostly on the accumulated data. The stopping rules were then applied to a data set from a dose-ranging phase II clinical trial aiming at estimating the MED of midazolam in the sedation of infants during cardiac catheterization. All these findings suggest the early use of the first two rules to detect a mis-choice of dose range, while they confirm the requirement of including at least 20 patients at the same dose to reach an accurate estimate of the MTD (MED). A two-stage design is under study. Copyright 2001 John Wiley & Sons, Ltd.
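
    For context, the CRM referred to above is usually built on a one-parameter working model such as the power model p_i(a) = skeleton_i^exp(a). The grid-based sketch below shows the posterior update and dose recommendation that such stopping rules would monitor; the skeleton, prior, and trial data are hypothetical.

```python
# Grid-based posterior for a one-parameter CRM power model. Hypothetical data.
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # prior toxicity guesses
target = 0.25                                        # target toxicity rate

a_grid = np.linspace(-3.0, 3.0, 601)
prior = np.exp(-0.5 * a_grid**2 / 1.34**2)           # N(0, 1.34^2), unnormalized

# Observed (dose_index, toxicity) pairs so far -- hypothetical trial data.
observations = [(2, 0), (2, 0), (3, 1), (2, 0), (3, 0)]
like = np.ones_like(a_grid)
for dose, tox in observations:
    p = skeleton[dose] ** np.exp(a_grid)
    like *= p if tox else (1.0 - p)

post = prior * like
post /= np.trapz(post, a_grid)
p_tox = skeleton[:, None] ** np.exp(a_grid)[None, :]  # p_i(a) over the grid
post_mean = np.trapz(p_tox * post, a_grid, axis=1)    # posterior mean per dose
print("posterior mean toxicity per dose:", post_mean.round(3))
print("recommended dose index:", int(np.argmin(np.abs(post_mean - target))))
```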

  16. A Bayesian-based multilevel factorial analysis method for analyzing parameter uncertainty of hydrological model

    Science.gov (United States)

    Liu, Y. R.; Li, Y. P.; Huang, G. H.; Zhang, J. L.; Fan, Y. R.

    2017-10-01

    In this study, a Bayesian-based multilevel factorial analysis (BMFA) method is developed to assess parameter uncertainties and their effects on hydrological model responses. In BMFA, the Differential Evolution Adaptive Metropolis (DREAM) algorithm is employed to approximate the posterior distributions of model parameters with Bayesian inference; the factorial analysis (FA) technique is used to measure the specific variations of hydrological responses in terms of posterior distributions, in order to investigate the individual and interactive effects of parameters on model outputs. BMFA is then applied to a case study of the Jinghe River watershed in the Loess Plateau of China to demonstrate its validity and applicability. The uncertainties of four sensitive parameters, including the soil conservation service runoff curve number for moisture condition II (CN2), soil hydraulic conductivity (SOL_K), plant available water capacity (SOL_AWC), and soil depth (SOL_Z), are investigated. Results reveal that (i) CN2 has a positive effect on peak flow, implying that concentrated rainfall during the rainy season can cause infiltration-excess surface flow, which is a considerable contributor to peak flow in this watershed; (ii) SOL_K has a positive effect on average flow, implying that the widely distributed cambisols can lead to medium percolation capacity; (iii) the interaction between SOL_AWC and SOL_Z has a noticeable effect on peak flow, and their effects are dependent upon each other, which discloses that soil depth can significantly influence the processes of plant uptake of soil water in this watershed. Based on the above findings, the significant parameters and the relationships among uncertain parameters can be specified, such that the hydrological model's capability for simulating and predicting the water resources of the Jinghe River watershed can be improved.

  17. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires the collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, and sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce computational costs using multifidelity approximations. Since Bayesian data-worth analysis involves a great deal of expectation estimation, the cost savings from MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select the optimal candidate data set, i.e., the one giving the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and are consistent with the standard MC estimation; compared to standard MC, however, MLMC greatly reduces the computational costs.
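
    The core MLMC device is the telescoping-sum estimator E[P_L] = E[P_0] + Σ_l E[P_l - P_{l-1}], which spends many cheap samples on coarse levels and few expensive ones on fine levels. A minimal sketch with a stand-in model (a simple quadrature, not the paper's two-phase flow simulator):

```python
# Minimal multilevel Monte Carlo estimator for E[P] over an uncertain input.
import numpy as np

rng = np.random.default_rng(6)

def model(theta: float, level: int) -> float:
    # Stand-in "simulator": midpoint quadrature of exp(theta*x) on [0, 1]
    # with 2**level cells; finer levels are costlier and more accurate.
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return float(np.mean(np.exp(theta * x)))

levels = [0, 1, 2, 3, 4]
n_samples = [4000, 2000, 1000, 500, 250]   # many coarse, few fine samples
estimate = 0.0
for ell, n_l in zip(levels, n_samples):
    thetas = rng.normal(0.0, 0.5, size=n_l)          # uncertain input parameter
    if ell == 0:
        diffs = [model(t, 0) for t in thetas]
    else:
        diffs = [model(t, ell) - model(t, ell - 1) for t in thetas]
    estimate += float(np.mean(diffs))                # telescoping sum
print("MLMC estimate of E[P]:", round(estimate, 4))
```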

  18. Estimation of fine particulate matter in Taipei using landuse regression and Bayesian maximum entropy methods.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Owing to the popularity of geographic information systems (GIS), the landuse regression method has been widely used for the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical spatial trends of PM concentration based on landuse regression, (b) the spatiotemporal dependence among PM observations, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.

  19. Effective updating process of seismic fragilities using Bayesian method and information entropy

    International Nuclear Information System (INIS)

    Kato, Masaaki; Takata, Takashi; Yamaguchi, Akira

    2008-01-01

    Seismic probabilistic safety assessment (SPSA) is an effective method for evaluating the overall seismic safety performance of a plant. Seismic fragilities are estimated to quantify the seismically induced accident sequences. A major concern is that SPSA results involve uncertainties, part of which comes from the uncertainty in the seismic fragility of equipment and systems. A straightforward approach to reduce this uncertainty is to perform seismic qualification tests and to reflect the results in the seismic fragility estimate. In this paper, we propose a figure-of-merit for finding the most cost-effective conditions for seismic qualification tests in terms of the acceleration level and the number of components tested. A mathematical method is then developed to reflect the test results in the fragility update. A Bayesian method is used for the fragility update procedure. Since the lognormal distribution used for the fragility model has no conjugate prior, a parameterization method is proposed so that the posterior distribution expresses the characteristics of the fragility. The information entropy is used as the figure-of-merit to express the importance of the obtained evidence. It is found that the information entropy is strongly associated with the uncertainty of the fragility. (author)
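
    A hedged sketch of the kind of Bayesian fragility update discussed above: a lognormal fragility curve P(fail | a) = Φ(ln(a/A_m)/β), with a grid posterior over the median capacity A_m updated by pass/fail qualification-test outcomes. The paper's parameterization differs, and all numbers here are hypothetical.

```python
# Grid-posterior update of a lognormal fragility's median capacity from
# pass/fail test data. Hypothetical prior, beta, and test outcomes.
import numpy as np
from scipy.stats import norm, lognorm

beta = 0.4                                  # fixed log-standard deviation
Am_grid = np.linspace(0.3, 3.0, 271)        # candidate median capacities (g)
prior = lognorm.pdf(Am_grid, s=0.3, scale=1.0)  # lognormal prior on A_m

tests = [(0.8, False), (1.0, False), (1.2, True)]  # (test level in g, failed?)
post = prior.copy()
for a, failed in tests:
    p_fail = norm.cdf(np.log(a / Am_grid) / beta)  # fragility at level a
    post *= p_fail if failed else (1.0 - p_fail)
post /= np.trapz(post, Am_grid)

post_mean = float(np.trapz(Am_grid * post, Am_grid))
print(f"posterior mean of median capacity: {post_mean:.3f} g")
```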

  20. Bayesian analysis of culture and PCR methods for detection of Campylobacter spp. in broiler caecal samples.

    Science.gov (United States)

    Arnold, M E; Jones, E M; Lawes, J R; Vidal, A B; Clifton-Hadley, F A; Rodgers, J D; Powell, L F

    2015-01-01

    The objective of this study was to estimate the sensitivity and specificity of a culture method and a polymerase chain reaction (PCR) method for detection of two Campylobacter species: C. jejuni and C. coli. Data were collected during a 3-year survey of UK broiler flocks, and consisted of parallel sampling of caeca from 436 batches of birds by both PCR and culture. Batches were stratified by season (summer/non-summer) and whether they were the first depopulation of the flock, resulting in four sub-populations. A Bayesian approach in the absence of a gold standard was adopted, and the sensitivity and specificity of the PCR and culture for each Campylobacter subtype was estimated, along with the true C. jejuni and C. coli prevalence in each sub-population. Results indicated that the sensitivity of the culture method was higher than that of PCR in detecting both species when the samples were derived from populations infected with at most one species of Campylobacter. However, from a mixed population, the sensitivity of culture for detecting both C. jejuni or C. coli is reduced while PCR is potentially able to detect both species, although the total probability of correctly identifying at least one species by PCR is similar to that of the culture method.

  1. Bayesian data analysis for newcomers.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.

  2. DRABAL: novel method to mine large high-throughput screening assays using Bayesian active learning

    KAUST Repository

    Soufan, Othman

    2016-11-10

    Background: Mining high-throughput screening (HTS) assays is key to enhancing decisions in the area of drug repositioning and drug discovery. However, many challenges are encountered in the process of developing suitable and accurate methods for extracting useful information from these assays. Virtual screening and the wide variety of databases, methods, and solutions proposed to date have not completely overcome these challenges. This study is based on a multi-label classification (MLC) technique for modeling correlations between several HTS assays, meaning that a single prediction represents a subset of assigned correlated labels instead of one label. Thus, the devised method provides an increased probability of more accurate predictions for compounds that were not tested in particular assays. Results: Here we present DRABAL, a novel MLC solution that incorporates structure learning of a Bayesian network as a step to model dependencies between the HTS assays. In this study, DRABAL was used to process more than 1.4 million interactions of over 400,000 compounds and analyze the existing relationships between five large HTS assays from the PubChem BioAssay Database. Compared to different MLC methods, DRABAL significantly improves the F1-score by about 22% on average. We further illustrated the usefulness and utility of DRABAL by screening FDA-approved drugs and reporting ones that have a high probability of interacting with several targets, thus enabling drug-multi-target repositioning. Specifically, DRABAL suggests the drug thiabendazole as a common activator of the NCP1 and Rab-9A proteins, both of which are targets of assays designed to identify treatment modalities for Niemann–Pick type C disease. Conclusion: We developed a novel MLC solution based on a Bayesian active learning framework to overcome the challenge of lacking fully labeled training data and to exploit actual dependencies between the HTS assays. The solution is motivated by the need to model dependencies between existing

  3. On the partitioning method and the perturbation quantum theory - discrete spectra

    International Nuclear Information System (INIS)

    Logrado, P.G.

    1982-05-01

    Lower and upper bounds to eigenvalues of the Schroedinger equation HΨ = EΨ (with H = H₀ + V), together with the convergence condition of Schonberg's perturbation theory, are presented. These results are obtained using the partitioning technique. A perturbation treatment obtained when the reference function in the partitioning technique is chosen to be a true eigenfunction Ψ is presented for the first time. The convergence condition and upper and lower bounds for the true eigenvalues E are derived in this formulation. The concept of the reaction and wave operators is also discussed. (author)

  4. Using Bayesian methods to predict climate impacts on groundwater availability and agricultural production in Punjab, India

    Science.gov (United States)

    Russo, T. A.; Devineni, N.; Lall, U.

    2015-12-01

    Lasting success of the Green Revolution in Punjab, India relies on the continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; therefore we use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state, and the resolution of the input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.

  5. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    Science.gov (United States)

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal and implied-weight implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and implied-weights performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. © 2016 The Authors.

  6. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given

  7. Bayesian Methods for the Physical Sciences. Learning from Examples in Astronomy and Physics.

    Science.gov (United States)

    Andreon, Stefano; Weaver, Brian

    2015-05-01

    Chapter 1: This chapter presents some basic steps for performing a good statistical analysis, all summarized in about one page. Chapter 2: This short chapter introduces the basics of probability theory in an intuitive fashion using simple examples. It also illustrates, again with examples, how to propagate errors and the difference between marginal and profile likelihoods. Chapter 3: This chapter introduces the computational tools and methods that we use for sampling from the posterior distribution. Since all numerical computations, and Bayesian ones are no exception, may end in errors, we also provide a few tips to check that the numerical computation is sampling from the posterior distribution. Chapter 4: Many of the concepts of building, running, and summarizing the results of a Bayesian analysis are described with this step-by-step guide using a basic (Gaussian) model. The chapter also introduces examples using Poisson and Binomial likelihoods, and how to combine repeated independent measurements. Chapter 5: All statistical analyses make assumptions, and Bayesian analyses are no exception. This chapter emphasizes that results depend on data and priors (assumptions). We illustrate this concept with examples where the prior plays greatly different roles, from major to negligible. We also provide some advice on how to look for information useful for sculpting the prior. Chapter 6: In this chapter we consider examples for which we want to estimate more than a single parameter. These common problems include estimating location and spread. We also consider examples that require the modeling of two populations (one we are interested in and a nuisance population) or averaging incompatible measurements. We also introduce quite complex examples dealing with upper limits and with a larger-than-expected scatter. Chapter 7: Rarely is a sample randomly selected from the population we wish to study. Often, samples are affected by selection effects, e.g., easier

  8. Comparison of Source Partitioning Methods for CO2 and H2O Fluxes Based on High Frequency Eddy Covariance Data

    Science.gov (United States)

    Klosterhalfen, Anne; Moene, Arnold; Schmidt, Marius; Ney, Patrizia; Graf, Alexander

    2017-04-01

    Source partitioning of eddy covariance (EC) measurements of CO2 into respiration and photosynthesis is routinely used for a better understanding of the exchange of greenhouse gases, especially between terrestrial ecosystems and the atmosphere. The most frequently used methods are usually based either on relations of fluxes to environmental drivers or on chamber measurements. However, they often depend strongly on assumptions or invasive measurements, and they usually do not offer partitioning estimates for latent heat fluxes into evaporation and transpiration. SCANLON and SAHU (2008) and SCANLON and KUSTAS (2010) proposed a promising method to estimate the contributions of transpiration and evaporation using measured high-frequency time series of CO2 and H2O fluxes, with no extra instrumentation necessary. This method (SK10 in the following) is based on the spatial separation and relative strength of sources and sinks of CO2 and water vapor between the sub-canopy and the canopy. Assuming that air from those sources and sinks is not yet perfectly mixed before reaching the EC sensors, partitioning is estimated based on the separate application of flux-variance similarity theory to the stomatal and non-stomatal components of the regarded fluxes, as well as on additional assumptions on stomatal water use efficiency (WUE). The CO2 partitioning method of THOMAS et al. (2008) (TH08 in the following) also follows the argument that the dissimilarities of sources and sinks in and below a canopy affect the relation between H2O and CO2 fluctuations. Instead of involving assumptions on WUE, TH08 directly screens their scattergram for signals of joint respiration and evaporation events and applies a conditional sampling methodology. In spite of their different main targets (H2O vs. CO2), both methods can yield partitioning estimates for both fluxes. We therefore compare various sub-methods of SK10 and TH08, including our own modifications (e.g., cluster analysis), to each other, to established

  9. Impact of water use efficiency parameterization on partitioning evapotranspiration with the eddy covariance flux variance method

    Science.gov (United States)

    Partitioned observations of evapotranspiration (ET) into its constituent components of soil and canopy evaporation (E) and plant transpiration (T) are needed to validate many agricultural water use models. E and T observations are also useful for assessing management practices to reduce crop water ...

  10. Digitized Onondaga Lake Dissolved Oxygen Concentrations and Model Simulated Values using Bayesian Monte Carlo Methods

    Data.gov (United States)

    U.S. Environmental Protection Agency — The dataset is lake dissolved oxygen concentrations obtained from plots published by Gelda et al. (1996) and lake reaeration model simulated values using Bayesian...

  11. Bayesian network reconstruction using systems genetics data: comparison of MCMC methods.

    Science.gov (United States)

    Tasaki, Shinya; Sauerwine, Ben; Hoff, Bruce; Toyoshiba, Hiroyoshi; Gaiteri, Chris; Chaibub Neto, Elias

    2015-04-01

    Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis-Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data. Copyright © 2015 by the Genetics Society of America.

  12. A Scalable Bayesian Method for Integrating Functional Information in Genome-wide Association Studies.

    Science.gov (United States)

    Yang, Jingjing; Fritsche, Lars G; Zhou, Xiang; Abecasis, Gonçalo

    2017-09-07

    Genome-wide association studies (GWASs) have identified many complex loci. However, most loci reside in noncoding regions and have unknown biological functions. Integrative analysis that incorporates known functional information into GWASs can help elucidate the underlying biological mechanisms and prioritize important functional variants. Hence, we develop a flexible Bayesian variable selection model with efficient computational techniques for such integrative analysis. Different from previous approaches, our method models the effect-size distribution and probability of causality for variants with different annotations and jointly models genome-wide variants to account for linkage disequilibrium (LD), thus prioritizing associations based on the quantification of the annotations and allowing for multiple associated variants per locus. Our method dramatically improves both computational speed and posterior sampling convergence by taking advantage of the block-wise LD structures in human genomes. In simulations, our method accurately quantifies the functional enrichment and performs more powerfully for prioritizing the true associations than alternative methods, where the power gain is especially apparent when multiple associated variants in LD reside in the same locus. We applied our method to an in-depth GWAS of age-related macular degeneration with 33,976 individuals and 9,857,286 variants. We find the strongest enrichment for causality among non-synonymous variants (54× more likely to be causal, 1.4× larger effect sizes) and variants in transcription, repressed Polycomb, and enhancer regions, as well as identify five additional candidate loci beyond the 32 known AMD risk loci. In conclusion, our method is shown to efficiently integrate functional information in GWASs, helping to identify functionally associated variants and the underlying biology. Published by Elsevier Inc.

  13. Efficient sequential Bayesian inference method for real-time detection and sorting of overlapped neural spikes.

    Science.gov (United States)

    Haga, Tatsuya; Fukayama, Osamu; Takayama, Yuzo; Hoshino, Takayuki; Mabuchi, Kunihiko

    2013-09-30

    Overlapping of extracellularly recorded neural spike waveforms causes the original spike waveforms to become hidden and merged, confounding the real-time detection and sorting of these spikes. Methods proposed for solving this problem include using a multi-trode or placing a restriction on the complexity of overlaps. In this paper, we propose a rapid sequential method for the robust detection and sorting of arbitrarily overlapped spikes recorded with arbitrary types of electrodes. In our method, the probabilities of possible spike trains, including those that are overlapping, are evaluated by sequential Bayesian inference based on probabilistic models of spike-train generation and extracellular voltage recording. To reduce the high computational cost inherent in an exhaustive evaluation, candidates with low probabilities are considered as impossible candidates and are abolished at each sampling time to limit the number of candidates in the next evaluation. In addition, the data from a few subsequent sampling times are considered and used to calculate the "look-ahead probability", resulting in improved calculation efficiency due to a more rapid elimination of candidates. These sufficiently reduce computational time to enable real-time calculation without impairing performance. We assessed the performance of our method using simulated neural signals and actual neural signals recorded in primary cortical neurons cultured on a multi-electrode array. Our results demonstrated that our computational method could be applied in real-time with a delay of less than 10 ms. The estimation accuracy was higher than that of a conventional spike sorting method, particularly for signals with multiple overlapping spikes. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Bayesian data augmentation dose finding with continual reassessment method and delayed toxicity

    Science.gov (United States)

    Liu, Suyu; Yin, Guosheng; Yuan, Ying

    2014-01-01

    A major practical impediment when implementing adaptive dose-finding designs is that the toxicity outcome used by the decision rules may not be observed shortly after the initiation of the treatment. To address this issue, we propose the data augmentation continual re-assessment method (DA-CRM) for dose finding. By naturally treating the unobserved toxicities as missing data, we show that such missing data are nonignorable in the sense that the missingness depends on the unobserved outcomes. The Bayesian data augmentation approach is used to sample both the missing data and model parameters from their posterior full conditional distributions. We evaluate the performance of the DA-CRM through extensive simulation studies, and also compare it with other existing methods. The results show that the proposed design satisfactorily resolves the issues related to late-onset toxicities and possesses desirable operating characteristics: treating patients more safely, and also selecting the maximum tolerated dose with a higher probability. The new DA-CRM is illustrated with two phase I cancer clinical trials. PMID:24707327

  15. The Method of Oilfield Development Risk Forecasting and Early Warning Using Revised Bayesian Network

    Directory of Open Access Journals (Sweden)

    Yihua Zhong

    2016-01-01

    Oilfield development aiming at crude oil production is an extremely complex process involving many uncertain risk factors that affect oil output. Risk prediction and early warning for oilfield development can thus help operate and manage oilfields efficiently, meeting the national oil production plan and the sustainable development of oilfields. However, scholars and practitioners worldwide have seldom addressed the risk problem of oilfield block development. On the basis of research and analysis of risk forecasting and early-warning theory as well as oilfield development, an early-warning index system for block development, comprising monitoring indexes and planning indexes, was refined and formulated. Based on the warning indexes predicted by a neural network, a method for dividing the intervals of warning degrees using the "3σ" rule is presented, and a new method for risk forecasting and early warning is proposed by introducing neural networks into Bayesian networks. A case study shows that the results obtained in this paper are sound and helpful for managing oilfield development risk.
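
    The "3σ" interval division mentioned above can be sketched as follows: split a predicted warning index into warning-degree bands around its mean using multiples of its standard deviation. The band labels and all values are hypothetical.

```python
# Sketch of dividing warning-degree intervals with a "3-sigma" rule.
import numpy as np

predicted_index = np.array([0.52, 0.61, 0.48, 0.70, 0.55, 0.66, 0.59])
mu = float(predicted_index.mean())
sigma = float(predicted_index.std(ddof=1))

bounds = [mu - 3 * sigma, mu - sigma, mu + sigma, mu + 3 * sigma]
labels = ["no warning", "light warning", "medium warning",
          "heavy warning", "severe warning"]

def warning_degree(value: float) -> str:
    return labels[int(np.searchsorted(bounds, value))]

print({v: warning_degree(v) for v in [0.45, 0.58, 0.72, 0.90]})
```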

  16. Improving Biochemical Named Entity Recognition Using PSO Classifier Selection and Bayesian Combination Methods.

    Science.gov (United States)

    Akkasi, Abbas; Varoglu, Ekrem

    2017-01-01

    Named Entity Recognition (NER) is a basic step for a large number of subsequent text mining tasks in the biochemical domain. Increasing the performance of such recognition systems is of high importance and always poses a challenge. In this study, a new community-based decision making system is proposed which aims at increasing the efficiency of NER systems in the chemical/drug name context. The Particle Swarm Optimization (PSO) algorithm is chosen as the expert selection strategy, along with the Bayesian combination method to merge the outputs of the selected classifiers as well as to evaluate the fitness of the selected candidates. The proposed system performs in two steps. The first step focuses on creating various numbers of baseline classifiers for NER with different feature sets using Conditional Random Fields (CRFs). The second step involves the selection and efficient combination of the classifiers using PSO and Bayesian combination. Two comprehensive corpora from BioCreative events, namely ChemDNER and CEMP, are used for the experiments conducted. Results show that the ensembles of classifiers selected by means of the proposed approach perform better than the single best classifier as well as ensembles formed using other popular selection/combination strategies, for both corpora. Furthermore, the proposed method outperforms the best performing system at the BioCreative IV ChemDNER track by achieving an F-score of 87.95 percent.

  17. Assessment of Agricultural Water Management in Punjab, India using Bayesian Methods

    Science.gov (United States)

    Russo, T. A.; Devineni, N.; Lall, U.; Sidhu, R.

    2013-12-01

    The success of the Green Revolution in Punjab, India is threatened by the declining water table (approx. 1 m/yr). Punjab, a major agricultural supplier for the rest of India, supports irrigation with a canal system and groundwater, which is vastly over-exploited. Groundwater development in many districts exceeds 200% of the annual recharge rate. The hydrologic data required to complete a mass-balance model are not available for this region; therefore we use Bayesian methods to estimate hydrologic properties and irrigation requirements. Using the known values of precipitation, total canal water delivery, crop yield, and water table elevation, we solve for each unknown parameter (often a coefficient) using a Markov chain Monte Carlo (MCMC) algorithm. Results provide regional estimates of irrigation requirements and groundwater recharge rates under observed climate conditions (1972 to 2002). Model results are used to estimate future water availability and demand to help inform agricultural management decisions under projected climate conditions. We find that changing cropping patterns for the region can maintain food production while balancing groundwater pumping with natural recharge. This computational method can be applied in data-scarce regions across the world, where agricultural water management is required to resolve competition between food security and changing resource availability.

  18. A simple method to optimize the HSCCC two-phase solvent system by predicting the partition coefficient for target compound.

    Science.gov (United States)

    Han, Quan-Bin; Wong, Lina; Yang, Nian-Yun; Song, Jing-Zheng; Qiao, Chun-Feng; Yiu, Hillary; Ito, Yoichiro; Xu, Hong-Xi

    2008-04-01

    A simple method was developed to optimize the solvent ratio of the two-phase solvent system used in high-speed counter-current chromatography (HSCCC) separation. Mathematical equations, such as exponential and power equations, were established to describe the relationship between the solvent ratio and the partition coefficient. Using this new method, the two-phase solvent system was easily optimized to obtain a proper partition coefficient for the CCC separation of the target compound. Furthermore, this method was satisfactorily applied in determining the two-phase solvent system for the HSCCC preparation of pseudolaric acid B from the Chinese herb Pseudolarix kaempferi Gordon (Pinaceae). The two-phase solvent system of n-hexane/EtOAc/MeOH/H2O (5:5:5:5 by volume) was used with a good partition coefficient K = 1.08. As a result, 232.05 mg of pseudolaric acid B was obtained from 0.5 g of the crude extract, with a purity of 97.26% by HPLC analysis.
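
    A minimal sketch of fitting such a solvent ratio-partition coefficient relationship, assuming an exponential form K = a·exp(b·r) and a few illustrative (ratio, K) measurements; the paper's actual equations and data are not reproduced:

      import numpy as np
      from scipy.optimize import curve_fit

      # Illustrative: measured partition coefficients K of a target compound
      # at a few ratios r of one solvent in the two-phase system.
      r = np.array([0.2, 0.4, 0.6, 0.8])
      K = np.array([0.35, 0.62, 1.05, 1.90])

      def expo(r, a, b):
          return a * np.exp(b * r)

      (a, b), _ = curve_fit(expo, r, K)

      # Invert the fitted relation to find the ratio giving a target K,
      # e.g. K ~ 1 for a well-centred CCC peak.
      r_opt = np.log(1.0 / a) / b
      print(f"K = {a:.3f} * exp({b:.3f} r); K = 1 at r = {r_opt:.2f}")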

  19. A novel Bayesian DNA motif comparison method for clustering and retrieval.

    Directory of Open Access Journals (Sweden)

    Naomi Habib

    2008-02-01

    Full Text Available Characterizing the DNA-binding specificities of transcription factors is a key problem in computational biology that has been addressed by multiple algorithms. These usually take as input sequences that are putatively bound by the same factor and output one or more DNA motifs. A common practice is to apply several such algorithms simultaneously to improve coverage at the price of redundancy. In interpreting such results, two tasks are crucial: clustering of redundant motifs, and attributing the motifs to transcription factors by retrieval of similar motifs from previously characterized motif libraries. Both tasks inherently involve motif comparison. Here we present a novel method for comparing and merging motifs, based on Bayesian probabilistic principles. This method takes into account both the similarity in positional nucleotide distributions of the two motifs and their dissimilarity to the background distribution. We demonstrate the use of the new comparison method as a basis for motif clustering and retrieval procedures, and compare it to several commonly used alternatives. Our results show that the new method outperforms other available methods in accuracy and sensitivity. We incorporated the resulting motif clustering and retrieval procedures in a large-scale automated pipeline for analyzing DNA motifs. This pipeline integrates the results of various DNA motif discovery algorithms and automatically merges redundant motifs from multiple training sets into a coherent annotated library of motifs. Application of this pipeline to recent genome-wide transcription factor location data in S. cerevisiae successfully identified DNA motifs in a manner that is as good as semi-automated analysis reported in the literature. Moreover, we show how this analysis elucidates the mechanisms of condition-specific preferences of transcription factors.

  20. A Robust Computational Method for Coupled Liquid-liquid Phase Separation and Gas-particle Partitioning Predictions of Multicomponent Aerosols

    Science.gov (United States)

    Zuend, A.; Di Stefano, A.

    2014-12-01

    Providing efficient and reliable model predictions for the partitioning of atmospheric aerosol components between different phases (gas, liquids, solids) is a challenging problem. The partitioning of water, various semivolatile organic components, inorganic acids, bases, and salts depends simultaneously on the chemical properties and interaction effects among all constituents of a gas + aerosol system. The effects of hygroscopic particle growth on the water contents and physical states of potentially two or more liquid and/or solid aerosol phases in turn may significantly affect multiphase chemistry, the direct effect of aerosols on climate, and the ability of specific particles to act as cloud condensation or ice nuclei. Considering the presence of a liquid-liquid phase separation in aerosol particles, which typically leads to one phase being enriched in rather hydrophobic compounds and the other phase enriched in water and dissolved electrolytes, adds a high degree of complexity to the goal of predicting the gas-particle partitioning of all components. Coupled gas-particle partitioning and phase separation methods are required to correctly account for the phase behaviour of aerosols exposed to varying environmental conditions, such as changes to relative humidity. We present new theoretical insights and a substantially improved algorithm for the reliable prediction of gas-particle partitioning at thermodynamic equilibrium based on the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) model. We introduce a new approach for the accurate prediction of the phase distribution of multiple inorganic ions between two liquid phases, constrained by charge balance, and the coupling of the liquid-liquid equilibrium model to a robust gas-particle partitioning algorithm. Such coupled models are useful for exploring the range of environmental conditions leading to complete or incomplete miscibility of aerosol constituents, which will affect…

  1. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  2. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    Science.gov (United States)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
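
    A minimal sketch of MCMC built around an Itô SDE that is ergodic for the posterior; for simplicity this uses the explicit Euler-Maruyama discretization of overdamped Langevin dynamics on a toy 1D posterior, whereas the paper develops an implicit Euler scheme:

      import numpy as np

      rng = np.random.default_rng(1)

      def grad_log_post(x):
          # Toy posterior: standard normal, so grad log pi(x) = -x.
          return -x

      # dX = 0.5 * grad log pi(X) dt + dW is ergodic for pi; discretize it.
      dt, x, samples = 0.1, 0.0, []
      for _ in range(50000):
          x = x + 0.5 * grad_log_post(x) * dt + np.sqrt(dt) * rng.standard_normal()
          samples.append(x)

      print("sample mean/var:", np.mean(samples), np.var(samples))  # ~0, ~1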

  3. Development of A Bayesian Geostatistical Data Assimilation Method and Application to the Hanford 300 Area

    Science.gov (United States)

    Murakami, Haruko

    Probabilistic risk assessment of groundwater contamination requires us to incorporate large and diverse datasets at the site into the stochastic modeling of flow and transport for prediction. In quantifying the uncertainty in our predictions, we must not only combine the best estimates of the parameters based on each dataset, but also integrate the uncertainty associated with each dataset caused by measurement errors and the limited number of measurements. This dissertation presents a Bayesian geostatistical data assimilation method that integrates various types of field data for characterizing heterogeneous hydrological properties. It quantifies the parameter uncertainty as a posterior distribution conditioned on all the datasets, which can be directly used in stochastic simulations to compute possible outcomes of flow and transport processes. The goal of this framework is to remove the discontinuity between data analysis and prediction. Such a direct connection between data and prediction also makes it possible to evaluate the worth of each dataset or the combined worth of multiple datasets. The synthetic studies described here confirm that the data assimilation method introduced in this dissertation successfully captures the true parameter values and predicted values within the posterior distribution. The shape of the inferred posterior distributions from the method indicates the importance of estimating the entire distribution in fully accounting for parameter uncertainty. The method is then applied to integrate multiple types of datasets at the Hanford 300 Area for characterizing a three-dimensional heterogeneous hydraulic conductivity field. Comparing the results based on different numbers or combinations of datasets shows that increasing the amount of data does not always contribute in a straightforward way to improving the posterior distribution: increasing numbers of the same data type would not necessarily be beneficial above a certain number, and also the combined effect of…

  4. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data.

    Science.gov (United States)

    Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G

    2015-11-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acids (FA) expressed as a percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm⁻¹ were available and averaged before data analysis. Three Bayesian models (Bayesian ridge regression, Bayes RR; Bayes A; and Bayes B) and 2 reference models (PLS and modified PLS, MPLS) were used to calibrate equations for each of the traits. The Bayesian models were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from…
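
    A minimal sketch of this kind of comparison, with scikit-learn's BayesianRidge standing in for the Bayes RR model and synthetic data standing in for the FTIR spectra (the BGLR/WinISI implementations and milk dataset are not reproduced):

      import numpy as np
      from sklearn.linear_model import BayesianRidge
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      X = rng.standard_normal((300, 100))          # stand-in for FTIR spectra
      y = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(300)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      for name, model in [("Bayesian ridge", BayesianRidge()),
                          ("PLS (10 comp.)", PLSRegression(n_components=10))]:
          model.fit(X_tr, y_tr)
          r = np.corrcoef(y_te, model.predict(X_te).ravel())[0, 1]
          print(f"{name}: validation r = {r:.3f}")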

  5. Prediction of Nε-acetylation on internal lysines implemented in Bayesian Discriminant Method.

    Science.gov (United States)

    Li, Ao; Xue, Yu; Jin, Changjiang; Wang, Minghui; Yao, Xuebiao

    2006-12-01

    Protein acetylation is an important and reversible post-translational modification (PTM), and it governs a variety of cellular dynamics and plasticity. Experimental identification of acetylation sites is labor-intensive and often limited by the availability of reagents such as acetyl-specific antibodies and optimization of enzymatic reactions. Computational analyses may facilitate the identification of potential acetylation sites and provide insights into further experimentation. In this manuscript, we present a novel protein acetylation prediction program named PAIL, prediction of acetylation on internal lysines, implemented in a BDM (Bayesian Discriminant Method) algorithm. The accuracies of PAIL are 85.13%, 87.97%, and 89.21% at low, medium, and high thresholds, respectively. Both Jack-Knife validation and n-fold cross-validation have been performed to show that PAIL is accurate and robust. Taken together, we propose that PAIL is a novel predictor for identification of protein acetylation sites and may serve as an important tool to study the function of protein acetylation. PAIL has been implemented in PHP and is freely available on a web server at: http://bioinformatics.lcd-ustc.org/pail.

  6. Prediction of Nε-acetylation on internal lysines implemented in Bayesian Discriminant Method

    Science.gov (United States)

    Li, Ao; Xue, Yu; Jin, Changjiang; Wang, Minghui; Yao, Xuebiao

    2007-01-01

    Protein acetylation is an important and reversible post-translational modification (PTM), and it governs a variety of cellular dynamics and plasticity. Experimental identification of acetylation sites is labor-intensive and often limited by the availability of reagents such as acetyl-specific antibodies and optimization of enzymatic reactions. Computational analyses may facilitate the identification of potential acetylation sites and provide insights into further experimentation. In this manuscript, we present a novel protein acetylation prediction program named PAIL, prediction of acetylation on internal lysines, implemented in a BDM (Bayesian Discriminant Method) algorithm. The accuracies of PAIL are 85.13%, 87.97% and 89.21% at low, medium and high thresholds, respectively. Both Jack-Knife validation and n-fold cross-validation have been performed to show that PAIL is accurate and robust. Taken together, we propose that PAIL is a novel predictor for identification of protein acetylation sites and may serve as an important tool to study the function of protein acetylation. PAIL has been implemented in PHP and is freely available on a web server at: http://bioinformatics.lcd-ustc.org/pail. PMID:17045240

  7. Analyses of growth curves of Nellore cattle by Bayesian method via Gibbs sampling

    Directory of Open Access Journals (Sweden)

    Nobre P.R.C.

    2003-01-01

    Full Text Available Growth curves of Nellore cattle were analyzed using body weights measured at ages ranging from 1 day (birth weight) to 733 days. Traits considered were birth weight, 10- to 110-day weight, 102- to 202-day weight, 193- to 293-day weight, 283- to 383-day weight, 376- to 476-day weight, 551- to 651-day weight, and 633- to 733-day weight. Two data samples were created: one with 79,849 records from herds that had missing traits and another with 74,601 from herds with no missing traits. Records preadjusted to a fixed age were analyzed by a multiple trait model (MTM), which included the effects of contemporary group, age of dam class, additive direct, additive maternal, and maternal permanent environment. Analyses were carried out using a Bayesian method for all nine traits. The random regression model (RRM) included the effects of age of animal, contemporary group, age of dam class, additive direct, permanent environment, additive maternal, and maternal permanent environment. Legendre cubic polynomials were used to describe the random effects. MTM estimated covariance components and genetic parameters for birth weight and the sequential weights, and RRM for all ages. Because covariance components based on RRM were inflated for herds with missing traits, MTM should be used, with its estimates converted to covariance functions.

  8. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2016-01-01

    Full Text Available This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come.

  9. Simple Method to Determine the Partition Coefficient of Naphthenic Acid in Oil/Water

    DEFF Research Database (Denmark)

    Bitsch-Larsen, Anders; Andersen, Simon Ivar

    2008-01-01

    The partition coefficient for technical grade naphthenic acid in water/n-decane at 295 K has been determined (K_wo = 2.1·10⁻⁴) using a simple experimental technique with large extraction volumes (0.09 m³ of water). Furthermore, nonequilibrium values at different pH values are presented. Analysis of the acid content in the oil phase was conducted by FT-IR and colorimetric titration and the two methods were found to be equivalent.

  10. A Laplace method for under-determined Bayesian optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-12-17

    In Long et al. (2013), a new method based on the Laplace approximation was developed to accelerate the estimation of the post-experimental expected information gains (Kullback–Leibler divergence) in model parameters and predictive quantities of interest in the Bayesian framework. A closed-form asymptotic approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general case where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the Jacobian matrix of the data model with respect to the parameters, so that the information gain can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the posterior covariance matrix projected over the aforementioned orthogonal directions. To deal with the issue of dimensionality in a complex problem, we use either Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under-determined test cases. They include the designs of the scalar parameter in a one dimensional cubic polynomial function with two unidentifiable parameters forming a linear manifold, and the boundary source locations for impedance tomography in a square domain, where the unknown parameter is the conductivity, which is represented as a random field.
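
    A minimal sketch of a Laplace approximation to a posterior integral of the kind this method builds on, shown on a toy 2D Gaussian model; the under-determined directions, information-gain integrand, and sparse quadratures of the paper are not reproduced:

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_post(theta):
          # Toy unnormalized negative log posterior: a correlated Gaussian.
          a, b = theta
          return 0.5 * (a ** 2 + 4 * b ** 2 + a * b)

      res = minimize(neg_log_post, x0=np.zeros(2))   # find the MAP point
      d = res.x.size

      # Hessian at the MAP by central finite differences.
      eps, H = 1e-4, np.zeros((d, d))
      for i in range(d):
          for j in range(d):
              ei, ej = np.eye(d)[i] * eps, np.eye(d)[j] * eps
              H[i, j] = (neg_log_post(res.x + ei + ej) - neg_log_post(res.x + ei - ej)
                         - neg_log_post(res.x - ei + ej) + neg_log_post(res.x - ei - ej)) / (4 * eps ** 2)

      # Laplace: log Z ~ -neg_log_post(MAP) + d/2 log(2 pi) - 1/2 log det H.
      log_Z = -res.fun + 0.5 * d * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(H)[1]
      print("Laplace log evidence:", log_Z)  # exact for this Gaussian toy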

  11. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as well…

  12. Combinatorics of set partitions

    CERN Document Server

    Mansour, Toufik

    2012-01-01

    Focusing on a very active area of mathematical research in the last decade, Combinatorics of Set Partitions presents methods used in the combinatorics of pattern avoidance and pattern enumeration in set partitions. Designed for students and researchers in discrete mathematics, the book is a one-stop reference on the results and research activities of set partitions from 1500 A.D. to today. Each chapter gives historical perspectives and contrasts different approaches, including generating functions, the kernel method, the block decomposition method, generating trees, and Wilf equivalences…

  13. Fast Bayesian Inference in Dirichlet Process Mixture Models.

    Science.gov (United States)

    Wang, Lianming; Dunson, David B

    2011-01-01

    There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditional on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.

  14. Orbits for the Impatient: A Bayesian Rejection Sampling Method for Quickly Fitting the Orbits of Long-Period Exoplanets

    OpenAIRE

    Blunt, Sarah; Nielsen, Eric L.; De Rosa, Robert J.; Konopacky, Quinn M.; Ryan, Dominic; Wang, Jason J.; Pueyo, Laurent; Rameau, Julien; Marois, Christian; Marchis, Franck; Macintosh, Bruce; Graham, James R.; Duchene, Gaspard; Schneider, Adam C.

    2017-01-01

    We describe a Bayesian rejection sampling algorithm designed to efficiently compute posterior distributions of orbital elements for data covering short fractions of long-period exoplanet orbits. Our implementation of this method, Orbits for the Impatient (OFTI), converges up to several orders of magnitude faster than two implementations of MCMC in this regime. We illustrate the efficiency of our approach by showing that OFTI calculates accurate posteriors for all existing astrometry of the ex...
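
    A minimal sketch of the rejection-sampling idea, assuming trial parameters are drawn from the prior and accepted with probability proportional to their likelihood; OFTI's orbit model, scale-and-rotate step, and astrometry are not reproduced, so a toy 1D "observation" stands in:

      import numpy as np

      rng = np.random.default_rng(3)

      obs, sigma = 3.2, 0.4            # toy observation of some orbital quantity

      def log_like(period):
          return -0.5 * ((period - obs) / sigma) ** 2

      # Draw trials from the prior, accept each with probability
      # L(theta)/L_max; here L_max = 1 because log_like <= 0.
      trials = rng.uniform(0.1, 10.0, size=200000)     # uniform prior on period
      accept = np.log(rng.random(trials.size)) < log_like(trials)
      posterior = trials[accept]

      print(f"accepted {posterior.size} of {trials.size}; "
            f"posterior mean = {posterior.mean():.2f}")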

  15. Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.

    Science.gov (United States)

    Zhou, S.; Huang, Q.

    2017-12-01

    Conventional magnetotelluric (MT) inversion methods cannot show the distribution of underground resistivity with clear boundaries, even when there are obviously distinct blocks. To solve this problem, we develop a Bayesian framework to invert 2D MT sharp-boundary data, using the boundary locations and interior resistivities as the random variables. First, we use other MT inversion results, such as those from ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert their data format to traditional staggered-grid parameters, which can be used in the finite-difference forward part. Finally, we can shape the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov Chain Monte Carlo (MCMC) sampling from the prior distribution. The depth, resistivity, and their uncertainties can be evaluated, and the method also works for sensitivity estimation. We applied the method to a synthetic case composed of two large anomalous blocks in a uniform background. When we impose boundary-smoothness and near-true-model weight constraints that mimic joint or constrained inversion, we find that the model yields a more precise and focused depth distribution. We also test the inversion without constraints and find that the boundary can still be recovered, though not as well. Both inversions estimate resistivity well. The constrained result has a lower root mean square misfit than the ModEM inversion result. The data sensitivity obtained via the PPD shows that resistivity is the most sensitive parameter, center depth comes second, and the block sides are the least sensitive.

  16. A method of spherical harmonic analysis in the geosciences via hierarchical Bayesian inference

    Science.gov (United States)

    Muir, J. B.; Tkalčić, H.

    2015-11-01

    The problem of decomposing irregular data on the sphere into a set of spherical harmonics is common in many fields of geosciences where it is necessary to build a quantitative understanding of a globally varying field. For example, in global seismology, a compressional or shear wave speed that emerges from tomographic images is used to interpret current state and composition of the mantle, and in geomagnetism, secular variation of magnetic field intensity measured at the surface is studied to better understand the changes in the Earth's core. Optimization methods are widely used for spherical harmonic analysis of irregular data, but they typically do not treat the dependence of the uncertainty estimates on the imposed regularization. This can cause significant difficulties in interpretation, especially when the best-fit model requires more variables as a result of underestimating data noise. Here, with the above limitations in mind, the problem of spherical harmonic expansion of irregular data is treated within the hierarchical Bayesian framework. The hierarchical approach significantly simplifies the problem by removing the need for regularization terms and user-supplied noise estimates. The use of the corrected Akaike Information Criterion for picking the optimal maximum degree of spherical harmonic expansion and the resulting spherical harmonic analyses are first illustrated on a noisy synthetic data set. Subsequently, the method is applied to two global data sets sensitive to the Earth's inner core and lowermost mantle, consisting of PKPab-df and PcP-P differential traveltime residuals relative to a spherically symmetric Earth model. The posterior probability distributions for each spherical harmonic coefficient are calculated via Markov Chain Monte Carlo sampling; the uncertainty obtained for the coefficients thus reflects the noise present in the real data and the imperfections in the spherical harmonic expansion.
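
    A minimal sketch of expanding irregular data on the sphere in spherical harmonics; this is an ordinary least-squares fit using scipy's sph_harm basis rather than the paper's hierarchical Bayesian sampler, and the data are synthetic:

      import numpy as np
      from scipy.special import sph_harm

      rng = np.random.default_rng(4)
      n_pts, l_max = 500, 4

      # Irregular points: theta = azimuth in [0, 2pi), phi = colatitude in [0, pi].
      theta = rng.uniform(0, 2 * np.pi, n_pts)
      phi = np.arccos(rng.uniform(-1, 1, n_pts))

      # Synthetic field: a single (l=2, m=0) harmonic plus noise.
      data = sph_harm(0, 2, theta, phi).real + 0.05 * rng.standard_normal(n_pts)

      # Design matrix of real-valued harmonics up to degree l_max.
      cols, index = [], []
      for l in range(l_max + 1):
          for m in range(-l, l + 1):
              y = sph_harm(m, l, theta, phi)
              cols.append(y.real if m >= 0 else y.imag)
              index.append((l, m))
      A = np.column_stack(cols)

      coef, *_ = np.linalg.lstsq(A, data, rcond=None)
      l, m = index[int(np.argmax(np.abs(coef)))]
      print(f"dominant coefficient at (l={l}, m={m})")  # expect (2, 0)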

  17. Application of Bayesian methods to habitat selection modeling of the northern spotted owl in California: new statistical methods for wildlife research

    Science.gov (United States)

    Howard B. Stauffer; Cynthia J. Zabel; Jeffrey R. Dunk

    2005-01-01

    We compared a set of competing logistic regression habitat selection models for Northern Spotted Owls (Strix occidentalis caurina) in California. The habitat selection models were estimated, compared, evaluated, and tested using multiple sample datasets collected on federal forestlands in northern California. We used Bayesian methods in interpreting...

  18. Method for Building a Medical Training Simulator with Bayesian Networks: SimDeCS.

    Science.gov (United States)

    Flores, Cecilia Dias; Fonseca, João Marcelo; Bez, Marta Rosecler; Respício, Ana; Coelho, Helder

    2014-01-01

    Distance education has grown in importance with the advent of the internet, yet an adequate evaluation of students in this mode is still difficult. Distance tests or occasional on-site exams do not meet the needs of evaluating the learning process in distance education. Bayesian networks are adequate for simulating several aspects of clinical reasoning, but the possibility of integrating them into the evaluation of distance education students has not yet been explored much. The present work describes a simulator based on probabilistic networks built to represent knowledge of clinical practice guidelines in Family and Community Medicine. The Bayesian network at the core of the simulator was modeled to be playable by the student, to give immediate feedback according to pedagogical strategies adapted to the student's past performance, and to give a broad evaluation of performance at the end of the game. Simulators structured by Bayesian networks may become alternatives in the evaluation of students in medical distance education.

  19. Fault Localization Method by Partitioning Memory Using Memory Map and the Stack for Automotive ECU Software Testing

    Directory of Open Access Journals (Sweden)

    Kwanhyo Kim

    2016-09-01

    Full Text Available Recently, the usage of automotive Electronic Control Units (ECUs) and their software in cars is increasing. As the functional complexity of such software increases, so does the likelihood of software-related faults; it is therefore important to ensure the reliability of ECU software in order to ensure automobile safety. For this reason, systematic testing methods are required that can guarantee software quality. However, it is difficult to locate a fault during testing with the current ECU development system because a tester performs black-box testing using a Hardware-in-the-Loop (HiL) simulator. Consequently, developers spend a large amount of money and time on debugging because they debug without any information about the location of the fault. In this paper, we propose a method for localizing faults utilizing memory information during black-box testing. This is likely to be of use to developers who debug automotive software. In order to observe whether symbols stored in memory have been updated, the memory is partitioned by a memory map and the stack, thus reducing the fault candidate region. The memory map method has the advantage of being able to finely partition the memory, and the stack method can partition the memory without a memory map. We validated these methods by applying them to HiL testing of an ECU for a body control system. The preliminary results indicate that the memory map and the stack reduce the possible fault locations to 22% and 19% of the updated memory, respectively.

  20. Implementing statistical learning methods through Bayesian networks (Part 2): Bayesian evaluations for results of black toner analyses in forensic document examination.

    Science.gov (United States)

    Biedermann, A; Taroni, F; Bozza, S; Mazzella, W D

    2011-01-30

    This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion will relate to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part in probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
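
    A minimal sketch of the "learning probabilities from data" step that such Bayesian-network node tables rest on: a Beta prior updated by counts of a toner characteristic, yielding a posterior probability for the network (the counts and the characteristic are illustrative, not from the paper's dataset):

      from scipy import stats

      # Illustrative: of 40 reference toners from a given source,
      # 28 show a particular spectral characteristic.
      n, k = 40, 28

      a0, b0 = 1.0, 1.0                 # uniform Beta(1, 1) prior
      posterior = stats.beta(a0 + k, b0 + n - k)

      # Point value and interval to plug into a Bayesian-network node table.
      print("posterior mean:", posterior.mean())
      print("95% credible interval:", posterior.interval(0.95))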

  1. Estimates of European emissions of methyl chloroform using a Bayesian inversion method

    Science.gov (United States)

    Maione, M.; Graziosi, F.; Arduini, J.; Furlani, F.; Giostra, U.; Blake, D. R.; Bonasoni, P.; Fang, X.; Montzka, S. A.; O'Doherty, S. J.; Reimann, S.; Stohl, A.; Vollmer, M. K.

    2014-09-01

    Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the "Montreal Protocol on Substances that Deplete the Ozone Layer" and its amendments, which called for its phase-out in 1996 in developed countries and 2015 in developing countries. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline in the background mixing ratios of MCF. However, we observe persistent non-negligible mixing ratio enhancements of MCF in pollution episodes, suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to give an estimate of the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified southeastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an 11-year data set, from January 2002 to December 2012. Overall, emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr-1 in 2002 to 0.32 Gg yr-1 in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr-1 in 2002 and 0.20 Gg yr-1 in 2012. The European estimates are a significant fraction of the total semi-hemisphere (30-90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible for a minimum of 2.6% (in 2003) and a maximum of 10.3% (in 2009) of the global MCF emissions.

  2. Assessment of Earthquake Hazard Parameters with Bayesian Approach Method Around Karliova Triple Junction, Eastern Turkey

    Science.gov (United States)

    Türker, Tugba; Bayrak, Yusuf

    2017-12-01

    In this study, the Bayesian Approach method is used to evaluate the earthquake hazard parameters of maximum regional magnitude (Mmax), β value, and seismic activity rate or intensity (λ), and their uncertainties, for the next 5, 10, 25, 50, and 100 years around the Karlıova Triple Junction (KTJ). A compiled earthquake catalog that is homogeneous for Ms ≥ 3.0, covering the period from 1900 to 2017, was used. The study area was divided into four different seismic source regions based on epicenter distribution, tectonics, seismicity, and faults around the KTJ. Two historical earthquakes are included: 1866, Ms = 7.2, for Region 3 (between Bingöl-Karlıova-Muş-Bitlis: Bahçeköy Fault Zone-Uzunpınar Fault Zone-Karakoçan Fault-Muş Fault Zones-Kavakbaşı Fault) and 1874, Ms = 7.1, for Region 4 (between Malatya-Elazığ-Tunceli: Palu Basin-Pütürge Basin-Erkenek Fault-Malatya Fault). The computed Mmax values are between 7.71 and 8.17. The quantiles of the distribution functions of true and apparent magnitude on a given time interval [0, T] are evaluated. The quantiles of the distribution functions of apparent and true magnitudes for the next time intervals of 5, 10, 25, 50, and 100 years are calculated for confidence limits of probability levels of 50, 70, and 90% around the KTJ. According to the computed earthquake hazard parameters, the Erzincan Basin-Ovacık Fault-Pülümur Fault-Yedisu Basin region is the most seismically active region of the KTJ. This region has the highest estimated earthquake magnitude, 7.16, at the 90% probability level for the next 100 years, making it the most hazardous region compared to the others. The results of this study can be used in earthquake hazard studies of the East Anatolian region.

  3. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic: While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data: Emphasizing probability as an alternative to Boolean…

  4. A FAST BAYESIAN METHOD FOR UPDATING AND FORECASTING HOURLY OZONE LEVELS

    Science.gov (United States)

    A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data, and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...

  5. Optimum Inductive Methods. A study in Inductive Probability, Bayesian Statistics, and Verisimilitude.

    NARCIS (Netherlands)

    Festa, Roberto

    1992-01-01

    According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes' theorem. One of the most important…

  6. A simulated annealing-based method for learning Bayesian networks from statistical data

    Czech Academy of Sciences Publication Activity Database

    Janžura, Martin; Nielsen, Jan

    2006-01-01

    Roč. 21, č. 3 (2006), s. 335-348 ISSN 0884-8173 R&D Projects: GA ČR GA201/03/0478 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian network * simulated annealing * Markov Chain Monte Carlo Subject RIV: BA - General Mathematics Impact factor: 0.429, year: 2006

  7. Spatial Bayesian methods of forecasting house prices in six metropolitan areas of South Africa

    CSIR Research Space (South Africa)

    Gupta, R

    2008-06-01

    Full Text Available …:07 to 2005:06. The authors then forecast one- to six-months-ahead house prices over the forecast horizon of 2005:07 to 2007:06. They then compare forecasts generated from the SBVARs with those from an unrestricted Vector Autoregressive (VAR) model and the Bayesian...

  8. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB = 1 effective field theory. We assume the standard-model set of b → sγ and b → sl⁺l⁻ operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B → K*γ, B → K(*)l⁺l⁻, and Bs → μ⁺μ⁻ decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well-separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit…

  9. Genetic Properties of Some Economic Traits in Isfahan Native Fowl Using Bayesian and REML Methods

    Directory of Open Access Journals (Sweden)

    Salehinasab M

    2015-12-01

    Full Text Available The objective of the present study was to estimate heritability values for some performance and egg quality traits of native fowl in the Isfahan breeding center using REML and Bayesian approaches. There were about 51,521 and 975 records for the performance and egg quality traits, respectively. In the first step, variance components were estimated for body weight at hatch (BW0), body weight at 8 weeks of age (BW8), weight at sexual maturity (WSM), egg yolk weight (YW), egg Haugh unit and eggshell thickness via the REML approach using the ASREML software. In the second step, the same traits were analyzed via the Bayesian approach using the Gibbs3f90 software. In both approaches six different animal models were applied, and the best model was determined using the likelihood ratio test (LRT) and the deviance information criterion (DIC) for the REML and Bayesian approaches, respectively. Heritability estimates for BW0, WSM and shell thickness were the same in both approaches. For BW0, the LRT and DIC indexes confirmed that the model consisting of maternal genetic, permanent environmental and direct genetic effects was significantly better than the other models. For WSM, a model consisting of a maternal permanent environmental effect in addition to the direct genetic effect was the best. For shell thickness, the basic model consisting of the direct genetic effect was the best. The results for BW8, YW and Haugh unit differed between the two approaches. The reason behind these small differences was that convergence could not be achieved for some models in the REML approach; thus, for these traits the Bayesian approach estimated the variance components more accurately. The results indicated that ignoring maternal effects overestimates the direct genetic variance and heritability for most of the traits. Also, the Bayesian-based software could take more variance components into account.

  10. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Full Text Available Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor-quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar-looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problems of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID, we evaluate the accuracy of the Bayesian-modified (posterior) probabilities. However, in most cases, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
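
    A minimal sketch of the prior-weighting idea, assuming sighting-history frequencies at a site serve as priors and photo-matching scores serve as likelihoods (all individuals and numbers are illustrative):

      import numpy as np

      individuals = ["A", "B", "C"]

      # Prior: how often each candidate has previously been sighted at this site.
      sightings = np.array([12, 3, 1], dtype=float)
      prior = sightings / sightings.sum()

      # Likelihood: photo-matching confidence that the image shows each candidate.
      match_conf = np.array([0.4, 0.45, 0.15])   # a non-unique, ambiguous ID

      posterior = prior * match_conf
      posterior /= posterior.sum()
      for ind, p in zip(individuals, posterior):
          print(f"P({ind} | photo, site history) = {p:.2f}")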

  11. A Bayesian method for estimating prevalence in the presence of a hidden sub-population.

    Science.gov (United States)

    Xia, Michelle; Gustafson, Paul

    2012-09-20

    When estimating the prevalence of a binary trait in a population, the presence of a hidden sub-population that cannot be sampled will lead to nonidentifiability and potentially biased estimation. We propose a Bayesian model of trait prevalence for a weighted sample from the non-hidden portion of the population, by modeling the relationship between prevalence and sampling probability. We studied the behavior of the posterior distribution on population prevalence, with the large-sample limits of posterior distributions obtained in simple analytical forms that give intuitively expected properties. We performed MCMC simulations on finite samples to evaluate the effectiveness of statistical learning. We applied the model and the results to two illustrative datasets arising from weighted sampling. Our work confirms that sensible results can be obtained using Bayesian analysis, despite the nonidentifiability in this situation. Copyright © 2012 John Wiley & Sons, Ltd.

  12. The evolutionary relationships and age of Homo naledi: An assessment using dated Bayesian phylogenetic methods.

    Science.gov (United States)

    Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Chares M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark

    2016-08-01

    Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A Bayesian method for the synthesis of evidence from qualitative and quantitative reports: the example of antiretroviral medication adherence

    Science.gov (United States)

    Voils, Corrine I; Hassselblad, Vic; Crandell, Jamie L; Chang, YunKyung; Lee, EunJeong; Sandelowski, Margarete

    2009-01-01

    Objectives Bayesian meta-analysis is a frequently cited but very little-used method for synthesizing qualitative and quantitative research findings. The only example published to date used qualitative data to generate an informative prior probability and quantitative data to generate the likelihood. We developed a method to incorporate both qualitative and quantitative evidence in the likelihood in a Bayesian synthesis of evidence about the relationship between regimen complexity and medication adherence. Methods Data were from 11 qualitative and six quantitative studies. We updated two different non-informative prior distributions with qualitative and quantitative findings to find the posterior distribution for the probabilities that a more complex regimen was associated with lower adherence and that a less complex regimen was associated with greater adherence. Results The posterior mode for the qualitative findings regarding more complex regimen and lesser adherence (using the uniform prior with Jeffreys' prior yielding highly similar estimates) was 0.588 (95% credible set limits 0.519, 0.663) and for the quantitative findings was 0.224 (0.203, 0.245); due to non-overlapping credible sets, we did not combine them. The posterior mode for the qualitative findings regarding less complex regimen and greater adherence was 0.288 (0.214, 0.441) and for the quantitative findings was 0.272 (0.118, 0.437); the combined estimate was 0.299 (0.267, 0.334). Conclusions The utility of Bayesian methods for synthesizing qualitative and quantitative research findings at the participant level may depend on the nature of the relationship being synthesized and on how well the findings are represented in the individual reports. PMID:19770121

  14. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients.

    Science.gov (United States)

    Freitas, Alex A; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug's distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been discussed. The regression methods predict Vss using, as predictive features, both the compounds' molecular descriptors and the compounds' tissue:plasma partition coefficients (Kt:p) - often used in physiologically-based pharmacokinetics. Therefore, this work has assessed whether the data mining-based prediction of Vss can be made more accurate by using as input not only the compounds' molecular descriptors but also (a subset of) their predicted Kt:p values. Comparison of the models that used only molecular descriptors, in particular, the Bagging decision tree (mean fold error of 2.33), with those employing predicted Kt:p values in addition to the molecular descriptors, such as the Bagging decision tree using adipose Kt:p (mean fold error of 2.29), indicated that the use of predicted Kt:p values as descriptors may be beneficial for accurate prediction of Vss using decision trees if prior feature selection is applied. Decision tree based models presented in this work have an accuracy that is reasonable and similar to the accuracy of reported Vss inter-species extrapolations in the literature. The estimation of Vss for new compounds in drug discovery will benefit from methods that are able to integrate large and varied sources of data and flexible non-linear data mining methods such as decision trees, which can produce interpretable models. Graphical Abstract: Decision trees for the prediction of tissue partition coefficient and volume of distribution of drugs.
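
    A minimal sketch of a bagged decision-tree regressor of the kind compared here, with random numbers standing in for the molecular descriptors, the predicted adipose Kt:p values, and log Vss (none of the paper's data or feature selection is reproduced):

      import numpy as np
      from sklearn.ensemble import BaggingRegressor
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(5)
      n = 200
      descriptors = rng.standard_normal((n, 20))        # molecular descriptors
      kt_p_adipose = rng.lognormal(size=(n, 1))         # predicted Kt:p, assumed
      X = np.hstack([descriptors, np.log(kt_p_adipose)])
      log_vss = X[:, 0] + 0.8 * X[:, -1] + 0.3 * rng.standard_normal(n)

      model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                               random_state=0)
      pred = cross_val_predict(model, X, log_vss, cv=5)

      # Mean fold error, the accuracy measure quoted in the abstract
      # (treating the target as log10 Vss).
      mfe = np.mean(10 ** np.abs(pred - log_vss))
      print("mean fold error:", round(mfe, 2))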

  15. [Determination of partition coefficient of dissolved gases in transformer oil using phase ratio variation method and static headspace gas chromatography].

    Science.gov (United States)

    Zhao, Jinghong; Wang, Hailong; Liu, Wenmin; Zhou, Yansheng; Guan, Yafeng

    2004-05-01

    The partition coefficients of dissolved gases in transformer oil were determined using a phase ratio variation method and static headspace gas chromatography (GC). A pressure-balancing and gas volume-metering device was connected to the vent of a sample loop on a six-port injection valve of the GC. The gas-phase sample from the 25 mL headspace vial was transferred to an 80 μL sample loop through a fused-silica capillary of 0.53 mm i.d., and then separated and determined quantitatively by GC. A 2 m × 1 mm i.d. GDX502 micro-packed column was used for the separation. Five different gas-liquid volume ratios in the headspace vials were measured at different equilibrium concentrations. The partition coefficients of hydrocarbon gases, including methane, acetylene, ethylene, ethane and propane, dissolved in transformer oil were determined using linear regression analysis at 20 °C and 50 °C separately. The errors between the real values and the regression values from experimental data were less than 4.14%, except for methane. These results provide fundamental data for the on-line measurement of dissolved gases in transformer oil by GC.
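
    A minimal sketch of the phase ratio variation calculation, assuming the standard linear form in which the reciprocal headspace peak area varies linearly with the phase ratio β and K is recovered as intercept/slope (the areas below are illustrative, not the paper's data):

      import numpy as np

      # Phase ratio beta = V_gas / V_oil for five different fill volumes
      # in the headspace vial, with measured GC peak areas A.
      beta = np.array([1.5, 3.0, 6.0, 12.0, 24.0])
      area = np.array([820., 690., 510., 330., 190.])   # illustrative

      # PRV method: 1/A = a + b*beta, with K = a/b (intercept over slope).
      b, a = np.polyfit(beta, 1.0 / area, 1)
      K = a / b
      print(f"partition coefficient K (oil/gas) = {K:.2f}")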

  16. Removal of radionuclides from partitioning waste solutions by adsorption and catalytic oxidation methods

    Energy Technology Data Exchange (ETDEWEB)

    Yamagishi, Isao; Yamaguchi, Isoo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kubota, Masumitsu [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2000-09-01

    Adsorption of radionuclides with inorganic ion exchangers and catalytic oxidation of a complexant were studied for the decontamination of waste solutions generated in past partitioning tests with high-level liquid waste. Granulated ferrocyanide and titanic acid were used for adsorption of Cs and Sr, respectively, from an alkaline solution resulting from direct neutralization of an acidic waste solution. Both Na and Ba inhibited adsorption of Sr, but Na did not inhibit that of Cs. These exchangers adsorbed Cs and Sr at low concentration with distribution coefficients of more than 10⁴ ml/g from a 2 M Na solution of pH 11. Overall decontamination factors (DFs) of Cs and total β nuclides exceeded 10⁵ and 10³, respectively, at the neutralization-adsorption step for actual waste solutions free from a complexant. The DF of total α nuclides was less than 10³ for a waste solution containing diethylenetriaminepentaacetic acid (DTPA). DTPA was rapidly oxidized by nitric acid in the presence of a platinum catalyst, and radionuclides were removed as precipitates by neutralization of the resultant solution. The DF of α nuclides increased to 8×10⁴ by addition of the oxidation step. The DFs of Sb and Co were quite low through the adsorption step. A synthesized Ti-based exchanger (PTC) could remove Sb with a DF of more than 4×10³. (author)

  17. Statistics: a Bayesian perspective

    National Research Council Canada - National Science Library

    Berry, Donald A

    1996-01-01

    ...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the scientific process, it develops ideas...

  18. Practical Bayesian tomography

    Science.gov (United States)

    Granade, Christopher; Combes, Joshua; Cory, D. G.

    2016-03-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we address all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby (2012 Phys. Rev. A 85 052120) and by Ferrie (2014 New J. Phys. 16 093035), to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first priors on quantum states and channels that allow for including useful experimental insight. Finally, we develop a method that allows tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  19. General Method of Using Bayesian Nets for a Software Reliability Assessment in Varying SW Development Life cycle

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Chang, Seung Cheol

    2008-01-01

    Bayesian Nets (BNs) have been used in many studies to predict software defects, because they allow all the evidence to be taken into account. However, one of the serious difficulties in earlier works was that the user had to build a different BN for each software development life cycle, which limits the practical use of BNs in the field. One way to solve this problem is the use of general BN templates that are not restricted to a particular software life cycle. This paper describes a method for this purpose built on Object-Oriented BN (OOBN) and Dynamic BN (DBN) techniques.

  20. A combined evidence Bayesian method for human ancestry inference applied to Afro-Colombians.

    Science.gov (United States)

    Rishishwar, Lavanya; Conley, Andrew B; Vidakovic, Brani; Jordan, I King

    2015-12-15

    Uniparental genetic markers, mitochondrial DNA (mtDNA) and Y chromosomal DNA, are widely used for the inference of human ancestry. However, the resolution of ancestral origins based on mtDNA haplotypes is limited by the fact that such haplotypes are often found to be distributed across wide geographical regions. We have addressed this issue here by combining two sources of ancestry information that have typically been considered separately: historical records regarding population origins and genetic information on mtDNA haplotypes. To combine these distinct data sources, we applied a Bayesian approach that considers historical records, in the form of prior probabilities, together with data on the geographical distribution of mtDNA haplotypes, formulated as likelihoods, to yield ancestry assignments from posterior probabilities. This combined evidence Bayesian approach to ancestry assignment was evaluated for its ability to accurately assign sub-continental African ancestral origins to Afro-Colombians based on their mtDNA haplotypes. We demonstrate that the incorporation of historical prior probabilities via this analytical framework can provide for substantially increased resolution in sub-continental African ancestry assignment for members of this population. In addition, a personalized approach to ancestry assignment that involves the tuning of priors to individual mtDNA haplotypes yields even greater resolution for individual ancestry assignment. Despite the fact that Colombia has a large population of Afro-descendants, the ancestry of this community has been understudied relative to populations with primarily European and Native American ancestry. Thus, the application of the kind of combined evidence approach developed here to the study of ancestry in the Afro-Colombian population has the potential to be impactful. The formal Bayesian analytical framework we propose for combining historical and genetic information also has the potential to be widely applied
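    The combined evidence scheme is an ordinary discrete Bayes update: historical records supply the prior over candidate source regions, and the geographical frequency of the observed mtDNA haplotype supplies the likelihood. A toy sketch; the regions and all numbers are made-up illustrations, not the paper's estimates:

```python
import numpy as np

# Prior from historical records (assumed values) and likelihoods
# P(observed haplotype | region) from reference frequency data (assumed).
regions = ["West Africa", "West-Central Africa", "Southeast Africa"]
prior = np.array([0.50, 0.35, 0.15])
likelihood = np.array([0.02, 0.08, 0.01])

posterior = prior * likelihood
posterior /= posterior.sum()
for region, p in zip(regions, posterior):
    print(f"{region}: {p:.2f}")
```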

  1. Non-iterative sampling-based Bayesian methods for identifying changepoints in the sequence of cases of haemolytic uraemic syndrome.

    Science.gov (United States)

    Tian, Guo-Liang; Ng, Kai Wang; Li, Kai-Can; Tan, Ming

    2009-07-01

    Diarrhoea-associated haemolytic uraemic syndrome (HUS) is a disease that affects the kidneys and other organs. Motivated by the annual numbers of cases of HUS collected in Birmingham and Newcastle, England, from 1970 to 1989, we consider Bayesian changepoint analysis with specific attention to Poisson changepoint models. For changepoint models with an unknown number of changepoints, we propose a new non-iterative Bayesian sampling approach (called exact IBF sampling), which completely avoids the problems of convergence and slow convergence associated with iterative Markov chain Monte Carlo (MCMC) methods. The idea is to first utilize the sampling inverse Bayes formula (IBF) to derive the conditional distribution of the latent data given the observed data, and then to draw iid samples from the complete-data posterior distribution. For the purpose of selecting the appropriate model (or determining the number of changepoints), we develop two alternative formulae to exactly calculate the marginal likelihood (or Bayes factor) by using the exact IBF output and the point-wise IBF, respectively. The HUS data are re-analyzed using the proposed methods. Simulations are implemented to validate the performance of the proposed methods.
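    For the single-changepoint Poisson model with conjugate Gamma priors on the two rates, the changepoint posterior is available in closed form, so it can be computed exactly with no MCMC at all. The sketch below illustrates that non-iterative flavor; it is not the authors' exact IBF sampler, and the Gamma(a, b) hyperparameters and toy counts are illustrative:

```python
import numpy as np
from scipy.special import gammaln

def changepoint_posterior(y, a=1.0, b=1.0):
    """Exact posterior over the changepoint location for annual Poisson
    counts y, with independent Gamma(a, b) priors on the two rates."""
    y = np.asarray(y, dtype=float)
    n = len(y)

    def log_seg(seg):
        # log marginal likelihood of one Poisson segment, rate integrated out
        S, m = seg.sum(), len(seg)
        return (a * np.log(b) - gammaln(a) + gammaln(a + S)
                - (a + S) * np.log(b + m) - gammaln(seg + 1).sum())

    logpost = np.array([log_seg(y[:k]) + log_seg(y[k:]) for k in range(1, n)])
    logpost -= logpost.max()
    post = np.exp(logpost)
    return post / post.sum()          # post[k-1] = P(change after year k)

counts = np.array([6, 4, 5, 7, 5, 12, 14, 11, 13, 15])  # toy annual counts
post = changepoint_posterior(counts)
print("most probable changepoint after year", int(post.argmax()) + 1)
```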

  2. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updates to virus information using the normal pattern matching method are increasingly unable to keep up with the speed at which viruses appear, since it takes time to extract the characteristic patterns for each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code of obfuscated computer viruses. This paper therefore proposes a method that unpacks compressed computer viruses automatically, independent of the compression format. The proposed method unpacks the common compression formats accurately 80% of the time, and unknown compression formats can also be unpacked. The proposed method is effective against unknown viruses when combined with an existing known-virus detection system such as Paul Graham's Bayesian virus filter.
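    Paul Graham's filter combines per-token probabilities with the naive-Bayes formula p = prod(p_i) / (prod(p_i) + prod(1 - p_i)). A minimal sketch; the token probabilities are made-up values for suspicious byte patterns in an unpacked binary:

```python
def graham_score(token_probs):
    """Combine per-token malicious probabilities into one posterior score."""
    num, den_clean = 1.0, 1.0
    for p in token_probs:
        num *= p
        den_clean *= (1.0 - p)
    return num / (num + den_clean)

# e.g. three suspicious byte-pattern tokens found after unpacking
print(graham_score([0.9, 0.8, 0.7]))   # ~0.988 -> likely malicious
```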

  3. On-the-fly analysis of molecular dynamics simulation trajectories of proteins using the Bayesian inference method

    Science.gov (United States)

    Miyashita, Naoyuki; Yonezawa, Yasushige

    2017-09-01

    Robust and reliable analyses of long trajectories from molecular dynamics simulations are important for investigations of the functions and mechanisms of proteins. Structural fitting, which removes time-dependent translational and rotational movements, is necessary for various analyses of protein dynamics. However, the fitting is often difficult for highly flexible molecules. To address these issues, we propose a fitting algorithm that uses the Bayesian inference method in combination with rotational fitting-weight improvements; the well-studied globular protein systems Trp-cage and lysozyme were used for the investigations. The present method clearly identified rigid core regions that fluctuate less than other regions and also separated core regions from highly fluctuating regions with greater accuracy than conventional methods. Our method also provides the variance-covariance matrix elements composed of atomic coordinates, allowing us to perform principal component analysis and prepare domain cross-correlation maps during molecular dynamics simulations in an on-the-fly manner.

  4. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir

    2013-05-01

    A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimation of block-sparse signals even when the distribution of active blocks is non-Gaussian or unknown. It is agnostic to the distribution of active blocks in the signal and utilizes a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from the data without user intervention. The method requires a priori knowledge of the block partition and utilizes a greedy approach with order-recursive updates of its metrics to find the most dominant sparse supports and to determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of the proposed estimator. © 2013 IEEE.
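    A greatly simplified, non-Bayesian relative of this idea is greedy block pursuit: pick the block of columns most correlated with the residual, refit by least squares, repeat. The sketch below assumes the block partition is known (as the paper does); the sizes and noise level are illustrative, and it is not the authors' SABMP algorithm:

```python
import numpy as np

def block_pursuit(A, y, block_size, n_active):
    """Greedy block-sparse recovery via correlation + least-squares refit."""
    blocks = [list(range(i, i + block_size))
              for i in range(0, A.shape[1], block_size)]
    chosen, residual = [], y.copy()
    for _ in range(n_active):
        scores = [np.linalg.norm(A[:, b].T @ residual) if b not in chosen
                  else -1.0 for b in blocks]
        chosen.append(blocks[int(np.argmax(scores))])
        cols = [c for b in chosen for c in b]
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef
    x = np.zeros(A.shape[1])
    x[cols] = coef
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120))
x_true = np.zeros(120)
x_true[8:12] = rng.normal(size=4)          # one active block of width 4
y = A @ x_true + 0.01 * rng.normal(size=60)
print(np.linalg.norm(block_pursuit(A, y, 4, 1) - x_true))  # small error
```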

  5. Water isotope partitioning and ecohydrologic separation in mixed conifer forest explored with a centrifugation water extraction method

    Science.gov (United States)

    Bowers, W.; Mercer, J.; Pleasants, M.; Williams, D. G.

    2017-12-01

    Isotopic partitioning of water within soil into tightly and loosely bound fractions has been proposed to explain differences between isotopic water sources used by plants and those that contribute to streams and ground water (GW), the basis for the "two water worlds" hypothesis. We examined the isotope ratio values of water in trees, bulk soil, mobile water collected from soil lysimeters, stream water, and GW at three different hillslopes in a mixed conifer forest in southeastern Wyoming, USA. Hillslopes differed in aspect and topographic position, with corresponding differences in surface energy balance, snowmelt timing, and duration of soil moisture during the dry summer. The isotopic results support the partitioning of water within the soil: trees apparently used a different pool of water for transpiration than that recovered from soil lysimeters, and the source was not resolved by the isotopic signature of the water extracted from bulk soil via cryogenic vacuum distillation. Separating and measuring the isotope ratio values in these pools would test the assumption that the tightly bound water within the soil has the same isotopic signature as the water transpired by the trees. We employed a centrifugation approach to separate water within the soil held at different tensions by applying stepwise increases in rotational velocity and pressure to the bulk soil samples. Effluent and the remaining water (cryogenically extracted) at each step were compared. We first applied the centrifugation method in a simple lab experiment using sandy loam soil and separate introductions of two isotopically distinct waters. We then applied the method to soil collected from the montane hillslopes. For the lab experiment, we predicted that effluents would have distinct isotopic signatures, with the last effluent and extracted water more closely representing the isotopic signature of the first water applied. For our field samples, we predicted that the isotopic signature of the

  6. [Determination of six main components in compound theophylline tablet by convolution curve method after prior separation by column partition chromatography].

    Science.gov (United States)

    Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)

    1993-01-01

    On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of the compound theophylline tablet were simultaneously eluted by chloroform and three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined separately by a computer-aided convolution curve method. The corresponding average recoveries and relative standard deviations of the six components were as follows: 101.6% and 1.46% for caffeine; 99.7% and 0.10% for phenacetin; 100.9% and 1.31% for phenobarbitone; 100.2% and 0.81% for theophylline; 99.9% and 0.81% for theobromine; and 100.8% and 0.48% for aminopyrine.

  7. A case study of an enhanced eutrophication model with stoichiometric zooplankton growth sub-model calibrated by Bayesian method.

    Science.gov (United States)

    Yang, Likun; Peng, Sen; Sun, Jingmei; Zhao, Xinhua; Li, Xia

    2016-05-01

    Urban lakes in China have suffered from severe eutrophication over the past several years, particularly those with relatively small areas and closed watersheds. Many efforts have been made to improve the understanding of eutrophication physiology with advanced mathematical models. However, several eutrophication models ignore zooplankton behavior and treat zooplankton as particles, which leads to systematic errors. In this study, a eutrophication model was enhanced with a stoichiometric zooplankton growth sub-model that simulated the zooplankton predation process and the interplay among the nitrogen, phosphorus, and oxygen cycles. A case study, in which the Bayesian method was used to calibrate the enhanced eutrophication model parameters and to calculate the model simulation results, was carried out in an urban lake in Tianjin, China. Finally, a water quality assessment was also conducted for eutrophication management. Our results suggest that (1) integration of the Bayesian method and the enhanced eutrophication model with a zooplankton feeding behavior sub-model can effectively depict the change in water quality and (2) the nutrients resulting from rainwater runoff laid the foundation for phytoplankton blooms.

  8. Improving the reliability of POD curves in NDI methods using a Bayesian inversion approach for uncertainty quantification

    Science.gov (United States)

    Ben Abdessalem, A.; Jenson, F.; Calmon, P.

    2016-02-01

    This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterised from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability of detection curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood function formulation and on priors from expert knowledge. However, the presented inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model by a surrogate one in order to speed up the model evaluation and make the problem computationally feasible to implement. Least squares support vector regression is adopted as the metamodelling technique due to its robustness in dealing with non-linear problems. We illustrate the usefulness of this methodology through the inspection of a tube with an enclosed defect using an ultrasonic inspection method.

  9. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on

  10. A comparison of different quasi-newton acceleration methods for partitioned multi-physics codes

    CSIR Research Space (South Africa)

    Haelterman, R

    2018-02-01

    The methods compared include the Column-Updating Method (CU) and the Switched Column-Updating Method (SCU). The Column-Updating Method is a quasi-Newton method introduced by Martinez [25, 27, 28]; its rank-one update replaces the column of the approximate Jacobian corresponding to the largest component of the most recent step. The Inverse Column-Updating Method (ICU) is a quasi-Newton method introduced by Martinez and Zambaldi [23, 26]; it uses an analogous rank-one update of the approximate inverse Jacobian.

  11. 40 CFR 799.6755 - TSCA partition coefficient (n-octanol/water), shake flask method.

    Science.gov (United States)

    2010-07-01

    ... organometallic compounds. (4) Alternative methods. High-pressure liquid chromatography (HPLC) methods described... alternative test method. (c) Method—(1) Introduction, purpose, scope, relevance, application, and limits of... from an ion exchanger should not be used. (iii) Presaturation of the solvents. Before a P is determined...

  12. Bayesian methods for the physical sciences learning from examples in astronomy and physics

    CERN Document Server

    Andreon, Stefano

    2015-01-01

    Statistical literacy is critical for the modern researcher in Physics and Astronomy. This book empowers researchers in these disciplines by providing the tools they will need to analyze their own data. Chapters in this book provide a statistical base from which to approach new problems, including numerical advice and a profusion of examples. The examples are engaging analyses of real-world problems taken from modern astronomical research. The examples are intended to be starting points for readers as they learn to approach their own data and research questions. Acknowledging that scientific progress now hinges on the availability of data and the possibility to improve previous analyses, data and code are distributed throughout the book. The JAGS symbolic language used throughout the book makes it easy to perform Bayesian analysis and is particularly valuable as readers may use it in a myriad of scenarios through slight modifications.

  13. An Identification of Tuberculosis (TB) Disease in Humans using Naïve Bayesian Method

    Directory of Open Access Journals (Sweden)

    Agustin Trihartati S.

    2016-11-01

    Tuberculosis (TB) is a disease that can cause death if not recognized and treated properly. To reduce the death rate of tuberculosis patients, health experts need to diagnose the disease as early as possible. Based on the main indication data, laboratory test results and X-ray photographs, the Naïve Bayesian approach from data mining techniques can be optimized to diagnose tuberculosis. Naïve Bayes classifiers predict class membership probabilities and assign the class with the highest probability. The output of the system is the identified tuberculosis type of the patient. Testing of the system using 237 data samples with 3-, 5-, 7- and 9-fold cross-validation gives an average accuracy of 85.95%.
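    A small stand-in for the described experiment, using scikit-learn's Gaussian naive Bayes with 3-, 5-, 7- and 9-fold cross-validation. The 237 synthetic samples merely mimic the data set size; real inputs would be the encoded indications, lab results and X-ray findings:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 237 patient records with 6 encoded features
rng = np.random.default_rng(42)
X = rng.normal(size=(237, 6))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=237) > 0).astype(int)

for k in (3, 5, 7, 9):
    acc = cross_val_score(GaussianNB(), X, y, cv=k).mean()
    print(f"{k}-fold CV accuracy: {acc:.3f}")
```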

  14. Bayesian methods for the combination of core sampling data with historical models for tank characterization

    International Nuclear Information System (INIS)

    York, J.C.; Remund, K.M.; Chen, G.; Simpson, B.C.; Brown, T.M.

    1995-07-01

    A wide variety of information is available on the contents of the nuclear waste tanks at the Hanford site. This report describes an attempt to combine several sources of information using a Bayesian statistical approach. This methodology allows the combination of multiple disparate information sources. After each source of information is summarized in terms of a probability distribution function (pdf), Bayes' theorem is applied to combine them. This approach has been applied to characterizing tanks B-110, B-111, and B-201. These tanks were chosen for their simple waste matrices: B-110 and B-111 contain mostly 2C waste, and B-201 contains mostly 224 waste. Additionally, the results of this analysis are used to make predictions for tank T-111 (which contains both 2C and 224 waste). These predictions are compared to the estimates based on core samples from tank T-111.
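    When each information source is summarized as a normal pdf, Bayes' theorem reduces to a precision-weighted average. A minimal sketch for one analyte concentration; all numbers are illustrative, not the report's tank-specific pdfs:

```python
import numpy as np

# Historical-model pdf (prior) and core-sample data (likelihood)
mu_hist, sd_hist = 12.0, 4.0            # prior mean and sd (illustrative)
samples = np.array([8.1, 9.4, 7.7])     # core-sample measurements
sd_meas = 1.5                           # assumed measurement sd

# Conjugate normal update: precisions add, means are precision-weighted
prec_post = 1 / sd_hist**2 + len(samples) / sd_meas**2
mu_post = (mu_hist / sd_hist**2 + samples.sum() / sd_meas**2) / prec_post
print(f"posterior: mean {mu_post:.2f}, sd {prec_post**-0.5:.2f}")
```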

  15. BayesWHAM: A Bayesian approach for free energy estimation, reweighting, and uncertainty quantification in the weighted histogram analysis method.

    Science.gov (United States)

    Ferguson, Andrew L

    2017-07-05

    The weighted histogram analysis method (WHAM) is a powerful approach to estimate molecular free energy surfaces (FES) from biased simulation data. Bayesian reformulations of WHAM are valuable in providing statistically optimal use of the data and a transparent means to incorporate regularizing priors and estimate statistical uncertainties. In this work, we develop a fully Bayesian treatment of WHAM to generate statistically optimal FES estimates in any number of biasing dimensions under arbitrary choices of the Bayes prior. Rigorous uncertainty estimates are generated by Metropolis-Hastings sampling from the Bayes posterior. We also report a means to project the FES and its uncertainties into arbitrary auxiliary order parameters beyond those in which biased sampling was conducted. We demonstrate the approaches in applications of alanine dipeptide and the unthreading of a synthetic mimic of the astexin-3 lasso peptide. Open-source MATLAB and Python implementations of our codes are available for free public download. © 2017 Wiley Periodicals, Inc.
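    For background, the core that the paper reformulates is the standard WHAM self-consistent iteration; the Bayesian treatment wraps priors and posterior sampling around these equations. A minimal sketch in reduced (kT) units with a toy two-window umbrella example (all numbers illustrative):

```python
import numpy as np

def wham(counts, bias, n_iter=2000):
    """Standard WHAM iteration. counts[i, l]: histogram of simulation i in
    bin l; bias[i, l]: bias energy of window i at bin l, in kT units."""
    N_i = counts.sum(axis=1)                 # samples per biased simulation
    f = np.zeros(counts.shape[0])            # window free energies (kT)
    for _ in range(n_iter):
        # unbiased probability estimate per bin
        num = counts.sum(axis=0)
        den = (N_i[:, None] * np.exp(f[:, None] - bias)).sum(axis=0)
        p = num / den
        p /= p.sum()
        # self-consistent update of the window free energies
        f = -np.log((np.exp(-bias) * p[None, :]).sum(axis=1))
    return p, f

counts = np.array([[40, 30, 10], [5, 25, 50]])        # 2 windows, 3 bins
bias = np.array([[0.0, 1.0, 3.0], [3.0, 1.0, 0.0]])   # umbrella bias in kT
p, f = wham(counts, bias)
print("unbiased bin probabilities:", np.round(p, 3))
```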

  16. A Bayesian method to mine spatial data sets to evaluate the vulnerability of human beings to catastrophic risk.

    Science.gov (United States)

    Li, Lianfa; Wang, Jinfeng; Leung, Hareton; Zhao, Sisi

    2012-06-01

    Vulnerability of human beings exposed to a catastrophic disaster is affected by multiple factors that include hazard intensity, environment, and individual characteristics. The traditional approach to vulnerability assessment, based on the aggregate-area method and unsupervised learning, cannot incorporate spatial information; thus, vulnerability can be only roughly assessed. In this article, we propose Bayesian network (BN) and spatial analysis techniques to mine spatial data sets to evaluate the vulnerability of human beings. In our approach, spatial analysis is leveraged to preprocess the data; for example, kernel density analysis (KDA) and accumulative road cost surface modeling (ARCSM) are employed to quantify the influence of geofeatures on vulnerability and relate such influence to spatial distance. The knowledge- and data-based BN provides a consistent platform to integrate a variety of factors, including those extracted by KDA and ARCSM, to model vulnerability uncertainty. We also consider the model's uncertainty and use the Bayesian model average and Occam's Window to average the multiple models obtained by our approach for robust prediction of risk and vulnerability. We compare our approach with other probabilistic models in a case study of seismic risk and conclude that our approach is a good means of mining spatial data sets for evaluating vulnerability. © 2012 Society for Risk Analysis.

  17. Bayesian methods to restore and rebuild images: application to gammagraphy and to photofission tomography

    International Nuclear Information System (INIS)

    Stawinski, G.

    1998-01-01

    Bayesian algorithms are developed to solve inverse problems in gamma imaging and photofission tomography. The first part of this work is devoted to the modeling of our measurement systems. Two models have been found for both applications: the first is a simple conventional model and the second is a cascaded point-process model. EM and MCMC Bayesian algorithms for image restoration and image reconstruction have been developed for these models and compared. The cascaded point-process model does not significantly improve the results previously obtained by the classical model. Two original approaches have been proposed, which improve on the results previously obtained. The first approach uses an inhomogeneous Markov Random Field as a prior law and lets the regularization parameter vary spatially; however, the problem of the estimation of hyper-parameters has not been solved. For the deconvolution of point sources, a second approach has been proposed, which introduces a high-level prior model: the picture is modeled as a list of objects whose parameters and number are unknown. The results obtained with this method are more accurate than those obtained with the conventional Markov Random Field prior model and require less computational cost. (author)

  18. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Directory of Open Access Journals (Sweden)

    Corsaro Enrico

    2015-01-01

    The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role in studying stellar oscillations of different flavors with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights into stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, and hence of free parameters, that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what is needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for the frequencies and lifetimes of single oscillation modes, together with acoustic glitches, are presented.
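    The engine behind such a code is nested sampling, which turns the evidence integral into a one-dimensional sum over shrinking prior volumes. A deliberately minimal sketch for a toy one-dimensional "peak" with a uniform prior (real peak-bagging likelihoods are far richer); the analytic log-evidence for this toy case is about -2.08:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_like(x):                      # toy Gaussian "oscillation peak"
    return -0.5 * ((x - 0.5) / 0.05) ** 2

N, steps = 100, 600                   # live points, iterations
live = rng.uniform(size=N)            # prior is Uniform(0, 1)
logL = log_like(live)
logZ, X_prev = -np.inf, 1.0
for i in range(1, steps + 1):
    worst = int(np.argmin(logL))
    X = np.exp(-i / N)                # expected prior-volume shrinkage
    logZ = np.logaddexp(logZ, logL[worst] + np.log(X_prev - X))
    while True:                       # fresh prior draw above the threshold
        x = rng.uniform()
        if log_like(x) > logL[worst]:
            live[worst], logL[worst] = x, log_like(x)
            break
    X_prev = X
# add the contribution of the remaining live points
logZ = np.logaddexp(logZ,
                    np.logaddexp.reduce(logL) - np.log(N) + np.log(X_prev))
print("log-evidence ~", round(float(logZ), 2))
```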

  19. Partitioning in P-T concept

    International Nuclear Information System (INIS)

    Zhang Peilu; Qi Zhanshun; Zhu Zhixuan

    2000-01-01

    A comparison of dry and aqueous methods for partitioning fission products and minor actinides from spent fuel was carried out, and advances in the dry method were described. The partitioning process, some typical concepts and some results of the dry method were presented, and the problems found in the dry method so far were pointed out. A partitioning study program was suggested.

  20. A pseudo-statistical approach to treat choice uncertainty: the example of partitioning allocation methods

    NARCIS (Netherlands)

    Mendoza Beltran, A.; Heijungs, R.; Guinée, J.; Tukker, A.

    2016-01-01

    Purpose: Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA), such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagates this source of uncertainty for all relevant processes

  1. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers' route choice preferences and decomposes the urban road network into multiple layers effectively. Utilizing generalized travel time as the road impedance function, the method builds a new multilayer and multitasking road network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we make a comparative analysis of the proposed path search method against current advanced optimal path algorithms. The results demonstrate that the proposed method can increase road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers, respectively. Therefore, this method is a great breakthrough in the field of urban road network guidance.
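    The core of any such route search is a label-setting shortest-path pass over generalized travel time. A minimal Dijkstra sketch in which the arc impedance folds a traveler-preference weight (here, toll aversion) into the base travel time; the tiny network and the preference model are illustrative assumptions, not the paper's implementation:

```python
import heapq

def shortest_path(adj, src, dst, pref):
    """Dijkstra over generalized impedance = time * (1 + pref * toll)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, time, toll in adj.get(u, []):
            cost = d + time * (1.0 + pref * toll)
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

adj = {"A": [("B", 4, 0), ("C", 2, 1)], "B": [("D", 5, 0)],
       "C": [("D", 8, 0)], "D": []}
print(shortest_path(adj, "A", "D", pref=0.5))   # prefers the toll-free route
```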

  2. Stability analysis of a partitioned iterative method for steady free surface flow

    Science.gov (United States)

    Demeester, Toon; Degroote, Joris; Vierendeels, Jan

    2018-02-01

    This note considers the steady free surface (FS) flow problem as encountered in the paper by van Brummelen et al. [1]. In that paper, steady flow of water in a two-dimensional slice of an infinitely wide open channel with a particular bottom wall is calculated as the first step in the development of a 3D surface fitting method for steady flow around ships. In these water-air flows, the influence of air is usually negligible due to the large difference in density. Contrary to surface capturing methods which are typically multiphase techniques (such as the volume-of-fluid method), fitting methods usually consider only the water phase. The latter approach requires appropriate FS boundary conditions. The dynamic boundary condition (DBC) used here assumes that the pressure is constant (atmospheric) at the FS and the shear stresses are zero. The kinematic boundary condition (KBC) states that the FS is impermeable.

  3. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, the effects of the prior mean and variance are determined as a function of the amount of test data available.
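    The prior-sensitivity question can be reproduced with the conjugate Gamma-Poisson model: with a Gamma(a, b) prior on the failure rate and k failures observed in T device-hours, the posterior mean is (a + k)/(b + T) versus the MLE k/T. A sketch with illustrative numbers:

```python
# Effect of prior mean a/b and variance a/b**2 on the Bayes estimate
k, T = 3, 1000.0            # observed failures and device-hours (illustrative)
for prior_mean, prior_var in [(0.003, 1e-6), (0.003, 1e-4), (0.03, 1e-4)]:
    b = prior_mean / prior_var
    a = prior_mean * b
    bayes = (a + k) / (b + T)
    print(f"prior mean {prior_mean}, var {prior_var}: "
          f"Bayes {bayes:.5f}  MLE {k / T:.5f}")
```

A tight prior (small variance) dominates the data and pins the estimate near the prior mean; a diffuse prior lets the estimate track the MLE.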

  4. Applications of Bayesian Phylodynamic Methods in a Recent U.S. Porcine Reproductive and Respiratory Syndrome Virus Outbreak

    Directory of Open Access Journals (Sweden)

    Mohammad A. Alkhamis

    2016-02-01

    Classical phylogenetic methods, such as neighbor-joining or maximum likelihood trees, provide limited inferences about the evolution of important pathogens and ignore important evolutionary parameters and uncertainties, which in turn limits decision making related to surveillance, control and prevention resources. Bayesian phylodynamic models have recently been used to test research hypotheses related to the evolution of infectious agents. However, few studies have attempted to model the evolutionary dynamics of porcine reproductive and respiratory syndrome virus (PRRSV) and, to the authors' knowledge, no attempt has been made to use large volumes of routinely collected data, sometimes referred to as big data, in the context of animal disease surveillance. The objective of this study was to explore and discuss the applications of Bayesian phylodynamic methods for modeling the evolution and spread of a notable 1-7-4 RFLP-type PRRSV between 2014 and 2015. A convenience sample of 288 ORF5 sequences was collected from 5 swine production systems in the United States between September 2003 and March 2015. Using coalescent and discrete-trait phylodynamic models, we were able to infer the population growth and demographic history of the virus, identify the most likely ancestral system (root state posterior probability = 0.95) and reveal significant dispersal routes (Bayes factor > 6) of viral exchange among systems. Results indicate that currently circulating viruses are evolving rapidly and show a higher level of relative genetic diversity over time when compared to earlier relatives. The biological soundness of the model results is supported by the finding that sow farms were responsible for PRRSV spread within the systems. Such results cannot be obtained by traditional phylogenetic methods; our results therefore provide a methodological framework for molecular epidemiological modeling of new PRRSV outbreaks and demonstrate the prospects of phylodynamic

  5. The evolution of autodigestion in the mushroom family Psathyrellaceae (Agaricales) inferred from Maximum Likelihood and Bayesian methods.

    Science.gov (United States)

    Nagy, László G; Urban, Alexander; Orstadius, Leif; Papp, Tamás; Larsson, Ellen; Vágvölgyi, Csaba

    2010-12-01

    Recently developed comparative phylogenetic methods offer a wide spectrum of applications in evolutionary biology, although it is generally accepted that their statistical properties are incompletely known. Here, we examine and compare the statistical power of the ML and Bayesian methods with regard to selection of best-fit models of fruiting-body evolution and hypothesis testing of ancestral states on a real-life data set of a physiological trait (autodigestion) in the family Psathyrellaceae. Our phylogenies are based on the first multigene data set generated for the family. Two different coding regimes (binary and multistate) and two data sets differing in taxon sampling density are examined. The Bayesian method outperformed Maximum Likelihood with regard to statistical power in all analyses. This is particularly evident if the signal in the data is weak, i.e. in cases when the ML approach does not provide support to choose among competing hypotheses. Results based on binary and multistate coding differed only modestly, although it was evident that multistate analyses were less conclusive in all cases. It seems that increased taxon sampling density has favourable effects on inference of ancestral states, while model parameters are influenced to a smaller extent. The model best fitting our data implies that the rate of losses of deliquescence equals zero, although model selection in ML does not provide proper support to reject three of the four candidate models. The results also support the hypothesis that non-deliquescence (lack of autodigestion) has been ancestral in Psathyrellaceae, and that deliquescent fruiting bodies represent the preferred state, having evolved independently several times during evolution. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. Schinus terebinthifolius countercurrent chromatography (Part III): Method transfer from small countercurrent chromatography column to preparative centrifugal partition chromatography ones as a part of method development.

    Science.gov (United States)

    das Neves Costa, Fernanda; Hubert, Jane; Borie, Nicolas; Kotland, Alexis; Hewitson, Peter; Ignatova, Svetlana; Renault, Jean-Hugues

    2017-03-03

    Countercurrent chromatography (CCC) and centrifugal partition chromatography (CPC) are support-free liquid-liquid chromatography techniques sharing the same basic principles and features. Method transfer has previously been demonstrated for both techniques, but never from one to the other. This study aimed to show such feasibility using the fractionation of a Schinus terebinthifolius berries dichloromethane extract as a case study. Heptane-ethyl acetate-methanol-water (6:1:6:1, v/v/v/v) was used as the solvent system, with masticadienonic and 3β-masticadienolic acids as target compounds. The optimized separation methodology previously described in Parts I and II was scaled up from an analytical hydrodynamic CCC column (17.4 mL) to preparative hydrostatic CPC instruments (250 mL and 303 mL) as a part of method development. Flow rate and sample loading were further optimized on CPC. Mobile phase linear velocity is suggested as a transfer-invariant parameter if the CPC column contains a sufficient number of partition cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Quantitative methods in ethnobotany and ethnopharmacology: considering the overall flora--hypothesis testing for over- and underused plant families with the Bayesian approach.

    Science.gov (United States)

    Weckerle, Caroline S; Cabras, Stefano; Castellanos, Maria Eugenia; Leonti, Marco

    2011-09-01

    We introduce and explain the advantages of the Bayesian approach and exemplify the method with an analysis of the medicinal flora of Campania, Italy. The Bayesian approach is a new method that allows medicinal floras to be compared with the overall flora of a given area and over- and underused plant families to be investigated. In contrast to previously used methods (regression analysis and the binomial method), it considers the inherent uncertainty around the analyzed data. The medicinal flora, with 423 species, was compiled based on nine studies of local medicinal plant use in Campania. The total flora comprises 2237 species belonging to 128 families. Statistical analysis was performed with the Bayesian method and the binomial method. An approximated χ(2)-test was used to analyze the relationship between use categories and higher taxonomic groups. Among the larger plant families, we find the Lamiaceae, Rosaceae, and Malvaceae to be overused in the local medicine of Campania, and the Orchidaceae, Caryophyllaceae, Poaceae, and Fabaceae to be underused compared to the overall flora. Furthermore, specific medicinal uses tend to be correlated with taxonomic plant groups; for example, the monocots are heavily used for urological complaints. Testing for over- and underused taxonomic groups of a flora with the Bayesian method is easy to adopt and can readily be calculated in Excel spreadsheets using the inverse beta function (INV.BETA). In contrast to the binomial method, the presented method is also suitable for small datasets; with larger datasets the two methods tend to converge. However, results are generally more conservative with the Bayesian method, which points out fewer families as over- or underused. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
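    The spreadsheet recipe translates directly to scipy: under a Beta(1, 1) prior, the posterior for a family's share of the medicinal flora is Beta(1 + x, 1 + n - x), and a family is flagged when the 95% credible interval excludes its share of the overall flora. The example counts below are illustrative, not the paper's:

```python
from scipy.stats import beta

def classify(x, n, q, prior=(1.0, 1.0)):
    """x of n medicinal species belong to the family; q = its share of the
    overall flora. Compare q against the 95% Beta credible interval."""
    lo = beta.ppf(0.025, prior[0] + x, prior[1] + n - x)
    hi = beta.ppf(0.975, prior[0] + x, prior[1] + n - x)
    return "overused" if lo > q else "underused" if hi < q else "as expected"

# Illustrative family: 30 of 423 medicinal species vs 80 of 2237 total species
print(classify(30, 423, 80 / 2237))   # -> overused
```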

  8. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    Science.gov (United States)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
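    A toy version of the negative-correlation idea using ordered dithering: thresholding one channel with a Bayer matrix and the other with its complement makes the two channels' quantization errors anticorrelated, pushing the error into chrominance rather than luminance. Everything below (matrix, image, channels) is an illustrative sketch, not the paper's exact procedure:

```python
import numpy as np

# 4x4 Bayer ordered-dither threshold matrix, normalized to [0, 15/16]
bayer = np.array([[0, 8, 2, 10], [12, 4, 14, 6],
                  [3, 11, 1, 9], [15, 7, 13, 5]]) / 16.0

def dither(img, thresholds):
    th = np.tile(thresholds, (img.shape[0] // 4, img.shape[1] // 4))
    return (img > th).astype(float)

rng = np.random.default_rng(0)
red = rng.uniform(size=(64, 64))
green = red.copy()                         # same signal in both channels
r_out = dither(red, bayer)
g_out = dither(green, 15 / 16 - bayer)     # complementary thresholds

err_corr = np.corrcoef((r_out - red).ravel(), (g_out - green).ravel())[0, 1]
print("error correlation:", round(err_corr, 2))   # strongly negative
```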

  9. A composite experimental dynamic substructuring method based on partitioned algorithms and localized Lagrange multipliers

    Science.gov (United States)

    Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca

    2018-02-01

    Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in the time domain (e.g. impulse-based substructuring) and the frequency domain (i.e. Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with the consequent uncertainty propagation issues, associated either with experimental errors or with modelling assumptions. Nonetheless, a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of their exploitation in a complementary way to expedite a hybrid experiment/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on five-DoF linear/non-linear chain-like systems that include typical aleatoric uncertainties emerging from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study composed of a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.

  10. Using Bayesian network and AHP method as a marketing approach tools in defining tourists’ preferences

    Directory of Open Access Journals (Sweden)

    Nataša Papić-Blagojević

    2012-04-01

    The marketing approach is associated with market conditions and with achieving long-term profitability of a company by satisfying consumers' needs. In tourism, this approach does not have to be related only to promoting one tourist destination; it also concerns the relation between a travel agency and its clients, in the sense that travel agencies adjust their offers to their clients' needs. It is therefore important to analyze the behavior of tourists in earlier periods with consideration of their preferences. Using a Bayesian network, the connections between tourists who have similar tastes and the relationships between them can be displayed graphically. On the other hand, the analytic hierarchy process (AHP) is used to rank tourist attractions, also relying on past experience. In this paper we examine possible applications of these two models in tourism in Serbia. The example is hypothetical, but it will serve as a base for future research. Three types of tourism were chosen as representative for Vojvodina (cultural, rural and business tourism) because they are the bright spots of touristic development in this area. Applied to these forms, the analytic hierarchy process has shown its strength in predicting tourists' preferences.

  11. Overall performance optimization of a spiral pipe type heater by fluid- structure interaction modeling and partitioning screening method

    Directory of Open Access Journals (Sweden)

    Lei Guo

    2018-03-01

    A spiral pipe type heater is applied in natural gas transportation systems to inhibit gas hydrate formation, but fracture failure often happens at the joint between a coil pipe and a gathering pipe. To understand the mechanical behavior of the spiral pipe heater, a mechanical model of the coil pipe acted on by the gas flow is constructed, and the mechanical characteristics at the fracture point are obtained by numerical calculation. The relations between the angle parameters and the axial force, shear force, bending moment and stress of the structure are then obtained. Comparative calculations of heat exchange before and after structural adjustment are performed to find optimized structural parameters with better mechanical properties and a high heating rate. The study finds that increasing an angle parameter improves the mechanical properties but decreases the heat transfer performance. A coordination method is used to resolve this contradiction between heat transfer performance and mechanical properties and reach an overall performance optimization. The proposed partitioning screening method can noticeably and conveniently improve the heating efficiency and mechanical properties of the heater.

  12. Partitioning sparse rectangular matrices for parallel processing

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, T.G.

    1998-05-01

    The authors are interested in partitioning sparse rectangular matrices for parallel processing. The partitioning problem has been well-studied in the square symmetric case, but the rectangular problem has received very little attention. They will formalize the rectangular matrix partitioning problem and discuss several methods for solving it. They will extend the spectral partitioning method for symmetric matrices to the rectangular case and compare this method to three new methods -- the alternating partitioning method and two hybrid methods. The hybrid methods will be shown to be best.
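    The flavor of the spectral extension can be sketched with the SVD: the second singular vectors of a rectangular matrix play the role the Fiedler vector plays for square symmetric graphs, and rows and columns are split by sign. The proper spectral method scales the matrix first (omitted here), so this is only an illustrative sketch:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import svds

# A sparse rectangular matrix to bipartition for parallel processing
A = sprandom(60, 40, density=0.1, random_state=0, format="csr")

u, s, vt = svds(A, k=2)                 # two leading singular triplets
order = np.argsort(s)[::-1]             # sort descending to be safe
u2, v2 = u[:, order[1]], vt[order[1], :]

row_part = (u2 >= 0).astype(int)        # 0/1 partition of the 60 rows
col_part = (v2 >= 0).astype(int)        # 0/1 partition of the 40 columns
print(row_part.sum(), "rows and", col_part.sum(), "columns in part 1")
```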

  13. Classification using Bayesian neural nets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)

    1995-01-01

    Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach such as overfitting. However, an implementation of the full Bayesian approach to

  14. A novel method for the determination of adsorption partition coefficients of minor gases in a shale sample by headspace gas chromatography.

    Science.gov (United States)

    Zhang, Chun-Yun; Hu, Hui-Chao; Chai, Xin-Sheng; Pan, Lei; Xiao, Xian-Ming

    2013-10-04

    A novel method has been developed for the determination of the adsorption partition coefficient (Kd) of minor gases in shale. The method uses samples of two different sizes (masses) of the same material, from which the partition coefficient of the gas can be determined by two independent headspace gas chromatographic (HS-GC) measurements. Equilibrium for the model gas (ethane) was achieved in 5 h at 120°C. The method also involves establishing an equation based on the Kd at a higher equilibrium temperature, from which the Kd at a lower temperature can be calculated. Although the HS-GC method requires some time and effort, it is simpler and quicker than the isothermal adsorption method that is in widespread use today. As a result, the method is simple and practical and can be a valuable tool for shale gas-related research and applications. Copyright © 2013 Elsevier B.V. All rights reserved.
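    One plausible mass-balance reading of the two-mass design (an assumption for illustration, not the paper's published equations): if each gram of shale releases an unknown amount g0 of gas that partitions between the headspace (volume Vg_i) and the solid with coefficient Kd, the headspace signal is A_i = f*g0*m_i/(Vg_i + Kd*m_i), and the ratio of the two measurements eliminates f*g0 and gives Kd in closed form:

```python
def kd_from_two_vials(A1, A2, m1, m2, Vg1, Vg2):
    """Solve A1/A2 = [m1*(Vg2 + Kd*m2)] / [m2*(Vg1 + Kd*m1)] for Kd.
    Assumed mass-balance model, not the paper's exact equations."""
    r = A1 / A2
    return (m1 * Vg2 - r * m2 * Vg1) / (m1 * m2 * (r - 1.0))

# Illustrative numbers: 1 g and 4 g of shale in 20 mL of headspace each
print(kd_from_two_vials(A1=90.0, A2=250.0, m1=1.0, m2=4.0,
                        Vg1=20.0, Vg2=20.0))   # Kd in mL/g
```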

  15. Sources of CO{sub 2} efflux from soil and review of partitioning methods

    Energy Technology Data Exchange (ETDEWEB)

    Kuzyakov, Y. [University of Hohenheim, Stuttgart (Germany). Institute of Soil Science and Land Evaluation

    2006-03-15

    Five main biogenic sources of CO{sub 2} efflux from soils have been distinguished and described according to their turnover rates and the mean residence time of carbon. They are root respiration, rhizomicrobial respiration, decomposition of plant residues, the priming effect induced by root exudation or by addition of plant residues, and basal respiration by microbial decomposition of soil organic matter (SOM). These sources can be grouped in several combinations to summarize CO{sub 2} efflux from the soil including: root-derived CO{sub 2}, plant-derived CO{sub 2}, SOM-derived CO{sub 2}, rhizosphere respiration, heterotrophic microbial respiration (respiration by heterotrophs), and respiration by autotrophs. These distinctions are important because without separation of SOM-derived CO{sub 2} from plant-derived CO{sub 2}, measurements of total soil respiration have very limited value for evaluation of the soil as a source or sink of atmospheric CO{sub 2} and for interpreting the sources of CO{sub 2} and the fate of carbon within soils and ecosystems. Additionally, the processes linked to the five sources of CO{sub 2} efflux from soil have various responses to environmental variables and consequently to global warming. This review describes the basic principles and assumptions of the following methods, which allow SOM-derived and root-derived CO{sub 2} efflux to be separated under laboratory and field conditions: root exclusion techniques, shading and clipping, tree girdling, regression, component integration, excised roots and in situ root respiration; continuous and pulse labeling, {sup 13}C natural abundance and FACE, and radiocarbon dating and bomb-{sup 14}C. Short sections cover the separation of the respiration of autotrophs from that of heterotrophs, i.e. the separation of actual root respiration from microbial respiration, as well as methods allowing the amount of CO{sub 2} evolved by decomposition of plant residues and by priming effects to be estimated. All

  16. Strategies for Partitioning Clock Models in Phylogenomic Dating: Application to the Angiosperm Evolutionary Timescale.

    Science.gov (United States)

    Foster, Charles S P; Ho, Simon Y W

    2017-10-01

    Evolutionary timescales can be inferred from molecular sequence data using a Bayesian phylogenetic approach. In these methods, the molecular clock is often calibrated using fossil data. The uncertainty in these fossil calibrations is important because it determines the limiting posterior distribution for divergence-time estimates as the sequence length tends to infinity. Here, we investigate how the accuracy and precision of Bayesian divergence-time estimates improve with the increased clock-partitioning of genome-scale data into clock-subsets. We focus on a data set comprising plastome-scale sequences of 52 angiosperm taxa. There was little difference among the Bayesian date estimates whether we chose clock-subsets based on patterns of among-lineage rate heterogeneity or relative rates across genes, or by random assignment. Increasing the degree of clock-partitioning usually led to an improvement in the precision of divergence-time estimates, but this increase was asymptotic to a limit presumably imposed by fossil calibrations. Our clock-partitioning approaches yielded highly precise age estimates for several key nodes in the angiosperm phylogeny. For example, when partitioning the data into 20 clock-subsets based on patterns of among-lineage rate heterogeneity, we inferred crown angiosperms to have arisen 198-178 Ma. This demonstrates that judicious clock-partitioning can improve the precision of molecular dating based on phylogenomic data, but the meaning of this increased precision should be considered critically. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  17. Lifetime estimates of a fusion reactor first wall by linear damage summation and strain range partitioning methods

    International Nuclear Information System (INIS)

    Liu, K.C.; Grossbeck, M.L.

    1979-01-01

    A generalized model of a first wall made of 20% cold-worked steel was examined for neutron wall loadings ranging from 2 to 5 MW/m². A spectrum of simplified on-off duty cycles was assumed with a 95% burn time. Independent evaluations of cyclic lifetimes were based on two methods: the method of linear damage summation currently employed in ASME high-temperature design Code Case N-47, and that of strain range partitioning being studied for inclusion in the design code. An important point is that the latter method can incorporate the known decrease in ductility for materials subject to irradiation as a parameter, so low-cycle fatigue behavior can be estimated for irradiated material. Lifetimes predicted by the two methods agree reasonably well despite their diversity in concept. The lack of high-cycle fatigue data for the material tested at temperatures within the range of our interest precludes drawing conclusions on the accuracy of the predicted results, but such data are forthcoming. The analysis includes stress relaxation due to thermal and irradiation-induced creep. Reduced ductility values from irradiations that simulate the environment of the first wall of a fusion reactor were used to estimate the lifetime of the first wall under irradiation. These results indicate that 20% cold-worked type 316 stainless steel could be used as a first-wall material meeting an 8 to 10 MW·year/m² lifetime goal for a neutron wall loading of about 2 MW/m² and a maximum temperature of about 500°C

  18. Bayesian and Classical Machine Learning Methods: A Comparison for Tree Species Classification with LiDAR Waveform Signatures

    Directory of Open Access Journals (Sweden)

    Tan Zhou

    2017-12-01

    A plethora of information contained in full-waveform (FW) Light Detection and Ranging (LiDAR) data offers prospects for characterizing vegetation structures. This study aims to investigate the capacity of FW LiDAR data alone for tree species identification through the integration of waveform metrics with machine learning methods and Bayesian inference. Specifically, we first conducted automatic tree segmentation based on the waveform-based canopy height model (CHM) using three approaches: TreeVaW, watershed algorithms and the combination of TreeVaW and watershed (TW) algorithms. Subsequently, Random forest (RF) and Conditional inference forest (CF) models were employed to identify important tree-level waveform metrics derived from three distinct sources (raw waveforms, composite waveforms and the waveform-based point cloud), as well as the combined variables from these three sources. Further, we discriminated tree (gray pine, blue oak, interior live oak) and shrub species through the RF, CF and Bayesian multinomial logistic regression (BMLR) using important waveform metrics identified in this study. Results of the tree segmentation demonstrated that the TW algorithms outperformed the other algorithms for delineating individual tree crowns. The CF model overcomes the waveform metric selection bias of the RF model, which favors correlated metrics, and enhances the accuracy of subsequent classification. We also found that composite waveforms are more informative than raw waveforms and the waveform-based point cloud for characterizing tree species in our study area. Both the classical machine learning methods (RF and CF) and the BMLR generated satisfactory average overall accuracies (74% for the RF, 77% for the CF and 81% for the BMLR), and the BMLR slightly outperformed the other two methods. However, all three methods suffered from low individual classification accuracy for the blue oak, which is prone to being misclassified as the interior live oak due

  19. Effect of partitioning the nonfiber carbohydrate fraction and neutral detergent fiber method on digestibility of carbohydrates by dairy cows.

    Science.gov (United States)

    Tebbe, A W; Faulkner, M J; Weiss, W P

    2017-08-01

    Many nutrition models rely on summative equations to estimate feed and diet energy concentrations. These models partition feed into nutrient fractions and multiply the fractions by their estimated true digestibility, and the digestible mass provided by each fraction is then summed and converted to an energy value. Nonfiber carbohydrate (NFC) is used in many models. Although it behaves as a nutritionally uniform fraction, it is a heterogeneous mixture of components. To reduce the heterogeneity, we partitioned NFC into starch and residual organic matter (ROM), which is calculated as 100 - CP - LCFA - ash - starch - NDF, where crude protein (CP), long-chain fatty acids (LCFA), ash, starch, and neutral detergent fiber (NDF) are percentages of DM. However, the true digestibility of ROM is unknown, and because NDF is contaminated with both ash and CP, those components are subtracted twice. The effect of ash and CP contamination of NDF on the in vivo digestibility of NDF and ROM was evaluated using data from 2 total-collection digestibility experiments with lactating dairy cows. Digestibility of NDF was greater when corrected for ash and CP than without correction. Conversely, ROM apparent digestibility decreased when NDF was corrected for contamination. Although correcting for contamination statistically increased NDF digestibility, the effect was small; the average increase was 3.4%. The decrease in ROM digestibility was 7.4%. The true digestibility of ROM is needed to incorporate ROM into summative equations. Data from multiple digestibility experiments (38 diets) using dairy cows were collated, and ROM concentrations were regressed on the concentration of digestible ROM (ROM was calculated without adjusting for ash and CP contamination). The estimated true digestibility coefficient of ROM was 0.96 (SE = 0.021), and metabolic fecal ROM was 3.43 g/100 g of dry matter intake (SE = 0.30). Using a smaller data set (7 diets), estimated true digestibility of ROM when calculated
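    The regression described above is the classical Lucas test: regressing digestible ROM on ROM concentration across diets, the slope estimates true digestibility and the negative intercept the metabolic fecal loss. A sketch with synthetic numbers built only to echo the reported 0.96 and 3.43 g/100 g of DMI, not the study's data:

```python
import numpy as np

# Synthetic "38 diets": ROM concentration (% of DM) and digestible ROM
rng = np.random.default_rng(1)
rom = rng.uniform(15, 35, size=38)
drom = 0.96 * rom - 3.43 + rng.normal(scale=0.5, size=38)

# Lucas test: slope = true digestibility, -intercept = metabolic fecal ROM
slope, intercept = np.polyfit(rom, drom, 1)
print(f"true digestibility ~ {slope:.2f}, "
      f"metabolic fecal ROM ~ {-intercept:.2f} g/100 g of DMI")
```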

  20. Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods

    Science.gov (United States)

    Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste; Ulrich, Craig; Peterson, John E.; Gusmeroli, Alessio; Hubbard, Susan S.

    2017-04-01

    This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE = 2.9 cm), with a spatial sampling of 10 cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding a high precision (RMSE = 6.0 cm) and a fine spatial sampling (4 cm × 4 cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR) as well as the topographic correlation for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE = 6.0 cm), at 0.5 m resolution and over the lidar domain (750 m × 700 m).
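
    A conceptual sketch, not the authors' implementation, of the kind of Bayesian update used here: a topography-based prior for snow depth (from a regression on the snow-free DEM) is combined with sparse probe/GPR point measurements under Gaussian assumptions. All names and numbers are hypothetical:

```python
# Gaussian Bayesian update of a DEM-based snow-depth prior with point data.
import numpy as np

# Prior at one map cell: depth regressed on microtopography (hypothetical fit).
elev_anomaly = -0.15                       # m, local microtopographic low
prior_mean = 0.45 - 1.2 * elev_anomaly     # deeper snow in topographic lows
prior_var = 0.10**2

# Sparse in situ measurements at/near the cell, with their error variances.
obs = np.array([0.62, 0.58])               # probe and GPR depths, m
obs_var = np.array([0.02**2, 0.029**2])    # probe, GPR measurement variances

# Conjugate Gaussian update (precision-weighted average).
post_prec = 1.0 / prior_var + np.sum(1.0 / obs_var)
post_mean = (prior_mean / prior_var + np.sum(obs / obs_var)) / post_prec
print(f"posterior depth: {post_mean:.2f} m +/- {post_prec**-0.5:.3f} m")
```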

  1. Bayesian analysis in plant pathology.

    Science.gov (United States)

    Mila, A L; Carriquiry, A L

    2004-09-01

    Bayesian methods are currently much discussed and applied in several disciplines from molecular biology to engineering. Bayesian inference is the process of fitting a probability model to a set of data and summarizing the results via probability distributions on the parameters of the model and unobserved quantities such as predictions for new observations. In this paper, after a short introduction to Bayesian inference, we present the basic features of Bayesian methodology using examples from sequencing genomic fragments, analyzing microarray gene-expression levels, reconstructing disease maps, and designing experiments.
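
    As a one-screen illustration of the workflow the abstract describes (fit a probability model, then summarize via posterior distributions), a hedged beta-binomial sketch for estimating disease incidence from a field sample; the counts are invented:

```python
# Beta-binomial posterior for disease incidence in a sampled field.
from scipy import stats

diseased, n = 12, 80                 # hypothetical scouting counts
prior = stats.beta(1, 1)             # uniform prior on incidence
posterior = stats.beta(1 + diseased, 1 + n - diseased)

print("posterior mean incidence:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```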

  2. Gentile statistics and restricted partitions

    Indian Academy of Sciences (India)

    We generalise these results to obtain an asymptotic formula for the restricted or coloured partitions p_k^s(n), the number of partitions of an integer n into summands that are s-th powers of integers, such that each power of a given integer may occur at most k times. While the method is not rigorous, it reproduces the ...
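
    A small dynamic-programming check of the quantity described, assuming the reading p_k^s(n) = number of partitions of n into s-th powers of integers with each distinct part used at most k times; a brute-force count like this is useful for validating any asymptotic formula at small n:

```python
# Count restricted partitions p_k^s(n): parts are s-th powers m**s,
# each distinct part usable at most k times.
def restricted_partitions(n, s, k):
    parts = []
    m = 1
    while m**s <= n:
        parts.append(m**s)
        m += 1
    ways = [1] + [0] * n
    for p in parts:                      # bounded-knapsack style DP
        new = [0] * (n + 1)
        for total in range(n + 1):
            for copies in range(k + 1):
                if total - copies * p < 0:
                    break
                new[total] += ways[total - copies * p]
        ways = new
    return ways[n]

# e.g. partitions of 10 into squares, each square used at most twice:
print(restricted_partitions(10, 2, 2))   # -> 2 (9+1 and 4+4+1+1)
```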

  3. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    Full Text Available The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration through a couple of simplified examples.

  4. Evaluation of non-animal methods for assessing skin sensitisation hazard: A Bayesian Value-of-Information analysis.

    Science.gov (United States)

    Leontaridou, Maria; Gabbert, Silke; Van Ierland, Ekko C; Worth, Andrew P; Landsiedel, Robert

    2016-07-01

    This paper offers a Bayesian Value-of-Information (VOI) analysis for guiding the development of non-animal testing strategies, balancing information gains from testing with the expected social gains and costs from the adoption of regulatory decisions. Testing is assumed to have value if, and only if, the information revealed from testing triggers a welfare-improving decision on the use (or non-use) of a substance. As an illustration, our VOI model is applied to a set of five individual non-animal prediction methods used for skin sensitisation hazard assessment, seven battery combinations of these methods, and 236 sequential 2-test and 3-test strategies. Their expected values are quantified and compared to the expected value of the local lymph node assay (LLNA) as the animal method. We find that battery and sequential combinations of non-animal prediction methods reveal a significantly higher expected value than the LLNA. This holds for the entire range of prior beliefs. Furthermore, our results illustrate that the testing strategy with the highest expected value does not necessarily have to follow the order of key events in the sensitisation adverse outcome pathway (AOP). 2016 FRAME.
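
    A stylized value-of-information calculation in the spirit described (not the paper's model): testing has value only when its outcome can flip the use/non-use decision, so the expected value of a test is the expected welfare under test-informed decisions minus the welfare of the best decision without testing. All probabilities and payoffs below are invented:

```python
# Toy VOI: does a sensitisation test with known sensitivity/specificity
# add expected social value over deciding without testing?
prior_sensitiser = 0.3            # prior belief the substance is a sensitiser
sens, spec = 0.85, 0.80           # hypothetical test operating characteristics
# welfare[(decision, true state)], decisions 'use'/'ban', states 'safe'/'sens'
welfare = {("use", "safe"): 10, ("use", "sens"): -50,
           ("ban", "safe"): -5, ("ban", "sens"): 0}

def best(p):
    """Expected welfare of the best decision given P(sensitiser) = p."""
    return max((1 - p) * welfare[(d, "safe")] + p * welfare[(d, "sens")]
               for d in ("use", "ban"))

p = prior_sensitiser
p_pos = sens * p + (1 - spec) * (1 - p)       # P(test positive)
post_pos = sens * p / p_pos                   # Bayes update on a positive
post_neg = (1 - sens) * p / (1 - p_pos)       # ... and on a negative
voi = p_pos * best(post_pos) + (1 - p_pos) * best(post_neg) - best(p)
print(f"expected value of testing: {voi:.2f} (test only if this exceeds its cost)")
```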

  5. Supplementary Material for: DRABAL: novel method to mine large high-throughput screening assays using Bayesian active learning

    KAUST Repository

    Soufan, Othman

    2016-01-01

    Abstract Background Mining high-throughput screening (HTS) assays is key for enhancing decisions in the area of drug repositioning and drug discovery. However, many challenges are encountered in the process of developing suitable and accurate methods for extracting useful information from these assays. Virtual screening and the wide variety of databases, methods and solutions proposed to date have not completely overcome these challenges. This study is based on a multi-label classification (MLC) technique for modeling correlations between several HTS assays, meaning that a single prediction represents a subset of assigned correlated labels instead of one label. Thus, the devised method provides an increased probability for more accurate predictions of compounds that were not tested in particular assays. Results Here we present DRABAL, a novel MLC solution that incorporates structure learning of a Bayesian network as a step to model dependency between the HTS assays. In this study, DRABAL was used to process more than 1.4 million interactions of over 400,000 compounds and analyze the existing relationships between five large HTS assays from the PubChem BioAssay Database. Compared to different MLC methods, DRABAL significantly improves the F1Score by about 22%, on average. We further illustrated the usefulness and utility of DRABAL through screening FDA approved drugs and reported ones that have a high probability of interacting with several targets, thus enabling drug-multi-target repositioning. Specifically, DRABAL suggests the drug Thiabendazole as a common activator of the NPC1 and Rab-9A proteins, both of which are targets of assays designed to identify treatment modalities for Niemann-Pick type C disease. Conclusion We developed a novel MLC solution based on a Bayesian active learning framework to overcome the challenge of lacking fully labeled training data and exploit actual dependencies between the HTS assays.

  6. Evidence Estimation for Bayesian Partially Observed MRFs

    NARCIS (Netherlands)

    Chen, Y.; Welling, M.

    2013-01-01

    Bayesian estimation in Markov random fields is very hard due to the intractability of the partition function. The introduction of hidden units makes the situation even worse due to the presence of potentially very many modes in the posterior distribution. For the first time we propose a method for estimating the evidence of such partially observed MRFs.

  7. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    Energy Technology Data Exchange (ETDEWEB)

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-differences (FD) and finite-elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods.

  8. Estimation of Mental Disorders Prevalence in High School Students Using Small Area Methods: A Hierarchical Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Ali Reza Soltanian

    2016-08-01

    Full Text Available Background Adolescence is one of the most important periods in human development, yet little is known about the prevalence of mental disorders among adolescents in different regions of Iran, especially southern Iran. Objectives This study was conducted to determine the prevalence of mental disorders among high school students in Bushehr province, south of Iran. Methods In this cross-sectional study, 286 high school students were recruited by multi-stage random sampling in Bushehr province in 2015. The general health questionnaire (GHQ-28) was used to assess mental disorders. The small area method, under the hierarchical Bayesian approach, was used to determine the prevalence of mental disorders and for data analysis. Results Of the 286 questionnaires, only 182 were completely filled in and evaluated (a response rate of 70.5%). Of the students, 58.79% were male and 41.21% were female. Across all students, the prevalence of mental disorders in Bushehr, Dayyer, Deylam, Kangan, Dashtestan, Tangestan, Genaveh, and Dashty was 0.48, 0.42, 0.45, 0.52, 0.41, 0.47, 0.42, and 0.43, respectively. Conclusions Based on this study, the prevalence of mental disorders among adolescents is increasing in Bushehr province counties. The lack of a national policy in this area is a serious obstacle to access to mental health and wellbeing services.
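
    A compact sketch of the small-area idea: county-level prevalences are modeled hierarchically so that counties with few respondents borrow strength from the province as a whole. This beta-binomial empirical-Bayes shrinkage is a simplification of the full hierarchical Bayesian model, with invented counts:

```python
# Empirical-Bayes shrinkage of county prevalence toward the provincial mean.
import numpy as np

cases = np.array([24, 8, 9, 26, 21, 14, 11, 13])    # hypothetical GHQ-28 positives
n     = np.array([50, 19, 20, 50, 51, 30, 26, 30])  # sampled students per county

p_hat = cases / n
m = p_hat.mean()                          # provincial mean prevalence
v = p_hat.var()                           # between-county variance
strength = m * (1 - m) / v - 1            # method-of-moments beta prior strength
a, b = m * strength, (1 - m) * strength

posterior_mean = (cases + a) / (n + a + b)   # shrunken county estimates
for raw, shrunk in zip(p_hat, posterior_mean):
    print(f"raw {raw:.2f} -> shrunk {shrunk:.2f}")
```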

  9. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In this paper we explain the successful bias field correction properties of N3 by showing that it implicitly uses the same generative models and computational strategies as expectation maximization (EM) based bias field correction methods. We demonstrate experimentally that purely EM-based methods are capable of bias field correction comparable to that of N3.

  10. An evolutionary game theoretical model shows the limitations of the additive partitioning method for interpreting biodiversity experiments

    NARCIS (Netherlands)

    Vermeulen, Peter J.; Ruijven, van Jasper; Anten, Niels P.R.; Werf, van der Wopke; Satake, Akiko

    2017-01-01

    1. The relationship between diversity and ecosystem functioning is often analysed by partitioning the change in species performance in mixtures into a complementarity effect (CE) and a selection effect (SE). There is continuing ambiguity in the literature on the interpretation of these effects.

  11. A matrix free, partitioned solution of fluid-structure interaction problems using finite volume and finite element methods

    CSIR Research Space (South Africa)

    Suliman, Ridhwaan

    2015-01-01

    Full Text Available A fully-coupled partitioned finite volume–finite volume and hybrid finite volume–finite element fluid-structure interaction scheme is presented. The fluid domain is modelled as a viscous incompressible isothermal region governed by the Navier-Stokes equations.

  12. Comment on "Bayesian evidence: can we beat MultiNest using traditional MCMC methods", by Rutger van Haasteren (arXiv:0911.2150)

    OpenAIRE

    Feroz, F.; Hobson, M. P.; Trotta, R.

    2010-01-01

    In arXiv:0911.2150, Rutger van Haasteren seeks to criticize the nested sampling algorithm for Bayesian data analysis in general and its MultiNest implementation in particular. He introduces a new method for evidence evaluation based on the idea of Voronoi tessellation, requiring samples from the posterior distribution obtained through MCMC-based methods. He compares its accuracy and efficiency with MultiNest, concluding that it outperforms MultiNest in several cases.

  13. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements.

  14. Bayesian Exploratory Factor Analysis

    Science.gov (United States)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517

  15. How to combine correlated data sets-A Bayesian hyperparameter matrix method

    Science.gov (United States)

    Ma, Y.-Z.; Berndsen, A.

    2014-07-01

    We construct a "hyperparameter matrix" statistical method for performing joint analyses of multiple correlated astronomical data sets, in which the weights of the data sets are determined by their own statistical properties. This method is a generalization of the hyperparameter method constructed by Lahav et al. (2000) and Hobson et al. (2002), which was designed to combine independent data sets. The advantage of our method is that it treats correlations between multiple data sets and gives appropriate relevant weights to multiple data sets with mutual correlations. We define a new "element-wise" product, which greatly simplifies the likelihood function with the hyperparameter matrix. We rigorously prove the simplified formula of the joint likelihood and show that it recovers the original hyperparameter method in the limit of no covariance between data sets. We then illustrate the method by applying it to a demonstrative toy model of fitting a straight line to two sets of data. We show that the hyperparameter matrix method can detect unaccounted systematic errors or underestimated errors in the data sets. Additionally, the ratio of Bayes factors provides a distinct indicator of the necessity of including hyperparameters. Our example shows that the likelihood we construct for joint analyses of correlated data sets can be widely applied to many astrophysical systems.

  16. Novel medium-throughput technique for investigating drug-cyclodextrin complexation by pH-metric titration using the partition coefficient method.

    Science.gov (United States)

    Dargó, Gergő; Boros, Krisztina; Péter, László; Malanga, Milo; Sohajda, Tamás; Szente, Lajos; Balogh, György T

    2018-05-05

    The present study aimed to develop a medium-throughput screening technique for the investigation of cyclodextrin (CD)-active pharmaceutical ingredient (API) complexes. Dual-phase potentiometric lipophilicity measurement, the gold standard technique, was combined with the partition coefficient method (plotting the reciprocal of the partition coefficients of APIs as a function of CD concentration). A general equation was derived for determining the stability constants of 1:1 CD-API complexes (K_1:1,CD) based solely on the change in partition coefficients of the neutral species (logP_o/w - logP_app), without measurement of the actual API concentrations. The experimentally determined logP value (-1.64) of 6-deoxy-6[(5/6)-fluoresceinylthioureido]-HPBCD (FITC-NH-HPBCD) was used to estimate the logP value (≈ -2.5 to -3) of (2-hydroxypropyl)-ß-cyclodextrin (HPBCD). The results suggested that the amount of HPBCD in the octanol phase can be considered inconsequential. The decrease in octanol volume due to octanol-CD complexation was taken into account, and a corrected octanol-water phase ratio was introduced. The K_1:1,CD values obtained by the developed method showed good accordance with the results from other orthogonal methods. Copyright © 2018 Elsevier B.V. All rights reserved.
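
    A sketch of the partition coefficient method as described: for a 1:1 complex confined to the aqueous phase, P_app = P / (1 + K[CD]), so plotting 1/P_app against [CD] gives a line with slope K/P and intercept 1/P. The numbers are synthetic, not from the paper:

```python
# Estimate K(1:1) from apparent partition coefficients vs. CD concentration.
# Model: 1/P_app = (1 + K*[CD]) / P  ->  linear in [CD].
import numpy as np

K_true, P_true = 2500.0, 200.0                 # M^-1 and o/w partition coeff.
cd = np.array([0.0, 1e-3, 2e-3, 4e-3, 8e-3])   # CD concentrations, M
p_app = P_true / (1.0 + K_true * cd)           # synthetic measurements

slope, intercept = np.polyfit(cd, 1.0 / p_app, 1)
K_est = slope / intercept                      # (K/P) / (1/P) = K
print(f"estimated K(1:1) = {K_est:.0f} M^-1, logP = {np.log10(1/intercept):.2f}")
```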

  17. Using Bayesian optimization method and FLEXPART tracer model to evaluate CO emission in East China in springtime.

    Science.gov (United States)

    Pan, X L; Kanaya, Y; Wang, Z F; Tang, X; Takigawa, M; Pakpong, P; Taketani, F; Akimoto, H

    2014-03-01

    Carbon monoxide (CO) is of great interest as a restriction factor for pollutants related to incomplete combustion. This study evaluated CO emissions in East China using the analytical Bayesian inverse method and observations at Mount Hua in springtime. The mixing ratio of CO at the receptor was calculated using the 5-day source-receptor relationship (SRR) simulated by a Lagrangian particle dispersion model (FLEXPART) and the CO emission flux. The stability of the inversion solution was evaluated on the basis of repeated random sampling simulations. The inversion results demonstrated that there were two city cluster regions (the Beijing-Tianjin-Hebei region and the lower reaches of the Yangtze River Delta) where the difference between the a priori (Intercontinental Chemical Transport Experiment-Phase B, INTEX-B) and the a posteriori was statistically significant, and that the a priori might underestimate the CO emission flux by 37%. A correction factor (a posteriori/a priori) of 1.26 was suggested for CO emissions in China in spring. The spatial distribution and magnitude of the CO emission flux were comparable to the latest regional emission inventory in Asia (REAS2.0). Nevertheless, further evaluation is still necessary in view of the larger uncertainties of both the analytical inversion and the bottom-up statistical approaches.
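
    The analytical Bayesian inversion referred to here is commonly written as the standard linear-Gaussian update of a prior emission vector with observations through the source-receptor matrix. A generic sketch (not the study's configuration), with tiny made-up dimensions:

```python
# Analytic Bayesian inversion: x_post = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)
import numpy as np

rng = np.random.default_rng(0)
n_src, n_obs = 5, 20
H = rng.random((n_obs, n_src))          # source-receptor relationship (SRR)
x_b = np.full(n_src, 1.0)               # prior emission fluxes (e.g., inventory)
B = np.diag((0.5 * x_b) ** 2)           # 50% prior uncertainty
R = np.diag(np.full(n_obs, 0.1 ** 2))   # observation error covariance

x_true = x_b * 1.37                     # pretend the prior underestimates by 37%
y = H @ x_true + rng.normal(0, 0.1, n_obs)

G = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_post = x_b + G @ (y - H @ x_b)
print("correction factors (posterior/prior):", np.round(x_post / x_b, 2))
```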

  18. Orbits for the Impatient: A Bayesian Rejection-sampling Method for Quickly Fitting the Orbits of Long-period Exoplanets

    Science.gov (United States)

    Blunt, Sarah; Nielsen, Eric L.; De Rosa, Robert J.; Konopacky, Quinn M.; Ryan, Dominic; Wang, Jason J.; Pueyo, Laurent; Rameau, Julien; Marois, Christian; Marchis, Franck; Macintosh, Bruce; Graham, James R.; Duchêne, Gaspard; Schneider, Adam C.

    2017-05-01

    We describe a Bayesian rejection-sampling algorithm designed to efficiently compute posterior distributions of orbital elements for data covering short fractions of long-period exoplanet orbits. Our implementation of this method, Orbits for the Impatient (OFTI), converges up to several orders of magnitude faster than two implementations of Markov Chain Monte Carlo (MCMC) in this regime. We illustrate the efficiency of our approach by showing that OFTI calculates accurate posteriors for all existing astrometry of the exoplanet 51 Eri b up to 100 times faster than a Metropolis-Hastings MCMC. We demonstrate the accuracy of OFTI by comparing our results for several orbiting systems with those of various MCMC implementations, finding the output posteriors to be identical within shot noise. We also describe how our algorithm was used to successfully predict the location of 51 Eri b six months in the future based on less than three months of astrometry. Finally, we apply OFTI to 10 long-period exoplanets and brown dwarfs, all but one of which have been monitored over less than 3% of their orbits, producing fits to their orbits from astrometric records in the literature.
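
    A stripped-down illustration of the rejection-sampling idea behind OFTI (not the published implementation): draw orbital parameters from their priors, compute the likelihood of the short astrometric arc, and accept draws with probability proportional to that likelihood. The toy below does this in one dimension with hypothetical data:

```python
# Toy OFTI-style rejection sampling: sample from prior, accept by likelihood.
import numpy as np

rng = np.random.default_rng(42)
t_obs = np.array([0.0, 0.2, 0.4])            # epochs (arbitrary units)
x_obs = np.array([1.00, 0.92, 0.70])         # hypothetical astrometry
sigma = 0.05

def model(period, phase, t):
    return np.cos(2 * np.pi * (t / period + phase))

# Draw from broad priors; the data cover only a short fraction of the orbit.
period = rng.uniform(1.0, 20.0, 200_000)
phase = rng.uniform(0.0, 1.0, 200_000)
resid = x_obs - model(period[:, None], phase[:, None], t_obs)
logL = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
accept = rng.uniform(size=logL.size) < np.exp(logL - logL.max())
posterior_periods = period[accept]
print(f"accepted {accept.sum()} samples; "
      f"period 68% interval: {np.percentile(posterior_periods, [16, 84])}")
```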

  19. Manifold Partition Discriminant Analysis.

    Science.gov (United States)

    Yang Zhou; Shiliang Sun

    2017-04-01

    We propose a novel algorithm for supervised dimensionality reduction named manifold partition discriminant analysis (MPDA). It aims to find a linear embedding space where the within-class similarity is achieved along the direction that is consistent with the local variation of the data manifold, while nearby data belonging to different classes are well separated. By partitioning the data manifold into a number of linear subspaces and utilizing the first-order Taylor expansion, MPDA explicitly parameterizes the connections of tangent spaces and represents the data manifold in a piecewise manner. While graph Laplacian methods capture only the pairwise interaction between data points, our method captures both pairwise and higher order interactions (using regional consistency) between data points. This manifold representation can help to improve the measure of within-class similarity, which further leads to improved performance of dimensionality reduction. Experimental results on multiple real-world data sets demonstrate the effectiveness of the proposed method.

  20. Comparison of an acetonitrile extraction/partitioning and “dispersive solid-phase extraction” method with classical multi-residue methods for the extraction of herbicide residues in barley samples

    NARCIS (Netherlands)

    Diez, C.; Traag, W.A.; Zommer, P.; Marinero, P.; Atienza, J.

    2006-01-01

    An acetonitrile/partitioning extraction and "dispersive solid-phase extraction (SPE)" method that provides high quality results with a minimum number of steps and a low solvent and glassware consumption was published in 2003. This method is suitable for the analysis of multiple classes of pesticide residues.

  1. Bayesian inference with ecological applications

    CERN Document Server

    Link, William A

    2009-01-01

    This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject; examples drawn from ecology and wildlife research; an essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference; companion website with analytical software and examples.

  2. A Bayesian taxonomic classification method for 16S rRNA gene sequences with improved species-level accuracy.

    Science.gov (United States)

    Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng

    2017-05-10

    Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification, or their classification results are unreliable. The unreliable results are due to the limitations in the existing methods which either lack solid probabilistic-based criteria to evaluate the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as the proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum levels based on the lowest common ancestors of multiple database hits for each query sequence, and further classification reliabilities are evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific for different taxonomic groups. Instead only a reference database is required for aligning to the query sequences, making our method easily applicable for different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools and we provide probabilistic-based confidence scores to evaluate the reliability of our taxonomic classification assignments based on multiple database matches to query sequences.

  3. Estimation of Land Surface Temperature through Blending MODIS and AMSR-E Data with the Bayesian Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Xiaokang Kou

    2016-01-01

    Full Text Available Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs retrieved from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of the merged LSTs reaches 100%. Because the depths of LST and soil temperature measurements differ, before validating the merged LST the station measurements were calibrated with an empirical equation between MODIS LST and 0-5 cm soil temperatures. The results showed that the accuracy of the merged LSTs increased with the quantity of utilized data: as the availability of utilized data increased from 25.2% to 91.4%, the RMSE of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with a gap-filling method in which MODIS LST gaps were filled directly with AMSR-E LST, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further study.

  4. Bayesian inference for data assimilation using least-squares finite element methods

    NARCIS (Netherlands)

    Dwight, R.P.

    2010-01-01

    It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow [1]. The approach was shown to be effective.

  5. Correction the Bias of Odds Ratio resulting from the Misclassification of Exposures in the Study of Environmental Risk Factors of Lung Cancer using Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Alireza Abadi

    2015-07-01

    Full Text Available Background & Objective: Inability to measure exposure exactly is a common problem in many epidemiological studies, especially cross-sectional studies. Depending on the extent of misclassification, results may be affected. Existing methods for solving this problem require substantial time and money and are not practical for some exposures. Recently, new methods have been proposed for 1:1 matched case-control studies that solve these problems to some extent. In the present study we aimed to extend the existing Bayesian method to adjust for misclassification in matched case-control studies with 1:2 matching. Methods: The standard Dirichlet prior distribution for a multinomial model was extended to allow information about the exposure-disease (OR) parameter to be imported into the model independently of the other parameters. Information available in the literature about the association between exposure and disease was used as prior information about the OR. To correct for misclassification, a sensitivity analysis was performed and results were obtained under three Bayesian models. Results: The results of the naive Bayesian model were similar to those of the classical model. The second Bayesian model, employing prior information about the OR, was heavily affected by that information. The third proposed model provided the greatest bias adjustment for the risks of heavy metals, smoking and drug abuse. This model showed that heavy metals are not an important risk factor, although the raw model (classical logistic regression) had detected this exposure as a factor influencing the incidence of lung cancer. Sensitivity analysis showed that the third model is robust to different levels of sensitivity and specificity. Conclusion: The present study showed that although for most exposures the results of the second and third models were similar, the proposed model was able to correct the misclassification to some extent.

  6. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Thompson, Grahame

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context of multiple studies, providing an overall estimate of the causal relationship between the phenotype and the outcome, and an assessment of its heterogeneity across studies. As an example, we estimate the causal relationship of blood concentrations of C-reactive protein on fibrinogen levels using data from 11 studies. These methods provide a flexible framework for efficient estimation of causal relationships derived from multiple studies. Issues discussed include weak instrument bias, analysis of binary outcome data such as disease risk, missing genetic data, and the use of haplotypes.

  7. Nonlinear tracking in a diffusion process with a Bayesian filter and the finite element method

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Thygesen, Uffe Høgsbro; Madsen, Henrik

    2011-01-01

    A new approach to nonlinear state estimation and object tracking from indirect observations of a continuous time process is examined. Stochastic differential equations (SDEs) are employed to model the dynamics of the unobservable state. Tracking problems in the plane subject to boundaries become complicated using SMC because Monte Carlo randomness is introduced. The finite element (FE) method solves the Kolmogorov equations of the SDE numerically on a triangular unstructured mesh for which boundary conditions to the state-space are simple to incorporate. The FE approach to nonlinear state estimation is suited for off-line data analysis because the computed smoothed state densities, maximum a posteriori parameter estimates and state sequence are deterministic conditional on the finite element mesh and the observations. The proposed method is conceptually similar to existing point-mass filtering approaches.

  8. Data Analytics of Mobile Serious Games: Applying Bayesian Data Analysis Methods

    Directory of Open Access Journals (Sweden)

    Heide Lukosch

    2018-03-01

    Full Text Available Traditional teaching methods in the field of resuscitation training show some limitations, while teaching the right actions in critical situations could increase the number of people saved after a cardiac arrest. For our study, we developed a mobile game to support the transfer of theoretical knowledge on resuscitation. The game was tested at three schools of further education, and data were collected from 171 players. To analyze this large data set, which varied in source and quality, different types of data modeling and analyses had to be applied. This approach showed its usefulness in analyzing large data sets from different sources. It revealed some interesting findings, such as that female players outperformed male ones, and that the game, which fosters informal, self-directed learning, is as efficient as the traditional formal learning method.

  9. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods

    Directory of Open Access Journals (Sweden)

    Bakos Jason D

    2010-04-01

    Full Text Available Abstract Background Likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. Results We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10× speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Conclusions Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).

  10. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods.

    Science.gov (United States)

    Zierke, Stephanie; Bakos, Jason D

    2010-04-12

    Likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains a high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10x speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).

  11. A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics

    Science.gov (United States)

    Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.

    2017-03-01

    Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.

  12. A New Bayesian Method to Identify the Environmental Factors That Influence Recent Migration

    Science.gov (United States)

    Faubet, Pierre; Gaggiotti, Oscar E.

    2008-01-01

    We present a new multilocus genotype method that makes inferences about recent immigration rates and identifies the environmental factors that are more likely to explain observed gene flow patterns. It also estimates population-specific inbreeding coefficients, allele frequencies, and local population FST's and performs individual assignments. We generate synthetic data sets to determine the region of the parameter space where our method is and is not able to provide accurate estimates. Our simulation study indicates that reliable results can be obtained when the global level of genetic differentiation (FST) is >1%, the number of loci is only 10, and sample sizes are of the order of 50 individuals per population. We illustrate our method by applying it to Pakistani human data, considering altitude and geographic distance as explanatory factors. Our results suggest that altitude explains better the genetic data than geographic distance. Additionally, they show that southern low-altitude populations have higher migration rates than northern high-altitude ones. PMID:18245344

  13. The state of the art of partitioning technology for long-lived actinides and fission products by solvent extraction method

    International Nuclear Information System (INIS)

    Ozawa, M.; Koma, Y.; Nomura, K.; Sano, Y.

    1998-04-01

    Japan launched an ambitious long-term program on partitioning and transmutation (P-T), called OMEGA, in 1988. Under the program PNC has been carrying out its R and D activities. A check and review process based on the progress made was conducted in fall 1998 by the STA (Science and Technology Agency). This report was prepared to submit to the STA the state of R and D activities on partitioning by solvent extraction in PNC over seven years (1990-1997). The paper describes the progress, results and future plans on (a) an improved PUREX process for the extraction of Np with Pu by valence control, (b) an improved TRUEX process for the extraction of minor actinides and (c) other potential solvents for the extraction of other long-lived FPs from spent fuels. (H. Itami)

  14. Bayesian methods for jointly estimating genomic breeding values of one continuous and one threshold trait.

    Directory of Open Access Journals (Sweden)

    Chonglong Wang

    Full Text Available Genomic selection has become a useful tool for animal and plant breeding. Currently, genomic evaluation is usually carried out using a single-trait model. However, a multi-trait model has the advantage of using information on the correlated traits, leading to more accurate genomic prediction. To date, joint genomic prediction for a continuous and a threshold trait using a multi-trait model is scarce and needs more attention. Based on the previously proposed methods BayesCπ for single continuous trait and BayesTCπ for single threshold trait, we developed a novel method based on a linear-threshold model, i.e., LT-BayesCπ, for joint genomic prediction of a continuous trait and a threshold trait. Computing procedures of LT-BayesCπ using Markov Chain Monte Carlo algorithm were derived. A simulation study was performed to investigate the advantages of LT-BayesCπ over BayesCπ and BayesTCπ with regard to the accuracy of genomic prediction on both traits. Factors affecting the performance of LT-BayesCπ were addressed. The results showed that, in all scenarios, the accuracy of genomic prediction obtained from LT-BayesCπ was significantly increased for the threshold trait compared to that from single trait prediction using BayesTCπ, while the accuracy for the continuous trait was comparable with that from single trait prediction using BayesCπ. The proposed LT-BayesCπ could be a method of choice for joint genomic prediction of one continuous and one threshold trait.

  15. A method for risk-informed safety significance categorization using the analytic hierarchy process and bayesian belief networks

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2004-01-01

    A risk-informed safety significance categorization (RISSC) categorizes the structures, systems, or components (SSCs) of a nuclear power plant (NPP) into two or more groups according to their safety significance, using both probabilistic and deterministic insights. In the conventional methods for RISSC, the SSCs are quantitatively categorized according to their importance measures for the initial categorization. The final decisions (categorizations) on SSCs, however, are made qualitatively by an expert panel through discussion and adjustment of opinions, using the probabilistic insights compiled in the initial categorization process and combining them with the deterministic insights. Owing to this qualitative and linear decision-making process, the conventional methods have the following demerits: (1) they are very costly in terms of time and labor, (2) it is not easy to reach a final decision when the opinions of the experts are in conflict, and (3) they involve an overlapping process due to the linear paradigm (the categorization is performed twice - first by the engineers who propose the method, and second by the expert panel). In this work, a method for RISSC using the analytic hierarchy process (AHP) and Bayesian belief networks (BBN) is proposed to overcome these demerits and to arrive at a final decision (or categorization) effectively. Using the AHP and BBN, the expert panel takes part in the early stage of the categorization (that is, the quantification process), and the safety significance based on both probabilistic and deterministic insights is quantified. According to that safety significance, SSCs are quantitatively categorized into three categories: high safety significant (Hi), potentially safety significant (Po), or low safety significant (Lo). The proposed method was applied to components such as CC-V073, CV-V530, and SI-V644 in Ulchin Unit

  16. A Bayesian method for identifying missing enzymes in predicted metabolic pathway databases

    Directory of Open Access Journals (Sweden)

    Karp Peter D

    2004-06-01

    Full Text Available Abstract Background The PathoLogic program constructs Pathway/Genome databases by using a genome's annotation to predict the set of metabolic pathways present in an organism. PathoLogic determines the set of reactions composing those pathways from the enzymes annotated in the organism's genome. Most annotation efforts fail to assign function to 40–60% of sequences. In addition, large numbers of sequences may have non-specific annotations (e.g., thiolase family protein). Pathway holes occur when a genome appears to lack the enzymes needed to catalyze reactions in a pathway. If a protein has not been assigned a specific function during the annotation process, any reaction catalyzed by that protein will appear as a missing enzyme or pathway hole in a Pathway/Genome database. Results We have developed a method that efficiently combines homology and pathway-based evidence to identify candidates for filling pathway holes in Pathway/Genome databases. Our program not only identifies potential candidate sequences for pathway holes, but combines data from multiple, heterogeneous sources to assess the likelihood that a candidate has the required function. Our algorithm emulates the manual sequence annotation process, considering not only evidence from homology searches, but also evidence from genomic context (i.e., is the gene part of an operon?) and functional context (e.g., are there functionally-related genes nearby in the genome?) to determine the posterior belief that a candidate has the required function. The method can be applied across an entire metabolic pathway network and is generally applicable to any pathway database. The program uses a set of sequences encoding the required activity in other genomes to identify candidate proteins in the genome of interest, and then evaluates each candidate by using a simple Bayes classifier to determine the probability that the candidate has the desired function. We achieved 71% precision.

  17. Testing the quality of nonadult Bayesian dental age assessment methods to juvenile skeletal remains: the Lisbon collection children and secular trend effects.

    Science.gov (United States)

    Heuzé, Yann; Cardoso, Hugo F V

    2008-03-01

    Age estimation of nonadult skeletons from archaeological or forensic contexts has relied heavily on modern schedules of dental formation developed on samples of children of affluent populations. Although genetic factors have been considered to have had the greatest influence on population differences in dental development, increased interest has been placed on the role of environmental influences, such as differences in socioeconomic status and secular trends. This study evaluates the quality (i.e., accuracy and reliability) of two Bayesian dental age estimation methods to a sample of identified child skeletons from the Lisbon collection (20th century Portugal). The two Bayesian methods are developed on a reference sample of modern children from France, Ivory Coast, Iran, and Morocco. The test sample from Lisbon, compared to the reference sample, is separated by over 50 years of secular trends and comprises a lower socioeconomic segment. The two Bayesian methods show that the Lisbon children are consistently 1-year behind in dental age compared to the modern children of the reference sample. Environmental factors largely explain the differences between dental and chronological age in historic samples of nonadults. 2007 Wiley-Liss, Inc.

  18. Application of Bayesian Method in Validation of TTM Decisional Balance and Self-Efficacy Constructs to Improve Nutritional Behavior in Yazdian Prediabetes

    Directory of Open Access Journals (Sweden)

    Hossein Fallahzadeh

    2017-07-01

    Full Text Available Introduction: To introduce a Bayesian method for validating the transtheoretical model's Self-Efficacy and Decisional Balance constructs for nutritional behavior improvement among prediabetics, using ordinal data. Methods: This is an experimental trial with a parallel design; the sample included 220 prediabetics who participated in a screening program, were over 30 years old, had fasting blood glucose in the range 100-125, and had at least an elementary education. We used OpenBugs 3.2.3 to fit a Bayesian ordinal factor analysis for the validation of the TTM's decisional balance and self-efficacy constructs. Results: All of the factor loadings corresponding to the mentioned constructs were significant at α = 0.05, supporting the validity of the constructs. The correlation between Pros and Cons was not significant (-0.076, 0.007). Furthermore, a specific statistical model for ordinal data was created that can estimate odds ratios and marginal probabilities for each choice of any item in the questionnaire. Conclusion: Thanks to the benefits of the Bayesian method in using prior information such as meta-analyses and other resources, and in comparison to similar studies that used standard or other factor analyses for ordinal data, our results had good accuracy (with respect to standard deviation) even with a lower sample size, so the results can be used in future clinical research.

  19. Evapotranspiration partitioning for three agro-ecosystems with contrasting moisture conditions: a comparison of an isotope method and a two-source model calculation

    Science.gov (United States)

    Wei, Z.; Lee, X.; Wen, X.; Xiao, W.

    2017-12-01

    Quantification of the contribution of transpiration (T) to evapotranspiration (ET) is a requirement for understanding changes in carbon assimilation and water cycling in a changing environment. So far, few studies have examined the seasonal variability of T/ET or compared different ET partitioning methods under natural conditions across diverse agro-ecosystems. In this study, we apply a two-source model to partition ET for three agro-ecosystems (rice, wheat and corn). The model-estimated T/ET ranges from 0 to 1, with a near continuous increase over time in the early growing season when leaf area index (LAI) is less than 2.5 and then convergence towards a stable value beyond LAI of 2.5. The seasonal change in T/ET can be described well as a function of LAI, implying that LAI is a first-order factor affecting ET partitioning. The two-source model results show that the growing-season (May - September for rice, April - June for wheat and June - September for corn) T/ET is 0.50, 0.84 and 0.64, while an isotopic approach shows that T/ET is 0.74, 0.93 and 0.81 for rice, wheat and corn, respectively. The two-source model results are supported by soil lysimeter and eddy covariance measurements made during the same time period for wheat (0.87). Uncertainty analysis suggests that further improvements to the Craig-Gordon model prediction of the evaporation isotope composition and to measurement of the isotopic composition of ET are necessary to achieve accurate flux partitioning at the ecosystem scale using water isotopes as tracers.
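
    The isotopic approach mentioned rests on a two-end-member mixing equation: if d_ET, d_E and d_T are the isotope compositions of total evapotranspiration, evaporation and transpiration, then T/ET = (d_ET - d_E) / (d_T - d_E). A minimal sketch with invented delta values:

```python
# Isotope-based ET partitioning via two-end-member mixing.
def t_over_et(delta_et, delta_e, delta_t):
    """T/ET = (d_ET - d_E) / (d_T - d_E); deltas in permil."""
    return (delta_et - delta_e) / (delta_t - delta_e)

# Hypothetical d18O end members for a flooded rice paddy:
print(f"T/ET = {t_over_et(delta_et=-8.0, delta_e=-20.0, delta_t=-4.0):.2f}")
```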

  20. A Bayesian method to rank different model forecasts of the same volcanic ash cloud: Chapter 24

    Science.gov (United States)

    Denlinger, Roger P.; Webley, P.; Mastin, Larry G.; Schwaiger, Hans F.

    2012-01-01

    Volcanic eruptions often spew fine ash high into the atmosphere, where it is carried downwind, forming long ash clouds that disrupt air traffic and pose a hazard to air travel. To mitigate such hazards, the community studying ash hazards must assess the risk of ash ingestion for any flight path and provide robust and accurate forecasts of volcanic ash dispersal. We provide a quantitative and objective method to evaluate the efficacy of ash dispersal estimates from different models, using Bayes theorem to assess the predictions that each model makes about ash dispersal. We incorporate model and measurement uncertainty and produce a posterior probability for model input parameters. The integral of the posterior over all possible combinations of model inputs determines the evidence for each model and is used to compare models. We compare two different types of transport models, an Eulerian model (Ash3d) and a Lagrangian model (PUFF), as applied to the 2010 eruptions of Eyjafjallajökull volcano in Iceland. The evidence for each model benefits from common physical characteristics of ash dispersal from an eruption column and provides a measure of how well each model forecasts cloud transport. Given the complexity of the wind fields, we find that the differences between these models depend upon the differences in the way the models disperse ash into the wind from the source plume. With continued observation, the accuracy of the estimates made by each model increases, increasing the efficacy of each model's ability to simulate ash dispersal.
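
    The model comparison described reduces to computing, for each model, the evidence (the integral of likelihood times prior over the model's inputs) and taking their ratio. A toy grid-integration sketch for two hypothetical forecast models of the same observations, unrelated to the Ash3d/PUFF setup:

```python
# Toy Bayesian evidence comparison for two forecast models on a 1-D input grid.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(2.0, 0.5, size=10)          # observed ash concentrations (toy)

def log_like(pred, y, sigma=0.5):
    return -0.5 * np.sum(((y - pred) / sigma) ** 2)

theta = np.linspace(0.0, 5.0, 501)         # shared input (e.g., source strength)
prior = np.full_like(theta, 1.0 / 5.0)     # uniform prior over [0, 5]

# Model A predicts y = theta; model B predicts y = 0.8 * theta + 0.3.
like_A = np.array([np.exp(log_like(t, y)) for t in theta])
like_B = np.array([np.exp(log_like(0.8 * t + 0.3, y)) for t in theta])
evid_A = np.trapz(like_A * prior, theta)   # evidence = integral(likelihood*prior)
evid_B = np.trapz(like_B * prior, theta)
print("Bayes factor A/B:", evid_A / evid_B)
```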

  1. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and the maximum entropy principle applied in natural sciences. Special attention is given to solving the inverse problem in digital image restoration and to Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  2. Parallelizing MCMC with Random Partition Trees

    OpenAIRE

    Wang, Xiangyu; Guo, Fangjian; Heller, Katherine A.; Dunson, David B.

    2015-01-01

    The modern scale of data has brought new challenges to Bayesian inference. In particular, conventional MCMC algorithms are computationally very expensive for large data sets. A promising approach to solve this problem is embarrassingly parallel MCMC (EP-MCMC), which first partitions the data into multiple subsets and runs independent sampling algorithms on each subset. The subset posterior draws are then aggregated via some combining rules to obtain the final approximation. Existing EP-MCMC a...
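
    A minimal sketch of the embarrassingly parallel idea, here with a simple precision-weighted "consensus" combination rule rather than the paper's random-partition-tree aggregator: the data are split into shards, a posterior is sampled per shard, and the shard draws are merged. All data and the conjugate shortcut standing in for a real MCMC run are assumptions for illustration.

        # Sketch of embarrassingly parallel MCMC with a consensus (weighted
        # average) combination rule -- not the random-partition-tree rule.
        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(3.0, 1.0, size=10_000)    # synthetic observations
        shards = np.array_split(data, 4)             # partition across 4 workers

        draws = []
        for shard in shards:
            # Subset posterior for a normal mean (flat prior, known sigma=1):
            # the exact conjugate form stands in for an MCMC run on the shard.
            post_mean = shard.mean()
            post_sd = 1.0 / np.sqrt(len(shard))
            draws.append(rng.normal(post_mean, post_sd, size=5_000))

        # Consensus combination: precision-weighted average of subset draws.
        weights = np.array([1.0 / np.var(d) for d in draws])
        combined = np.average(np.vstack(draws), axis=0, weights=weights)
        print(f"combined posterior mean ~ {combined.mean():.3f}")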

  3. Strength Reduction Method for Stability Analysis of Local Discontinuous Rock Mass with Iterative Method of Partitioned Finite Element and Interface Boundary Element

    Directory of Open Access Journals (Sweden)

    Tongchun Li

    2015-01-01

    An iterative method of partitioned finite element (PFE) and interface boundary element is proposed to solve the safety factor of local discontinuous rock mass. The slope system is divided into several continuous bodies and local discontinuous interface boundaries. Each block is treated as a partition of the system and contacted by discontinuous joints. The displacements of blocks are chosen as basic variables and the rigid displacements in the centroid of blocks are chosen as motion variables. The contact forces on interface boundaries and the rigid displacements to the centroid of each body are chosen as mixed variables and solved iteratively using the interface boundary equations. The flexibility matrix is formed through PFE according to the contact states of nodal pairs, and spring flexibility is used to reflect the influence of weak structural planes, so that nonlinear iteration is limited to the possible contact region. With cohesion and the friction coefficient reduced gradually, the first occurrence of all nodal pairs reaching an open or slip state is regarded as the failure criterion, which can decrease the effect of subjectivity in determining the safety factor. Examples are used to verify the validity of the proposed method.

  4. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    Directory of Open Access Journals (Sweden)

    Owens Chantelle J

    2009-02-01

    Full Text Available Abstract Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly higher smoothed rates (more than 300 cases per 100,000 residents) than its contiguous neighbors (195 or fewer) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%, from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was also statistically significant. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time.

  5. Metal-silicate Partitioning at High Pressure and Temperature: Experimental Methods and a Protocol to Suppress Highly Siderophile Element Inclusions.

    Science.gov (United States)

    Bennett, Neil R; Brenan, James M; Fei, Yingwei

    2015-06-13

    Estimates of the primitive upper mantle (PUM) composition reveal a depletion in many of the siderophile (iron-loving) elements, thought to result from their extraction to the core during terrestrial accretion. Experiments to investigate the partitioning of these elements between metal and silicate melts suggest that the PUM composition is best matched if metal-silicate equilibrium occurred at high pressures and temperatures, in a deep magma ocean environment. The behavior of the most highly siderophile elements (HSEs) during this process, however, has remained enigmatic. Silicate run-products from HSE solubility experiments are commonly contaminated by dispersed metal inclusions that hinder the measurement of element concentrations in the melt. The resulting uncertainty over the true solubility and metal-silicate partitioning of these elements has made it difficult to predict their expected depletion in PUM. Recently, several studies have employed changes to the experimental design used for high pressure and temperature solubility experiments in order to suppress the formation of metal inclusions. The addition of Au (Re, Os, Ir, Ru experiments) or elemental Si (Pt experiments) to the sample acts to alter either the geometry or the rate of sample reduction, respectively, in order to avoid transient metal oversaturation of the silicate melt. This contribution outlines procedures for using the piston-cylinder and multi-anvil apparatus to conduct solubility and metal-silicate partitioning experiments, respectively. A protocol is also described for the synthesis of uncontaminated run-products from HSE solubility experiments in which the oxygen fugacity is similar to that during terrestrial core-formation. Time-resolved LA-ICP-MS spectra are presented as evidence for the absence of metal inclusions in run-products from earlier studies, and also confirm that the technique may be extended to investigate Ru. Examples are also given of how these data may be applied.

  6. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
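
    The flavor of transparent rejection sampling can be conveyed with a short sketch: draw candidates from the prior and accept each with probability proportional to its likelihood. The binomial example and its counts are generic stand-ins, not the article's case-control data.

        # Generic rejection sampler: uniform prior on a proportion, binomial
        # likelihood; accept draws with probability lik / lik.max().
        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(1)
        k, n = 7, 20                                  # hypothetical events / trials

        cand = rng.uniform(0, 1, size=200_000)        # draws from the prior
        lik = binom.pmf(k, n, cand)                   # likelihood of each draw
        accept = rng.uniform(0, 1, size=cand.size) < lik / lik.max()
        posterior = cand[accept]
        print(f"posterior mean {posterior.mean():.3f}, "
              f"95% interval {np.percentile(posterior, [2.5, 97.5]).round(3)}")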

  8. Bayesian coronal seismology

    Science.gov (United States)

    Arregui, Iñigo

    2018-01-01

    In contrast to the situation in a laboratory, the study of the solar atmosphere has to be pursued without direct access to the physical conditions of interest. Information is therefore incomplete and uncertain, and inference methods need to be employed to diagnose the physical conditions and processes. One such method, solar atmospheric seismology, makes use of observed and theoretically predicted properties of waves to infer plasma and magnetic field properties. A recent development in solar atmospheric seismology consists in the use of inversion and model comparison methods based on Bayesian analysis. In this paper, the philosophy and methodology of Bayesian analysis are first explained. Then, we provide an account of what has been achieved so far from the application of these techniques to solar atmospheric seismology and a prospect of possible future extensions.

  9. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    .... It also incorporates BayesX code, which is particularly useful in nonlinear regression. To demonstrate MCMC sampling from first principles, the author includes worked examples using the R package...

  10. Bayesian Inference in Polling Technique: 1992 Presidential Polls.

    Science.gov (United States)

    Satake, Eiki

    1994-01-01

    Explores the potential utility of Bayesian statistical methods in determining the predictability of multiple polls. Compares Bayesian techniques to the classical statistical method employed by pollsters. Considers these questions in the context of the 1992 presidential elections. (HB)

  11. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental economics, with careful controls for the confounding effects of risk aversion. Our results show that risk aversion significantly alters inferences on deviations from Bayes’ Rule....

  12. Iterative Bayesian Model Averaging: a method for the application of survival analysis to high-dimensional microarray data

    Directory of Open Access Journals (Sweden)

    Raftery Adrian E

    2009-02-01

    Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
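
    The core averaging step is easy to sketch: risk predictions from contending models are combined with weights given by their posterior model probabilities. The probabilities and prediction matrix below are toy numbers, not the article's gene models.

        # Toy Bayesian model averaging: weight each model's risk prediction
        # by its posterior model probability.
        import numpy as np

        post_prob = np.array([0.55, 0.30, 0.15])   # P(model | data), toy values
        # each row: one model's predicted risk scores for four patients
        preds = np.array([
            [0.80, 0.20, 0.55, 0.10],
            [0.70, 0.35, 0.60, 0.05],
            [0.90, 0.15, 0.40, 0.20],
        ])
        bma_risk = post_prob @ preds               # probability-weighted average
        print(f"BMA risk scores: {bma_risk.round(3)}")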

  13. Carbon partitioning as validation methods for crop yields and CO2 sequestration monitoring in Asia using a photosynthetic-sterility model

    Science.gov (United States)

    Kaneko, Daijiro; Yang, Peng; Kumakura, Toshiro

    2010-10-01

    Sustainability of world crop production and food security has become uncertain. The authors have developed an environmental research system called the Remote Sensing Environmental Monitor (RSEM) for treating carbon sequestration by vegetation, grain production, desertification of Eurasian grassland, and CDM afforestation/reforestation against a background of climate change and economic growth in rising Asian nations. The RSEM system involves vegetation photosynthesis and crop yield models for grains, including land-use classification, stomatal evaluation by surface energy fluxes, and daily monitoring for early warning. This paper presents a validation method for RSEM based on carbon partitioning in plants, focusing in particular on the effects of the area sizes used in crop production statistics on carbon fixation and on sterility-based corrections to accumulated carbon sequestration values simulated using the RSEM photosynthesis model. The carbohydrate in grains has the same chemical formula as the cellulose in grain plants. The proposed method, which partitions the carbon fixed in harvested grains, was used to validate estimates of the amounts of carbon fixed by the satellite-based RSEM model.

  14. [A Simultaneous Determination Method with Acetonitrile-n-Hexane Partitioning and Solid-Phase Extraction for Pesticide Residues in Livestock and Marine Products by GC-MS].

    Science.gov (United States)

    Yoshizaki, Mayuko; Kobayashi, Yukari; Shimizu, Masanori; Maruyama, Kouichi

    2015-01-01

    A simultaneous determination method was examined for 312 pesticides (including isomers) in muscle of livestock and marine products by GC-MS. The pesticide residues extracted from samples with acetone and n-hexane were purified by acetonitrile-n-hexane partitioning, and C18 and SAX/PSA solid-phase extraction without using GPC. Matrix components such as cholesterol were effectively removed. In recovery tests performed by this method using pork, beef, chicken and shrimp, 237-257 pesticides showed recoveries within the range of 70-120% in each sample. Validity was confirmed for 214 of the target pesticides by means of a validation test using pork. In comparison with the Japanese official method using GPC, the treatment time of samples and the quantity of solvent were reduced substantially.

  15. Source partitioning of methane emissions and its seasonality in the U.S. Midwest

    Science.gov (United States)

    Zichong Chen; Timothy J. Griffis; John M. Baker; Dylan B. Millet; Jeffrey D. Wood; Edward J. Dlugokencky; Arlyn E. Andrews; Colm Sweeney; Cheng Hu; Randall K. Kolka

    2018-01-01

    The methane (CH4) budget and its source partitioning are poorly constrained in the Midwestern United States. We used tall tower (185 m) aerodynamic flux measurements and atmospheric scale factor Bayesian inversions to constrain the monthly budget and to partition the total budget into natural (e.g., wetlands) and anthropogenic (e.g., livestock,...

  16. [On the partition of acupuncture academic schools].

    Science.gov (United States)

    Yang, Pengyan; Luo, Xi; Xia, Youbing

    2016-05-01

    Nowadays extensive attention has been paid to the research of acupuncture academic schools; however, a widely accepted method of partition of acupuncture academic schools is still needed. In this paper, the methods of partition of acupuncture academic schools in history have been arranged, and three typical methods, "partition of five schools," "partition of eighteen schools," and "two-stage based partition," are summarized. After a deep analysis of the disadvantages and advantages of these three methods, a new method of partition of acupuncture academic schools, called "three-stage based partition," is proposed. In this method, after the overall acupuncture academic schools are divided into an ancient stage, a modern stage and a contemporary stage, each school is divided into its sub-school category. It is believed that this method of partition can remedy the weaknesses of current methods and also explore a new model of inheritance and development, from a different aspect, through the differentiation and interaction of acupuncture academic schools at three stages.

  17. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  18. Bayesian Model Averaging for Propensity Score Analysis.

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2014-01-01

    This article considers Bayesian model averaging as a means of addressing uncertainty in the selection of variables in the propensity score equation. We investigate an approximate Bayesian model averaging approach based on the model-averaged propensity score estimates produced by the R package BMA but that ignores uncertainty in the propensity score. We also provide a fully Bayesian model averaging approach via Markov chain Monte Carlo sampling (MCMC) to account for uncertainty in both parameters and models. A detailed study of our approach examines the differences in the causal estimate when incorporating noninformative versus informative priors in the model averaging stage. We examine these approaches under common methods of propensity score implementation. In addition, we evaluate the impact of changing the size of Occam's window used to narrow down the range of possible models. We also assess the predictive performance of both Bayesian model averaging propensity score approaches and compare it with the case without Bayesian model averaging. Overall, results show that both Bayesian model averaging propensity score approaches recover the treatment effect estimates well and generally provide larger uncertainty estimates, as expected. Both Bayesian model averaging approaches offer slightly better prediction of the propensity score compared with the Bayesian approach with a single propensity score equation. Covariate balance checks for the case study show that both Bayesian model averaging approaches offer good balance. The fully Bayesian model averaging approach also provides posterior probability intervals of the balance indices.

  19. Goldbach Partitions and Sequences

    Indian Academy of Sciences (India)

    IAS Admin

    Properties of Goldbach partitions of numbers, as sums of primes, are presented and their potential applications to cryptography are described. The sequence of the number of partitions has excellent randomness properties. Goldbach partitions can be used to create ellipses and circles on the number line and they can also ...

  20. Quantifying the uncertainty in discharge data using hydraulic knowledge and uncertain gaugings: a Bayesian method named BaRatin

    Science.gov (United States)

    Le Coz, Jérôme; Renard, Benjamin; Bonnifait, Laurent; Branger, Flora; Le Boursicaud, Raphaël; Horner, Ivan; Mansanarez, Valentin; Lang, Michel; Vigneau, Sylvain

    2015-04-01

    River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). Here, we present a Bayesian approach (cf. Le Coz et al., 2014) to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. The rating curve uncertainties combine the parametric uncertainties and the remnant uncertainties that reflect the limited accuracy of the mathematical model used to simulate the physical stage-discharge relation. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.). An operational version of the BaRatin software and its graphical interface are made available free of charge on
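
    A stripped-down sketch of the kind of inference described here: a power-law rating curve Q = a(h - b)^c is calibrated to a handful of gaugings with a random-walk Metropolis sampler. The gauging values, prior bounds, and tuning constants are all illustrative assumptions, not BaRatin itself.

        # Sketch: Bayesian calibration of a rating curve Q = a*(h-b)^c
        # from uncertain gaugings, via random-walk Metropolis.
        import numpy as np

        rng = np.random.default_rng(2)
        h = np.array([0.6, 0.9, 1.3, 1.8, 2.4])      # stage (m), hypothetical
        q = np.array([2.1, 6.0, 14.5, 30.2, 58.0])    # discharge (m3/s)
        q_sd = 0.08 * q                                # per-gauging uncertainty

        def log_post(theta):
            a, b, c = theta
            if a <= 0 or c <= 0 or b >= h.min():
                return -np.inf                         # crude prior bounds
            resid = q - a * (h - b) ** c
            return -0.5 * np.sum((resid / q_sd) ** 2)

        theta = np.array([5.0, 0.0, 1.8])              # starting values
        lp = log_post(theta)
        samples = []
        for _ in range(50_000):
            prop = theta + rng.normal(0, [0.2, 0.02, 0.05])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
                theta, lp = prop, lp_prop
            samples.append(theta)
        a, b, c = np.mean(samples[25_000:], axis=0)    # posterior means after burn-in
        print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")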

  1. Forest Evapotranspiration and Energy Flux Partitioning Based on Eddy Covariance Methods in an Arid Desert Region of Northwest China

    Directory of Open Access Journals (Sweden)

    Xiaohong Ma

    2017-01-01

    Full Text Available In this study, the characteristics of energy flux partitioning and evapotranspiration of P. euphratica forests were examined in the extreme arid region of Northwest China. Energy balance closure of the ecosystem was approximately 72% (H + LE = 0.72 ∗ (Rn − G) + 7.72, r2 = 0.79, n = 12095), where Rn is the net radiation, G is the soil heat flux, H is the sensible heat flux, and LE is the latent heat flux. LE was the main term of energy consumption at the annual time scale because of its higher values in the growing season. The ratios of the latent (LE) and sensible (H) heat fluxes to net radiation were 0.47 and 0.28 throughout the year, respectively. Moreover, the yearly evapotranspiration of P. euphratica forests was 744 mm year−1, and the mean daily ET was 5.09 mm·d−1 in the vibrant growing season. In particular, a small spike in the actual evapotranspiration distribution occurred during the soil ablation period due to the higher temperature and sufficient soil moisture associated with soil thawing. This period is accompanied by a series of physical processes, such as moisture transfer and heat exchange.
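
    The closure statistic quoted above is an ordinary least-squares fit of the turbulent fluxes against available energy; a sketch with synthetic half-hourly values (not the study's data) shows the computation.

        # Energy-balance closure sketch: regress (H + LE) on (Rn - G).
        import numpy as np

        rng = np.random.default_rng(8)
        avail = rng.uniform(-50, 600, size=2_000)              # Rn - G (W m-2)
        turb = 0.72 * avail + 7.7 + rng.normal(0, 40, 2_000)   # synthetic H + LE

        slope, intercept = np.polyfit(avail, turb, 1)
        r2 = np.corrcoef(avail, turb)[0, 1] ** 2
        print(f"H + LE = {slope:.2f}*(Rn - G) + {intercept:.1f}, r2 = {r2:.2f}")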

  2. A Bayesian-based two-stage inexact optimization method for supporting stream water quality management in the Three Gorges Reservoir region.

    Science.gov (United States)

    Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W

    2016-05-01

    In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emissions is obtained, which would help managers adjust production patterns of regional industry and local policies in view of the interactions among water quality requirements, economic benefit, and industry structure.

  3. Three-phase partitioning as a rapid and easy method for the purification and recovery of catalase from sweet potato tubers (Solanum tuberosum).

    Science.gov (United States)

    Duman, Yonca Avcı; Kaya, Erdem

    2013-07-01

    Three-phase partitioning (TPP) was used to purify and recover catalase from potato crude extract. The method consists of ammonium sulfate saturation, t-butanol addition, and adjustment of pH, respectively. The best catalase recovery (262%) and a 14.1-fold purification were obtained in the interfacial phase in the presence of 40% (w/v) ammonium sulfate saturation with a 1.0:1.0 crude extract/t-butanol ratio (v/v) at pH 7 in a single step. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis of the enzyme showed comparable purification, and the protein molecular weight was found to be approximately 56 kDa. This study shows that TPP is a simple, economical, and quick method for the recovery of catalase and can be used in the purification process.

  4. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  5. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335–1353; Mach. Learn. 66 (2007) 209–242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  6. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  7. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    .... Exploring these new developments, Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology, Second Edition provides an up-to-date, cohesive account of the full range of Bayesian disease mapping methods and applications...

  8. A hierarchical method for Bayesian inference of rate parameters from shock tube data: Application to the study of the reaction of hydroxyl with 2-methylfuran

    KAUST Repository

    Kim, Daesang

    2017-06-22

    We developed a novel two-step hierarchical method for the Bayesian inference of the rate parameters of a target reaction from time-resolved concentration measurements in shock tubes. The method was applied to the calibration of the parameters of the reaction of hydroxyl with 2-methylfuran, which is studied experimentally via absorption measurements of the OH radical's concentration following shock-heating. In the first step of the approach, each shock tube experiment is treated independently to infer the posterior distribution of the rate constant and error hyper-parameter that best explains the OH signal. In the second step, these posterior distributions are sampled to calibrate the parameters appearing in the Arrhenius reaction model for the rate constant. Furthermore, the second step is modified and repeated in order to explore alternative rate constant models and to assess the effect of uncertainties in the reflected shock's temperature. Comparisons of the estimates obtained via the proposed methodology against the common least squares approach are presented. The relative merits of the novel Bayesian framework are highlighted, especially with respect to the opportunity to utilize the posterior distributions of the parameters in future uncertainty quantification studies.
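
    The two-step logic can be sketched as follows: step 1 yields posterior samples of the rate constant k for each shock temperature (mimicked below by Gaussian pseudo-posteriors), and step 2 regresses ln k on 1/T across resampled draws to calibrate the Arrhenius parameters A and Ea. All numbers are invented placeholders, not the paper's data or priors.

        # Two-step hierarchical sketch: per-experiment k posteriors, then
        # Arrhenius calibration ln k = ln A - Ea/(R*T) across experiments.
        import numpy as np

        rng = np.random.default_rng(3)
        R = 8.314                                        # J/(mol K)
        T = np.array([900.0, 1000.0, 1100.0, 1200.0])    # shock temperatures (K)
        k_mean = 1e9 * np.exp(-12_000.0 / T)             # pretend step-1 posterior means
        k_sd = 0.05 * k_mean                             # pretend step-1 posterior spread

        lnA_draws, Ea_draws = [], []
        for _ in range(2_000):
            # resample each experiment's rate constant from its posterior
            k = rng.normal(k_mean, k_sd)
            slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
            lnA_draws.append(intercept)
            Ea_draws.append(-slope * R)
        print(f"ln A = {np.mean(lnA_draws):.2f}, "
              f"Ea = {np.mean(Ea_draws) / 1000:.1f} kJ/mol")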

  9. 3rd Bayesian Young Statisticians Meeting

    CERN Document Server

    Lanzarone, Ettore; Villalobos, Isadora; Mattei, Alessandra

    2017-01-01

    This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).

  10. New phiomorph rodents from the latest Eocene of Egypt, and the impact of Bayesian “clock”-based phylogenetic methods on estimates of basal hystricognath relationships and biochronology

    Directory of Open Access Journals (Sweden)

    Hesham M. Sallam

    2016-03-01

    Full Text Available The Fayum Depression of Egypt has yielded fossils of hystricognathous rodents from multiple Eocene and Oligocene horizons that range in age from ∼37 to ∼30 Ma and document several phases in the early evolution of crown Hystricognathi and one of its major subclades, Phiomorpha. Here we describe two new genera and species of basal phiomorphs, Birkamys korai and Mubhammys vadumensis, based on rostra and maxillary and mandibular remains from the terminal Eocene (∼34 Ma) Fayum Locality 41 (L-41). Birkamys is the smallest known Paleogene hystricognath, has very simple molars, and, like derived Oligocene-to-Recent phiomorphs (but unlike contemporaneous and older taxa) apparently retained dP4∕4 late into life, with no evidence for P4∕4 eruption or formation. Mubhammys is very similar in dental morphology to Birkamys, and also shows no evidence for P4∕4 formation or eruption, but is considerably larger. Though parsimony analysis with all characters equally weighted places Birkamys and Mubhammys as sister taxa of extant Thryonomys to the exclusion of much younger relatives of that genus, all other methods (standard Bayesian inference, Bayesian “tip-dating,” and parsimony analysis with scaled transitions between “fixed” and polymorphic states) place these species in more basal positions within Hystricognathi, as sister taxa of Oligocene-to-Recent phiomorphs. We also employ tip-dating as a means for estimating the ages of early hystricognath-bearing localities, many of which are not well-constrained by geological, geochronological, or biostratigraphic evidence. By simultaneously taking into account phylogeny, evolutionary rates, and uniform priors that appropriately encompass the range of possible ages for fossil localities, dating of tips in this Bayesian framework allows paleontologists to move beyond vague and assumption-laden “stage of evolution” arguments in biochronology to provide relatively rigorous age assessments of poorly

  11. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.

  12. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2003-01-01

    As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate all of Bayesian net technology and learning Bayesian net technology and apply them both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.

  13. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente

  14. Prediction of octanol/water partition coefficient of selected ferrocene ...

    African Journals Online (AJOL)

    To predict the octanol/water partition coefficient of selected ferrocenes bearing different substituents, the calculation is based on an adaptation of the Rekker method. Our prediction of theoretical partition coefficient values (logP) for all studied substituted ...

  15. Investigation of model based beamforming and Bayesian inversion signal processing methods for seismic localization of underground sources

    DEFF Research Database (Denmark)

    Oh, Geok Lian; Brunskog, Jonas

    2014-01-01

    Techniques have been studied for the localization of an underground source with seismic interrogation signals. Much of the work has involved fitting either a P-wave acoustic model or a dispersive surface wave model to the received signal and applying the time-delay processing technique and frequency-wavenumber processing to determine the location of the underground tunnel. Considering the case of determining the location of an underground tunnel, this paper proposes two physical models, the acoustic approximation ray tracing model and the finite difference time domain three-dimensional (3D) elastic wave model, to represent the received seismic signal. Two localization algorithms, beamforming and Bayesian inversion, are developed for each physical model. The beamforming algorithms implemented are the modified time-and-delay beamformer and the F-K beamformer. Inversion is posed...
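
    The time-delay beamforming idea is easy to sketch: records at an array of sensors are shifted by the travel times implied by a candidate source position and summed, and the position maximizing the summed power is the estimate. The geometry, wave speed, and source pulse below are made-up inputs, not the paper's models.

        # Minimal delay-and-sum beamformer sketch for source localization.
        import numpy as np

        fs, c = 1000.0, 300.0                      # sample rate (Hz), wave speed (m/s)
        sensors = np.array([[0.0, 0], [10, 0], [20, 0], [30, 0]])  # x, y in metres
        true_src = np.array([12.0, 40.0])

        t = np.arange(0, 1.0, 1 / fs)
        pulse = lambda tt: np.exp(-0.5 * ((tt - 0.3) / 0.01) ** 2)  # Gaussian pulse
        delays = np.linalg.norm(sensors - true_src, axis=1) / c
        recs = np.array([pulse(t - d) for d in delays])   # synthetic sensor records

        def power(src):
            # shift each record back by the candidate travel time, then stack
            d = np.linalg.norm(sensors - src, axis=1) / c
            shifted = [np.interp(t + di, t, r) for di, r in zip(d, recs)]
            return np.sum(np.sum(shifted, axis=0) ** 2)

        xs, ys = np.arange(0, 31, 1.0), np.arange(20, 61, 1.0)
        grid = [(x, y) for x in xs for y in ys]
        best = max(grid, key=lambda s: power(np.array(s)))
        print(f"estimated source near {best}, true source {tuple(true_src)}")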

  16. Using Bayesian belief networks in adaptive management.

    Science.gov (United States)

    J.B. Nyberg; B.G. Marcot; R. Sulyma

    2006-01-01

    Bayesian belief and decision networks are relatively new modeling methods that are especially well suited to adaptive-management applications, but they appear not to have been widely used in adaptive management to date. Bayesian belief networks (BBNs) can serve many purposes for practitioners of adaptive management, from illustrating system relations conceptually to...

  17. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    and secondly, to gain efficiency during modification of an object oriented Bayesian network. To accomplish these two goals we have exploited a mechanism allowing local triangulation of instances to develop a method for updating the junction trees associated with object oriented Bayesian networks in highly...

  18. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  19. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
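
    A bare-bones ABC rejection sketch conveys the mechanics: keep parameter draws whose simulated summary statistic lands within a tolerance of the observed one. The normal "simulator", the sample-mean summary, and the tolerance are deliberately simple stand-ins.

        # Bare-bones ABC rejection sampler: no likelihood evaluation needed.
        import numpy as np

        rng = np.random.default_rng(4)
        observed = rng.normal(2.0, 1.0, size=100)    # pretend field data
        s_obs = observed.mean()                       # summary statistic

        accepted = []
        for _ in range(100_000):
            mu = rng.uniform(-5, 5)                   # draw from the prior
            sim = rng.normal(mu, 1.0, size=100)       # simulate the model
            if abs(sim.mean() - s_obs) < 0.05:        # tolerance check
                accepted.append(mu)
        print(f"{len(accepted)} draws accepted; "
              f"posterior mean ~ {np.mean(accepted):.2f}")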

  20. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  1. A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research

    NARCIS (Netherlands)

    Van de Schoot, Rens|info:eu-repo/dai/nl/304833207; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G|info:eu-repo/dai/nl/081831218

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First,
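
    Although the record is truncated here, the kind of Bayesian estimation such introductions begin with can be illustrated by the smallest possible conjugate example (toy numbers, not from the article): a Beta prior updated by binomial data.

        # Toy conjugate example: Beta prior + binomial data -> Beta posterior.
        from scipy.stats import beta

        a0, b0 = 1, 1               # uniform Beta(1,1) prior on a proportion
        successes, trials = 7, 20   # hypothetical observed data

        post = beta(a0 + successes, b0 + trials - successes)
        print(f"posterior mean = {post.mean():.3f}")
        print(f"95% credible interval = {post.interval(0.95)}")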

  3. Bayesian natural language semantics and pragmatics

    CERN Document Server

    Zeevat, Henk

    2015-01-01

    The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics are based on methods from signal processing and the causal Bayesian models pioneered especially by Pearl. In signal processing, the Bayesian method finds the most probable interpretation by finding the one that maximizes the product of the prior probability and the likelihood of the interpretation. It thus stresses the importance of a production model for interpretation as in Grice's contributions to pragmatics or in interpretation by abduction.

  4. Crystal structure prediction accelerated by Bayesian optimization

    Science.gov (United States)

    Yamashita, Tomoki; Sato, Nobuya; Kino, Hiori; Miyake, Takashi; Tsuda, Koji; Oguchi, Tamio

    2018-01-01

    We propose a crystal structure prediction method based on Bayesian optimization. Our method is classified as a selection-type algorithm which is different from evolution-type algorithms such as an evolutionary algorithm and particle swarm optimization. Crystal structure prediction with Bayesian optimization can efficiently select the most stable structure from a large number of candidate structures with a lower number of searching trials using a machine learning technique. Crystal structure prediction using Bayesian optimization combined with random search is applied to known systems such as NaCl and Y2Co17 to discuss the efficiency of Bayesian optimization. These results demonstrate that Bayesian optimization can significantly reduce the number of searching trials required to find the global minimum structure by 30-40% in comparison with pure random search, which leads to much less computational cost.
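
    The selection loop at the heart of such a method can be sketched in a few dozen lines: a Gaussian-process surrogate is fit to the "energies" evaluated so far, and the next candidate is the one with the best expected improvement. The 1-D energy function, kernel, and all hyperparameters below are toy assumptions, not the authors' implementation.

        # Toy Bayesian-optimization loop: GP surrogate + expected improvement
        # selects which candidate "structure" to evaluate next (minimization).
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(5)
        energy = lambda x: np.sin(3 * x) + 0.5 * x     # stand-in energy surface
        cand = np.linspace(0, 4, 400)                   # candidate pool

        def rbf(a, b, ls=0.4):
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

        X = list(rng.uniform(0, 4, size=3))             # initial evaluations
        for _ in range(10):
            Xa = np.array(X)
            y = energy(Xa)
            K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))    # kernel matrix + jitter
            Ks = rbf(cand, Xa)
            mu = Ks @ np.linalg.solve(K, y)             # GP posterior mean
            var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
            sd = np.sqrt(np.clip(var, 1e-12, None))
            imp = y.min() - mu                          # improvement over best
            z = imp / sd
            ei = imp * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
            X.append(cand[np.argmax(ei)])
        best = min(X, key=energy)
        print(f"lowest energy found at x = {best:.3f}")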

  5. Integration of three structurally different stock assessment models in a Bayesian framework

    NARCIS (Netherlands)

    Kraak, S.B.M.; Bogaards, H.; Borges, L.; Machiels, M.A.M.; Keeken, van O.A.

    2007-01-01

    Bayesian statistics provide a method for expressing uncertainty about an unknown parameter value probabilistically (www.bayesian.org). Bayesian methods have been widely used in biological sciences, and recently in fisheries science, applied to stock assessment. In our previous studies on Bayesian

  6. Tag SNP selection for prediction of tick resistance in Brazilian Braford and Hereford cattle breeds using Bayesian methods.

    Science.gov (United States)

    Sollero, Bruna P; Junqueira, Vinícius S; Gomes, Cláudia C G; Caetano, Alexandre R; Cardoso, Fernando F

    2017-06-15

    Cattle resistance to ticks is known to be under genetic control with a complex biological mechanism within and among breeds. Our aim was to identify genomic segments and tag single nucleotide polymorphisms (SNPs) associated with tick-resistance in Hereford and Braford cattle. The predictive performance of a very low-density tag SNP panel was estimated and compared with results obtained with a 50 K SNP dataset. BayesB (π = 0.99) was initially applied in a genome-wide association study (GWAS) for this complex trait by using deregressed estimated breeding values for tick counts and 41,045 SNP genotypes from 3455 animals raised in southern Brazil. To estimate the combined effect of a genomic region that is potentially associated with quantitative trait loci (QTL), 2519 non-overlapping 1-Mb windows that varied in SNP number were defined, with the top 48 windows including 914 SNPs and explaining more than 20% of the estimated genetic variance for tick resistance. Subsequently, the most informative SNPs were selected based on Bayesian parameters (model frequency and t-like statistics), linkage disequilibrium and minor allele frequency to propose a very low-density 58-SNP panel. Some of these tag SNPs mapped close to or within genes and pseudogenes that are functionally related to tick resistance. Prediction ability of this SNP panel was investigated by cross-validation using K-means and random clustering and a BayesA model to predict direct genomic values. Accuracies from these cross-validations were 0.27 ± 0.09 and 0.30 ± 0.09 for the K-means and random clustering groups, respectively, compared to respective values of 0.37 ± 0.08 and 0.43 ± 0.08 when using all 41,045 SNPs and BayesB with π = 0.99, or of 0.28 ± 0.07 and 0.40 ± 0.08 with π = 0.999. Bayesian GWAS model parameters can be used to select tag SNPs for a very low-density panel, which will include SNPs that are potentially linked to functional genes. It can be useful for cost

  7. Application of integrated Bayesian modeling and Markov chain Monte Carlo methods to the conservation of a harvested species

    Directory of Open Access Journals (Sweden)

    Fonnesbeck, C. J.

    2004-06-01

    Full Text Available When endeavoring to make informed decisions, conservation biologists must frequently contend with disparate sources of data and competing hypotheses about the likely impacts of proposed decisions on the resource status. Frequently, statistical analyses, modeling (e.g., for population projection), and optimization or simulation are conducted as separate exercises. For example, a population model might be constructed, whose parameters are then estimated from data (e.g., ringing studies, population surveys). This model might then be used to predict future population states, from current population estimates, under a particular management regime. Finally, the parameterized model might also be used to evaluate alternative candidate management decisions, via simulation, optimization, or both. This approach, while effective, does not take full advantage of the integration of data and model components for prediction and updating; we propose a hierarchical Bayesian context for this integration. In the case of American black ducks (Anas rubripes), managers are simultaneously faced with trying to extract a sustainable harvest from the species, while maintaining individual stocks above acceptable thresholds. The problem is complicated by spatial heterogeneity in the growth rates and carrying capacity of black duck stocks, movement between stocks, regional differences in the intensity of harvest pressure, and heterogeneity in the degree of competition from a close congener, the mallard (Anas platyrhynchos), among stocks. We have constructed a population life cycle model that takes these components into account and simultaneously performs parameter estimation and population prediction in a Bayesian framework. Ringing data are used to develop posterior predictive distributions for harvest mortality rates, given as input decisions about harvest regulations. Population surveys of black ducks and mallards are used to obtain stock-specific estimates of population size for

  8. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  9. Effective extraction method for dioxin analysis from lipid-rich biological matrices using a combination of pressurized liquid extraction and dimethyl sulfoxide/acetonitrile/hexane partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Kitamura, Kimiyoshi; Takazawa, Yoshikatsu; Hashimoto, Shunji; Choi, Jae-Won; Ito, Hiroyasu; Morita, Masatoshi

    2004-06-04

    For the analysis of dioxins (i.e. PCDDs/Fs, polychlorinated dibenzo-p-dioxins/dibenzofurans, and Co-PCBs, coplanar polychlorinated biphenyls) in lipid-rich biological matrices, we examined the potential of a novel extraction method consisting of a combination of pressurized liquid extraction (PLE) using dimethyl sulfoxide (DMSO)/acetonitrile (1:9, v/v) as solvent and DMSO/acetonitrile/hexane partitioning. This method could potentially reduce the large amount of lipids typically generated in the extraction of dioxins. Our cleanup procedure, using tandem multilayer silica gel-activated carbon (MLS-AC) column chromatography, a simplification of the conventional method, was capable of separating mono-ortho-PCBs from non-ortho-PCBs/PCDDs/Fs in half the time required by the conventional method. The optimal conditions for PLE, common to all solvents used in this investigation, were 2000 psi and ≥180 °C. The amount of lipid extracted was approximately 1/100 of that extracted using acetone/hexane (1:1, v/v), making sulfuric acid treatment unnecessary. In both meat and fecal matrices, dioxin congener levels extracted by this method were almost identical to those obtained by conventional solvent extraction methods, such as those employing acetone/hexane or toluene. Moreover, the R.S.D.s of dioxins extracted by this method were <15%, comparable to those obtained by conventional techniques. Our new method was also advantageous in shortening the lipid-removal procedure to 2-3 h.

  10. Study of the prevalence of tuberculosis using Bayesian methods

    Directory of Open Access Journals (Sweden)

    Jorge Alberto Achcar

    2003-12-01

    Full Text Available In this paper we present Bayesian estimators of the prevalence of tuberculosis, obtained using computational methods for the simulation of samples of the posterior distribution of interest. In particular, we considered the Gibbs sampling algorithm to generate samples of the posterior distribution, and from these samples we obtained, in a simple way, accurate inferences for the prevalence of tuberculosis. In an application, we analyzed the results of chest X-ray tests in the diagnosis of tuberculosis. With this application, we verified that Bayesian estimators are simple to obtain and highly accurate, and more accurate than some existing estimators usually considered by health researchers. The use of computational methods for the simulation of samples, as in the case of the Gibbs sampling algorithm, is becoming very popular for Bayesian analysis of models in biostatistics. These simulation techniques using the Gibbs sampling algorithm are easily implemented, do not require much computational knowledge, and can be programmed in any available software. Moreover, these techniques can be considered for the study of the prevalence of other diseases.
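
    The Gibbs-sampling idea for prevalence estimation can be sketched with a standard data-augmentation scheme for an imperfect diagnostic test: alternate between sampling the latent true-disease counts and the prevalence. The test counts, sensitivity, and specificity below are invented for illustration, not the study's X-ray data.

        # Gibbs sketch: prevalence from an imperfect test (known sensitivity
        # and specificity), via latent true-status data augmentation.
        import numpy as np

        rng = np.random.default_rng(6)
        n, n_pos = 500, 120            # hypothetical subjects / positive tests
        se, sp = 0.85, 0.90            # assumed sensitivity, specificity

        prev = 0.5
        draws = []
        for _ in range(10_000):
            # latent true-disease probabilities among test-positives/negatives
            p_pos = prev * se / (prev * se + (1 - prev) * (1 - sp))
            p_neg = prev * (1 - se) / (prev * (1 - se) + (1 - prev) * sp)
            y = rng.binomial(n_pos, p_pos) + rng.binomial(n - n_pos, p_neg)
            prev = rng.beta(1 + y, 1 + n - y)          # conjugate Beta update
            draws.append(prev)
        print(f"posterior prevalence ~ {np.mean(draws[2_000:]):.3f}")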

  11. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH_0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.

  12. Bayesian variable selection in regression

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, T.J.; Beauchamp, J.J.

    1987-01-01

    This paper is concerned with the selection of subsets of "predictor" variables in a linear regression model for the prediction of a "dependent" variable. We take a Bayesian approach and assign a probability distribution to the dependent variable through a specification of prior distributions for the unknown parameters in the regression model. The appropriate posterior probabilities are derived for each submodel, and methods are proposed for evaluating the family of prior distributions. Examples are given that show the application of the Bayesian methodology. 23 refs., 3 figs.

  13. Partitioning of oxygen uptake between the gills and skin in fish larvae: a novel method for estimating cutaneous oxygen uptake.

    Science.gov (United States)

    Rombough, P J

    1998-06-01

    The goal of this study was to develop an alternative to the traditional rubber dam method for measuring cutaneous oxygen uptake in bimodally respiring (skin + gills) fish larvae. The method tested involved using microelectrodes to measure the PO2 gradient in the diffusive boundary layer adjacent to seven positions on the skin surface (one on the head, two on the yolk sac, two on the trunk, one at the base of the dorsal fin-fold and one on the proximal portion of the caudal fin-fold) of rainbow trout (Oncorhynchus mykiss) larvae in still water. The PO2 gradient (ΔPO2/Δx, where x is the distance from the skin surface) was then used to calculate the area-specific rate of O2 uptake (ṀO2/A) according to the Fick equation, ṀO2/A = Dβ(ΔPO2/Δx), where A is the cross-sectional area of the boundary layer, D is the diffusion coefficient and β is the capacitance coefficient for O2 in water. The accuracy of the method was assessed by comparing it with the rubber dam method. After correcting for differences in body mass, the two methods gave essentially identical results. According to the boundary layer method, the mean (±95% CI) rate of O2 uptake across the skin of newly hatched rainbow trout at 10 °C is 3.13±0.18 μg O2 cm⁻² h⁻¹ (N=265). The corresponding value obtained using the rubber dam method was 3.36±0.35 μg O2 cm⁻² h⁻¹ (N=27). The advantages of the boundary layer method are that it can be used with smaller, more delicate larvae and that variables, such as flow rate, that can affect the efficiency of gas exchange can be regulated more precisely. The boundary layer method also permits examination of regional differences in exchange efficiency, although in still water such differences do not appear to be significant in trout larvae. The mean steepness of the PO2 gradient in the boundary layer and, hence, the mean rate of area-specific O2 uptake were essentially the same (P>0.05) at all seven locations tested in this study.

  14. A Bayesian method for characterizing distributed micro-releases: II. inference under model uncertainty with short time-series data.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef; Fast, P. (Lawrence Livermore National Laboratory, Livermore, CA); Kraus, M. (Peterson AFB, CO); Ray, J. P.

    2006-01-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error, i.e., situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.

  15. Advancing Dose-Response Assessment Methods for Environmental Regulatory Impact Analysis: A Bayesian Belief Network Approach Applied to Inorganic Arsenic.

    Science.gov (United States)

    Zabinski, Joseph W; Garcia-Vargas, Gonzalo; Rubio-Andrade, Marisela; Fry, Rebecca C; Gibson, Jacqueline MacDonald

    2016-05-10

    Dose-response functions used in regulatory risk assessment are based on studies of whole organisms and fail to incorporate genetic and metabolomic data. Bayesian belief networks (BBNs) could provide a powerful framework for incorporating such data, but no prior research has examined this possibility. To address this gap, we develop a BBN-based model predicting birthweight at gestational age from arsenic exposure via drinking water and maternal metabolic indicators using a cohort of 200 pregnant women from an arsenic-endemic region of Mexico. We compare BBN predictions to those of prevailing slope-factor and reference-dose approaches. The BBN outperforms prevailing approaches in balancing false-positive and false-negative rates. Whereas the slope-factor approach had 2% sensitivity and 99% specificity and the reference-dose approach had 100% sensitivity and 0% specificity, the BBN's sensitivity and specificity were 71% and 30%, respectively. BBNs offer a promising opportunity to advance health risk assessment by incorporating modern genetic and metabolomic data.

  16. Use of Bayesian Methods to Analyze and Visualize Content Uniformity Capability Versus United States Pharmacopeia and ASTM Standards.

    Science.gov (United States)

    Hofer, Jeffrey D; Rauk, Adam P

    2017-02-01

    The purpose of this work was to develop a straightforward and robust approach to analyze and summarize the ability of content uniformity data to meet different criteria. A robust Bayesian statistical analysis methodology is presented which provides a concise and easily interpretable visual summary of the content uniformity analysis results. The visualization displays individual batch analysis results and shows whether there is high confidence that different content uniformity criteria could be met a high percentage of the time in the future. The three tests assessed are as follows: (a) the United States Pharmacopeia Uniformity of Dosage Units test, (b) a specific ASTM E2810 Sampling Plan 1 criterion to potentially be used for routine release testing, and (c) another specific ASTM E2810 Sampling Plan 2 criterion to potentially be used for process validation. The approach shown here could readily be used to create similar result summaries for other potential criteria. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  17. New PDE-based methods for image enhancement using SOM and Bayesian inference in various discretization schemes

    International Nuclear Information System (INIS)

    Karras, D A; Mertzios, G B

    2009-01-01

    A novel approach is presented in this paper for improving anisotropic diffusion PDE models based on the Perona–Malik equation. A solution is proposed from an engineering perspective to adaptively estimate the parameters of the regularizing function in this equation. The goal of such a new adaptive diffusion scheme is to better preserve edges when the anisotropic diffusion PDE models are applied to image enhancement tasks. The proposed adaptive parameter estimation in the anisotropic diffusion PDE model involves self-organizing maps and Bayesian inference to define edge probabilities accurately. The proposed modifications attempt to capture not only simple edges but also difficult textural edges, and to incorporate their probability in the anisotropic diffusion model. In the context of the application of PDE models to image processing, such adaptive schemes are closely related to the discrete image representation problem and to the investigation of more suitable discretization algorithms using constraints derived from image processing theory. The proposed adaptive anisotropic diffusion model illustrates these concepts when it is numerically approximated by various discretization schemes on a database of magnetic resonance images (MRI), where it is shown to be efficient in image filtering and restoration applications.

  18. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. These data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method for including the data in the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central to the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
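
    As a rough illustration of the rejection-sampling reinterpretation that BUS builds on, the following sketch (a toy scalar problem with a Gaussian prior and Gaussian likelihood; all values are assumed for illustration, not taken from the paper) draws prior samples and accepts each with probability proportional to its likelihood, so that accepted samples follow the posterior. Recasting the acceptance condition as a rare "failure" event is what lets FORM, IS, or SuS take over from plain rejection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (assumed): standard normal prior on theta, one noisy
# observation y with known noise standard deviation sigma.
y, sigma = 1.5, 0.5

def likelihood(theta):
    return np.exp(-0.5 * ((y - theta) / sigma) ** 2)

L_max = 1.0  # bound on the likelihood, attained at theta = y

# Rejection-sampling view of Bayesian updating: draw theta from the prior
# and accept when u < L(theta)/L_max; accepted draws follow the posterior.
# BUS treats this acceptance event as a reliability "failure" domain so
# that FORM/IS/SuS machinery can be reused.
theta = rng.standard_normal(200_000)
u = rng.uniform(size=theta.size)
posterior = theta[u < likelihood(theta) / L_max]

print("acceptance rate:", posterior.size / theta.size)
print("posterior mean ~", posterior.mean())  # analytic value: y/(1+sigma^2) = 1.2
```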

  19. The Bayesian Approach to Association

    Science.gov (United States)

    Arora, N. S.

    2017-12-01

    The Bayesian approach to association focuses mainly on quantifying the physics of the domain. In the case of seismic association, for instance, let X be the set of all significant events (above some threshold) and their attributes, such as location, time, and magnitude; Y1 be the set of detections that are caused by significant events and their attributes, such as seismic phase, arrival time, amplitude, etc.; Y2 be the set of detections that are not caused by significant events; and finally Y be the set of observed detections. We now define the joint distribution P(X, Y1, Y2, Y) = P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2), where the last term simply states that Y1 and Y2 are a partitioning of Y. Given the above joint distribution, the inference problem is simply to find the X, Y1, and Y2 that maximize the posterior probability P(X, Y1, Y2 | Y), which reduces to maximizing P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2). In this expression P(X) captures our prior belief about event locations. P(Y1 | X) captures notions of travel time and residual error distributions, as well as detection and mis-detection probabilities, while P(Y2) captures the false detection rate of our seismic network. The elegance of this approach is that all of the assumptions are stated clearly in the models for P(X), P(Y1 | X) and P(Y2); the implementation of the inference is merely a by-product of this model. In contrast, some other methods, such as GA, hide a number of assumptions in the implementation details of the inference, such as the so-called "driver cells." Another important aspect of this approach is that all seismic knowledge, including knowledge from other domains such as infrasound and hydroacoustics, can be included in the same model, so we don't need to separately account for misdetections or merge seismic and infrasound events as a separate step. Finally, it should be noted that the objective of automatic association is to simplify the job of humans who are publishing seismic bulletins based on this
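
    To make the factorization concrete, here is a minimal toy sketch in one dimension (the uniform event-time prior, Gaussian arrival-time model, fixed travel time, and Poisson false-alarm model are all illustrative assumptions, not taken from the abstract) that scores candidate partitions of detections by log P(X) + log P(Y1 | X) + log P(Y2); maximizing this score over partitions and event attributes would give the MAP association.

```python
import numpy as np
from scipy.stats import norm, poisson

# Score one candidate explanation: an event at `event_time` causes the
# detections in y1; the detections in y2 are false alarms.
def log_joint(event_time, y1, y2, T=100.0, false_rate=0.2, travel=10.0, sd=1.0):
    log_p_x = -np.log(T)                                       # uniform event-time prior on [0, T]
    log_p_y1 = norm.logpdf(y1, event_time + travel, sd).sum()  # arrival-time residuals
    log_p_y2 = (poisson.logpmf(len(y2), false_rate * T)        # number of false detections...
                - len(y2) * np.log(T))                         # ...spread uniformly over [0, T]
    return log_p_x + log_p_y1 + log_p_y2

detections = np.array([12.1, 11.8, 47.3])
# Partition A: the first two detections belong to an event at t = 2.
print(log_joint(2.0, detections[:2], detections[2:]))
# Partition B: only the third detection belongs to an event at t = 37.
print(log_joint(37.0, detections[2:], detections[:2]))
```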

  20. Estimation of daily global solar radiation in Vietnamese Mekong Delta area: A combinational application of statistical downscaling method and Bayesian inference

    Science.gov (United States)

    Iizumi, T.; Nishimori, M.; Yokozawa, M.; Kotera, A.; Khang, N. D.

    2008-12-01

    Long-term daily global solar radiation (GSR) data of consistent quality for the 20th century are needed as a baseline to assess the climate change impact on paddy rice production in the Vietnamese Mekong Delta area (MKD: 104.5-107.5°E/8.2-11.2°N). However, though sunshine duration data are available, the accessibility of GSR data is quite poor in the MKD. This study estimated the daily GSR in the MKD for 30 years (1978-2007) by applying a statistical downscaling method (SDM). The estimates of GSR were obtained from four different sources: (1) combined equations with corrected reanalysis data of daily maximum/minimum temperatures, relative humidity, sea level pressure, and precipitable water; (2) a correction equation with reanalysis data of downward shortwave radiation; (3) an empirical equation with the observed sunshine duration; and (4) short-term observations at one site. Three reanalysis data sets, i.e., NCEP-R1, ERA-40, and JRA-25, were used. The observed meteorological data, which include many missing values, were obtained from 11 stations of the Vietnamese Meteorological Agency for 28 years and five stations of the Global Summary of the Day for 30 years. The observed GSR data for one year were obtained from our own station. Given the use of data with many missing values, Bayesian inference was employed in this study, as it has the powerful capability to optimize multiple parameters in a non-linear and hierarchical model. The Bayesian inference provided the posterior distributions of 306 parameter values relating to the combined equations, the empirical equation, and the correction equation. The preliminary result shows that the amplitude of the daily fluctuation of modeled GSR was underestimated by the empirical equation and the correction equation. The combination of SDM and Bayesian inference has the potential to estimate long-term daily GSR of consistent quality even in areas where observed data are quite limited.

  1. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distribution is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distribution than the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently when combined with the steepest descent method, and thus it is a powerful tool for searching for a better maximizer of computationally extensive probability distributions.
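
    A minimal sketch of the general technique (not the authors' implementation; the toy target density, RBF kernel, and expected-improvement acquisition are illustrative choices) might look as follows: fit a Gaussian process to the points evaluated so far, pick the point where the acquisition function is extreme, evaluate there, and repeat.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

# Toy target density to maximize (stand-in for an expensive posterior).
f = lambda x: np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * ((x + 1.0) / 0.3) ** 2)

X = rng.uniform(-3, 4, size=4).reshape(-1, 1)   # small initial design
y = f(X).ravel()
grid = np.linspace(-3, 4, 400).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]                        # most promising next point
    X = np.vstack([X, [x_next]])
    y = np.append(y, f(x_next))

print("best maximizer found:", X[np.argmax(y)].item())
```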

  2. Cation solvation with quantum chemical effects modeled by a size-consistent multi-partitioning quantum mechanics/molecular mechanics method.

    Science.gov (United States)

    Watanabe, Hiroshi C; Kubillus, Maximilian; Kubař, Tomáš; Stach, Robert; Mizaikoff, Boris; Ishikita, Hiroshi

    2017-07-21

    In the condensed phase, quantum chemical properties such as many-body effects and intermolecular charge fluctuations are critical determinants of the solvation structure and dynamics. Thus, a quantum mechanical (QM) molecular description is required for both solute and solvent to incorporate these properties. However, it is challenging to conduct molecular dynamics (MD) simulations of condensed systems of sufficient scale when adopting QM potentials. To overcome this problem, we recently developed the size-consistent multi-partitioning (SCMP) quantum mechanics/molecular mechanics (QM/MM) method and realized stable and accurate MD simulations, applying the QM potential to a benchmark system. In the present study, as the first application of the SCMP method, we have investigated the structures and dynamics of Na+, K+, and Ca2+ solutions based on nanosecond-scale sampling, a sampling 100 times longer than that of conventional QM-based samplings. We have also evaluated two dynamic properties, the diffusion coefficient and difference spectra, with high statistical certainty; the calculation of these properties has not previously been possible within the conventional QM/MM framework. Based on our analysis, we have quantitatively evaluated the quantum chemical solvation effects, which show distinct differences between the cations.

  3. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing

  4. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models.

    Science.gov (United States)

    Chen, Qingxia; Ibrahim, Joseph G

    2014-07-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.

  5. Performance of the 'material Failure Forecast Method' in real-time situations: A Bayesian approach applied on effusive and explosive eruptions

    Science.gov (United States)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.; Arámbula-Mendoza, R.; Budi-Santoso, A.

    2016-11-01

    Most attempts at deterministic eruption forecasting are based on the material Failure Forecast Method (FFM). This method assumes that a precursory observable, such as the rate of seismic activity, can be described by a simple power law which presents a singularity at a time close to the eruption onset. Until now, this method has been applied only in a small number of cases, generally for forecasts in hindsight. In this paper, a rigorous Bayesian approach to the FFM designed for real-time applications is applied. Using an automatic recognition system, seismo-volcanic events are detected and classified according to their physical mechanism, and time series of probability distributions of the rates of events are calculated. At each time of observation, a Bayesian inversion provides estimates of the exponent of the power law and of the time of eruption, together with their probability density functions. Two criteria are defined in order to evaluate the quality and reliability of the forecasts. Our automated procedure has allowed the analysis of long, continuous seismic time series: 13 years from Volcán de Colima, Mexico, 10 years from Piton de la Fournaise, Reunion Island, France, and several months from Merapi volcano, Java, Indonesia. The new forecasting approach has been applied to 64 pre-eruptive sequences which present various types of dominant seismic activity (volcano-tectonic or long-period events) and patterns of seismicity with different levels of complexity. This has allowed us to test the FFM assumptions, to determine under which conditions the method can be applied, and to quantify the success rate of the forecasts. 62% of the precursory sequences analysed are suitable for the application of FFM, and half of the total number of eruptions are successfully forecast in hindsight. In real time, the method allows for the successful forecast of 36% of all the eruptions considered. Nevertheless, real-time forecasts are successful for 83% of the cases that fulfil the
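
    For intuition, the classical FFM forecast in its simplest deterministic form (assuming the power-law exponent equals 2, the case in which the inverse event rate decays linearly and reaches zero at the onset time; all data values below are hypothetical) reduces to a line fit, in contrast to the full Bayesian inversion described above:

```python
import numpy as np

# Hypothetical precursory sequence: observation times and event rates.
t = np.array([0., 1., 2., 3., 4., 5.])             # days
rate = np.array([5., 6.2, 8.3, 12.1, 21.0, 55.0])  # events/day

# With exponent 2, 1/rate decreases linearly; its zero crossing is the
# forecast failure (eruption onset) time.
slope, intercept = np.polyfit(t, 1.0 / rate, 1)
t_failure = -intercept / slope
print(f"forecast eruption time ~ day {t_failure:.1f}")
```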

  6. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.
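
    As a minimal illustration of the Dirichlet process machinery behind such mixture models (the truncation level, concentration parameter, base measure, and kernel width are all arbitrary choices here), a random distribution can be drawn by stick-breaking:

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw one random distribution G ~ DP(alpha, G0) by truncated stick-breaking,
# with a standard normal base measure G0.
alpha, K = 2.0, 50                      # concentration; truncation level (assumed)
v = rng.beta(1.0, alpha, size=K)        # stick-breaking proportions
w = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])  # mixture weights
atoms = rng.standard_normal(K)          # atom locations drawn from G0

# In a DP mixture, each data point picks an atom with probability w and is
# drawn from a kernel centred on it, avoiding a fixed finite number of classes.
x = rng.normal(loc=rng.choice(atoms, size=500, p=w / w.sum()), scale=0.3)
print("largest weights:", np.round(np.sort(w)[::-1][:5], 3))
```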

  7. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Energy Technology Data Exchange (ETDEWEB)

    Rohée, E. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Coulon, R., E-mail: romain.coulon@cea.fr [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Carrel, F. [CEA, LIST, Laboratoire Capteurs et Architectures Electroniques, F-91191 Gif-sur-Yvette (France); Dautremer, T.; Barat, E.; Montagu, T. [CEA, LIST, Laboratoire de Modélisation et Simulation des Systèmes, F-91191 Gif-sur-Yvette (France); Normand, S. [CEA, DAM, Le Ponant, DPN/STXN, F-75015 Paris (France); Jammes, C. [CEA, DEN, Cadarache, DER/SPEx/LDCI, F-13108 Saint-Paul-lez-Durance (France)

    2016-11-11

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study deals with the comparison between a conventional analysis based on the “iterative peak fitting deconvolution” method and a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and with measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine tuning.

  8. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    International Nuclear Information System (INIS)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-01-01

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain in the analysis when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study deals with the comparison between a conventional analysis based on the “iterative peak fitting deconvolution” method and a “nonparametric Bayesian deconvolution” approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and with measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine tuning.

  9. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 1: Method

    Science.gov (United States)

    Norris, Peter M.; Da Silva, Arlindo M.

    2016-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.

  10. A Study on the Quantitative Assessment Method of Software Requirement Documents Using Software Engineering Measures and Bayesian Belief Networks

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Kang, Hyun Gook; Park, Ki Hong; Kwon, Kee Choon; Chang, Seung Cheol

    2005-01-01

    One of the major challenges in using digital systems in an NPP is the reliability estimation of the safety critical software embedded in the digital safety systems. Precise quantitative assessment of the reliability of safety critical software is nearly impossible, since many of the aspects to be considered are of a qualitative nature and not directly measurable, but they have to be estimated for practical use. Therefore, experts' judgment plays an important role in estimating the reliability of the software embedded in safety-critical systems in practice, because experts can deal with all the diverse evidence relevant to the reliability and can perform an inference based on that evidence. In general, however, the experts' way of combining the diverse evidence and performing an inference is informal and qualitative, which is hard to discuss and will eventually lead to a debate about the conclusion. We have been carrying out research on the quantitative assessment of the reliability of safety critical software using Bayesian Belief Networks (BBN). BBN has been proven to be a useful modeling formalism because a user can represent a complex set of events and relationships in a fashion that can easily be interpreted by others. In previous work we assessed a software requirement specification of a reactor protection system using our BBN-based assessment model. The BBN model mainly employed an expert's subjective probabilities as inputs. In the process of assessing the software requirement documents we found that the BBN model was, to a large extent, dependent on experts' subjective judgments. Therefore, to overcome this weakness of our methodology, we incorporated conventional software engineering measures into the BBN model, as shown in this paper. The quantitative relationship between conventional software measures and the reliability of software was not well identified in the past. Recently, however, there have appeared a few studies on a ranking of

  11. A Bayesian Method to Apply the Results of Multiple-Event Seismic Location to a Subsequent Event

    Science.gov (United States)

    Johannesson, G.; Myers, S. C.

    2014-12-01

    BayesLoc is a Bayesian multiple-event seismic locator that uses a Markov chain Monte Carlo (MCMC) algorithm to sample possible seismic hypocenters, travel-time corrections, and the precision of observed arrival data (absolute picks and differential times based on cross-correlated waveforms). By simultaneously locating multiple seismic events, regional biases in the assumed travel-time model (e.g., ak135) can be estimated and corrected for, and data from different seismic stations and phases can be weighted to reflect their accuracy/precision for an event cluster. As such, multiple-event locators generally yield more accurate locations than single-event locators, which lack the data to resolve the underlying travel-time model and adaptively "weight" the arrival data differently for each station and phase. On the other hand, single-event locators are computationally more attractive, making them more suitable for rapid (real-time) location of seismic activity. We present a novel approach to approximate the location accuracy of the BayesLoc multiple-event analysis at a computational cost that is comparable to BayesLoc single-event analysis. The proposed approach consists of two steps: a precomputed multiple-event training analysis and a subsequent real-time, single-event location for new events. The precomputed training analysis consists of carrying out a multiple-event BayesLoc run on a given target event cluster, yielding a posterior sample of travel-time corrections and weights. Given a new event in the vicinity of the training cluster, a BayesLoc single-event run is carried out which samples the travel-time corrections and weights from the multiple-event training run. Hence, it has all the benefits of the multiple-event run at the cost of a single-event run. We present the theoretical underpinnings of the new approach and we compare event location results for the full multiple-event, single-event, and new approaches. This work was performed under the auspices of

  12. An overview on Approximate Bayesian computation*

    Directory of Open Access Journals (Sweden)

    Baragatti Meïli

    2014-01-01

    Full Text Available Approximate Bayesian computation techniques, also called likelihood-free methods, are one of the most satisfactory approaches to intractable likelihood problems. This overview presents recent results since the approach's introduction about ten years ago in population genetics.
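
    A minimal rejection-ABC sketch (a toy normal-mean problem; the summary statistic, tolerance, and prior below are illustrative assumptions) shows the core likelihood-free idea: the likelihood is never evaluated, and candidate parameters are kept when data simulated from them lies close to the observed summary.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" data and its summary statistic.
obs = rng.normal(1.0, 1.0, size=50)
s_obs = obs.mean()
eps = 0.05  # tolerance on the summary-statistic distance

# Draw candidates from the prior, simulate data under each candidate,
# and accept those whose simulated summary falls within the tolerance.
theta = rng.uniform(-5, 5, size=50_000)
sims = rng.normal(theta[:, None], 1.0, size=(theta.size, 50))
accepted = theta[np.abs(sims.mean(axis=1) - s_obs) < eps]

print("ABC posterior mean ~", accepted.mean())
```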

  13. An optimized method to identify RR Lyrae stars in the SDSS×Pan-STARRS1 overlapping area using a bayesian generative technique

    International Nuclear Information System (INIS)

    Abbas, Mohamad; Grebel, Eva K.; Martin, N. F.; Kaiser, N.; Burgett, W. S.; Huber, M. E.; Waters, C.

    2014-01-01

    We present a method for selecting RR Lyrae (RRL) stars (or other types of variable stars) in the absence of a large number of multi-epoch data and light curve analyses. Our method uses color and variability selection cuts that are defined by applying a Gaussian Mixture Bayesian Generative Method (GMM) to 636 pre-identified RRL stars, instead of the commonly used rectangular cuts. Specifically, our method selects 8115 RRL candidates (heliocentric distances < 70 kpc) using GMM color cuts from the Sloan Digital Sky Survey (SDSS) and GMM variability cuts from the Panoramic Survey Telescope and Rapid Response System 1 3π survey (PS1). Comparing our method with the Stripe 82 catalog of RRL stars shows that the efficiency and completeness levels of our method are ∼77% and ∼52%, respectively. Most contaminants are either non-variable main-sequence stars or stars in eclipsing systems. The method described here efficiently recovers known stellar halo substructures. It is expected that the current completeness and efficiency levels will further improve with the additional PS1 epochs (∼3 epochs per filter) that will be observed before the conclusion of the survey. A comparison of the efficiency and completeness levels obtained with the GMM method against those obtained with the commonly used rectangular cuts showed a significant increase in efficiency, from ∼13% to ∼77%, with an insignificant change in completeness. Hence, we favor the GMM technique for future studies. Although we developed it over the SDSS×PS1 footprint, the technique presented here would work well on any multi-band, multi-epoch survey for which the number of epochs is limited.
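
    A minimal sketch of the general GMM-cut idea (not the authors' pipeline; the two-dimensional colour features, component count, and thresholding rule are illustrative assumptions) using scikit-learn: fit a mixture to the colours of known RRL stars, then keep candidates whose colours have high enough density under that model, rather than applying rectangular cuts.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Synthetic stand-ins: colours (e.g. u-g, g-r) of known RRL stars and of
# survey candidates.
known_rrl = rng.normal([0.2, 0.1], [0.05, 0.04], size=(636, 2))
candidates = rng.normal([0.25, 0.05], [0.2, 0.2], size=(10_000, 2))

# Fit the generative colour model and set a density threshold that keeps
# 95% of the training stars.
gmm = GaussianMixture(n_components=3, covariance_type="full").fit(known_rrl)
threshold = np.quantile(gmm.score_samples(known_rrl), 0.05)

selected = candidates[gmm.score_samples(candidates) >= threshold]
print(f"{len(selected)} candidates pass the GMM colour cut")
```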

  14. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for a dynamic system by a Bayesian approach associated with Markov Chain Monte Carlo (MCMC) methods. MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation, in order to accelerate the convergence.
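
    A minimal random-walk Metropolis-Hastings sketch (the unnormalized posterior is a toy stand-in; the proposal step size is the calibration knob the abstract refers to):

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post(theta):
    # Toy unnormalized log-posterior (assumed for illustration).
    return -0.5 * theta ** 2 + np.log1p(np.cos(theta) ** 2)

def metropolis_hastings(n=50_000, step=1.0):
    theta, samples = 0.0, np.empty(n)
    lp = log_post(theta)
    for i in range(n):
        prop = theta + step * rng.standard_normal()   # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject step
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

draws = metropolis_hastings()
print("posterior mean ~", draws[5_000:].mean())       # discard burn-in
```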

  15. Convex Regression with Interpretable Sharp Partitions.

    Science.gov (United States)

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-06-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set.

  16. An introduction to using Bayesian linear regression with clinical data.

    Science.gov (United States)

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
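
    The article's own examples use R; as a Python illustration of the simplest case (known noise variance and a conjugate Gaussian prior, with synthetic stand-ins for the ERN predictor and the anxiety outcome), the posterior over regression coefficients is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data: intercept plus one ERN-like predictor (assumed values).
n, sigma, tau = 100, 1.0, 10.0          # sample size; noise sd; prior sd
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.5, -0.8])
y = X @ beta_true + sigma * rng.standard_normal(n)

# Conjugate update: N(0, tau^2 I) prior on beta, known noise variance.
prec = X.T @ X / sigma**2 + np.eye(2) / tau**2   # posterior precision
cov = np.linalg.inv(prec)                        # posterior covariance
mean = cov @ X.T @ y / sigma**2                  # posterior mean

ci = mean[1] + 1.96 * np.sqrt(cov[1, 1]) * np.array([-1, 1])
print("slope posterior mean:", mean[1], "95% interval:", ci)
```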

  17. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  18. Bayesian analysis for the social sciences

    CERN Document Server

    Jackman, Simon

    2009-01-01

    Bayesian methods are increasingly being used in the social sciences, as the problems encountered lend themselves so naturally to the subjective qualities of Bayesian methodology. This book provides an accessible introduction to Bayesian methods, tailored specifically for social science students. It contains lots of real examples from political science, psychology, sociology, and economics, exercises in all chapters, and detailed descriptions of all the key concepts, without assuming any background in statistics beyond a first course. It features examples of how to implement the methods using WinBUGS - the most-widely used Bayesian analysis software in the world - and R - an open-source statistical software. The book is supported by a Website featuring WinBUGS and R code, and data sets.

  19. Polytomies and Bayesian phylogenetic inference.

    Science.gov (United States)

    Lewis, Paul O; Holder, Mark T; Holsinger, Kent E

    2005-04-01

    Bayesian phylogenetic analyses are now very popular in systematics and molecular evolution because they allow the use of much more realistic models than currently possible with maximum likelihood methods. There are, however, a growing number of examples in which large Bayesian posterior clade probabilities are associated with very short branch lengths and low values for non-Bayesian measures of support such as nonparametric bootstrapping. For the four-taxon case when the true tree is the star phylogeny, Bayesian analyses become increasingly unpredictable in their preference for one of the three possible resolved tree topologies as data set size increases. This leads to the prediction that hard (or near-hard) polytomies in nature will cause unpredictable behavior in Bayesian analyses, with arbitrary resolutions of the polytomy receiving very high posterior probabilities in some cases. We present a simple solution to this problem involving a reversible-jump Markov chain Monte Carlo (MCMC) algorithm that allows exploration of all of tree space, including unresolved tree topologies with one or more polytomies. The reversible-jump MCMC approach allows prior distributions to place some weight on less-resolved tree topologies, which eliminates misleadingly high posteriors associated with arbitrary resolutions of hard polytomies. Fortunately, assigning some prior probability to polytomous tree topologies does not appear to come with a significant cost in terms of the ability to assess the level of support for edges that do exist in the true tree. Methods are discussed for applying arbitrary prior distributions to tree topologies of varying resolution, and an empirical example showing evidence of polytomies is analyzed and discussed.

  20. Bayesian supervised dimensionality reduction.

    Science.gov (United States)

    Gönen, Mehmet

    2013-12-01

    Dimensionality reduction is commonly used as a preprocessing step before training a supervised learner. However, coupled training of dimensionality reduction and supervised learning steps may improve the prediction performance. In this paper, we introduce a simple and novel Bayesian supervised dimensionality reduction method that combines linear dimensionality reduction and linear supervised learning in a principled way. We present both Gibbs sampling and variational approximation approaches to learn the proposed probabilistic model for multiclass classification. We also extend our formulation toward model selection using automatic relevance determination in order to find the intrinsic dimensionality. Classification experiments on three benchmark data sets show that the new model significantly outperforms seven baseline linear dimensionality reduction algorithms on very low dimensions in terms of generalization performance on test data. The proposed model also obtains the best results on an image recognition task in terms of classification and retrieval performances.

  1. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  2. New Bayesian inference method using two steps of Markov chain Monte Carlo and its application to shock tube experiment data of Furan oxidation

    KAUST Repository

    Kim, Daesang

    2016-01-06

    A new Bayesian inference method has been developed and applied to Furan shock tube experimental data for efficient statistical inference of the Arrhenius parameters of two OH radical consumption reactions. The collected experimental data, which consist of time series signals of OH radical concentrations from 14 shock tube experiments, may require several days of MCMC computation even with the support of a fast surrogate of the combustion simulation model; the new method reduces this to several hours by splitting the process into two MCMC steps: a first inference of the rate constants and a second inference of the Arrhenius parameters. Each step has a low-dimensional parameter space, and the second step does not require executions of the combustion simulation. Furthermore, the new approach has more flexibility in choosing the ranges of the inference parameters, and the higher speed and flexibility enable more accurate inferences and analyses of the propagation of errors in the measured temperatures and in the alignment of the experimental time into the inference results.
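
    To see why the second step needs no combustion simulations, note that once rate constants k(T) have been inferred, the Arrhenius law ln k = ln A - Ea/(RT) is linear in 1/T. A deterministic least-squares stand-in for that second inference step (all numbers hypothetical; the actual method samples the Arrhenius parameters by MCMC) is:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical rate constants inferred in step one, at several temperatures.
T = np.array([1200., 1300., 1400., 1500.])    # K
k = np.array([2.1e9, 8.0e9, 2.5e10, 6.6e10])  # inferred rate constants

# ln k = ln A - (Ea/R) * (1/T): a straight line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea, A = -slope * R, np.exp(intercept)
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol, A ~ {A:.2e}")
```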

  3. The Benefits of Adaptive Partitioning for Parallel AMR Applications

    Energy Technology Data Exchange (ETDEWEB)

    Steensland, Johan [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Advanced Software Research and Development

    2008-07-01

    Parallel adaptive mesh refinement methods potentially lead to realistic modeling of complex three-dimensional physical phenomena. However, the dynamics inherent in these methods present significant challenges in data partitioning and load balancing. Significant human resources, including time, effort, experience, and knowledge, are required for determining the optimal partitioning technique for each new simulation. In reality, scientists resort to using the on-board partitioner of the computational framework, or to using the partitioning industry standard, ParMetis. Adaptive partitioning refers to repeatedly selecting, configuring and invoking the optimal partitioning technique at run-time, based on the current state of the computer and application. In theory, adaptive partitioning automatically delivers superior performance and eliminates the need for repeatedly spending valuable human resources for determining the optimal static partitioning technique. In practice, however, enabling frameworks are non-existent due to the inherent significant inter-disciplinary research challenges. This paper presents a study of a simple implementation of adaptive partitioning and discusses implied potential benefits from the perspective of common groups of users within computational science. The study is based on a large set of data derived from experiments including six real-life, multi-time-step adaptive applications from various scientific domains, five complementing and fundamentally different partitioning techniques, a large set of parameters corresponding to a wide spectrum of computing environments, and a flexible cost function that considers the relative impact of multiple partitioning metrics and diverse partitioning objectives. The results show that even a simple implementation of adaptive partitioning can automatically generate results statistically equivalent to the best static partitioning. Thus, it is possible to effectively eliminate the problem of determining the

  4. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations

    International Nuclear Information System (INIS)

    Coolen, F.P.A.

    1997-01-01

    This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, which can also be considered a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach essentially nonparametric. New results for the model are presented, related to right-censored observations, where estimation based on this model is closely related to the product-limit estimator, an important statistical method for dealing with reliability or survival data that include right-censored observations. As with the product-limit estimator, the model considered in this paper aims at using no information other than that provided by the observed data, but it fits into the robust Bayesian context, which has the advantage that all inferences can be based on probabilities or expectations, or on bounds for probabilities or expectations. The model uses a finite partition of the time axis.
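
    For the uncensored multinomial core of such a model, Walley's imprecise Dirichlet model gives closed-form lower and upper posterior probabilities for each category of the time partition; a minimal sketch (counts hypothetical; the prior strength s is a modelling choice, commonly 1 or 2; right-censoring, the paper's main extension, is not handled here) is:

```python
import numpy as np

# Observed failure counts in each cell of the time partition (hypothetical).
counts = np.array([12, 3, 0, 5])
s, N = 2.0, counts.sum()   # prior strength and total sample size

# Bounds obtained by ranging over all Dirichlet priors of strength s:
lower = counts / (N + s)           # most pessimistic prior for each category
upper = (counts + s) / (N + s)     # most optimistic prior for each category

for c, lo, hi in zip(counts, lower, upper):
    print(f"count {c:2d}: P in [{lo:.3f}, {hi:.3f}]")
```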

  5. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples

  6. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    Science.gov (United States)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications, however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and on the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in the raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
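
    For reference, the quantile-mapping baseline that RPP-S is compared against can be sketched in a few lines (an empirical version with toy climatologies; this is the baseline, not RPP-S itself): each raw forecast value is replaced by the observed-climatology value at the same quantile.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy climatologies: historical observed rainfall and a biased, overspread
# model climatology (both assumed for illustration).
obs_clim = rng.gamma(2.0, 3.0, size=3000)
fcst_clim = 1.5 * rng.gamma(2.0, 3.0, size=3000) + 1.0

def quantile_map(raw):
    # Find each raw value's quantile in the forecast climatology, then
    # read off the observed-climatology value at that quantile.
    q = (np.searchsorted(np.sort(fcst_clim), raw) + 0.5) / fcst_clim.size
    return np.quantile(obs_clim, np.clip(q, 0.0, 1.0))

raw_forecast = np.array([4.0, 12.0, 30.0])
print(quantile_map(raw_forecast))
```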

  7. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  8. The Bayesian Score Statistic

    NARCIS (Netherlands)

    Kleibergen, F.R.; Kleijn, R.; Paap, R.

    2000-01-01

    We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike

  9. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  10. Comparison of salting-out and sugaring-out liquid-liquid extraction methods for the partition of 10-hydroxy-2-decenoic acid in royal jelly and their co-extracted protein content.

    Science.gov (United States)

    Tu, Xijuan; Sun, Fanyi; Wu, Siyuan; Liu, Weiyi; Gao, Zhaosheng; Huang, Shaokang; Chen, Wenbin

    2018-01-15

    Homogeneous liquid-liquid extraction (h-LLE) has been receiving considerable attention as a sample preparation method due to its simple and fast partitioning of compounds with a wide range of polarities. To better understand the differences between the two h-LLE approaches, salting-out assisted liquid-liquid extraction (SALLE) and sugaring-out assisted liquid-liquid extraction (SULLE) were compared for the partitioning of 10-hydroxy-2-decenoic acid (10-HDA) from royal jelly and for the co-extraction of proteins. The effects of the amount of phase-partitioning agent and of the concentration of acetonitrile (ACN) on the h-LLE are discussed. Results showed that the partition efficiency of 10-HDA depends on the phase ratio in both SALLE and SULLE. Although the partitioning triggered by NaCl and glucose is less efficient than that by MgSO4 in the 50% (v/v) ACN-water mixture, their extraction yields can be improved to be similar to that of MgSO4 SALLE by increasing the initial concentration of ACN in the ACN-water mixture. The content of co-extracted protein was correlated with the water concentration in the obtained upper phase. MgSO4 showed the largest protein co-extraction at low salt concentrations. Glucose exhibited large protein co-extraction under high phase-ratio conditions. Furthermore, NaCl with a high initial ACN concentration is recommended because it produced a high extraction yield for 10-HDA and the lowest amount of co-extracted protein. These observations should be valuable for the sample preparation of royal jelly. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Bayesian methods to restore and rebuild images: application to gammagraphy and photofission tomography; Methodes bayesiennes pour la restauration et la reconstruction d'images application a la gammagraphie et a la tomographie par photofissions

    Energy Technology Data Exchange (ETDEWEB)

    Stawinski, G

    1998-10-26

    Bayesian algorithms are developed to solve inverse problems in gamma imaging and photofission tomography. The first part of this work is devoted to the modeling of our measurement systems. Two models have been found for both applications: the first is a simple conventional model and the second is a cascaded point-process model. EM and MCMC Bayesian algorithms for image restoration and image reconstruction have been developed for these models and compared. The cascaded point-process model does not significantly improve the results obtained with the classical model. Two original approaches have therefore been proposed which improve on those results. The first approach uses an inhomogeneous Markov random field as a prior law, allowing the regularization parameter to vary spatially; however, the problem of estimating the hyper-parameters has not been solved. For the deconvolution of point sources, a second approach has been proposed which introduces a high-level prior model: the picture is modeled as a list of objects whose parameters and number are unknown. The results obtained with this method are more accurate than those obtained with the conventional Markov random field prior and require less computation. (author)

  12. Bayesian networks for management of industrial risk

    International Nuclear Information System (INIS)

    Munteanu, P.; Debache, G.; Duval, C.

    2008-01-01

    This article presents the outlines of Bayesian network modelling and argues for the usefulness of Bayesian networks in probabilistic studies of industrial risk and reliability. A practical case representative of this type of study is presented in support of the argument. The article concludes with some research directions aimed at improving the performance of methods relying on Bayesian networks and at widening their area of application in risk management. (authors)

  13. A Bayesian classifier for symbol recognition

    OpenAIRE

    Barrat , Sabine; Tabbone , Salvatore; Nourrissier , Patrick

    2007-01-01

    URL: http://www.buyans.com/POL/UploadedFile/134_9977.pdf; International audience; We present in this paper an original adaptation of Bayesian networks to the symbol recognition problem. More precisely, we present a descriptor combination method which significantly improves the recognition rate compared with the rates obtained by each descriptor alone. In this perspective, we use a simple Bayesian classifier, called naive Bayes. In fact, probabilistic graphical models, more spec...

  14. Re-examining mortality sources and population trends in a declining seabird: using Bayesian methods to incorporate existing information and new data.

    Directory of Open Access Journals (Sweden)

    Tim Reid

    Full Text Available The population of flesh-footed shearwaters (Puffinus carneipes) breeding on Lord Howe Island was shown to be declining from the 1970s to the early 2000s. This was attributed to destruction of breeding habitat and fisheries mortality in the Australian Eastern Tuna and Billfish Fishery. Recent evidence suggests these impacts have ceased, presumably leading to population recovery. We used Bayesian statistical methods to combine data from the literature with more recent, but incomplete, field data to estimate population parameters and trends. This approach easily accounts for sources of variation and uncertainty while formally incorporating data and variation from different sources into the estimate. There is a 70% probability that the flesh-footed shearwater population on Lord Howe continued to decline during 2003-2009, and a number of possible reasons for this are suggested. During the breeding season, road-based mortality of adults on Lord Howe Island is likely to result in reduced adult survival, and there is evidence that breeding success is negatively impacted by marine debris. Interactions with fisheries on flesh-footed shearwater winter grounds should be further investigated.

  15. Water Environmental Capacity Analysis of Taihu Lake and Parameter Estimation Based on the Integration of the Inverse Method and Bayesian Modeling

    Directory of Open Access Journals (Sweden)

    Ranran Li

    2015-09-01

    Full Text Available An integrated approach using the inverse method and Bayesian approach, combined with a lake eutrophication water quality model, was developed for parameter estimation and water environmental capacity (WEC) analysis. The model was used to support load reduction and effective water quality management in the Taihu Lake system in eastern China. Water quality was surveyed yearly from 1987 to 2010. Total nitrogen (TN) and total phosphorus (TP) were selected as water quality model variables. Decay rates of TN and TP were estimated using the proposed approach. WECs of TN and TP in 2011 were determined based on the estimated decay rates. Results showed that the historical loading was beyond the WEC; thus, reduction of nitrogen and phosphorus input is necessary to meet water quality goals. Then WEC and allowable discharge capacity (ADC) in 2015 and 2020 were predicted. The reduction ratios of ADC during these years were also provided. All of these enable decision makers to assess the influence of each loading and visualize potential load reductions under different water quality goals, and then to formulate a reasonable water quality management strategy.

  16. Age-period-cohort analysis of cervical cancer incidence in Hong Kong from 1972 to 2001 using maximum likelihood and Bayesian methods.

    Science.gov (United States)

    Leung, Gabriel M; Woo, Pauline P S; McGhee, Sarah M; Cheung, Annie N Y; Fan, Susan; Mang, Oscar; Thach, Thuan Q; Ngan, Hextan Y S

    2006-08-01

    To examine the secular effects of opportunistic screening for cervical cancer in a rich, developed community where most other such populations have long adopted organised screening. The analysis was based on 15 140 cases of invasive cervical cancer from 1972 to 2001. The effects of chronological age, time period, and birth cohort were decomposed using both maximum likelihood and Bayesian methods. The overall age-adjusted incidence decreased from 24.9 in 1972-74 to 9.5 per 100,000 in 1999-2001, in a log-linear fashion, yielding an average annual reduction of 4.0% (p < 0.001). Two departures from this trend were observed: (1) an upturn in the 1920s cohort curve representing an age-period interaction masquerading as a cohort change that denotes the first availability of Pap testing during the 1960s concentrated among women in their 40s; (2) a hook around the calendar years 1982-83 when cervical cytology became a standard screening test for pregnant women. Hong Kong's cervical cancer rates have declined since Pap tests first became available in the 1960s, most probably because of increasing population coverage over time and in successive generations in a haphazard fashion, punctuated by the systematic introduction of routine cytology as part of antenatal care in the 1980s.

  17. Water Environmental Capacity Analysis of Taihu Lake and Parameter Estimation Based on the Integration of the Inverse Method and Bayesian Modeling.

    Science.gov (United States)

    Li, Ranran; Zou, Zhihong

    2015-09-29

    An integrated approach using the inverse method and Bayesian approach, combined with a lake eutrophication water quality model, was developed for parameter estimation and water environmental capacity (WEC) analysis. The model was used to support load reduction and effective water quality management in the Taihu Lake system in eastern China. Water quality was surveyed yearly from 1987 to 2010. Total nitrogen (TN) and total phosphorus (TP) were selected as water quality model variables. Decay rates of TN and TP were estimated using the proposed approach. WECs of TN and TP in 2011 were determined based on the estimated decay rates. Results showed that the historical loading was beyond the WEC, thus, reduction of nitrogen and phosphorus input is necessary to meet water quality goals. Then WEC and allowable discharge capacity (ADC) in 2015 and 2020 were predicted. The reduction ratios of ADC during these years were also provided. All of these enable decision makers to assess the influence of each loading and visualize potential load reductions under different water quality goals, and then to formulate a reasonable water quality management strategy.

  18. Assessment of myocardial metabolic rate of glucose by means of Bayesian ICA and Markov Chain Monte Carlo methods in small animal PET imaging

    Science.gov (United States)

    Berradja, Khadidja; Boughanmi, Nabil

    2016-09-01

    In dynamic cardiac PET FDG studies the assessment of the myocardial metabolic rate of glucose (MMRG) requires knowledge of the blood input function (IF). The IF can be obtained by manual or automatic blood sampling cross-calibrated with PET, but these procedures are cumbersome, invasive and generate uncertainties. The IF is also contaminated by spillover of radioactivity from the adjacent myocardium, which can cause important errors in the estimated MMRG. In this study, we show that the IF can be extracted from the images in a rat heart study with 18F-fluorodeoxyglucose (18F-FDG) by means of Independent Component Analysis (ICA) based on Bayesian theory and a Markov Chain Monte Carlo (MCMC) sampling method (BICA). Images of the heart from rats were acquired with the Sherbrooke small animal PET scanner. A region of interest (ROI) was drawn around the rat image and decomposed into blood and tissue using BICA. The statistical study showed a significant difference (p < 0.05) between the MMRG estimated with the BICA-extracted input function and that estimated with an input function corrupted with spillover.

  19. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    Science.gov (United States)

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
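
    The recursive two-cluster partitioning described above can be sketched with a two-component Gaussian mixture. Note that the sketch below fits the mixture by EM with scikit-learn rather than the Hamiltonian Monte Carlo sampling used in the study, and the "geochemical" data are synthetic.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def split(samples, depth, max_depth=2):
            """Recursively partition samples into two clusters with a
            two-component Gaussian mixture, mimicking the manual hierarchy."""
            if depth == max_depth or len(samples) < 10:
                return {"samples": samples}
            gm = GaussianMixture(n_components=2, random_state=0).fit(samples)
            labels = gm.predict(samples)
            return {"left": split(samples[labels == 0], depth + 1, max_depth),
                    "right": split(samples[labels == 1], depth + 1, max_depth)}

        rng = np.random.default_rng(1)
        # synthetic multivariate data: two overlapping log-normal populations
        data = np.vstack([rng.lognormal(0.0, 0.3, (200, 3)),
                          rng.lognormal(1.0, 0.3, (200, 3))])
        tree = split(data, depth=0)
        print(tree.keys())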

  20. PAQ: Partition Analysis of Quasispecies.

    Science.gov (United States)

    Baccam, P; Thompson, R J; Fedrigo, O; Carpenter, S; Cornette, J L

    2001-01-01

    The complexities of genetic data may not be accurately described by any single analytical tool. Phylogenetic analysis is often used to study the genetic relationship among different sequences. Evolutionary models and assumptions are invoked to reconstruct trees that describe the phylogenetic relationship among sequences. Genetic databases are rapidly accumulating large amounts of sequences. Newly acquired sequences, which have not yet been characterized, may require preliminary genetic exploration in order to build models describing the evolutionary relationship among sequences. There are clustering techniques that rely less on models of evolution, and thus may provide nice exploratory tools for identifying genetic similarities. Some of the more commonly used clustering methods perform better when data can be grouped into mutually exclusive groups. Genetic data from viral quasispecies, which consist of closely related variants that differ by small changes, however, may best be partitioned by overlapping groups. We have developed an intuitive exploratory program, Partition Analysis of Quasispecies (PAQ), which utilizes a non-hierarchical technique to partition sequences that are genetically similar. PAQ was used to analyze a data set of human immunodeficiency virus type 1 (HIV-1) envelope sequences isolated from different regions of the brain and another data set consisting of the equine infectious anemia virus (EIAV) regulatory gene rev. Analysis of the HIV-1 data set by PAQ was consistent with phylogenetic analysis of the same data, and the EIAV rev variants were partitioned into two overlapping groups. PAQ provides an additional tool which can be used to glean information from genetic data and can be used in conjunction with other tools to study genetic similarities and genetic evolution of viral quasispecies.
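
    To illustrate the idea of non-hierarchical partitioning into possibly overlapping groups, the sketch below groups sequences that lie within a fixed Hamming-distance radius of each candidate center; the radius criterion is a stand-in, not PAQ's exact partition rule.

        def hamming(a, b):
            """Number of mismatched positions between equal-length sequences."""
            return sum(x != y for x, y in zip(a, b))

        def overlapping_groups(seqs, radius):
            """Each sequence seeds a group containing every sequence within
            `radius` mismatches; a sequence may fall into several groups."""
            groups = []
            for center in seqs:
                group = frozenset(s for s in seqs if hamming(center, s) <= radius)
                if group not in groups:
                    groups.append(group)
            return groups

        variants = ["AAGT", "AAGA", "AATA", "CCGT", "CCGA"]
        for g in overlapping_groups(variants, radius=1):
            print(sorted(g))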

  1. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we sought to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches: approximate Bayesian computation via an existing sequential Monte Carlo method (ABC-SMC), to compute credible intervals for the parameters, and profile likelihood estimation (PLE), to improve the calculation of confidence intervals for the same parameters; in both cases the parameters are derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) has the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the number of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. Copyright © 2015 Elsevier Inc. All rights reserved.
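
    A compressed sketch of the ABC-SMC idea on a toy exponential-rate model rather than the Clostridium metabolic network; the prior, summary statistic, perturbation kernel and tolerance schedule are all illustrative choices.

        import numpy as np

        rng = np.random.default_rng(2)

        def prior_sample():            # uniform prior on the rate parameter
            return rng.uniform(0.0, 5.0)

        def prior_pdf(theta):
            return 1.0 / 5.0 if 0.0 <= theta <= 5.0 else 0.0

        def simulate(theta):           # toy stand-in for the metabolic ODE model
            return rng.exponential(1.0 / theta, size=20).mean()

        observed = 0.5                 # summary statistic of the "data"

        def abc_smc(epsilons, n_particles=200, sigma=0.3):
            """ABC with a sequential Monte Carlo sampler: particles survive
            a shrinking sequence of tolerances, yielding an approximate
            posterior without evaluating the likelihood."""
            particles, weights = None, None
            for eps in epsilons:
                new_particles, new_weights = [], []
                while len(new_particles) < n_particles:
                    if particles is None:
                        theta = prior_sample()
                    else:  # resample from the previous population and perturb
                        theta = rng.choice(particles, p=weights) + rng.normal(0, sigma)
                    if prior_pdf(theta) == 0 or theta <= 0:
                        continue
                    if abs(simulate(theta) - observed) < eps:
                        if particles is None:
                            w = 1.0
                        else:  # importance weight w.r.t. the perturbation kernel
                            kern = np.exp(-(theta - np.asarray(particles))**2
                                          / (2 * sigma**2))
                            w = prior_pdf(theta) / np.sum(weights * kern)
                        new_particles.append(theta)
                        new_weights.append(w)
                particles = np.asarray(new_particles)
                weights = np.asarray(new_weights)
                weights /= weights.sum()
            return particles, weights

        post, w = abc_smc(epsilons=[0.5, 0.2, 0.1])
        print("posterior mean rate:", np.average(post, weights=w))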

  2. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    Science.gov (United States)

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
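
    The key step can be sketched numerically: for nonnegative vectors, max-convolution is the large-p limit of a p-norm, so an ordinary convolution of element-wise powers (FFT-capable, hence O(k log k)) yields a fast estimate. The single fixed p below is a simplification of the paper's more careful scheme.

        import numpy as np

        def max_convolve_exact(a, b):
            """Brute-force max-convolution: u[m] = max over i+j=m of a[i]*b[j]."""
            u = np.zeros(len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    u[i + j] = max(u[i + j], ai * bj)
            return u

        def max_convolve_fast(a, b, p=64):
            """Estimate the max-convolution via the p-norm identity
            max(x) ~ (sum x**p)**(1/p); the inner sum over i+j=m is a
            standard convolution, computable with the FFT in O(k log k)."""
            return np.convolve(a**p, b**p) ** (1.0 / p)

        a = np.random.default_rng(3).random(8)
        b = np.random.default_rng(4).random(8)
        print(np.max(np.abs(max_convolve_fast(a, b) - max_convolve_exact(a, b))))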

  3. Systematics and morphological evolution within the moss family Bryaceae: a comparison between parsimony and Bayesian methods for reconstruction of ancestral character states.

    Science.gov (United States)

    Pedersen, Niklas; Holyoak, David T; Newton, Angela E

    2007-06-01

    The Bryaceae are a large cosmopolitan moss family including genera of significant morphological and taxonomic complexity. Phylogenetic relationships within the Bryaceae were reconstructed based on DNA sequence data from all three genomic compartments. In addition, maximum parsimony and Bayesian inference were employed to reconstruct ancestral character states of 38 morphological plus four habitat characters and eight insertion/deletion events. The recovered phylogenetic patterns are generally in accord with previous phylogenies based on chloroplast DNA sequence data and three major clades are identified. The first clade comprises Bryum bornholmense, B. rubens, B. caespiticium, and Plagiobryum. This corroborates the hypothesis suggested by previous studies that several Bryum species are more closely related to Plagiobryum than to the core Bryum species. The second clade includes Acidodontium, Anomobryum, and Haplodontium, while the third clade contains the core Bryum species plus Imbribryum. Within the latter clade, B. subapiculatum and B. tenuisetum form the sister clade to Imbribryum. Reconstructions of ancestral character states under maximum parsimony and Bayesian inference suggest fourteen morphological synapomorphies for the ingroup and synapomorphies are detected for most clades within the ingroup. Maximum parsimony and Bayesian reconstructions of ancestral character states are mostly congruent although Bayesian inference shows that the posterior probability of ancestral character states may decrease dramatically when node support is taken into account. Bayesian inference also indicates that reconstructions may be ambiguous at internal nodes for highly polymorphic characters.

  4. Implementing the Bayesian paradigm in risk analysis

    International Nuclear Information System (INIS)

    Aven, T.; Kvaloey, J.T.

    2002-01-01

    The Bayesian paradigm comprises a unified and consistent framework for analyzing and expressing risk. Yet we see rather few examples of applications where the full Bayesian setting has been adopted with specifications of priors of unknown parameters. In this paper, we discuss some of the practical challenges of implementing Bayesian thinking and methods in risk analysis, emphasizing the introduction of probability models and parameters and associated uncertainty assessments. We conclude that there is a need for a pragmatic view in order to 'successfully' apply the Bayesian approach, such that we can assign some of the probabilities directly without adopting the somewhat sophisticated procedure of specifying prior distributions of parameters. A simple risk analysis example is presented to illustrate these ideas.

  5. Matrix string partition function

    CERN Document Server

    Kostov, Ivan K; Kostov, Ivan K.; Vanhove, Pierre

    1998-01-01

    We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition function of the completely reduced SYM theory, including the overall numerical factor. This is evidence that our quasiclassical calculation might be exact.

  6. Plane partition vesicles

    International Nuclear Information System (INIS)

    Rensburg, E J Janse van; Ma, J

    2006-01-01

    We examine partitions and their natural three-dimensional generalizations, plane partitions, as models of vesicles undergoing an inflation-deflation transition. The phase diagrams of these models include a critical point corresponding to an inflation-deflation transition, and exhibit multicritical scaling in the vicinity of a multicritical point located elsewhere on the critical curve. We determine the locations of the multicritical points by analysing the generating functions using analytic and numerical means. In addition, we determine the numerical values of the multicritical scaling exponents associated with the multicritical scaling regimes in these models.

  7. A non-linear and stochastic response surface method for Bayesian estimation of uncertainty in soil moisture simulation from a land surface model

    Directory of Open Access Journals (Sweden)

    F. Hossain

    2004-01-01

    Full Text Available This study presents a simple and efficient scheme for Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of using the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. An evaluation of the scheme's sampling efficiency is made through comparison with fully random MC sampling (the norm for GLUE) and the nearest-neighborhood sampling technique. The scheme was able to reduce the computational burden of random MC sampling for GLUE by 10-70%, and was about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation more efficient for any LSM, as it does not impose any additional structural or distributional assumptions.
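
    A minimal sketch of the surrogate idea, assuming a single standard-normal parameter: fit probabilists' Hermite polynomial coefficients to a handful of model runs by least squares, then evaluate the cheap polynomial instead of the model. The "land surface model" here is a stand-in function.

        import numpy as np
        from numpy.polynomial import hermite_e as He

        def expensive_model(xi):
            """Stand-in for the slow-running LSM; xi is a standard-normal
            parameterisation of an uncertain soil parameter."""
            return np.exp(0.4 * xi) + 0.1 * xi**2

        deg = 4
        xi_train = np.random.default_rng(5).standard_normal(30)   # limited runs
        y_train = expensive_model(xi_train)

        # least-squares fit of the probabilists' Hermite coefficients
        coef = np.linalg.lstsq(He.hermevander(xi_train, deg), y_train, rcond=None)[0]

        xi_test = np.linspace(-2, 2, 5)
        print(He.hermeval(xi_test, coef))    # fast-running proxy
        print(expensive_model(xi_test))      # slow model, for comparison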

  8. Modeling framework for representing long-term effectiveness of best management practices in addressing hydrology and water quality problems: Framework development and demonstration using a Bayesian method

    Science.gov (United States)

    Liu, Yaoze; Engel, Bernard A.; Flanagan, Dennis C.; Gitau, Margaret W.; McMillan, Sara K.; Chaubey, Indrajeet; Singh, Shweta

    2018-05-01

    Best management practices (BMPs) are popular approaches used to improve hydrology and water quality. Uncertainties in BMP effectiveness over time may result in overestimating long-term efficiency in watershed planning strategies. To represent varying long-term BMP effectiveness in hydrologic/water quality models, a high level and forward-looking modeling framework was developed. The components in the framework consist of establishment period efficiency, starting efficiency, efficiency for each storm event, efficiency between maintenance, and efficiency over the life cycle. Combined, they represent long-term efficiency for a specific type of practice and specific environmental concern (runoff/pollutant). An approach for possible implementation of the framework was discussed. The long-term impacts of grass buffer strips (agricultural BMP) and bioretention systems (urban BMP) in reducing total phosphorus were simulated to demonstrate the framework. Data gaps were captured in estimating the long-term performance of the BMPs. A Bayesian method was used to match the simulated distribution of long-term BMP efficiencies with the observed distribution with the assumption that the observed data represented long-term BMP efficiencies. The simulated distribution matched the observed distribution well with only small total predictive uncertainties. With additional data, the same method can be used to further improve the simulation results. The modeling framework and results of this study, which can be adopted in hydrologic/water quality models to better represent long-term BMP effectiveness, can help improve decision support systems for creating long-term stormwater management strategies for watershed management projects.

  9. A Bayesian Method to Quantify Azimuthal Anisotropy Model Uncertainties: Application to Global Azimuthal Anisotropy in the Upper Mantle and Transition Zone

    Science.gov (United States)

    Yuan, K.; Beghein, C.

    2018-01-01

    Seismic anisotropy is a powerful tool to constrain mantle deformation, but its existence in the deep upper mantle and topmost lower mantle is still uncertain. Recent results from higher mode Rayleigh waves have, however, revealed the presence of 1 % azimuthal anisotropy between 300 km and 800 km depth, and changes in azimuthal anisotropy across the mantle transition zone boundaries. This has important consequences for our understanding of mantle convection patterns and deformation of deep mantle material. Here, we propose a Bayesian method to model depth variations in azimuthal anisotropy and to obtain quantitative uncertainties on the fast seismic direction and anisotropy amplitude from phase velocity dispersion maps. We applied this new method to existing global fundamental and higher mode Rayleigh wave phase velocity maps to assess the likelihood of azimuthal anisotropy in the deep upper mantle and to determine whether previously detected changes in anisotropy at the transition zone boundaries are robustly constrained by those data. Our results confirm that deep upper mantle azimuthal anisotropy is favored and well-constrained by the higher mode data employed. The fast seismic directions are in agreement with our previously published model. The data favor a model characterized, on average, by changes in azimuthal anisotropy at the top and bottom of the transition zone. However, this change in fast axes is not a global feature as there are regions of the model where the azimuthal anisotropy direction is unlikely to change across depths in the deep upper mantle. We were, however, unable to detect any clear pattern or connection with surface tectonics. Future studies will be needed to further improve the lateral resolution of this type of model at transition zone depths.

  10. Bayesian Inference on Proportional Elections

    Science.gov (United States)

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
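
    The simulation idea can be sketched as follows. The seat rule below is a plain D'Hondt largest-averages allocation used as an illustrative proxy (the Brazilian rule differs in detail), and the Dirichlet vote-share model is an assumption.

        import numpy as np

        def dhondt(votes, seats):
            """Allocate seats by the D'Hondt largest-averages rule."""
            alloc = np.zeros(len(votes), dtype=int)
            for _ in range(seats):
                alloc[np.argmax(votes / (alloc + 1))] += 1
            return alloc

        rng = np.random.default_rng(6)
        poll_share = np.array([0.42, 0.31, 0.17, 0.10])  # point estimates, 4 parties
        n_sims, seats = 10_000, 8
        got_seat = np.zeros(4)
        for _ in range(n_sims):
            # sample plausible vote shares around the poll (concentration assumed)
            share = rng.dirichlet(poll_share * 300)
            got_seat += dhondt(share, seats) > 0
        print("P(at least one seat):", got_seat / n_sims)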

  11. Bayesian analyses of cognitive architecture.

    Science.gov (United States)

    Houpt, Joseph W; Heathcote, Andrew; Eidels, Ami

    2017-06-01

    The question of cognitive architecture (how cognitive processes are temporally organized) has arisen in many areas of psychology. This question has proved difficult to answer, with many proposed solutions turning out to be spurious. Systems factorial technology (Townsend & Nozawa, 1995) provided the first rigorous empirical and analytical method of identifying cognitive architecture, using the survivor interaction contrast (SIC) to determine when people are using multiple sources of information in parallel or in series. Although the SIC is based on rigorous nonparametric mathematical modeling of response time distributions, for many years inference about cognitive architecture has relied solely on visual assessment. Houpt and Townsend (2012) recently introduced null hypothesis significance tests, and here we develop both parametric and nonparametric (encompassing prior) Bayesian inference. We show that the Bayesian approaches can have considerable advantages. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
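
    The survivor interaction contrast underlying systems factorial technology is a double difference of survivor functions, SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)]; a sketch, assuming response times from the four factorial salience conditions are available:

        import numpy as np

        def survivor(rts, grid):
            """Empirical survivor function S(t) = P(T > t) on a common grid."""
            rts = np.asarray(rts)
            return np.array([(rts > t).mean() for t in grid])

        def sic(rt_ll, rt_lh, rt_hl, rt_hh, grid):
            """Survivor interaction contrast; its shape over t is used to
            diagnose serial vs. parallel processing architectures."""
            return (survivor(rt_ll, grid) - survivor(rt_lh, grid)) \
                 - (survivor(rt_hl, grid) - survivor(rt_hh, grid))

        rng = np.random.default_rng(7)
        grid = np.linspace(0, 3, 50)
        # toy parallel-OR model: respond when the faster channel finishes
        draw = lambda s1, s2: np.minimum(rng.exponential(s1, 500),
                                         rng.exponential(s2, 500))
        print(sic(draw(1.0, 1.0), draw(1.0, 0.5),
                  draw(0.5, 1.0), draw(0.5, 0.5), grid)[:5])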

  12. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

    Full Text Available Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.

  13. Integrating graph partitioning and matching for trajectory analysis in video surveillance.

    Science.gov (United States)

    Lin, Liang; Lu, Yongyi; Pan, Yan; Chen, Xiaowu

    2012-12-01

    In order to track moving objects over long ranges against occlusion, interruption, and background clutter, this paper proposes a unified approach for global trajectory analysis. Instead of traditional frame-by-frame tracking, our method recovers target trajectories based on a short sequence of video frames, e.g., 15 frames. We initially calculate a foreground map at each frame, obtained from a state-of-the-art background model. An attribute graph is then extracted from the foreground map, where the graph vertices are image primitives represented by composite features. With this graph representation, we pose trajectory analysis as a joint task of spatial graph partitioning and temporal graph matching. The task can be formulated as maximum a posteriori estimation under the Bayesian framework, in which we integrate spatio-temporal contexts and appearance models. The probabilistic inference is achieved by a data-driven Markov chain Monte Carlo algorithm. Given a period of observed frames, the algorithm simulates an ergodic and aperiodic Markov chain that visits a sequence of solution states in the joint space of spatial graph partitioning and temporal graph matching. In the experiments, our method is tested on several challenging videos from public visual surveillance datasets, and it outperforms state-of-the-art methods.

  14. Doing bayesian data analysis a tutorial with R and BUGS

    CERN Document Server

    Kruschke, John K

    2011-01-01

    There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis accessible to a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all...

  15. Gluing Nekrasov Partition Functions

    Science.gov (United States)

    Qiu, Jian; Tizzano, Luigi; Winding, Jacob; Zabzine, Maxim

    2015-07-01

    In this paper we summarise the localisation calculation of 5D super Yang-Mills on simply connected toric Sasaki-Einstein (SE) manifolds. We show how various aspects of the computation, including the equivariant index, the asymptotic behaviour and the factorisation property are governed by the combinatorial data of the toric geometry. We prove that the perturbative partition function on a simply connected SE manifold corresponding to an n-gon toric diagram factorises to n copies of the perturbative part (zero instanton sector) of the Nekrasov partition function. This leads us to conjecture a prescription for the computation of the complete partition function, by gluing n copies of the full Nekrasov partition functions. This work is a generalisation of some earlier computation carried out on Y^{p,q} manifolds, whose moment map cone has a quadrangle base, and our result is valid for manifolds whose moment map cones have pentagon base, hexagon base, etc. The algorithm we used for dealing with general cones may also be of independent interest.

  16. Goldbach Partitions and Sequences

    Indian Academy of Sciences (India)

    Goldbach Partitions and Sequences. Subhash Kak. General Article, Resonance – Journal of Science Education, Volume 19, Issue 11, November 2014, pp. 1028-1037. Permanent link: http://www.ias.ac.in/article/fulltext/reso/019/11/1028-1037

  17. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...

  18. Bayesian inference with information content model check for Langevin equations

    DEFF Research Database (Denmark)

    Krog, Jens F. C.; Lomholt, Michael Andersen

    2017-01-01

    The Bayesian data analysis framework has been proven to be a systematic and effective method of parameter inference and model selection for stochastic processes. In this work we introduce an information content model check which may serve as a goodness-of-fit test, like the chi-square procedure, to complement conventional Bayesian analysis. We demonstrate this extended Bayesian framework on a system of Langevin equations, where coordinate-dependent mobilities and measurement noise hinder the normal mean squared displacement approach.

  19. Variational Bayesian Filtering

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2008-01-01

    Vol. 56, No. 10 (2008), pp. 5020-5030. ISSN 1053-587X. R&D Projects: GA MŠk 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: Bayesian filtering; particle filtering; Variational Bayes. Subject RIV: BC - Control Systems Theory. Impact factor: 2.335, year: 2008. http://library.utia.cas.cz/separaty/2008/AS/smidl-variational bayesian filtering.pdf

  20. Bayesian Networks An Introduction

    CERN Document Server

    Koski, Timo

    2009-01-01

    Bayesian Networks: An Introduction provides a self-contained introduction to the theory and applications of Bayesian networks, a topic of interest and importance for statisticians, computer scientists and those involved in modelling complex data sets. The material has been extensively tested in classroom teaching and assumes a basic knowledge of probability, statistics and mathematics. All notions are carefully explained and feature exercises throughout. Features include:.: An introduction to Dirichlet Distribution, Exponential Families and their applications.; A detailed description of learni

  1. A Bayesian Mixed-Methods Analysis of Basic Psychological Needs Satisfaction through Outdoor Learning and Its Influence on Motivational Behavior in Science Class

    Science.gov (United States)

    Dettweiler, Ulrich; Lauterbach, Gabriele; Becker, Christoph; Simon, Perikles

    2017-01-01

    Research has shown that outdoor educational interventions can lead to students' increased self-regulated motivational behavior. In this study, we investigated the satisfaction of basic psychological needs (BPN), i.e., autonomy support, the learners' experience of competence, and relatedness, both within the peer group and with their teachers, through outdoor learning. From 2014 to 2016, n = 281 students attended “research weeks” at a Student Science Lab in the Alpine National Park Berchtesgaden (Germany). The program is a curriculum-based one-week residential course, centered on a 2-day research expedition. Both before and after the course, students completed a composite questionnaire addressing BPN-satisfaction and overall motivational behavior in relation to the Self-Determination Index (SDI). At the latter time-point, students also reported on their experiences during the intervention. Questionnaire data were analyzed using a set of Bayesian General Linear Models with random effects. Those quantitative measures have been complemented by and contextualized with a set of qualitative survey methods. The results showed that the basic psychological needs influence motivational behavior in both contexts equally, however on different scale levels. Basic needs satisfaction in the outdoor context is decisively higher than indoors. Moreover, the increment of competence-experience from the school context to the hands-on outdoor program appears to have the biggest impact on students' increased intrinsic motivation during the intervention. Increased autonomy support, student-teacher relations, and student-student relations have much less or no influence on the overall difference in motivational behavior. Gender does not influence the results. The contextualization partly supports those results and provides further explanation for the students' increased self-regulation in the outdoors. They add some explanatory thrust to the argument that outdoor teaching, be it

  2. A Bayesian Mixed-Methods Analysis of Basic Psychological Needs Satisfaction through Outdoor Learning and Its Influence on Motivational Behavior in Science Class.

    Science.gov (United States)

    Dettweiler, Ulrich; Lauterbach, Gabriele; Becker, Christoph; Simon, Perikles

    2017-01-01

    Research has shown that outdoor educational interventions can lead to students' increased self-regulated motivational behavior. In this study, we investigated the satisfaction of basic psychological needs (BPN), i.e., autonomy support, the learners' experience of competence, and relatedness, both within the peer group and with their teachers, through outdoor learning. From 2014 to 2016, n = 281 students attended "research weeks" at a Student Science Lab in the Alpine National Park Berchtesgaden (Germany). The program is a curriculum-based one-week residential course, centered on a 2-day research expedition. Both before and after the course, students completed a composite questionnaire addressing BPN-satisfaction and overall motivational behavior in relation to the Self-Determination Index (SDI). At the latter time-point, students also reported on their experiences during the intervention. Questionnaire data were analyzed using a set of Bayesian General Linear Models with random effects. Those quantitative measures have been complemented by and contextualized with a set of qualitative survey methods. The results showed that the basic psychological needs influence motivational behavior in both contexts equally, however on different scale levels. Basic needs satisfaction in the outdoor context is decisively higher than indoors. Moreover, the increment of competence-experience from the school context to the hands-on outdoor program appears to have the biggest impact on students' increased intrinsic motivation during the intervention. Increased autonomy support, student-teacher relations, and student-student relations have much less or no influence on the overall difference in motivational behavior. Gender does not influence the results. The contextualization partly supports those results and provides further explanation for the students' increased self-regulation in the outdoors. They add some explanatory thrust to the argument that outdoor teaching, be it

  3. A Bayesian Mixed-Methods Analysis of Basic Psychological Needs Satisfaction through Outdoor Learning and Its Influence on Motivational Behavior in Science Class

    Directory of Open Access Journals (Sweden)

    Ulrich Dettweiler

    2017-12-01

    Full Text Available Research has shown that outdoor educational interventions can lead to students' increased self-regulated motivational behavior. In this study, we investigated the satisfaction of basic psychological needs (BPN), i.e., autonomy support, the learners' experience of competence, and relatedness, both within the peer group and with their teachers, through outdoor learning. From 2014 to 2016, n = 281 students attended “research weeks” at a Student Science Lab in the Alpine National Park Berchtesgaden (Germany). The program is a curriculum-based one-week residential course, centered on a 2-day research expedition. Both before and after the course, students completed a composite questionnaire addressing BPN-satisfaction and overall motivational behavior in relation to the Self-Determination Index (SDI). At the latter time-point, students also reported on their experiences during the intervention. Questionnaire data were analyzed using a set of Bayesian General Linear Models with random effects. Those quantitative measures have been complemented by and contextualized with a set of qualitative survey methods. The results showed that the basic psychological needs influence motivational behavior in both contexts equally, however on different scale levels. Basic needs satisfaction in the outdoor context is decisively higher than indoors. Moreover, the increment of competence-experience from the school context to the hands-on outdoor program appears to have the biggest impact on students' increased intrinsic motivation during the intervention. Increased autonomy support, student-teacher relations, and student-student relations have much less or no influence on the overall difference in motivational behavior. Gender does not influence the results. The contextualization partly supports those results and provides further explanation for the students' increased self-regulation in the outdoors. They add some explanatory thrust to the argument that outdoor

  4. VLSI PARTITIONING ALGORITHM WITH ADAPTIVE CONTROL PARAMETER

    Directory of Open Access Journals (Sweden)

    P. N. Filippenko

    2013-03-01

    Full Text Available The article deals with the problem of very large-scale integration circuit partitioning. A graph is selected as the mathematical model describing the integrated circuit. A modification of the ant colony optimization algorithm, used to solve the graph partitioning problem, is presented. Ant colony optimization is an optimization method based on the principles of self-organization and other useful features of ants' behavior. The proposed search system is based on the ant colony optimization algorithm with an improved method of initial distribution and dynamic adjustment of the control search parameters. The experimental results and performance comparison show that the proposed method of very large-scale integration circuit partitioning provides better search performance than other well-known algorithms.
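
    A compact sketch of the ant-colony idea applied to graph bipartitioning, with a fixed evaporation rate in place of the adaptive control parameter proposed in the article; the random graph, colony sizes and balance rule are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)
        n = 10
        adj = (rng.random((n, n)) < 0.3).astype(int)
        adj = np.triu(adj, 1)
        adj = adj + adj.T                                  # random undirected graph

        def cut_size(assign):
            """Number of edges crossing the two-way partition."""
            return int(adj[np.ix_(assign == 0, assign == 1)].sum())

        tau = np.ones((n, 2))                              # pheromone: node x part
        best, best_cut = None, np.inf
        for _ in range(200):                               # colony iterations
            for _ in range(10):                            # ants per iteration
                prob = tau / tau.sum(axis=1, keepdims=True)
                assign = np.array([rng.choice(2, p=prob[v]) for v in range(n)])
                if abs(assign.sum() - n // 2) > 1:         # keep parts near-balanced
                    continue
                c = cut_size(assign)
                if c < best_cut:
                    best, best_cut = assign, c
                tau[np.arange(n), assign] += 1.0 / (1.0 + c)  # reward small cuts
            tau *= 0.95                                    # pheromone evaporation
        print(best_cut, best)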

  5. An economic growth model based on financial credits distribution to the government economy priority sectors of each regency in Indonesia using hierarchical Bayesian method

    Science.gov (United States)

    Yasmirullah, Septia Devi Prihastuti; Iriawan, Nur; Sipayung, Feronika Rosalinda

    2017-11-01

    The success of regional economic development can be measured by economic growth. Since Act No. 32 of 2004 was implemented, economic imbalance among the regencies in Indonesia has been increasing. This situation runs contrary to the government's goal of building society's welfare through the development of economic activity in each region. This research aims to examine economic growth through the distribution of bank credits to each of Indonesia's regencies. The data analyzed in this research are hierarchically structured and follow a normal distribution at the first level. Two modeling approaches are employed: a global one-level Bayesian approach and a two-level hierarchical Bayesian approach. The results show that the hierarchical Bayesian model provides better estimates than the global one-level model. This proves that the different economic growth in each province is significantly influenced by the variation of micro-level characteristics within each province, which in turn is significantly affected by city and province characteristics at the second level.

  6. The prediction of blood-tissue partitions, water-skin partitions and skin permeation for agrochemicals.

    Science.gov (United States)

    Abraham, Michael H; Gola, Joelle M R; Ibrahim, Adam; Acree, William E; Liu, Xiangli

    2014-07-01

    There is considerable interest in the blood-tissue distribution of agrochemicals, and a number of researchers have developed experimental methods for in vitro distribution. These methods involve the determination of saline-blood and saline-tissue partitions; not only are they indirect, but they do not yield the required in vivo distribution. The authors set out equations for gas-tissue and blood-tissue distribution, for partition from water into skin and for permeation from water through human skin. Together with Abraham descriptors for the agrochemicals, these equations can be used to predict values for all of these processes. The present predictions compare favourably with experimental in vivo blood-tissue distribution where available. The predictions require no more than simple arithmetic. The present method represents a much easier and much more economic way of estimating blood-tissue partitions than the method that uses saline-blood and saline-tissue partitions. It has the added advantages of yielding the required in vivo partitions and being easily extended to the prediction of partition of agrochemicals from water into skin and permeation from water through skin. © 2013 Society of Chemical Industry.
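
    The equations referred to are linear free-energy relationships of the Abraham form, log SP = c + eE + sS + aA + bB + vV. A sketch of how such an equation is applied; the descriptor values and coefficients below are hypothetical, not the fitted values from the paper.

        def abraham_lfer(E, S, A, B, V, c, e, s, a, b, v):
            """Abraham-type linear free-energy relationship:
            log SP = c + e*E + s*S + a*A + b*B + v*V,
            where E, S, A, B, V are solute descriptors (excess molar
            refraction, dipolarity, H-bond acidity/basicity, McGowan volume)
            and c..v are coefficients fitted for a given partition system."""
            return c + e * E + s * S + a * A + b * B + v * V

        # hypothetical descriptors for an agrochemical-like solute and
        # hypothetical coefficients for a water-to-tissue partition
        log_p = abraham_lfer(E=1.2, S=1.5, A=0.3, B=0.8, V=1.9,
                             c=-0.1, e=0.3, s=-0.6, a=-0.2, b=-2.1, v=2.5)
        print(round(log_p, 2))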

  7. Principal curve algorithms for partitioning high-dimensional data spaces.

    Science.gov (United States)

    Zhang, Junping; Wang, Xiaodan; Kruger, Uwe; Wang, Fei-Yue

    2011-03-01

    Most partitioning algorithms iteratively partition a space into cells that contain underlying linear or nonlinear structures using linear partitioning strategies. The compactness of each cell depends on how well the (locally) linear partitioning strategy approximates the intrinsic structure. To partition a compact structure for complex data in a nonlinear context, this paper proposes a nonlinear partition strategy. This is a principal curve tree (PC-tree), which is implemented iteratively. Given that a PC passes through the middle of the data distribution, it allows for partitioning based on the arc length of the PC. To enhance the partitioning of a given space, a residual version of the PC-tree algorithm is developed, denoted here as the principal component analysis tree (PCR-tree) algorithm. Because of its residual property, the PCR-tree can yield the intrinsic dimension of high-dimensional data. Comparisons presented in this paper confirm that the proposed PC-tree and PCR-tree approaches show a better performance than several other competing partitioning algorithms in terms of vector quantization error and nearest neighbor search. The comparison also shows that the proposed algorithms outperform competing linear methods in total average coverage which measures the nonlinear compactness of partitioning algorithms.

  8. Bayesian missing data problems EM, data augmentation and noniterative computation

    CERN Document Server

    Tan, Ming T; Ng, Kai Wang

    2009-01-01

    Bayesian Missing Data Problems: EM, Data Augmentation and Noniterative Computation presents solutions to missing data problems through explicit or noniterative sampling calculation of Bayesian posteriors. The methods are based on the inverse Bayes formulae discovered by one of the authors in 1995. Applying the Bayesian approach to important real-world problems, the authors focus on exact numerical solutions, a conditional sampling approach via data augmentation, and a noniterative sampling approach via EM-type algorithms. After introducing the missing data problems, Bayesian approach, and poste...

  9. Improving Transparency and Replication in Bayesian Statistics : The WAMBS-Checklist

    NARCIS (Netherlands)

    Depaoli, Sarah; van de Schoot, Rens

    2017-01-01

    Bayesian statistical methods are slowly creeping into all fields of science and are becoming ever more popular in applied research. Although it is very attractive to use Bayesian statistics, our personal experience has led us to believe that naively applying Bayesian methods can be dangerous for at

  10. Simultaneous discovery, estimation and prediction analysis of complex traits using a bayesian mixture model.

    Directory of Open Access Journals (Sweden)

    Gerhard Moser

    2015-04-01

    Full Text Available Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data of quantitative traits and Wellcome Trust Case Control Consortium (WTCCC) data on disease and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and that it can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining at least 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.

  11. Partitioning of unstructured meshes for load balancing

    International Nuclear Information System (INIS)

    Martin, O.C.; Otto, S.W.

    1994-01-01

    Many large-scale engineering and scientific calculations involve repeated updating of variables on an unstructured mesh. To do these types of computations on distributed memory parallel computers, it is necessary to partition the mesh among the processors so that the load balance is maximized and inter-processor communication time is minimized. This can be approximated by the problem of partitioning a graph so as to obtain a minimum cut, a well-studied combinatorial optimization problem. Graph partitioning algorithms are discussed that give good, but not necessarily optimum, solutions. These algorithms include local search methods, recursive spectral bisection, and more general purpose methods such as simulated annealing. It is shown that a general procedure enables simulated annealing to be combined with Kernighan-Lin. The resulting algorithm is both very fast and extremely effective. (authors) 23 refs., 3 figs., 1 tab
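
    A minimal sketch of the core step of the recursive spectral bisection named above: split the mesh graph by the sign of the Fiedler vector of its Laplacian, here on a toy path graph and without any load-balance refinement.

        import numpy as np

        def spectral_bisect(adj):
            """Split a graph in two using the Fiedler vector (the eigenvector
            of the graph Laplacian with the second-smallest eigenvalue)."""
            degree = np.diag(adj.sum(axis=1))
            laplacian = degree - adj
            eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
            fiedler = eigvecs[:, 1]
            # split at the median so the two parts are balanced
            return fiedler >= np.median(fiedler)

        n = 8
        adj = np.zeros((n, n), dtype=float)
        for i in range(n - 1):                             # toy path graph 0-1-...-7
            adj[i, i + 1] = adj[i + 1, i] = 1.0
        print(spectral_bisect(adj))                        # expect a contiguous split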

  12. Low-Complexity Bayesian Estimation of Cluster-Sparse Channels

    KAUST Repository

    Ballal, Tarig

    2015-09-18

    This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as an underwater acoustic channel impulse response, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol-error-rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.

  13. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  14. Bayesian analysis of MEG visual evoked responses

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, D.M.; George, J.S.; Wood, C.C.

    1999-04-01

    The authors developed a method for analyzing neural electromagnetic data that allows probabilistic inferences to be drawn about regions of activation. The method involves the generation of a large number of possible solutions which both fit the data and conform to prior expectations about the nature of probable solutions, made explicit by a Bayesian formalism. In addition, they have introduced a model for the current distributions that produce MEG and EEG data which allows extended regions of activity, and can easily incorporate prior information such as anatomical constraints from MRI. To evaluate the feasibility and utility of the Bayesian approach with actual data, they analyzed MEG data from a visual evoked response experiment. They compared Bayesian analyses of MEG responses to visual stimuli in the left and right visual fields, in order to examine the sensitivity of the method to detect known features of human visual cortex organization. They also examined the changing pattern of cortical activation as a function of time.

  15. An Association-Oriented Partitioning Approach for Streaming Graph Query

    Directory of Open Access Journals (Sweden)

    Yun Hao

    2017-01-01

    Full Text Available The volumes of real-world graphs such as knowledge graphs are increasing rapidly, which makes streaming graph processing a hot research area. Processing graphs in a streaming setting poses significant challenges from different perspectives, among which the graph partitioning method plays a key role. For graph query, a well-designed partitioning method is essential for achieving good performance. Existing offline graph partitioning methods often require full knowledge of the graph, which is not available during streaming graph processing. To handle this problem, we propose an association-oriented streaming graph partitioning method named Assc. This approach first computes the rank values of vertices with a hybrid approximate PageRank algorithm. After splitting these vertices with an adapted variant of the affinity propagation algorithm, the processing order of the vertices in the sliding window can be determined. Finally, according to the level of these vertices and their associations, the partition to which each vertex should be assigned is decided. We compare its performance with a set of streaming graph partitioning methods and with METIS, a widely adopted offline approach. The results show that our solution can partition graphs with hundreds of millions of vertices in a streaming setting on a large collection of graph datasets, and that our approach outperforms the other graph partitioning methods.
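
    Assc itself is not reproduced here, but the streaming setting is easy to illustrate with a classic baseline of the kind such papers compare against: linear deterministic greedy (LDG), which places each arriving vertex in the partition holding most of its already-seen neighbors, discounted by remaining capacity. The function below is a generic sketch; the stream format and capacity parameter are assumptions.

        # Linear deterministic greedy (LDG) streaming partitioner --
        # a standard baseline, not the Assc algorithm from the record.
        def ldg_partition(stream, k, capacity):
            parts = [set() for _ in range(k)]
            assign = {}
            for v, neighbors in stream:   # stream of (vertex, neighbor-list) pairs
                def score(i):
                    common = sum(1 for u in neighbors if u in parts[i])
                    return common * (1.0 - len(parts[i]) / capacity)
                best = max(range(k), key=score)
                parts[best].add(v)
                assign[v] = best
            return assign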

  16. Generalised twisted partition functions

    CERN Document Server

    Petkova, V B

    2001-01-01

    We consider the set of partition functions that result from the insertion of twist operators compatible with conformal invariance in a given 2D Conformal Field Theory (CFT). A consistency equation, which gives a classification of twists, is written and solved in particular cases. This generalises old results on twisted torus boundary conditions, gives a physical interpretation of Ocneanu's algebraic construction, and might offer a new route to the study of properties of CFT.

  17. BKP plane partitions

    International Nuclear Information System (INIS)

    Foda, Omar; Wheeler, Michael

    2007-01-01

    Using BKP neutral fermions, we derive a product expression for the generating function of volume-weighted plane partitions that satisfy two conditions. If we call a set of adjacent equal height-h columns, h > 0, an h-path, then 1. Every h-path can assume one of two possible colours. 2. There is a unique way to move along an h-path from any column to another.

  18. BKP plane partitions

    Energy Technology Data Exchange (ETDEWEB)

    Foda, Omar; Wheeler, Michael [Department of Mathematics and Statistics, University of Melbourne, Parkville, Victoria 3010 (Australia)

    2007-01-15

    Using BKP neutral fermions, we derive a product expression for the generating function of volume-weighted plane partitions that satisfy two conditions. If we call a set of adjacent equal height-h columns, h > 0, an h-path, then 1. Every h-path can assume one of two possible colours. 2. There is a unique way to move along an h-path from any column to another.
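
    For context, the classical benchmark for product expressions of this kind is MacMahon's formula for the volume generating function of unrestricted plane partitions (a standard fact, quoted here for reference; the coloured, path-constrained class above has its own, different product):

        \sum_{\pi} q^{|\pi|} = \prod_{n=1}^{\infty} \frac{1}{(1 - q^{n})^{n}},

    where the sum runs over all plane partitions \pi and |\pi| is the number of boxes (the volume).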

  19. How few? Bayesian statistics in injury biomechanics.

    Science.gov (United States)

    Cutcliffe, Hattie C; Schmidt, Allison L; Lucas, Joseph E; Bass, Cameron R

    2012-10-01

    In injury biomechanics, there are currently no general a priori estimates of how few specimens are necessary to obtain sufficiently accurate injury risk curves for a given underlying distribution. Further, several methods are available for constructing these curves, and recent methods include Bayesian survival analysis. This study used statistical simulations to evaluate the fidelity of different injury risk methods using limited sample sizes across four different underlying distributions. Five risk curve techniques were evaluated, including Bayesian techniques. For the Bayesian analyses, various prior distributions were assessed, each incorporating more accurate information. Simulated subject injury and biomechanical input values were randomly sampled from each underlying distribution, and injury status was determined by comparing these values. Injury risk curves were developed for this data using each technique for various small sample sizes; for each, analyses on 2000 simulated data sets were performed. Resulting median predicted risk values and confidence intervals were compared with the underlying distributions. Across conditions, the standard and Bayesian survival analyses better represented the underlying distributions included in this study, especially for extreme (1, 10, and 90%) risk. This study demonstrates that the value of the Bayesian analysis is the use of informed priors. As the mean of the prior approaches the actual value, the sample size necessary for good reproduction of the underlying distribution with small confidence intervals can be as small as 2. This study provides estimates of confidence intervals and number of samples to allow the selection of the most appropriate sample sizes given known information.
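
    To make the role of the informative prior concrete, here is a tiny conjugate sketch, deliberately simpler than the survival models in the study: estimating injury risk at a single load level with a Beta prior shows how a prior centred near the truth yields a usable interval from as few as two specimens. The prior parameters below are invented illustration values.

        # How an informative prior tightens small-sample risk estimates.
        # Beta-Binomial model at one load level -- far simpler than the
        # study's survival analyses, for illustration only.
        from scipy import stats

        def posterior_interval(injuries, n, a_prior, b_prior, level=0.95):
            post = stats.beta(a_prior + injuries, b_prior + n - injuries)
            return post.mean(), post.interval(level)

        print(posterior_interval(1, 2, a_prior=1, b_prior=1))    # flat prior
        print(posterior_interval(1, 2, a_prior=10, b_prior=10))  # informed prior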

  20. Bayesian microsaccade detection

    Science.gov (United States)

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
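
    The generative model behind BMD can be sketched directly: a hidden state alternates between drift and microsaccade, and eye position evolves as a random walk whose step scale depends on the state. The simulation below is a loose illustration with made-up rates and scales, not the paper's fitted parameters.

        # Two-state generative model in the spirit of BMD: hidden state
        # switches between drift (0) and microsaccade (1); eye position is
        # a random walk with state-dependent step size plus sensor noise.
        # All numbers are made-up illustration values.
        import numpy as np

        def simulate_eye(T=1000, p_switch=(0.01, 0.2), step=(0.02, 0.5),
                         noise=0.05, seed=0):
            rng = np.random.default_rng(seed)
            state, pos, states, xs = 0, 0.0, [], []
            for _ in range(T):
                if rng.random() < p_switch[state]:
                    state = 1 - state                    # switch regimes
                pos += rng.normal(0.0, step[state])      # state-dependent velocity
                states.append(state)
                xs.append(pos + rng.normal(0.0, noise))  # measured position
            return np.array(states), np.array(xs)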

  1. Bayesian Nonparametric Hidden Markov Models with application to the analysis of copy-number-variation in mammalian genomes.

    Science.gov (United States)

    Yau, C; Papaspiliopoulos, O; Roberts, G O; Holmes, C

    2011-01-01

    We consider the development of Bayesian Nonparametric methods for product partition models such as Hidden Markov Models and change point models. Our approach uses a Mixture of Dirichlet Process (MDP) model for the unknown sampling distribution (likelihood) for the observations arising in each state and a computationally efficient data augmentation scheme to aid inference. The method uses novel MCMC methodology which combines recent retrospective sampling methods with the use of slice sampler variables. The methodology is computationally efficient, both in terms of MCMC mixing properties, and robustness to the length of the time series being investigated. Moreover, the method is easy to implement requiring little or no user-interaction. We apply our methodology to the analysis of genomic copy number variation.
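
    The MDP emission model rests on the Dirichlet process, which is easiest to see through its stick-breaking construction. The truncated sketch below shows that building block only; the paper's contribution is the retrospective/slice MCMC machinery wrapped around it.

        # Truncated stick-breaking draw from a Dirichlet process:
        # w_j = beta_j * prod_{i<j} (1 - beta_i), with beta_j ~ Beta(1, alpha).
        import numpy as np

        def dp_stick_breaking(alpha, base_sampler, truncation=100, seed=0):
            rng = np.random.default_rng(seed)
            betas = rng.beta(1.0, alpha, size=truncation)
            weights = betas * np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
            atoms = base_sampler(rng, truncation)   # draws from the base measure
            return weights, atoms

        # Example: standard-normal base measure.
        w, atoms = dp_stick_breaking(2.0, lambda rng, n: rng.normal(0.0, 1.0, n))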

  2. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
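
    The "explicit expressions" referred to above are the standard conjugate-Gaussian results, since the linearized weak-contrast approximation turns the problem into a linear Gaussian model. In generic notation (ours, not necessarily the thesis's), with data d = Gm + e, prior m ~ N(mu_m, Sigma_m) and noise e ~ N(0, Sigma_e), the posterior is Gaussian with

        \mu_{m \mid d} = \mu_m + \Sigma_m G^{T} \left( G \Sigma_m G^{T} + \Sigma_e \right)^{-1} (d - G \mu_m),
        \Sigma_{m \mid d} = \Sigma_m - \Sigma_m G^{T} \left( G \Sigma_m G^{T} + \Sigma_e \right)^{-1} G \Sigma_m,

    which is why exact prediction intervals are available without sampling.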

  3. Sequential Bayesian technique: An alternative approach for ...

    Indian Academy of Sciences (India)

    This paper proposes a sequential Bayesian approach, similar to the Kalman filter, for estimating reliability growth or decay of software. The main advantage of the proposed method is that it shows the variation of the parameter over time as new failure data become available. The usefulness of the method is demonstrated with ...
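
    A scalar Kalman-style recursion of the kind alluded to is short enough to sketch. The code below is a generic predict/update step with invented variable names and data, not the paper's software-reliability model.

        # Scalar Kalman-style sequential Bayesian update: the posterior
        # after each observation becomes the prior for the next one.
        def kalman_step(mean, var, obs, obs_var, drift_var=0.0):
            var += drift_var                  # predict: parameter may drift
            gain = var / (var + obs_var)      # update: weigh the new data point
            mean += gain * (obs - mean)
            var *= (1.0 - gain)
            return mean, var

        mean, var = 1.0, 1.0                  # vague starting belief
        for obs in [0.8, 0.7, 0.75, 0.6]:     # e.g. successive failure-rate estimates
            mean, var = kalman_step(mean, var, obs, obs_var=0.2)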

  4. Bayesian estimation and tracking a practical guide

    CERN Document Server

    Haug, Anton J

    2012-01-01

    A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noises. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation

  5. Classifying emotion in Twitter using Bayesian network

    Science.gov (United States)

    Surya Asriadie, Muhammad; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Language is used to express not only facts but also emotions. Emotions are noticeable in a person's behavior and even in the social media statuses they write. Analysis of emotions in text is carried out on a variety of media, such as Twitter. This paper studies the classification of emotions on Twitter using Bayesian networks because of their ability to model uncertainty and relationships between features. The result is two models based on Bayesian networks: the Full Bayesian Network (FBN) and the Bayesian Network with Mood Indicator (BNM). FBN is a massive Bayesian network in which each word is treated as a node. The study shows that the method used to train FBN is not very effective at producing a good model and performs worse than Naive Bayes: the F1-score for FBN is 53.71%, versus 54.07% for Naive Bayes. BNM is proposed as an alternative method based on an improvement of Multinomial Naive Bayes, with much lower computational complexity than FBN. Even though it does not outperform FBN, the resulting model successfully improves on Multinomial Naive Bayes: the F1-score for the Multinomial Naive Bayes model is 51.49%, versus 52.14% for BNM.
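
    The Multinomial Naive Bayes baseline that BNM improves on takes only a few lines with scikit-learn. The toy corpus and labels below are invented for illustration.

        # Multinomial Naive Bayes emotion classifier -- the baseline that
        # BNM builds on. Toy, made-up training data.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        texts = ["i am so happy today", "this is terrible news",
                 "what a wonderful day", "i feel awful and sad"]
        labels = ["joy", "sadness", "joy", "sadness"]

        vec = CountVectorizer().fit(texts)
        clf = MultinomialNB().fit(vec.transform(texts), labels)
        print(clf.predict(vec.transform(["such a happy wonderful feeling"])))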

  6. Power in Bayesian Mediation Analysis for Small Sample Research

    NARCIS (Netherlands)

    Miočević, M.; MacKinnon, David; Levy, Roy

    2017-01-01

    Bayesian methods have the potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This article compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product,

  7. An introduction to Bayesian statistics in health psychology

    NARCIS (Netherlands)

    Depaoli, Sarah; Rus, Holly; Clifton, James; van de Schoot, A.G.J.; Tiemensma, Jitske

    2017-01-01

    The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of Health Psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation

  8. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jiangjiang [College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-08-01

    Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If measurement information is incorporated, a locally accurate approximation of the original model can be constructed adaptively at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves a roughly 200-fold speed-up compared to our previous work using two-stage MCMC.
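
    The key ingredient, a GP surrogate that reports its own approximation error, is compact enough to sketch from scratch. The kernel choice and hyperparameters below are assumptions; the paper's adaptive experimental-design loop is not shown.

        # GP surrogate with a built-in error estimate: the predictive
        # standard deviation is what gets folded into the Bayesian formula.
        import numpy as np

        def rbf(A, B, ls=1.0, var=1.0):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return var * np.exp(-0.5 * d2 / ls ** 2)

        def gp_predict(Xtr, ytr, Xte, noise=1e-6, ls=1.0, var=1.0):
            K = rbf(Xtr, Xtr, ls, var) + noise * np.eye(len(Xtr))
            Ks = rbf(Xte, Xtr, ls, var)
            mean = Ks @ np.linalg.solve(K, ytr)
            cov = rbf(Xte, Xte, ls, var) - Ks @ np.linalg.solve(K, Ks.T)
            return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))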

  9. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process.
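
    For reference, with the common exponential kernel the conditional intensity that drives the first approach reads (our notational choice, not necessarily that of the paper's examples)

        \lambda^{*}(t) = \mu + \alpha \sum_{t_i < t} e^{-\beta (t - t_i)}, \qquad \mu > 0, \; \alpha \ge 0, \; \beta > 0,

    where the t_i are past event times; each event transiently raises the rate of future events, the self-exciting behaviour that the clustering-and-branching view of the second approach makes explicit.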

  10. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process.

  11. Bayesian Decision Support

    Science.gov (United States)

    Berliner, M.

    2017-12-01

    Bayesian statistical decision theory offers a natural framework for decision-policy making in the presence of uncertainty. Key advantages of the approach include efficient incorporation of information and observations. However, in complicated settings it is very difficult, perhaps essentially impossible, to formalize the mathematical inputs needed in the approach. Nevertheless, using the approach as a template is useful for decision support; that is, for organizing and communicating our analyses. Bayesian hierarchical modeling is valuable in quantifying and managing uncertainty in such cases. I review some aspects of the idea, emphasizing statistical model development and use in the context of sea-level rise.

  12. Stability analysis of neutral type neural networks with mixed time-varying delays using triple-integral and delay-partitioning methods.

    Science.gov (United States)

    Shi, Kaibo; Zhu, Hong; Zhong, Shouming; Zeng, Yong; Zhang, Yuping; Wang, Wenqin

    2015-09-01

    This paper investigates the asymptotic stability problem for a class of neutral type neural networks with mixed time-varying delays. The system has not only a time-varying discrete delay but also a distributed delay, which has never been discussed in the previous literature. Firstly, improved stability criteria are derived by employing the more general delay partitioning approach and by generalizing the famous Jensen inequality. Secondly, by constructing a newly augmented Lyapunov-Krasovskii functional, some less conservative stability criteria are established in terms of linear matrix inequalities (LMIs). Finally, four numerical examples are given to illustrate the effectiveness and the advantage of the proposed main results. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Bayesian calibration for forensic age estimation.

    Science.gov (United States)

    Ferrante, Luigi; Skrami, Edlira; Gesuita, Rosaria; Cameriere, Roberto

    2015-05-10

    Forensic medicine is increasingly called upon to assess the age of individuals. Forensic age estimation is mostly required in relation to illegal immigration and the identification of bodies or skeletal remains. A variety of age estimation methods are based on dental samples and use regression models, where the age of an individual is predicted by morphological tooth changes that take place over time. From the medico-legal point of view, regression models with age as the dependent random variable entail that age tends to be overestimated in the young and underestimated in the old. To overcome this bias, we describe a new full Bayesian calibration method (asymmetric Laplace Bayesian calibration) for forensic age estimation that uses the asymmetric Laplace distribution as the probability model. The method was compared with three existing approaches (two Bayesian and a classical method) using simulated data. Although its accuracy was comparable with that of the other methods, asymmetric Laplace Bayesian calibration appears to be significantly more reliable and robust in case of misspecification of the probability model. The proposed method was also applied to a real dataset of measurements of the pulp chamber of the right lower premolar taken from x-ray scans of individuals of known age. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Dimensionality reduction in Bayesian estimation algorithms

    Directory of Open Access Journals (Sweden)

    G. W. Petty

    2013-09-01

    Full Text Available An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals – whether Bayesian or not – lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.

  15. Dimensionality reduction in Bayesian estimation algorithms

    Science.gov (United States)

    Petty, G. W.

    2013-09-01

    An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals - whether Bayesian or not - lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.
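
    A loose reading of the two-stage construction can be sketched as noise-whitening followed by PCA. This is our interpretation for illustration, not the author's exact procedure; the function name and arguments are assumptions.

        # Two-stage pseudochannel construction (interpretive sketch):
        # 1) whiten channels with the background-noise covariance,
        # 2) PCA of the whitened dependent dataset; keep the top M directions.
        import numpy as np

        def pseudochannels(X, noise_cov, M=1):
            L = np.linalg.cholesky(noise_cov)
            Xw = np.linalg.solve(L, X.T).T        # unit, diagonal noise
            Xc = Xw - Xw.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            return Xc @ Vt[:M].T                  # N channels -> M pseudochannels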

  16. A default Bayesian hypothesis test for mediation.

    Science.gov (United States)

    Nuijten, Michèle B; Wetzels, Ruud; Matzke, Dora; Dolan, Conor V; Wagenmakers, Eric-Jan

    2015-03-01

    In order to quantify the relationship between multiple variables, researchers often carry out a mediation analysis. In such an analysis, a mediator (e.g., knowledge of a healthy diet) transmits the effect from an independent variable (e.g., classroom instruction on a healthy diet) to a dependent variable (e.g., consumption of fruits and vegetables). Almost all mediation analyses in psychology use frequentist estimation and hypothesis-testing techniques. A recent exception is Yuan and MacKinnon (Psychological Methods, 14, 301-322, 2009), who outlined a Bayesian parameter estimation procedure for mediation analysis. Here we complete the Bayesian alternative to frequentist mediation analysis by specifying a default Bayesian hypothesis test based on the Jeffreys-Zellner-Siow approach. We further extend this default Bayesian test by allowing a comparison to directional or one-sided alternatives, using Markov chain Monte Carlo techniques implemented in JAGS. All Bayesian tests are implemented in the R package BayesMed (Nuijten, Wetzels, Matzke, Dolan, & Wagenmakers, 2014).

  17. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an

  18. Bayesian statistical inference

    Directory of Open Access Journals (Sweden)

    Bruno De Finetti

    2017-04-01

    Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. "Bayesian statistical inference" is one of the last fundamental philosophical papers in which we can find De Finetti's essential approach to statistical inference.

  19. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions are in turn defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example in which modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered a specific model of the uncertainty about the Poisson intensity. By the empirical Bayesian method, each individual model is then assigned a posterior probability
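
    The first stage of such a hierarchy is the conjugate gamma-Poisson update, which is one line; the paper's contaminated-gamma class adds a second stage on top of it. A minimal sketch with invented numbers:

        # Conjugate gamma-Poisson update: prior Gamma(a, b) on the intensity,
        # k events in exposure time t give posterior Gamma(a + k, b + t).
        def gamma_poisson_update(a, b, events, exposure):
            return a + events, b + exposure

        a, b = gamma_poisson_update(a=0.5, b=1.0, events=3, exposure=10.0)
        print("posterior mean intensity:", a / b)   # = 3.5 / 11.0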

  20. Bayesian modeling of unknown diseases for biosurveillance.

    Science.gov (United States)

    Shen, Yanna; Cooper, Gregory F

    2009-11-14

    This paper investigates Bayesian modeling of unknown causes of events in the context of disease-outbreak detection. We introduce a Bayesian approach that models and detects both (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which indicate that this modeling method can improve the detection of new disease outbreaks in a population. A key contribution of this paper is that it introduces a Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has broad applicability in medical informatics, where the space of known causes of outcomes of interest is seldom complete.