WorldWideScience

Sample records for bayesian partition method

  1. A Bayesian partition method for detecting pleiotropic and epistatic eQTL modules.

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2010-01-01

    Studies of the relationship between DNA variation and gene expression variation, often referred to as "expression quantitative trait loci (eQTL) mapping", have been conducted in many species and resulted in many significant findings. Because of the large number of genes and genetic markers in such analyses, it is extremely challenging to discover how a small number of eQTLs interact with each other to affect mRNA expression levels for a set of co-regulated genes. We present a Bayesian method to facilitate this task, in which co-expressed genes mapped to a common set of markers are treated as a module characterized by latent indicator variables. A Markov chain Monte Carlo algorithm is designed to search simultaneously for the module genes and their linked markers. We show by simulations that this method is more powerful for detecting true eQTLs and their target genes than traditional QTL mapping methods. We applied the procedure to a data set consisting of gene expression and genotypes for 112 segregants of S. cerevisiae. Our method identified modules containing genes mapped to previously reported eQTL hot spots, and dissected these large eQTL hot spots into several modules corresponding to possibly different biological functions or to primary and secondary responses to regulatory perturbations. In addition, we identified nine modules associated with pairs of eQTLs, of which two have been previously reported. We demonstrated that one of the novel modules containing many daughter-cell-expressed genes is regulated by AMN1 and BPH1. In conclusion, the Bayesian partition method, which simultaneously considers all traits and all markers, is more powerful for detecting both pleiotropic and epistatic effects, as shown on both simulated and empirical data.
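
    The core computational move here — latent module indicators updated by MCMC — can be sketched compactly. Below is a toy Python sketch (not the authors' code; the model, priors, and simulated data are illustrative assumptions): with one candidate marker held fixed, each gene's module indicator is sampled from its conditional posterior via a conjugate-normal Bayes factor. The full method also searches over the marker set, which this sketch omits.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

# Toy data: 200 genes in 112 segregants; the first 20 genes form a
# module whose expression shifts with one marker's genotype.
n_genes, n_obs = 200, 112
x = rng.integers(0, 2, n_obs).astype(float)    # marker genotypes (0/1)
y = rng.normal(size=(n_genes, n_obs))          # background expression
y[:20] += 1.0 * x                              # module genes track the marker

def log_ml(yg, x, tau2, s2=1.0):
    """Log marginal likelihood of y_g ~ N(b*x, s2*I) with b ~ N(0, tau2),
    computed via the matrix determinant lemma and Sherman-Morrison."""
    n = yg.size
    xx, xy, yy = x @ x, x @ yg, yg @ yg
    logdet = n * np.log(s2) + np.log1p(tau2 * xx / s2)
    quad = yy / s2 - tau2 * xy**2 / (s2 * (s2 + tau2 * xx))
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

def sample_indicators(y, x, n_iter=50, tau2=1.0, prior=0.1):
    """Gibbs-style update of the latent indicator z_g for each gene."""
    z = np.zeros(y.shape[0], dtype=bool)
    for _ in range(n_iter):
        for g in range(y.shape[0]):
            log_bf = log_ml(y[g], x, tau2) - log_ml(y[g], x, 0.0)
            log_odds = log_bf + np.log(prior / (1 - prior))
            z[g] = rng.random() < expit(log_odds)
    return z

z = sample_indicators(y, x)
print("genes assigned to the module:", np.flatnonzero(z))
```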

  2. Applied Bayesian Hierarchical Methods

    CERN Document Server

    Congdon, Peter D

    2010-01-01

    Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.

  3. Understanding disease processes by partitioned dynamic Bayesian networks.

    Science.gov (United States)

    Bueno, Marcos L P; Hommersom, Arjen; Lucas, Peter J F; Lappenschaar, Martijn; Janzing, Joost G E

    2016-06-01

    For many clinical problems, the underlying pathophysiological process changes over time as a result of medical interventions. In model building for such problems, the typical scarcity of data in a clinical setting has often been compensated for by using time-homogeneous models, such as dynamic Bayesian networks. As a consequence, the specificities of the underlying process are lost in the resulting models. In the current work, we propose the new concept of partitioned dynamic Bayesian networks to capture distribution regime changes, i.e. time non-homogeneity, benefiting from an intuitive and compact representation with the solid theoretical foundation of Bayesian network models. In order to balance specificity and simplicity in real-world scenarios, we propose a heuristic algorithm to search for and learn these non-homogeneous models, taking into account a preference for less complex models. An extensive set of experiments was run; the simulation experiments show that the heuristic algorithm was capable of constructing well-suited solutions, in terms of goodness of fit and statistical distance to the original distributions, in consonance with the underlying processes that generated the data, whether homogeneous or non-homogeneous. Finally, a case study on psychotic depression was conducted using non-homogeneous models learned by the heuristic, leading to insightful answers for clinically relevant questions concerning the dynamics of this mental disorder.
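
    The regime-change idea can be illustrated with a minimal sketch (illustrative assumptions throughout: a single binary variable, one change point, and a BIC-style penalty standing in for the paper's preference for less complex models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "disease process": a binary Markov chain whose transition dynamics
# switch regime at t = 150 (time non-homogeneity).
P1 = np.array([[0.9, 0.1], [0.2, 0.8]])
P2 = np.array([[0.6, 0.4], [0.4, 0.6]])
seq = [0]
for t in range(1, 300):
    P = P1 if t < 150 else P2
    seq.append(rng.choice(2, p=P[seq[-1]]))
seq = np.array(seq)

def seg_loglik(seg):
    """Maximized log-likelihood of one homogeneous segment."""
    counts = np.zeros((2, 2))
    for a, b in zip(seg[:-1], seg[1:]):
        counts[a, b] += 1
    probs = (counts + 1e-12) / (counts.sum(1, keepdims=True) + 2e-12)
    return float((counts * np.log(probs)).sum())

def bic(loglik, n_params, n_obs):
    return loglik - 0.5 * n_params * np.log(n_obs)

n = len(seq)
homogeneous = bic(seg_loglik(seq), 2, n)   # one regime, 2 free parameters
# two regimes, 4 free parameters (the change point itself is not penalized here)
best = max((bic(seg_loglik(seq[:t]) + seg_loglik(seq[t - 1:]), 4, n), t)
           for t in range(20, n - 20))
print(f"homogeneous: {homogeneous:.1f}; best partitioned: {best[0]:.1f} at t={best[1]}")
```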

  4. Predicting mTOR inhibitors with a classifier using recursive partitioning and Naive Bayesian approaches.

    Directory of Open Access Journals (Sweden)

    Ling Wang

    BACKGROUND: Mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. Thus, there is a great deal of interest in developing clinical drugs based on mTOR. In this paper, in silico models based on multiple scaffolds were developed to predict mTOR inhibitors or non-inhibitors. METHODS: First, 1,264 diverse compounds were collected and categorized as mTOR inhibitors and non-inhibitors. Two methods, recursive partitioning (RP) and naïve Bayesian (NB), were used to build combinatorial classification models of mTOR inhibitors versus non-inhibitors using physicochemical descriptors, fingerprints, and atom center fragments (ACFs). RESULTS: A total of 253 models were constructed and the overall predictive accuracies of the best models were more than 90% for both the training set of 964 and the external test set of 300 diverse compounds. The scaffold-hopping abilities of the best models were successfully evaluated by predicting 37 recently published mTOR inhibitors. Compared with the best RP and Bayesian models, the classifier based on ACFs and Bayesian showed comparable or slightly better performance and scaffold-hopping ability. A web server was developed based on the ACFs and Bayesian method (http://rcdd.sysu.edu.cn/mtor/). This web server can be used to predict whether a compound is an mTOR inhibitor or non-inhibitor online. CONCLUSION: In silico models were constructed to predict mTOR inhibitors using recursive partitioning and naïve Bayesian methods, and a web server (mTOR Predictor) was also developed based on the best model results. Compound prediction or virtual screening can be carried out through our web server. Moreover, the favorable and unfavorable fragments for mTOR inhibitors obtained from the Bayesian classifiers will be helpful for lead optimization or the design of new mTOR inhibitors.
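
    As a flavor of the naïve Bayesian side of such a classifier, here is a minimal sketch with scikit-learn's BernoulliNB on synthetic binary fingerprints (random bits stand in for the real ECFP-style fingerprints and curated labels; the study's actual descriptors, software, and accuracy figures are not reproduced):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in data: 1264 compounds x 1024 fingerprint bits.  A real study
# would compute ECFP-like fingerprints with a chemistry toolkit.
n, bits = 1264, 1024
X = rng.integers(0, 2, size=(n, bits))
y = rng.integers(0, 2, size=n)            # 1 = inhibitor, 0 = non-inhibitor
X[y == 1, :20] |= rng.integers(0, 2, size=(int(y.sum()), 20))  # enriched bits

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=300, random_state=1)
clf = BernoulliNB(alpha=1.0)              # Laplace smoothing
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```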

  5. Bayesian Methods for Statistical Analysis

    OpenAIRE

    Puza, Borek

    2015-01-01

    Bayesian methods for statistical analysis is a book on statistical methods for analysing a wide variety of data. The book consists of 12 chapters, starting with basic concepts and covering numerous topics, including Bayesian estimation, decision theory, prediction, hypothesis testing, hierarchical models, Markov chain Monte Carlo methods, finite population inference, biased sampling and nonignorable nonresponse. The book contains many exercises, all with worked solutions, including complete c...

  6. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making the subject inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice and freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...

  7. Bayesian Methods and Universal Darwinism

    CERN Document Server

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...

  8. Deep Learning and Bayesian Methods

    Directory of Open Access Journals (Sweden)

    Prosper Harrison B.

    2017-01-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  9. Deep Learning and Bayesian Methods

    Science.gov (United States)

    Prosper, Harrison B.

    2017-03-01

    A revolution is underway in which deep neural networks are routinely used to solve difficult problems such as face recognition and natural language understanding. Particle physicists have taken notice and have started to deploy these methods, achieving results that suggest a potentially significant shift in how data might be analyzed in the not too distant future. We discuss a few recent developments in the application of deep neural networks and then indulge in speculation about how such methods might be used to automate certain aspects of data analysis in particle physics. Next, the connection to Bayesian methods is discussed and the paper ends with thoughts on a significant practical issue, namely, how, from a Bayesian perspective, one might optimize the construction of deep neural networks.

  10. Bayesian Methods and Universal Darwinism

    Science.gov (United States)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints in the form of adaptations imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the operations of a succession of Darwinian processes.

  11. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...

  12. Variational bayesian method of estimating variance components.

    Science.gov (United States)

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach using a variational inference method, the so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with Gibbs sampling.
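
    A simplified analogue of the comparison: coordinate-ascent variational inference (CAVI) for a single mean/precision pair under conjugate priors. The paper's model partitions variance into genetic and residual components; this sketch keeps one precision parameter only, and the hyperparameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=50)   # data with unknown mean/variance
n = y.size

# Priors: mu ~ N(mu0, tau0^2), precision lam = 1/sigma^2 ~ Gamma(a0, b0).
mu0, tau0sq, a0, b0 = 0.0, 100.0, 0.01, 0.01

# CAVI: q(mu) = N(m, s2), q(lam) = Gamma(a_n, b_n); iterate the updates.
a_n = a0 + n / 2
m, s2, b_n = y.mean(), 1.0, 1.0
for _ in range(100):
    e_lam = a_n / b_n                          # E_q[lam]
    s2 = 1.0 / (1.0 / tau0sq + n * e_lam)
    m = s2 * (mu0 / tau0sq + e_lam * y.sum())
    # E_q[sum (y_i - mu)^2] = sum (y_i - m)^2 + n * s2
    b_n = b0 + 0.5 * (np.sum((y - m) ** 2) + n * s2)

# The VB posterior for the variance (mean b_n/(a_n - 1)) is typically
# shorter-tailed than the Gibbs/exact posterior, as the paper reports.
print(f"VB variance estimate: {b_n / (a_n - 1):.3f}  (true value 2.25)")
```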

  13. SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE

    Institute of Scientific and Technical Information of China (English)

    Ming HAN; Yuanyao DING

    2004-01-01

    This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced through an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters of exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
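
    A worked instance of the zero-failure setting (assuming a Beta(1, b) prior on the failure probability and a uniform hyperprior b ~ U(1, c); the paper's exact hyperprior choices may differ): the posterior after n failure-free tests is Beta(1, b + n), and averaging its mean over the hyperprior has a closed form.

```python
import numpy as np

def bayes_est(n, b):
    """Posterior mean of the failure probability p with prior Beta(1, b)
    after n tests with zero failures: the posterior is Beta(1, b + n)."""
    return 1.0 / (1.0 + b + n)

def e_bayes_est(n, c):
    """Expected Bayesian estimate: average the posterior mean over the
    hyperprior b ~ Uniform(1, c); the integral has a closed form."""
    return (np.log(1.0 + c + n) - np.log(2.0 + n)) / (c - 1.0)

n = 30                                    # zero-failure tests
for c in (2.0, 4.0, 6.0):
    print(f"c={c}: E-Bayes p_hat = {e_bayes_est(n, c):.5f}, "
          f"plain Bayes (b=c) = {bayes_est(n, c):.5f}")
```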

  14. Innovative Bayesian and parsimony phylogeny of dung beetles (coleoptera, scarabaeidae, scarabaeinae) enhanced by ontology-based partitioning of morphological characters.

    Science.gov (United States)

    Tarasov, Sergei; Génier, François

    2015-01-01

    Scarabaeine dung beetles are the dominant dung-feeding group of insects and are widely used as model organisms in conservation, ecology and developmental biology. Due to the conflicts among 13 recently published phylogenies dealing with the higher-level relationships of dung beetles, the phylogeny of this lineage remains largely unresolved. In this study, we conduct rigorous phylogenetic analyses of dung beetles, based on an unprecedented taxon sample (110 taxa) and detailed investigation of morphology (205 characters). We provide a description of the morphology and thoroughly illustrate the characters used. Along with parsimony, traditionally used in the analysis of morphological data, we also apply the Bayesian method with a novel approach that uses anatomy ontology for matrix partitioning. This approach allows for heterogeneity in evolutionary rates among characters from different anatomical regions. Anatomy ontology generates a number of parameter-partition schemes, which we compare using Bayes factors. We also test the effect of inclusion of autapomorphies in the morphological analysis, which hitherto has not been examined. Generally, schemes with more parameters were favored in the Bayesian comparison, suggesting that characters located on different body regions evolve at different rates and that partitioning of the data matrix using anatomy ontology is reasonable; however, trees from the parsimony and all the Bayesian analyses were quite consistent. The hypothesized phylogeny reveals many novel clades and provides additional support for some clades recovered in previous analyses. Our results provide a solid basis for a new classification of dung beetles, in which the taxonomic limits of the tribes Dichotomiini, Deltochilini and Coprini are restricted and many new tribes must be described. Based on the consistency of the phylogeny with biogeography, we speculate that dung beetles may have originated in the Mesozoic contrary to the traditional view pointing to a

  15. Spatially Partitioned Embedded Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2013-10-30

    We study spatially partitioned embedded Runge--Kutta (SPERK) schemes for partial differential equations (PDEs), in which each of the component schemes is applied over a different part of the spatial domain. Such methods may be convenient for problems in which the smoothness of the solution or the magnitudes of the PDE coefficients vary strongly in space. We focus on embedded partitioned methods as they offer greater efficiency and avoid the order reduction that may occur in nonembedded schemes. We demonstrate that the lack of conservation in partitioned schemes can lead to nonphysical effects and propose conservative additive schemes based on partitioning the fluxes rather than the ordinary differential equations. A variety of SPERK schemes are presented, including an embedded pair suitable for the time evolution of fifth-order weighted nonoscillatory spatial discretizations. Numerical experiments are provided to support the theory.
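
    The core idea — different Runge-Kutta weights on different parts of the grid — fits in a few lines. The sketch below (illustrative, not the authors' conservative flux-partitioned schemes) advances linear advection with forward Euler on the left half of a periodic domain and Heun's two-stage method on the right half, sharing stages; the final print exposes the loss of mass conservation that motivates the paper's flux-based remedy.

```python
import numpy as np

# u_t + a u_x = 0 on a periodic grid, first-order upwind in space.
a, nx, dt, nsteps = 1.0, 200, 0.004, 100
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u0 = np.exp(-200 * (x - 0.3) ** 2)           # initial pulse
u = u0.copy()

def rhs(u):
    return -a * (u - np.roll(u, 1)) / dx     # upwind spatial derivative

left = x < 0.5                               # spatial partition of the domain

for _ in range(nsteps):
    k1 = rhs(u)
    k2 = rhs(u + dt * k1)                    # shared second stage
    u_next = np.empty_like(u)
    u_next[left] = u[left] + dt * k1[left]                  # forward Euler
    u_next[~left] = u[~left] + 0.5 * dt * (k1 + k2)[~left]  # Heun / RK2
    u = u_next

# Each component scheme conserves sum(u)*dx on its own; mixing the weights
# across the partition does not, which is the nonphysical effect the paper
# addresses with conservative, flux-partitioned variants.
print("mass change:", (u.sum() - u0.sum()) * dx)
```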

  16. Bayesian simultaneous equation models for the analysis of energy intake and partitioning in growing pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Jørgensen, Henry; Kebreab, E

    2012-01-01

    The objective of the current study was to develop Bayesian simultaneous equation models for modelling energy intake and partitioning in growing pigs. A key feature of the Bayesian approach is that parameters are assigned prior distributions, which may reflect the current state of knowledge. Informative priors were developed, reflecting current knowledge about metabolic scaling and partial efficiencies of PD and LD rates, whereas flat non-informative priors were used for the remainder of the parameters. The experimental data analysed originate from a balance and respiration trial with 17 cross-bred pigs of three genders (barrows, boars and gilts) selected on the basis of similar birth weight. The pigs were fed four diets based on barley, wheat and soybean meal supplemented with crystalline amino acids to meet or exceed Danish nutrient requirement standards. Nutrient balances and gas exchanges were measured at c...

  17. Variational Bayesian Approximation methods for inverse problems

    Science.gov (United States)

    Mohammad-Djafari, Ali

    2012-09-01

    Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular, two specific prior models (Student-t and mixture-of-Gaussians models) are considered and details of the algorithms are given.

  18. Bayesian Methods for Radiation Detection and Dosimetry

    CERN Document Server

    Groer, Peter G

    2002-01-01

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high- and low-activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied stochastic processes in a method to obtain Bayesian estimates of 222Rn daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed comp...

  19. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...

  20. Hessian PDF reweighting meets the Bayesian methods

    CERN Document Server

    Paukkunen, Hannu

    2014-01-01

    We discuss Hessian PDF reweighting, a technique intended to estimate the effects that new measurements have on a set of PDFs. The method stems straightforwardly from considering new data in a usual $\chi^2$-fit and it naturally incorporates non-zero values for the tolerance, $\Delta\chi^2 > 1$. In comparison to the contemporary Bayesian reweighting techniques, there is no need to generate large ensembles of PDF Monte Carlo replicas, and the observables need to be evaluated only with the central and the error sets of the original PDFs. In spite of the apparently rather different methodologies, we find that the Hessian and the Bayesian techniques are actually equivalent if the $\Delta\chi^2$ criterion is properly included in the Bayesian likelihood function, which is a simple exponential.
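
    Schematically, and in our notation rather than the paper's, the claimed equivalence pairs a penalized fit on the Hessian side with tolerance-aware weights on the Bayesian side:

```latex
\[
\chi^2_{\mathrm{tot}}(z) \;=\; \chi^2_{\mathrm{new\ data}}(z) \;+\; \Delta\chi^2 \sum_k z_k^2 ,
\qquad
w_k \;\propto\; \exp\!\left(-\frac{\chi^2_k}{2\,\Delta\chi^2}\right),
\]
```

    where $z_k$ are the shifts along the Hessian eigenvector directions, $\chi^2_k$ is the fit quality of replica $k$ against the new data, and the common choice $w_k \propto e^{-\chi^2_k/2}$ is recovered only for $\Delta\chi^2 = 1$.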

  1. The first complete mitochondrial genome from Bostrychus genus (Bostrychus sinensis) and partitioned Bayesian analysis of Eleotridae fish phylogeny

    Indian Academy of Sciences (India)

    Tao Wei; Xiao Xiao Jin; Tian Jun Xu

    2013-08-01

    To understand the phylogenetic position of Bostrychus sinensis in Eleotridae and the phylogenetic relationships of the family, we determined the nucleotide sequence of the mitochondrial (mt) genome of Bostrychus sinensis. It is the first complete mitochondrial genome sequence of the genus Bostrychus. The entire mtDNA sequence was 16,508 bp in length, with a standard set of 13 protein-coding genes, 22 transfer RNA genes (tRNAs), two ribosomal RNA genes (rRNAs) and a noncoding control region. The mitochondrial genome of B. sinensis shared common features with those of other bony fishes with respect to gene arrangement, base composition, and tRNA structures. Phylogenetic hypotheses within Eleotridae have been controversial at the genus level. We used the mitochondrial cytochrome b (cytb) gene sequence to examine phylogenetic relationships in Eleotridae using a partitioned Bayesian method. When specific models and parameter estimates were assumed for each partition of the total data, the harmonic mean -lnL was improved. The phylogenetic analysis supported the monophyly of Hypseleotris and Gobiomorphus. In addition, Bostrychus was most closely related to Ophiocara, and Philypnodon is the sister group to Microphlypnus, based on the current datasets. Further, extensive taxonomic sampling and more molecular information are needed to confirm the phylogenetic relationships in Eleotridae.

  2. BONNSAI: correlated stellar observables in Bayesian methods

    CERN Document Server

    Schneider, F R N; Fossati, L; Langer, N; de Koter, A

    2016-01-01

    In an era of large spectroscopic surveys of stars and big data, sophisticated statistical methods become more and more important in order to infer fundamental stellar parameters such as mass and age. Bayesian techniques are powerful methods because they can match all available observables simultaneously to stellar models while taking prior knowledge properly into account. However, in most cases it is assumed that observables are uncorrelated which is generally not the case. Here, we include correlations in the Bayesian code BONNSAI by incorporating the covariance matrix in the likelihood function. We derive a parametrisation of the covariance matrix that, in addition to classical uncertainties, only requires the specification of a correlation parameter that describes how observables co-vary. Our correlation parameter depends purely on the method with which observables have been determined and can be analytically derived in some cases. This approach therefore has the advantage that correlations can be accounte...

  3. Nested partitions method, theory and applications

    CERN Document Server

    Shi, Leyuan

    2009-01-01

    There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...

  4. Prior approval: the growth of Bayesian methods in psychology.

    Science.gov (United States)

    Andrews, Mark; Baguley, Thom

    2013-02-01

    Within the last few years, Bayesian methods of data analysis in psychology have proliferated. In this paper, we briefly review the history of the Bayesian approach to statistics and consider the implications that Bayesian methods have for the theory and practice of data analysis in psychology.

  5. New parallel SOR method by domain partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Dexuan [Courant Inst. of Mathematical Sciences New York Univ., NY (United States)

    1996-12-31

    In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be "consistently ordered". So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is "consistently ordered". Finally, we demonstrate the parallel performance of the PSOR method on four different message-passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
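
    A sequential emulation of the strip-partitioned idea (a sketch, with assumed grid size and relaxation factor; real PSOR runs the strips on separate processors and exchanges boundary rows between them):

```python
import numpy as np

# 2D Poisson: -Lap(u) = f on the unit square, 5-point stencil, u = 0 on the boundary.
n, omega, n_strips = 32, 1.7, 4
h = 1.0 / (n + 1)
f = np.ones((n, n))
u = np.zeros((n + 2, n + 2))                  # interior plus boundary padding

strips = np.array_split(np.arange(1, n + 1), n_strips)

for it in range(200):
    u_old = u.copy()                          # values from the previous iteration
    for rows in strips:                       # each strip = one "processor"
        first, last = rows[0], rows[-1]
        for i in rows:
            for j in range(1, n + 1):
                # Rows outside the strip come from u_old; this stands in
                # for the interprocessor boundary exchange of PSOR.
                up = u[i - 1, j] if i - 1 >= first else u_old[i - 1, j]
                dn = u_old[i + 1, j] if i + 1 > last else u[i + 1, j]
                gs = 0.25 * (up + dn + u[i, j - 1] + u[i, j + 1]
                             + h * h * f[i - 1, j - 1])
                u[i, j] = (1 - omega) * u[i, j] + omega * gs

res = f - (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
           - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
print("max |residual|:", np.abs(res).max())
```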

  6. A Bayesian method for pulsar template generation

    CERN Document Server

    Imgrund, M; Kramer, M; Lesch, H

    2015-01-01

    Extracting Times of Arrival from pulsar radio signals depends on the knowledge of the pulsars pulse profile and how this template is generated. We examine pulsar template generation with Bayesian methods. We will contrast the classical generation mechanism of averaging intensity profiles with a new approach based on Bayesian inference. We introduce the Bayesian measurement model imposed and derive the algorithm to reconstruct a "statistical template" out of noisy data. The properties of these "statistical templates" are analysed with simulated and real measurement data from PSR B1133+16. We explain how to put this new form of template to use in analysing secondary parameters of interest and give various examples: We implement a nonlinear filter for determining ToAs of pulsars. Applying this method to data from PSR J1713+0747 we derive ToAs self consistently, meaning all epochs were timed and we used the same epochs for template generation. While the average template contains fluctuations and noise as unavoida...

  7. parallelMCMCcombine: an R package for bayesian methods for big data and analytics.

    Directory of Open Access Journals (Sweden)

    Alexey Miroshnikov

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.

  8. parallelMCMCcombine: an R package for bayesian methods for big data and analytics.

    Science.gov (United States)

    Miroshnikov, Alexey; Conlon, Erin M

    2014-01-01

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.
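
    The combination step can be illustrated with a consensus-style rule, similar in spirit to one of the family of techniques the package covers (this sketch uses Gaussian stand-ins for the subposterior samples and is not the package's own R code or API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend three machines each produced MCMC samples from a subposterior
# of a 2-parameter model (here: Gaussians with differing means/covariances).
subs = [rng.multivariate_normal(mean, cov, size=5000)
        for mean, cov in [([0.9, 2.1], [[0.04, 0.01], [0.01, 0.09]]),
                          ([1.1, 1.9], [[0.05, 0.00], [0.00, 0.08]]),
                          ([1.0, 2.0], [[0.06, 0.02], [0.02, 0.07]])]]

# Consensus-style combination: weight each machine's draws by the inverse
# of its subposterior sample covariance (exact for Gaussian subposteriors).
weights = [np.linalg.inv(np.cov(s.T)) for s in subs]
w_sum_inv = np.linalg.inv(sum(weights))
combined = np.einsum('ij,tj->ti', w_sum_inv,
                     sum(w @ s.T for w, s in zip(weights, subs)).T)

print("combined posterior mean:", combined.mean(axis=0))
```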

  9. Recursive Partitioning Method on Competing Risk Outcomes

    Science.gov (United States)

    Xu, Wei; Che, Jiahua; Kong, Qin

    2016-01-01

    In some cancer clinical studies, researchers have interests to explore the risk factors associated with competing risk outcomes such as recurrence-free survival. We develop a novel recursive partitioning framework on competing risk data for both prognostic and predictive model constructions. We define specific splitting rules, pruning algorithm, and final tree selection algorithm for the competing risk tree models. This methodology is quite flexible that it can corporate both semiparametric method using Cox proportional hazards model and parametric competing risk model. Both prognostic and predictive tree models are developed to adjust for potential confounding factors. Extensive simulations show that our methods have well-controlled type I error and robust power performance. Finally, we apply both Cox proportional hazards model and flexible parametric model for prognostic tree development on a retrospective clinical study on oropharyngeal cancer patients. PMID:27486300

  10. Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.

    Science.gov (United States)

    Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka

    2014-02-01

    In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping typically needs to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior, and we use conjugate priors for the Markov chain parameters, which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM algorithm with a greedy search. This is demonstrated to be faster and to yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain.
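
    The conjugacy that makes partition comparison analytical is easy to exhibit: with Dirichlet priors on each row of a cluster's pooled transition-count matrix, the marginal likelihood is a product of Dirichlet-multinomial terms. The sketch below scores two candidate partitions of four simulated sequences (the Dirichlet process prior over partitions and the EM/greedy search of the paper are omitted):

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

def simulate(P, T=400):
    s = [0]
    for _ in range(T):
        s.append(rng.choice(len(P), p=P[s[-1]]))
    return s

PA = np.array([[0.8, 0.2], [0.3, 0.7]])
PB = np.array([[0.4, 0.6], [0.6, 0.4]])
seqs = [simulate(PA), simulate(PA), simulate(PB), simulate(PB)]

def counts(seq, k=2):
    c = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        c[a, b] += 1
    return c

def cluster_log_ml(cluster, alpha=1.0, k=2):
    """Log marginal likelihood of a cluster's pooled transition counts under
    independent Dirichlet(alpha) priors on each row (Dirichlet-multinomial)."""
    c = sum(counts(seqs[i], k) for i in cluster)
    return sum(gammaln(k * alpha) - gammaln(k * alpha + row.sum())
               + np.sum(gammaln(alpha + row) - gammaln(alpha))
               for row in c)

def partition_score(partition):
    return sum(cluster_log_ml(cl) for cl in partition)

good = [(0, 1), (2, 3)]        # groups sequences by their true dynamics
bad = [(0, 2), (1, 3)]
print("log ML, correct partition:", round(partition_score(good), 1))
print("log ML, mixed partition:  ", round(partition_score(bad), 1))
```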

  11. Diet reconstruction and resource partitioning of a Caribbean marine mesopredator using stable isotope Bayesian modelling.

    Directory of Open Access Journals (Sweden)

    Alexander Tilley

    The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of δ15N and δ13C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinus perezi on Glovers Reef Atoll, Belize. Bayesian stable isotope mixing models were used to investigate prey choice as well as viable diet-tissue discrimination factors for use with stingrays. Stingray δ15N values showed the greatest variation and a positive relationship with size, with an isotopic niche width approximately twice that of sympatric species. Shark species exhibited comparatively restricted δ15N values and greater δ13C variation, with very little overlap of stingray niche space. Mixing models suggest bivalves and annelids are proportionally more important prey in the stingray diet than crustaceans and teleosts at Glovers Reef, in contrast to all but one published diet study using stomach contents from other locations. Incorporating gut contents information from the literature, we suggest diet-tissue discrimination factor values of Δ15N ≈ 2.7‰ and Δ13C ≈ 0.9‰ for stingrays in the absence of validation experiments. The wide trophic niche and lower trophic level exhibited by stingrays compared to sympatric sharks supports their putative role as important base stabilisers in benthic systems, with the potential to absorb trophic perturbations through numerous opportunistic prey interactions.
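
    The mixing-model logic can be shown with a small grid-based posterior (a sketch, not SIAR or the authors' model; the source signatures, discrimination-corrected values, and error terms are invented for illustration):

```python
import numpy as np

# Source signatures (d13C, d15N), assumed already corrected by the
# diet-tissue discrimination factors (the paper suggests ~0.9 and ~2.7).
sources = np.array([[-18.0, 7.5],    # bivalves
                    [-16.5, 8.5],    # annelids
                    [-14.0, 10.0]])  # crustaceans/teleosts (pooled)
sd = 0.6                             # assumed residual SD per isotope
consumer = np.array([-16.4, 8.4])    # one stingray's tissue values

# Posterior over diet proportions (p1, p2, p3) on a simplex grid with a
# flat Dirichlet(1,1,1) prior; the likelihood is Gaussian around the
# proportion-weighted mixture mean.
step = 0.01
grid = [(p1, p2, 1 - p1 - p2)
        for p1 in np.arange(0, 1 + step, step)
        for p2 in np.arange(0, 1 - p1 + step, step) if p1 + p2 <= 1]
logp = np.array([-np.sum((consumer - np.array(p) @ sources) ** 2) / (2 * sd**2)
                 for p in grid])
w = np.exp(logp - logp.max())
w /= w.sum()
post_mean = w @ np.array(grid)
print("posterior mean diet proportions:", post_mean.round(2))
```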

  12. Bayesian non- and semi-parametric methods and applications

    CERN Document Server

    Rossi, Peter

    2014-01-01

    This book reviews and develops Bayesian non-parametric and semi-parametric methods for applications in microeconometrics and quantitative marketing. Most econometric models used in microeconomics and marketing applications involve arbitrary distributional assumptions. As more data becomes available, a natural desire to provide methods that relax these assumptions arises. Peter Rossi advocates a Bayesian approach in which specific distributional assumptions are replaced with more flexible distributions based on mixtures of normals. The Bayesian approach can use either a large but fixed number

  13. Bayesian method for system reliability assessment of overlapping pass/fail data

    Institute of Scientific and Technical Information of China (English)

    Zhipeng Hao; Shengkui Zeng; Jianbin Guo

    2015-01-01

    For high-reliability and long-life systems, system pass/fail data are often rare. By integrating lower-level data, such as data drawn from subsystem or component pass/fail testing, Bayesian analysis can improve the precision of the system reliability assessment. If the multi-level pass/fail data are overlapping, one challenging problem for the Bayesian analysis is to develop a likelihood function. Since the computational burden of the existing methods makes them infeasible for multi-component systems, this paper proposes an improved Bayesian approach for system reliability assessment in light of overlapping data. This approach includes three steps: first, searching for feasible paths based on the binary decision diagram; then, screening feasible points based on space partition and constraint decomposition; and finally, simplifying the likelihood function. An example of a satellite rolling control system demonstrates the feasibility and the efficiency of the proposed approach.

  14. Bayesian Network Enhanced with Structural Reliability Methods: Methodology

    OpenAIRE

    Straub, Daniel; Der Kiureghian, Armen

    2012-01-01

    We combine Bayesian networks (BNs) and structural reliability methods (SRMs) to create a new computational framework, termed enhanced Bayesian network (eBN), for reliability and risk analysis of engineering structures and infrastructure. BNs are efficient in representing and evaluating complex probabilistic dependence structures, as present in infrastructure and structural systems, and they facilitate Bayesian updating of the model when new information becomes available. On the other hand, SR...

  15. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  16. BAYESIAN DEMONSTRATION TEST METHOD WITH MIXED BETA DISTRIBUTION

    Institute of Scientific and Technical Information of China (English)

    MING Zhimao; TAO Junyong; CHEN Xun; ZHANG Yunan

    2008-01-01

    A Bayesian demonstration test plan for a complex mechatronics system is studied based on the mixed beta distribution. Various information from product design and improvement is appropriately incorporated by introducing an inheritance factor; moreover, the inheritance factor is treated as a random variable, the Bayesian decision for the qualification test plan is obtained, and the correctness of the Bayesian model presented is verified. The results show that the required quantity of tests is overly conservative under classical methods with small binomial samples. Although traditional Bayesian analysis can consider test information from related or similar products, it ignores differences between such products. The proposed method solves this problem; furthermore, considering the requirements of many practical projects, the differences among this method, the classical method and the Bayesian method with a beta distribution are compared according to the plan of the reliability acceptance test.
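
    The mixed-beta machinery stays conjugate under binomial test data, which is what makes such test plans tractable: each beta component updates individually and the mixture weights update through the component marginal likelihoods. A sketch with assumed prior components and a zero-failure demonstration test (the paper's inheritance-factor construction is not reproduced):

```python
import numpy as np
from scipy.special import betaln
from scipy.stats import beta

# Prior on the failure probability p: w*Beta(a1,b1) + (1-w)*Beta(a2,b2),
# e.g. a component inherited from a predecessor product plus a vaguer one.
w, (a1, b1), (a2, b2) = 0.7, (1.0, 49.0), (1.0, 9.0)

def posterior_mixture(n_tests, n_fail=0):
    """Each component updates to Beta(a+f, b+n-f); the weights update via
    the component marginal likelihoods (ratios of beta functions; the
    binomial coefficient cancels between components)."""
    comps = [(a1 + n_fail, b1 + n_tests - n_fail),
             (a2 + n_fail, b2 + n_tests - n_fail)]
    logm = np.array([np.log(w) + betaln(*comps[0]) - betaln(a1, b1),
                     np.log(1 - w) + betaln(*comps[1]) - betaln(a2, b2)])
    ws = np.exp(logm - np.logaddexp(logm[0], logm[1]))
    return ws, comps

def demo_confidence(n_tests, p0=0.05):
    """P(p <= p0 | zero failures in n_tests) under the mixture posterior."""
    ws, comps = posterior_mixture(n_tests)
    return sum(wi * beta.cdf(p0, a, b) for wi, (a, b) in zip(ws, comps))

for n in (10, 30, 60):
    print(f"n={n:3d}: P(p <= 0.05 | 0 failures) = {demo_confidence(n):.3f}")
```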

  17. BONNSAI: correlated stellar observables in Bayesian methods

    Science.gov (United States)

    Schneider, F. R. N.; Castro, N.; Fossati, L.; Langer, N.; de Koter, A.

    2017-02-01

    In an era of large spectroscopic surveys of stars and big data, sophisticated statistical methods become more and more important in order to infer fundamental stellar parameters such as mass and age. Bayesian techniques are powerful methods because they can match all available observables simultaneously to stellar models while taking prior knowledge properly into account. However, in most cases it is assumed that observables are uncorrelated, which is generally not the case. Here, we include correlations in the Bayesian code Bonnsai by incorporating the covariance matrix in the likelihood function. We derive a parametrisation of the covariance matrix that, in addition to classical uncertainties, only requires the specification of a correlation parameter that describes how observables co-vary. Our correlation parameter depends purely on the method with which observables have been determined and can be analytically derived in some cases. This approach therefore has the advantage that correlations can be accounted for even if information for them is not available in specific cases but is known in general. Because the new likelihood model is a better approximation of the data, the reliability and robustness of the inferred parameters are improved. We find that neglecting correlations biases the most likely values of inferred stellar parameters and affects the precision with which these parameters can be determined. The importance of these biases depends on the strength of the correlations and the uncertainties. For example, we apply our technique to massive OB stars, but emphasise that it is valid for any type of star. For effective temperatures and surface gravities determined from atmosphere modelling, we find that masses can be underestimated on average by 0.5σ and mass uncertainties overestimated by a factor of about 2 when neglecting correlations. At the same time, the age precisions are underestimated over a wide range of stellar parameters. We conclude that
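
    For two observables the covariance parametrisation reduces to a single correlation parameter ρ. A minimal sketch (invented numbers; the real code matches observations against full stellar model grids):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Observables (e.g. log Teff, log g) with classical 1-sigma uncertainties.
obs = np.array([4.45, 3.90])
sigma = np.array([0.02, 0.10])
model_pred = np.array([4.43, 4.05])       # one stellar model's prediction

def log_likelihood(rho):
    """Gaussian likelihood with covariance D R D, where D = diag(sigma)
    and R carries a single correlation parameter rho (the paper's
    parametrisation idea, reduced to two observables)."""
    R = np.array([[1.0, rho], [rho, 1.0]])
    cov = np.outer(sigma, sigma) * R
    return multivariate_normal(mean=model_pred, cov=cov).logpdf(obs)

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho}: log L = {log_likelihood(rho):.2f}")
```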

  18. A Bayesian method for microseismic source inversion

    Science.gov (United States)

    Pugh, D. J.; White, R. S.; Christie, P. A. F.

    2016-08-01

    Earthquake source inversion is highly dependent on location determination and velocity models. Uncertainties in both the model parameters and the observations need to be rigorously incorporated into an inversion approach. Here, we show a probabilistic Bayesian method that allows formal inclusion of the uncertainties in the moment tensor inversion. This method allows the combination of different sets of far-field observations, such as P-wave and S-wave polarities and amplitude ratios, into one inversion. Additional observations can be included by deriving a suitable likelihood function from the uncertainties. This inversion produces samples from the source posterior probability distribution, including a best-fitting solution for the source mechanism and associated probability. The inversion can be constrained to the double-couple space or allowed to explore the gamut of moment tensor solutions, allowing volumetric and other non-double-couple components. The posterior probability of the double-couple and full moment tensor source models can be evaluated from the Bayesian evidence, using samples from the likelihood distributions for the two source models, producing an estimate of whether or not a source is double-couple. Such an approach is ideally suited to microseismic studies where there are many sources of uncertainty and it is often difficult to produce reliability estimates of the source mechanism, although this can be true of many other cases. Using full-waveform synthetic seismograms, we also show the effects of noise, location, network distribution and velocity model uncertainty on the source probability density function. The noise has the largest effect on the results, especially as it can affect other parts of the event processing. This uncertainty can lead to erroneous non-double-couple source probability distributions, even when no other uncertainties exist. Although including amplitude ratios can improve the constraint on the source probability

  19. Bayesian data analysis in population ecology: motivations, methods, and benefits

    Science.gov (United States)

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  20. Approximation methods for efficient learning of Bayesian networks

    CERN Document Server

    Riggelsen, C

    2008-01-01

    This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.

  1. A new method for counting trees with vertex partition

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A direct and elementary method is provided in this paper for counting trees with vertex partition, instead of the recursion, generating-function, functional-equation, Lagrange-inversion, and matrix methods used before.

  2. An Efficient Bayesian Iterative Method for Solving Linear Systems

    Institute of Scientific and Technical Information of China (English)

    Deng DING; Kin Sio FONG; Ka Hou CHAN

    2012-01-01

    This paper concerns statistical methods for solving general linear systems. After a brief review of the Bayesian perspective on inverse problems, a new and efficient iterative method for general linear systems from a Bayesian perspective is proposed. The convergence of this iterative method is proved, and the corresponding error analysis is studied. Finally, numerical experiments are given to support the efficiency of this iterative method, and some conclusions are obtained.

  3. HEURISTIC DISCRETIZATION METHOD FOR BAYESIAN NETWORKS

    Directory of Open Access Journals (Sweden)

    Mariana D.C. Lima

    2014-01-01

    Bayesian Network (BN) is a classification technique widely used in Artificial Intelligence. Its structure is a Directed Acyclic Graph (DAG) used to model the association of categorical variables. However, in cases where the variables are numerical, a prior discretization is necessary. Discretization methods are usually based on a statistical approach using the data distribution, such as division by quartiles. In this article we present a discretization using a heuristic that identifies events called peaks and valleys. A genetic algorithm was used to identify these events, with the objective function being minimization of the error between the BN's estimated average and the actual value of the numeric output variable. The BN was modeled from a database of drill-bit rate of penetration in the Brazilian pre-salt layer, with 5 numerical variables and one categorical variable, using the proposed discretization and the division of the data by quartiles. The results show that the proposed heuristic discretization has higher accuracy than the quartile discretization.
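
    A stripped-down version of the peak/valley idea (direct detection on a smoothed histogram instead of the paper's genetic-algorithm search; the data and bin counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A numeric variable with three modes; the valleys between histogram
# peaks define the cut points of the discretization.
x = np.concatenate([rng.normal(m, 0.5, 400) for m in (0.0, 4.0, 9.0)])
hist, edges = np.histogram(x, bins=40)

# interior local minima ("valleys") of the smoothed histogram
smooth = np.convolve(hist, np.ones(3) / 3, mode='same')
valleys = [i for i in range(1, len(smooth) - 1)
           if smooth[i] < smooth[i - 1] and smooth[i] <= smooth[i + 1]]
cuts = [(edges[i] + edges[i + 1]) / 2 for i in valleys]
labels = np.digitize(x, cuts)             # discretized categories for the BN

print("cut points:", np.round(cuts, 2))
print("category sizes:", np.bincount(labels))
```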

  4. Stochastic back analysis of permeability coefficient using generalized Bayesian method

    Institute of Scientific and Technical Information of China (English)

    Zheng Guilan; Wang Yuan; Wang Fei; Yang Jian

    2008-01-01

    Because the conventional deterministic back analysis of the permeability coefficient cannot reflect the uncertainties of parameters, including the hydraulic head at the boundary, the permeability coefficient and the measured hydraulic head, a stochastic back analysis taking these parameter uncertainties into account was performed using the generalized Bayesian method. Based on the stochastic finite element method (SFEM) for a seepage field, the variable metric algorithm and the generalized Bayesian method, formulas for stochastic back analysis of the permeability coefficient were derived. A case study of seepage analysis of a sluice foundation is presented to illustrate the proposed method. The results indicate that, with the generalized Bayesian method, which considers the uncertainties of the measured hydraulic head, the permeability coefficient and the hydraulic head at the boundary, both the mean and the standard deviation of the permeability coefficient can be obtained, and the standard deviation is less than that obtained by the conventional Bayesian method. Therefore, the present method is valid and applicable.

  5. Dominant partition method. [based on a wave function formalism

    Science.gov (United States)

    Dixon, R. M.; Redish, E. F.

    1979-01-01

    By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.

  6. Application of Bayesian Network Learning Methods to Land Resource Evaluation

    Institute of Scientific and Technical Information of China (English)

    HUANG Jiejun; HE Xiaorong; WAN Youchuan

    2006-01-01

    Bayesian networks have a powerful ability for reasoning and semantic representation: they combine qualitative and quantitative analysis, and prior knowledge with observed data, and provide an effective way to deal with prediction, classification and clustering. This paper first presents an overview of Bayesian networks and their characteristics, and discusses how to learn a Bayesian network structure from given data; it then constructs a Bayesian network model for land resource evaluation using expert knowledge and the dataset. The experimental results on the test dataset show an evaluation accuracy of 87.5% and a Kappa index of 0.826. These results demonstrate that the method is feasible and efficient, and indicate that Bayesian networks are a promising approach for land resource evaluation.

  7. Advanced Bayesian Methods for Lunar Surface Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project is the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with an...

  8. Advanced Bayesian Methods for Lunar Surface Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project will be the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with...

  9. Proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology

    CERN Document Server

    Hortúa, Héctor J

    2014-01-01

    These are the proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology, held in Bogotá D.C., Colombia, June 9-13, 2014. The school was the first event in Colombia where statisticians and cosmologists from several universities in Bogotá met to discuss the statistical methods applied to cosmology, especially the use of Bayesian statistics in the study of the Cosmic Microwave Background (CMB), Baryonic Acoustic Oscillations (BAO), Large Scale Structure (LSS) and weak lensing.

  10. Gait Partitioning Methods: A Systematic Review

    Science.gov (United States)

    Taborri, Juri; Palermo, Eduardo; Rossi, Stefano; Cappa, Paolo

    2016-01-01

    In recent years, gait phase partitioning has become a challenging research topic owing to its impact on several applications related to gait technologies. A variety of sensors can be used to feed algorithms for gait phase partitioning, mainly classifiable as wearable or non-wearable. Among wearable sensors, footswitches or foot pressure insoles are generally considered the gold standard; however, to overcome some of their inherent limitations, inertial measurement units have become popular in recent decades. Valuable results have also been achieved through electromyography, electroneurography, and ultrasonic sensors. Non-wearable sensors, such as opto-electronic systems along with force platforms, remain the most accurate systems for performing gait analysis in an indoor environment. In the present paper we identify, select, and categorize the available methodologies for gait phase detection, analyzing the advantages and disadvantages of each solution. Finally, we comparatively examine the obtainable gait phase granularities, the usable computational methodologies and the optimal sensor placements on the targeted body segments. PMID:26751449

  11. Gait Partitioning Methods: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Juri Taborri

    2016-01-01

    Full Text Available In recent years, gait phase partitioning has become a challenging research topic owing to its impact on several applications related to gait technologies. A variety of sensors can be used to feed algorithms for gait phase partitioning, mainly classifiable as wearable or non-wearable. Among wearable sensors, footswitches or foot pressure insoles are generally considered the gold standard; however, to overcome some of their inherent limitations, inertial measurement units have become popular in recent decades. Valuable results have also been achieved through electromyography, electroneurography, and ultrasonic sensors. Non-wearable sensors, such as opto-electronic systems along with force platforms, remain the most accurate systems for performing gait analysis in an indoor environment. In the present paper we identify, select, and categorize the available methodologies for gait phase detection, analyzing the advantages and disadvantages of each solution. Finally, we comparatively examine the obtainable gait phase granularities, the usable computational methodologies and the optimal sensor placements on the targeted body segments.

  12. Estimating Tree Height-Diameter Models with the Bayesian Method

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    2014-01-01

    Full Text Available Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models take the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over classical methods: the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were both used to estimate the six height-diameter models. Both approaches showed that the Weibull model was the "best" model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against uninformative priors and the classical method. The results showed that the Bayesian method yielded narrower confidence bands for predicted values than the classical method, and that the credible bands for parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions for the parameters can be set as new priors in estimating the parameters using data2.
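
    The Bayesian estimation step the abstract describes can be sketched with a random-walk Metropolis sampler for a Weibull-type height-diameter curve, H = 1.3 + a(1 - exp(-b*D^c)). Everything below (synthetic data, priors, step sizes, a known error standard deviation) is an illustrative assumption, not the study's actual data or priors.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic diameter-height data from a Weibull-type H-D curve.
    a_true, b_true, c_true, sigma = 25.0, 0.03, 1.2, 1.0
    D = rng.uniform(5, 50, 200)
    H = 1.3 + a_true * (1 - np.exp(-b_true * D**c_true)) + rng.normal(0, sigma, D.size)

    def loglike(theta):
        a, b, c = theta
        mu = 1.3 + a * (1 - np.exp(-b * D**c))
        return -0.5 * np.sum(((H - mu) / sigma) ** 2)   # sigma treated as known

    def logprior(theta):
        a, b, c = theta
        if a <= 0 or b <= 0 or c <= 0:
            return -np.inf
        # weakly informative Gaussian priors (hypothetical choices)
        return (-0.5 * ((a - 20) / 10) ** 2
                - 0.5 * ((b - 0.05) / 0.05) ** 2
                - 0.5 * ((c - 1.0) / 0.5) ** 2)

    # Random-walk Metropolis with a burn-in of 5000 iterations.
    theta = np.array([20.0, 0.05, 1.0])
    lp = loglike(theta) + logprior(theta)
    step = np.array([0.5, 0.002, 0.02])
    samples = []
    for it in range(20000):
        prop = theta + rng.normal(0, step)
        lp_prop = loglike(prop) + logprior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if it > 5000:
            samples.append(theta.copy())
    samples = np.array(samples)
    print("posterior means (a, b, c):", samples.mean(axis=0))
    ```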

  13. A Modified Extended Bayesian Method for Parameter Estimation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a modified extended Bayesian method for parameter estimation. In this method the mean value of the prior estimate is taken from the values of the estimated parameters in the previous iteration step. In this way, the parameter covariance matrix can be updated automatically during the estimation procedure, thereby avoiding the selection of an empirical parameter. Because the extended Bayesian method can be regarded as a Tikhonov regularization, the new method is more stable than both the least-squares method and the maximum likelihood method. The validity of the proposed method is illustrated by two examples: one based on simulated data and one based on real engineering data.
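
    The core idea (the prior mean equals the previous iterate, so the covariance updates automatically) can be sketched as a regularized Gauss-Newton iteration. The toy exponential-decay model, noise level and starting values below are assumptions for illustration, not the paper's formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy nonlinear model y = f(theta) + noise, with theta to be estimated.
    def f(theta, t):
        return theta[0] * np.exp(-theta[1] * t)

    def jac(theta, t):
        return np.column_stack([np.exp(-theta[1] * t),
                                -theta[0] * t * np.exp(-theta[1] * t)])

    t = np.linspace(0, 5, 50)
    theta_true = np.array([2.0, 0.7])
    y = f(theta_true, t) + rng.normal(0, 0.05, t.size)

    W = np.eye(t.size) / 0.05**2      # data weights (inverse noise variance)
    theta = np.array([1.0, 0.3])      # initial guess
    C = np.diag([1.0, 1.0])           # initial parameter covariance

    for _ in range(20):
        J = jac(theta, t)
        r = y - f(theta, t)
        # Bayesian (Tikhonov-like) normal equations; because the prior mean
        # is the previous iterate, only the step itself is regularized.
        A = J.T @ W @ J + np.linalg.inv(C)
        theta = theta + np.linalg.solve(A, J.T @ W @ r)
        C = np.linalg.inv(A)          # covariance updated for the next step

    print("estimate:", theta, "posterior sd:", np.sqrt(np.diag(C)))
    ```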

  14. GALERKIN MESHLESS METHODS BASED ON PARTITION OF UNITY QUADRATURE

    Institute of Scientific and Technical Information of China (English)

    ZENG Qing-hong; LU De-tang

    2005-01-01

    Numerical quadrature is an important ingredient of Galerkin meshless methods. A new numerical quadrature technique, partition of unity quadrature (PUQ), for Galerkin meshless methods is presented. The technique is based on finite covering and partition of unity. There is no need to decompose the physical domain into small cells, and the technique possesses remarkable integration accuracy. Taking element-free Galerkin methods as an example, Galerkin meshless methods based on PUQ were studied in detail. Meshing is never required, either in constructing the approximate function or in the numerical quadrature, so Galerkin meshless methods based on PUQ are "truly" meshless methods.

  15. Bayesian methods for the design and analysis of noninferiority trials.

    Science.gov (United States)

    Gamalo-Siebers, Margaret; Gao, Aijun; Lakshminarayanan, Mani; Liu, Guanghan; Natanegara, Fanni; Railkar, Radha; Schmidli, Heinz; Song, Guochen

    2016-01-01

    The gold standard for evaluating treatment efficacy of a medical product is a placebo-controlled trial. However, when the use of placebo is considered to be unethical or impractical, a viable alternative for evaluating treatment efficacy is a noninferiority (NI) study, where a test treatment is compared to an active control treatment. The minimal objective of such a study is to determine whether the test treatment is superior to placebo. An assumption is made that if the active control treatment remains efficacious, as was observed when it was compared against placebo, then a test treatment that has comparable efficacy with the active control, within a certain range, must also be superior to placebo. Because of this assumption, the design, implementation, and analysis of NI trials present challenges for sponsors and regulators. In designing and analyzing NI trials, substantial historical data are often required on the active control treatment and placebo. Bayesian approaches provide a natural framework for synthesizing the historical data in the form of prior distributions that can effectively be used in the design and analysis of an NI clinical trial. Despite a flurry of recent research activity in the area of Bayesian approaches in medical product development, there are still substantial gaps in the recognition and acceptance of Bayesian approaches in NI trial design and analysis. The Bayesian Scientific Working Group of the Drug Information Association provides a coordinated effort to target the education and implementation issues surrounding Bayesian approaches for NI trials. In this article, we provide a review of both frequentist and Bayesian approaches in NI trials, and elaborate on the implementation of two common Bayesian methods: the hierarchical prior method and the meta-analytic-predictive approach. Simulations are conducted to investigate the properties of the Bayesian methods, and some real clinical trial examples are presented for illustration.

  16. Approximation methods for the partition functions of anharmonic systems

    Energy Technology Data Exchange (ETDEWEB)

    Lew, P.; Ishida, T.

    1979-07-01

    The analytical approximations for the classical, quantum mechanical and reduced partition functions of a diatomic molecule oscillating internally under the influence of the Morse potential have been derived, and their convergence has been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second quantization formalism, the reduced partition function of polyatomic systems can be put into an expression consisting separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results for the reduced partition function from the approximation method on 2-D and 3-D model systems agree well with exact numerical calculations.
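
    For reference, the standard quantum partition function of a single Morse oscillator takes the following form (notation assumed here; the report's own derivation of the classical and reduced functions is more involved):

    ```latex
    % Energy levels of a Morse oscillator and the resulting partition
    % function; D_e is the dissociation energy and n_max the highest
    % level still bound by the Morse well.
    \begin{align}
      E_n &= \hbar\omega_e\left(n+\tfrac{1}{2}\right)
             - \frac{\left[\hbar\omega_e\left(n+\tfrac{1}{2}\right)\right]^2}{4D_e},
             \qquad n = 0,1,\dots,n_{\max}, \\
      Z &= \sum_{n=0}^{n_{\max}} e^{-E_n/k_B T}.
    \end{align}
    ```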

  17. Effective classification of 3D image data using partitioning methods

    Science.gov (United States)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
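
    The first method can be caricatured in a few lines: recursively split the volume into octants, and keep a box as an attribute once its per-subject voxel counts separate the two groups. The data, the t-test stopping rule and all thresholds below are illustrative assumptions rather than the authors' exact procedure.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Toy binary 3-D ROI masks for two groups of 20 subjects each.
    vols_a = rng.random((20, 16, 16, 16)) < 0.10   # group A
    vols_b = rng.random((20, 16, 16, 16)) < 0.14   # group B, denser ROIs

    def recurse(box, attrs, alpha=0.01, min_size=4):
        (x0, x1), (y0, y1), (z0, z1) = box
        ca = vols_a[:, x0:x1, y0:y1, z0:z1].sum(axis=(1, 2, 3))
        cb = vols_b[:, x0:x1, y0:y1, z0:z1].sum(axis=(1, 2, 3))
        if stats.ttest_ind(ca, cb).pvalue < alpha:  # discriminative attribute
            attrs.append(box)
            return
        if x1 - x0 <= min_size:                     # too small to split again
            return
        mx, my, mz = (x0 + x1) // 2, (y0 + y1) // 2, (z0 + z1) // 2
        for bx in ((x0, mx), (mx, x1)):
            for by in ((y0, my), (my, y1)):
                for bz in ((z0, mz), (mz, z1)):
                    recurse((bx, by, bz), attrs, alpha, min_size)

    attrs = []
    recurse(((0, 16), (0, 16), (0, 16)), attrs)
    print(len(attrs), "discriminative hyper-rectangles")
    ```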

  18. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Koning, A.J., E-mail: koning@nrg.eu

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment-based weight.

  19. Application of an efficient Bayesian discretization method to biomedical data

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Vanathi

    2011-07-01

    Full Text Available Background: Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD) method for the optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components: a Bayesian score to evaluate discretizations, and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI) discretization method, which is commonly used for discretization. Results: On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performances of the C4.5 classifier and the naïve Bayes classifier were statistically significantly better when the predictor variables were discretized using EBD rather than FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI, and produced slightly more complex discretizations than FI. Conclusions: On a range of biomedical datasets, the Bayesian discretization method (EBD) yielded better classification performance and stability, but was less robust, than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.
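
    The two EBD components (a Bayesian score plus a dynamic-programming search) can be sketched as follows, scoring each interval by a Dirichlet-multinomial marginal likelihood and penalizing each additional interval with a constant lam as a crude stand-in for EBD's prior over discretizations; this is an illustrative simplification, not the published algorithm.

    ```python
    import numpy as np
    from math import lgamma

    def interval_score(counts, alpha=1.0):
        # Log marginal likelihood of the class counts in one interval under
        # a symmetric Dirichlet(alpha) prior (Dirichlet-multinomial).
        n, K = counts.sum(), len(counts)
        return (lgamma(K * alpha) - lgamma(K * alpha + n)
                + sum(lgamma(alpha + c) - lgamma(alpha) for c in counts))

    def bayes_discretize(x, y, lam=2.0):
        # Dynamic program: choose cut points maximizing the summed interval
        # scores minus a per-interval penalty `lam`.
        order = np.argsort(x)
        xs, ys = x[order], y[order]
        K, n = ys.max() + 1, len(ys)
        pref = np.zeros((n + 1, K), dtype=int)      # prefix class counts
        for i, yi in enumerate(ys):
            pref[i + 1] = pref[i]
            pref[i + 1, yi] += 1
        best = np.full(n + 1, -np.inf)
        back = np.zeros(n + 1, dtype=int)
        best[0] = 0.0
        for i in range(1, n + 1):
            for j in range(i):
                s = best[j] + interval_score(pref[i] - pref[j]) - lam
                if s > best[i]:
                    best[i], back[i] = s, j
        cuts, i = [], n                             # recover cut positions
        while back[i] > 0:
            i = back[i]
            cuts.append((xs[i - 1] + xs[i]) / 2)    # midpoint thresholds
        return sorted(cuts)

    rng = np.random.default_rng(7)
    x = rng.uniform(0, 3, 300)
    y = (x > 1.0).astype(int) + (x > 2.0).astype(int)  # 3 classes by threshold
    print(bayes_discretize(x, y))                      # roughly [1.0, 2.0]
    ```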

  20. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
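
    A set partitioning model is easy to state as an integer program: choose a minimum-cost subset of candidate schedules so that every task is covered exactly once. The sketch below uses the PuLP modelling library with an invented toy instance; it shows the model class, not the thesis's solution methods.

    ```python
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum

    # Toy crew-style instance: each schedule covers a set of tasks at a cost.
    tasks = ["t1", "t2", "t3", "t4"]
    schedules = {                 # schedule -> (covered tasks, cost)
        "s1": ({"t1", "t2"}, 5),
        "s2": ({"t3"}, 2),
        "s3": ({"t2", "t3", "t4"}, 7),
        "s4": ({"t1"}, 3),
        "s5": ({"t4"}, 2),
    }

    prob = LpProblem("set_partitioning", LpMinimize)
    x = {s: LpVariable(s, cat="Binary") for s in schedules}
    prob += lpSum(cost * x[s] for s, (_, cost) in schedules.items())
    for t in tasks:               # each task covered by exactly one schedule
        prob += lpSum(x[s] for s, (cov, _) in schedules.items() if t in cov) == 1

    prob.solve()
    print([s for s in schedules if x[s].value() > 0.5])  # e.g. ['s1', 's2', 's5']
    ```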

  1. A novel method for measuring polymer-water partition coefficients.

    Science.gov (United States)

    Zhu, Tengyi; Jafvert, Chad T; Fu, Dafang; Hu, Yue

    2015-11-01

    Low density polyethylene (LDPE) often is used as the sorbent material in passive sampling devices to estimate the average temporal chemical concentration in water bodies or sediment pore water. To calculate water phase chemical concentrations from LDPE concentrations accurately, it is necessary to know the LDPE-water partition coefficients (KPE-w) of the chemicals of interest. However, even moderately hydrophobic chemicals have large KPE-w values, making direct measurement experimentally difficult. In this study we evaluated a simple three phase system from which KPE-w can be determined easily and accurately. In the method, chemical equilibrium distribution between LDPE and a surfactant micelle pseudo-phase is measured, with the ratio of these concentrations equal to the LDPE-micelle partition coefficient (KPE-mic). By employing sufficient mass of polymer and surfactant (Brij 30), the mass of chemical in the water phase remains negligible, albeit in equilibrium. In parallel, the micelle-water partition coefficient (Kmic-w) is determined experimentally. KPE-w is the product of KPE-mic and Kmic-w. The method was applied to measure values of KPE-w for 17 polycyclic aromatic hydrocarbons, 37 polychlorinated biphenyls, and 9 polybrominated diphenylethers. These values were compared to literature values. Mass fraction-based chemical activity coefficients (γ) were determined in each phase and showed that for each chemical, the micelles and LDPE had nearly identical affinity.
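
    The arithmetic of the three-phase trick is just a product of the two measured coefficients, K_PE-w = K_PE-mic x K_mic-w. A two-line check with invented placeholder values:

    ```python
    import math

    # K_PE-w = K_PE-mic * K_mic-w; the numbers below are placeholders,
    # not measured values from the study.
    K_PE_mic = 50.0      # LDPE-micelle partition coefficient
    K_mic_w = 2.0e4      # micelle-water partition coefficient
    K_PE_w = K_PE_mic * K_mic_w
    print(f"log10 K_PE-w = {math.log10(K_PE_w):.2f}")   # -> 6.00
    ```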

  2. PARALLEL COMPOUND METHODS FOR SOLVING PARTITIONED STIFF SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li-rong Chen; De-gui Liu

    2001-01-01

    This paper deals with the solution of partitioned systems of nonlinear stiff differential equations. Given a differential system, the user may specify some equations to be stiff and others to be nonstiff. For the numerical solution of such a system, Parallel Compound Methods (PCMs) are studied. Nonstiff equations are integrated by a parallel explicit RK method, while a parallel Rosenbrock method is used for the stiff part of the system. Their order conditions, convergence and numerical stability are discussed, and numerical tests are conducted on a personal computer and a parallel computer.

  3. An Overview of Bayesian Methods for Neural Spike Train Analysis

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2013-01-01

    Full Text Available Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.

  4. Bayesian Methods for Analysis and Adaptive Scheduling of Exoplanet Observations

    CERN Document Server

    Loredo, Thomas J; Chernoff, David F; Clyde, Merlise A; Liu, Bin

    2011-01-01

    We describe work in progress by a collaboration of astronomers and statisticians developing a suite of Bayesian data analysis tools for extrasolar planet (exoplanet) detection, planetary orbit estimation, and adaptive scheduling of observations. Our work addresses analysis of stellar reflex motion data, where a planet is detected by observing the "wobble" of its host star as it responds to the gravitational tug of the orbiting planet. Newtonian mechanics specifies an analytical model for the resulting time series, but it is strongly nonlinear, yielding complex, multimodal likelihood functions; it is even more complex when multiple planets are present. The parameter spaces range in size from few-dimensional to dozens of dimensions, depending on the number of planets in the system, and the type of motion measured (line-of-sight velocity, or position on the sky). Since orbits are periodic, Bayesian generalizations of periodogram methods facilitate the analysis. This relies on the model being linearly separable, ...

  5. Bayesian analysis of the flutter margin method in aeroelasticity

    Science.gov (United States)

    Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit

    2016-12-01

    A Bayesian statistical framework is presented for the Zimmerman and Weissenburger flutter margin method, which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-squares based estimation technique, which relies on a Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis-Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. It is shown that the probabilistic (Bayesian) approach reduces the number of test points required to provide a flutter speed estimate of given accuracy and precision.

  6. Approach to the Correlation Discovery of Chinese Linguistic Parameters Based on Bayesian Method

    Institute of Scientific and Technical Information of China (English)

    WANG Wei(王玮); CAI LianHong(蔡莲红)

    2003-01-01

    The Bayesian approach is an important method in statistics. The Bayesian belief network is a powerful knowledge representation and reasoning tool under conditions of uncertainty. It is a graphical model that encodes probabilistic relationships among variables of interest. In this paper, an approach to Bayesian network construction is given for discovering relationships among Chinese linguistic parameters in a corpus.

  7. Predicting Plasma Glucose From Interstitial Glucose Observations Using Bayesian Methods

    DEFF Research Database (Denmark)

    Hansen, Alexander Hildenbrand; Duun-Henriksen, Anne Katrine; Juhl, Rune

    2014-01-01

    glucose monitor (CGM) and for unknown physiological influences. Combined with prior knowledge about the measurement devices, this approach can be used to obtain a robust predictive model. A stochastic-differential-equation-based gray-box (SDE-GB) model is formulated on the basis of an identifiable ... significant diffusion terms of the model are identified using likelihood ratio tests, yielding inclusion of σIsc, σGp, and σGsc. Second, estimates using maximum likelihood are obtained, but prediction capability is poor. Finally, a Bayesian method is implemented. Using this method the identified models ...

  8. Binary recursive partitioning: background, methods, and application to psychology.

    Science.gov (United States)

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in detail and illustrate their use in psychological research. We also provide R code for carrying out the methods.
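
    A minimal BRP-style analysis in Python (the paper's own code is in R) grows a binary tree and judges it purely by held-out predictive accuracy, with no significance tests; the dataset and tree depth are arbitrary choices for illustration.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Grow a shallow binary tree and evaluate it on held-out data only.
    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, tree.predict(X_te)))
    ```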

  9. Bayesian Methods for Nonlinear System Identification of Civil Structures

    Directory of Open Access Journals (Sweden)

    Conte Joel P.

    2015-01-01

    Full Text Available This paper presents a new framework for the identification of mechanics-based nonlinear finite element (FE) models of civil structures using Bayesian methods. In this approach, recursive Bayesian estimation methods are utilized to update an advanced nonlinear FE model of the structure using the input-output dynamic data recorded during an earthquake event. Capable of capturing the complex damage mechanisms and failure modes of the structural system, the updated nonlinear FE model can be used to evaluate the state of health of the structure after a damage-inducing event. To update the unknown time-invariant parameters of the FE model, three alternative stochastic filtering methods are used: the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the iterated extended Kalman filter (IEKF). For those estimation methods that require the computation of structural FE response sensitivities with respect to the unknown modeling parameters (EKF and IEKF), the accurate and computationally efficient direct differentiation method (DDM) is used. A three-dimensional five-story two-by-one bay reinforced concrete (RC) frame is used to illustrate the performance of the framework and compare the performance of the different filters in terms of convergence, accuracy, and robustness. Excellent estimation results are obtained with the UKF, EKF, and IEKF. Because of the analytical linearization used in the EKF and IEKF, abrupt and large jumps in the estimates of the modeling parameters are observed when using these filters. The UKF slightly outperforms the EKF and IEKF.

  10. Distance and extinction determination for APOGEE stars with Bayesian method

    CERN Document Server

    Wang, Jianling; Pan, Kaike; Chen, Bingqiu; Zhao, Yongheng; Wicker, James

    2016-01-01

    Using a Bayesian technique we derived distances and extinctions for over 100,000 red giant stars observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey by taking into account spectroscopic constraints from the APOGEE stellar parameters and photometric constraints from 2MASS, as well as prior knowledge of the Milky Way. The derived distances are compared with those from four other independent methods: the Hipparcos parallaxes, star clusters, APOGEE red clump stars, and asteroseismic distances from the APOKASC (Rodrigues et al. 2014) and SAGA (Casagrande et al. 2014) catalogues. These comparisons cover four orders of magnitude in distance, from 0.02 kpc to 20 kpc. The results show that our distances agree very well with those from other methods: the mean relative difference between our Bayesian distances and those derived from other methods ranges from -4.2% to +3.6%, and the dispersion ranges from 15% to 25%. The extinctions toward all stars are also derived and compared wi...

  11. ObStruct: a method to objectively analyse factors driving population structure using Bayesian ancestry profiles.

    Directory of Open Access Journals (Sweden)

    Velimir Gayevskiy

    Full Text Available Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods is the ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, there is currently no objective method to determine whether the fixed factor of interest (e.g. geographic origin) correlates with the inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and, crucially, determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae strains from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.

  12. Bayesian Monte Carlo method for nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Koning, A.J. [Nuclear Research and Consultancy Group NRG, P.O. Box 25, ZG Petten (Netherlands)

    2015-12-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which enables to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight. (orig.)

  13. Bayesian Monte Carlo method for nuclear data evaluation

    Science.gov (United States)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which enables to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight.

  14. Clustering method based on data division and partition

    Institute of Scientific and Technical Information of China (English)

    卢志茂; 刘晨; 张春祥; 王蕾

    2014-01-01

    Many classical clustering algorithms perform well under their own assumptions but do not scale well when applied to very large data sets (VLDS). In this work, a novel division and partition clustering method (DP) was proposed to solve the problem. DP cuts the source data set into data blocks and extracts the eigenvector of each data block to form a local feature set. The local feature set is then used in a second round of characteristics polymerization over the source data to find the global eigenvector. Finally, according to the global eigenvector, the data points are assigned by the minimum-distance criterion. The experimental results show that the method is more robust than conventional clustering algorithms. Its insensitivity to data dimensionality, distribution and the number of natural clusters gives it a wide range of applications in clustering VLDS.
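
    The division-and-partition structure can be sketched as: cluster each block locally, cluster the local centroids to obtain global centers, then assign every point by minimum distance. The sketch below substitutes plain k-means for the paper's eigenvector extraction, so it illustrates the pipeline, not DP itself.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)

    # Synthetic data: three Gaussian blobs, shuffled.
    X = np.vstack([rng.normal(c, 0.3, (2000, 2)) for c in ((0, 0), (3, 3), (0, 4))])
    rng.shuffle(X)

    blocks = np.array_split(X, 6)                        # step 1: divide
    local = np.vstack([KMeans(n_clusters=3, n_init=10).fit(b).cluster_centers_
                       for b in blocks])                 # step 2: local features
    centers = KMeans(n_clusters=3, n_init=10).fit(local).cluster_centers_
    labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)  # step 3
    print(np.bincount(labels))                           # roughly 2000 per cluster
    ```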

  15. Bayesian structural equation modeling method for hierarchical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang Xiaomo [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: xiaomo.jiang@vanderbilt.edu; Mahadevan, Sankaran [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: sankaran.mahadevan@vanderbilt.edu

    2009-04-15

    A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data becomes scarce as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, computational model, and latent variables. The method proposed in this paper uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in the model validation assessment at the system level. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.

  16. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are attracting a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques particularly appealing for decision analysis. To this should be added modern computing progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models into a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.

  17. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noise which may contain a large number of outliers. The approach is Bayesian, and it exploits a heavy-tailed t distribution for the data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type is developed by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.

  18. Bayesian methods in the search for MH370

    CERN Document Server

    Davey, Sam; Holland, Ian; Rutten, Mark; Williams, Jason

    2016-01-01

    This book demonstrates how nonlinear/non-Gaussian Bayesian time series estimation methods were used to produce a probability distribution of potential MH370 flight paths. It provides details of how the probabilistic models of aircraft flight dynamics, satellite communication system measurements, environmental effects and radar data were constructed and calibrated. The probability distribution was used to define the search zone in the southern Indian Ocean. The book describes particle-filter based numerical calculation of the aircraft flight-path probability distribution and validates the method using data from several of the involved aircraft’s previous flights. Finally it is shown how the Reunion Island flaperon debris find affects the search probability distribution.

  19. Element-Partition-Based Methods for Visualization of 3D Unstructured Grid Data

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    Element-partition-based methods for visualization of 3D unstructured grid data are presented. First, partition schemes for common elements, including curvilinear tetrahedra, pentahedra, hexahedra, etc., are given, so that complex elements can be divided into several rectilinear tetrahedra and the visualization processes can be simplified. Then, a slice method for cloud maps and an iso-surface method based on the partition schemes are described.

  20. On Bayesian methods of exploring qualitative interactions for targeted treatment.

    Science.gov (United States)

    Chen, Wei; Ghosh, Debashis; Raghunathan, Trivellore E; Norkin, Maxim; Sargent, Daniel J; Bepler, Gerold

    2012-12-10

    Providing personalized treatments designed to maximize benefits and minimize harms is of tremendous current medical interest. One problem in this area is the evaluation of the interaction between the treatment and other predictor variables. Treatment effects in subgroups having the same direction but different magnitudes are called quantitative interactions, whereas those having opposite directions in subgroups are called qualitative interactions (QIs). Identifying QIs is challenging because they are rare and usually unknown among many potential biomarkers. Meanwhile, subgroup analysis reduces the power of hypothesis testing, and multiple subgroup analyses inflate the type I error rate. We propose a new Bayesian approach to search for QIs in a multiple regression setting with adaptive decision rules. We consider various regression models for the outcome. We illustrate this method with two examples of phase III clinical trials. The algorithm is straightforward and easy to implement using existing software packages. We provide sample code in Appendix A.

  1. Bayesian Analysis of Multiple Populations I: Statistical and Computational Methods

    CERN Document Server

    Stenning, D C; Robinson, E; van Dyk, D A; von Hippel, T; Sarajedini, A; Stein, N

    2016-01-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations (van Dyk et al. 2009; Stein et al. 2013). Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties (age, metallicity, helium abundance, distance, absorption, and initial mass) are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and al...

  2. Insights on the Bayesian spectral density method for operational modal analysis

    Science.gov (United States)

    Au, Siu-Kui

    2016-01-01

    This paper presents a study on the Bayesian spectral density method for operational modal analysis. The method makes Bayesian inference of the modal properties by using the sample power spectral density (PSD) matrix averaged over independent sets of ambient data. In the typical case with a single set of data, it is divided into non-overlapping segments and they are assumed to be independent. This study is motivated by a recent paper that reveals a mathematical equivalence of the method with the Bayesian FFT method. The latter does not require averaging concepts or the independent segment assumption. This study shows that the equivalence does not hold in reality because the theoretical long data asymptotic distribution of the PSD matrix may not be valid. A single time history can be considered long for the Bayesian FFT method but not necessarily for the Bayesian PSD method, depending on the number of segments.
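
    The averaging step at the heart of the Bayesian PSD method looks roughly like this: split the ambient record into non-overlapping segments, take the FFT of each, and average the outer products into a sample PSD matrix. Windowing and scaling conventions, and the data itself, are glossed over in this bare-bones sketch.

    ```python
    import numpy as np

    def averaged_psd(data, n_seg):
        """data: (n_samples, n_channels) ambient vibration record."""
        segs = np.array_split(data, n_seg, axis=0)
        n = min(s.shape[0] for s in segs)         # common segment length
        acc = 0
        for s in segs:
            F = np.fft.rfft(s[:n], axis=0)        # per-channel FFT
            acc = acc + F[:, :, None] * F[:, None, :].conj()
        return acc / n_seg                        # (n_freq, n_ch, n_ch)

    data = np.random.default_rng(5).normal(size=(10000, 3))
    S = averaged_psd(data, n_seg=10)
    print(S.shape)  # -> (501, 3, 3)
    ```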

  3. Bayesian signal processing classical, modern, and particle filtering methods

    CERN Document Server

    Candy, James V

    2016-01-01

    This book aims to give readers a unified Bayesian treatment starting from the basics (Bayes' rule) and moving to the more advanced (Monte Carlo sampling), evolving to the next-generation model-based techniques (sequential Monte Carlo sampling). This edition incorporates a new chapter on "Sequential Bayesian Detection," a new section on "Ensemble Kalman Filters," as well as an expansion of the case studies that detail Bayesian solutions for a variety of applications. These studies illustrate Bayesian approaches to real-world problems incorporating detailed particle filter designs, adaptive particle filters and sequential Bayesian detectors. In addition to these major developments, a variety of sections are expanded to "fill in the gaps" of the first edition. Here, metrics for particle filter (PF) designs with emphasis on classical "sanity testing" lead to ensemble techniques as a basic requirement for performance analysis. The expansion of information theory metrics and their application to PF designs is fully developed an...

  4. Simple optimization method for partitioning purification of hydrogen networks

    Directory of Open Access Journals (Sweden)

    W.M. Shehata

    2015-03-01

    Full Text Available The Egyptian petroleum fuel market is growing rapidly. Fuels must meet the standard specifications of the Egyptian General Petroleum Corporation (EGPC), which require lower-sulfur gasoline and diesel fuels. The fuels must therefore be deeply hydrotreated, which increases hydrogen (H2) consumption. Along with the increased H2 consumption for deeper hydrotreating, additional H2 is needed for processing heavier and higher-sulfur crude slates, especially in the hydrocracking process, in addition to hydrotreating units, isomerization units and lubricant plants. Purification technology is used to increase the amount of recycled hydrogen: if the amount of recycled hydrogen is increased, the amount of hydrogen sent to the furnaces with the off gas decreases. In this work, the optimization methods of El Halwagi et al. (2003) and El Halwagi (2012), originally used for recycle/reuse integration systems, have been extended to the partitioning purification of hydrogen networks to minimize hydrogen consumption and hydrogen discharge. An actual case study and two case studies from the literature are solved to illustrate the proposed method.

  5. Trends in epidemiology in the 21st century: time to adopt Bayesian methods

    Directory of Open Access Journals (Sweden)

    Edson Zangiacomi Martinez

    2014-04-01

    Full Text Available 2013 marked the 250th anniversary of the presentation of Bayes' theorem by the philosopher Richard Price. Thomas Bayes was a figure little known in his own time, but in the 20th century the theorem that bears his name became widely used in many fields of research. Bayes' theorem is the basis of the so-called Bayesian methods, an approach to statistical inference that allows studies to incorporate prior knowledge about relevant data characteristics into statistical analysis. Nowadays, Bayesian methods are widely used in many different areas such as astronomy, economics, marketing, genetics, bioinformatics and the social sciences. A number of authors have discussed recent advances in techniques and the advantages of Bayesian methods for the analysis of epidemiological data. This article presents an overview of Bayesian methods, their application to epidemiological research, and the main areas of epidemiology which should benefit from the use of Bayesian methods in coming years.

  6. Computer-aided diagnosis system: a Bayesian hybrid classification method.

    Science.gov (United States)

    Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J

    2013-10-01

    A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; then the learning process is finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts, following the same cross-validation schemes as in the original studies. The first refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% originally obtained. The second considers the diagnosis of pathologies of the vertebral column, where the original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases. In a supervised framework the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified.

  7. Non-monophyly of most supraspecific taxa of calcareous sponges (Porifera, Calcarea) revealed by increased taxon sampling and partitioned Bayesian analysis of ribosomal DNA.

    Science.gov (United States)

    Dohrmann, Martin; Voigt, Oliver; Erpenbeck, Dirk; Wörheide, Gert

    2006-09-01

    Calcareous sponges (Porifera, Calcarea) play an important role for our understanding of early metazoan evolution, since several molecular studies suggested their closer relationship to Eumetazoa than to the other two sponge 'classes,' Demospongiae and Hexactinellida. The division of Calcarea into the subtaxa Calcinea and Calcaronea is well established by now, but their internal relationships remain largely unresolved. Here, we estimate phylogenetic relationships within Calcarea in a Bayesian framework, using full-length 18S and partial 28S ribosomal DNA sequences. Both genes were analyzed separately and in combination and were further partitioned by stem and loop regions, the former being modelled to take non-independence of paired sites into account. By substantially increasing taxon sampling, we show that most of the traditionally recognized supraspecific taxa within Calcinea and Calcaronea are not monophyletic, challenging the existing classification system, while monophyly of Calcinea and Calcaronea is again highly supported.

  8. Bayesian Method with Spatial Constraint for Retinal Vessel Segmentation

    Directory of Open Access Journals (Sweden)

    Zhiyong Xiao

    2013-01-01

    Full Text Available A Bayesian method with spatial constraint is proposed for vessel segmentation in retinal images. The proposed model assumes that the posterior probability of each pixel depends on the posterior probabilities of its neighboring pixels. An energy function is defined for the proposed model. By applying a modified level set approach to minimize the proposed energy function, we can identify blood vessels in the retinal image. Evaluation of the developed method is done on real retinal images from the DRIVE and STARE databases. The performance is analyzed and compared to other published methods using a number of measures, including accuracy, sensitivity, and specificity. The proposed approach proves effective on these two databases. The average accuracy, sensitivity, and specificity on the DRIVE database are 0.9529, 0.7513, and 0.9792, respectively, and for the STARE database 0.9476, 0.7147, and 0.9735, respectively. The performance is better than that of other vessel segmentation methods.

  9. A simple method of determination of partition coefficient for biologically active molecules.

    Science.gov (United States)

    Sersen, F

    1995-02-01

    A simple method is presented for the determination of partition coefficient of an effector between water environment and biological material, based on concentration-dependent effects. The method allows the determination of partition coefficients for biological objects such as algae, bacteria and other microorganisms.

  10. CEO emotional bias and investment decision, Bayesian network method

    Directory of Open Access Journals (Sweden)

    Jarboui Anis

    2012-08-01

    Full Text Available This research examines the determinants of firms' investment, introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: investment decisions are influenced not only by fundamentals but also by other factors. One such factor is the bias of the CEO toward the investment; this bias depends on cognition and emotions, because some leaders use these as heuristics for the investment decision instead of fundamentals. This paper shows how CEO emotional biases (optimism, loss aversion and overconfidence) affect investment decisions. The proposed model uses the Bayesian network method to examine this relationship. Emotional bias has been measured by means of a questionnaire comprising several items, and the selected sample is composed of some 100 Tunisian executives. Our results reveal that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts investment choices based on the ability to assess alternatives (optimism and overconfidence) and on risk perception (loss aversion) in order to create shareholder value and secure a place at the head of the management team.

  11. CEO emotional bias and dividend policy: Bayesian network method

    Directory of Open Access Journals (Sweden)

    Azouzi Mohamed Ali

    2012-10-01

    Full Text Available This paper assumes that managers, investors, or both behave irrationally. Although scholars have investigated behavioral irrationality from three angles (investor sentiment, investor biases and managerial biases), we focus on the relationship between one of the managerial biases, overconfidence, and dividend policy. Previous research investigating the relationship between overconfidence and financial decisions has studied investment, financing decisions and firm values. However, only a few exceptions examine how a managerial emotional bias (optimism, loss aversion and overconfidence) affects dividend policies. This stream of research contends that whether or not to distribute dividends depends on how managers perceive the company's future. I use the Bayesian network method to examine this relation. Emotional bias has been measured by means of a questionnaire comprising several items, and the selected sample is composed of some 100 Tunisian executives. Our results reveal that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts dividend policy choices based on the ability to assess alternatives (optimism and overconfidence) and on risk perception (loss aversion) in order to create shareholder value and secure a place at the head of the management team.

  12. Data partitions, Bayesian analysis and phylogeny of the zygomycetous fungal family Mortierellaceae, inferred from nuclear ribosomal DNA sequences.

    Directory of Open Access Journals (Sweden)

    Tamás Petkovits

    Full Text Available Although the fungal order Mortierellales constitutes one of the largest classical groups of Zygomycota, its phylogeny is poorly understood and no modern taxonomic revision is currently available. In the present study, 90 type and reference strains were used to infer a comprehensive phylogeny of Mortierellales from the sequence data of the complete ITS region and the LSU and SSU genes, with special attention to the monophyly of the genus Mortierella. Out of 15 alternative partitioning strategies compared on the basis of Bayes factors, the one with the highest number of partitions was found optimal (with mixture models yielding the best likelihood and tree length values), implying a higher complexity of evolutionary patterns in the ribosomal genes than generally recognized. Modeling the ITS1, 5.8S, and ITS2 loci separately improved model fit significantly compared to treating them as one and the same partition. Further, within-partition mixture models suggest not only that the SSU, LSU and ITS regions evolve under qualitatively and/or quantitatively different constraints, but also that significant heterogeneity can be found within these loci. The phylogenetic analysis indicated that the genus Mortierella is paraphyletic with respect to the genera Dissophora, Gamsiella and Lobosporangium, and the resulting phylogeny contradicts the previous, morphology-based sectional classification of Mortierella. Based on tree structure and phenotypic traits, we recognize 12 major clades, for which we attempt to summarize phenotypic similarities. M. longicollis is closely related to the outgroup taxon Rhizopus oryzae, suggesting that it belongs to the Mucorales. Our results demonstrate that the traits used in previous classifications of the Mortierellales are highly homoplastic and that the Mortierellales are in need of a reclassification in which new, phylogenetically informative phenotypic traits should be identified, with molecular phylogenies playing a decisive role.

  13. Molecular phylogeny of coleoid cephalopods (Mollusca: Cephalopoda) using a multigene approach; the effect of data partitioning on resolving phylogenies in a Bayesian framework.

    Science.gov (United States)

    Strugnell, Jan; Norman, Mark; Jackson, Jennifer; Drummond, Alexei J; Cooper, Alan

    2005-11-01

    The resolution of higher-level phylogeny of the coleoid cephalopods (octopuses, squids, and cuttlefishes) has been hindered by homoplasy among morphological characters in conjunction with a very poor fossil record. Initial molecular studies, based primarily on small fragments of single mitochondrial genes, have produced little resolution of the deep relationships among coleoid cephalopod families. The present study investigated this issue using 3415 base pairs (bp) from three nuclear genes (octopine dehydrogenase, pax-6, and rhodopsin) and three mitochondrial genes (12S rDNA, 16S rDNA, and cytochrome oxidase I) from a total of 35 species (including representatives of each of the higher-level taxa). Bayesian analyses were conducted on mitochondrial and nuclear genes separately and also on all six genes together. Separate analyses were conducted with the data partitioned by gene, codon/rDNA, gene+codon/rDNA, or not partitioned at all. In the majority of analyses, partitioning the data by gene+codon was the appropriate model, with partitioning by codon the second most selected model. In some instances the topology varied according to the model used. Relatively high posterior probabilities and high levels of congruence were present between the topologies resulting from the analysis of all Octopodiform (octopuses and vampire "squid") taxa for all six genes, and independently for the datasets of mitochondrial and nuclear genes. In contrast, the highest levels of resolution within the Decapodiformes (squids and cuttlefishes) resulted from analysis of the nuclear genes alone. Different higher-level Decapodiform topologies were obtained through the analysis of only the 1st+2nd codon positions of the nuclear genes versus all three codon positions. It is notable that there is strong evidence of saturation among the 3rd codon positions within the Decapodiformes, and this may contribute spurious signal. The results suggest that the Decapodiformes may have radiated earlier and/or had faster

  14. Multifrequency Bayesian compressive sensing methods for microwave imaging.

    Science.gov (United States)

    Poli, Lorenzo; Oliveri, Giacomo; Ding, Ping Ping; Moriyama, Toshifumi; Massa, Andrea

    2014-11-01

    The Bayesian retrieval of sparse scatterers under multifrequency transverse magnetic illuminations is addressed. Two innovative imaging strategies are formulated to process the spectral content of microwave scattering data according to either a frequency-hopping multistep scheme or a multifrequency one-shot scheme. To solve the associated inverse problems, customized implementations of single-task and multitask Bayesian compressive sensing are introduced. A set of representative numerical results is discussed to assess the effectiveness and noise robustness of the proposed techniques, also in comparison with some state-of-the-art deterministic strategies.

  15. A general method to study equilibrium partitioning of macromolecules

    DEFF Research Database (Denmark)

    The distribution of macromolecules between a confined microscopic solution and a macroscopic bulk solution plays an important role in understanding separation processes such as Size Exclusion Chromatography (SEC). In this study, we have developed an efficient computational algorithm for obtaining...... of this dimension rather than Rg (radius of gyration) or Rh (hydrodynamic radius) gives a better universality in the plot of the partition coefficient as a function of the chain dimension relative to the pore size....

  16. Bayesian methods for the analysis of inequality constrained contingency tables.

    Science.gov (United States)

    Laudy, Olav; Hoijtink, Herbert

    2007-04-01

    A Bayesian methodology for the analysis of inequality constrained models for contingency tables is presented. The problem of interest lies in obtaining the estimates of functions of cell probabilities subject to inequality constraints, testing hypotheses and selection of the best model. Constraints on conditional cell probabilities and on local, global, continuation and cumulative odds ratios are discussed. A Gibbs sampler to obtain a discrete representation of the posterior distribution of the inequality constrained parameters is used. Using this discrete representation, the credibility regions of functions of cell probabilities can be constructed. Posterior model probabilities are used for model selection and hypotheses are tested using posterior predictive checks. The Bayesian methodology proposed is illustrated in two examples.
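
    A minimal sketch of the flavor of such constrained inference follows (our illustration, not the authors' code, and with hypothetical counts): for a 2x2 table with a Dirichlet prior, the posterior truncated by the inequality constraint "odds ratio >= 1" can be represented by keeping only the posterior draws that satisfy the constraint. The paper uses a Gibbs sampler; plain rejection from the unconstrained Dirichlet posterior targets the same truncated posterior and is enough to illustrate the idea.

        # Rejection sketch of inequality-constrained posterior inference
        # for a 2x2 contingency table (cells n11, n12, n21, n22).
        import numpy as np

        rng = np.random.default_rng(0)

        counts = np.array([30.0, 10.0, 12.0, 25.0])  # hypothetical cell counts
        prior = np.ones(4)                           # uniform Dirichlet prior

        draws = rng.dirichlet(prior + counts, size=20000)
        p11, p12, p21, p22 = draws.T
        odds_ratio = (p11 * p22) / (p12 * p21)

        kept = draws[odds_ratio >= 1.0]              # enforce the constraint
        print("acceptance rate:", len(kept) / len(draws))
        print("constrained posterior mean cell probabilities:", kept.mean(axis=0))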

  17. PARTITION OF UNITY FINITE ELEMENT METHOD FOR SHORT WAVE PROPAGATION IN SOLIDS

    Institute of Scientific and Technical Information of China (English)

    LI Xi-kui; ZHOU Hao-yang

    2005-01-01

    A partition of unity finite element method for numerical simulation of short wave propagation in solids is presented. The finite element spaces were constructed by multiplying the standard isoparametric finite element shape functions, which form a partition of unity, with the local subspaces defined on the corresponding shape functions, which include a priori knowledge about the wave motion equation in trial spaces and approximately reproduce the highly oscillatory properties within a single element. Numerical examples demonstrate the performance of the proposed partition of unity finite element in both computational accuracy and efficiency.
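
    To make the construction concrete, here is a minimal 1-D illustration of our own (not the paper's solver; the mesh, wavenumber and least-squares fit are illustrative choices): standard hat functions form a partition of unity, and multiplying them by local plane-wave enrichments lets a coarse mesh reproduce a wave with roughly one element per wavelength.

        # 1-D partition-of-unity enrichment: hats times {1, cos(kx), sin(kx)}.
        import numpy as np

        k = 40.0                          # wavenumber of the short wave
        nodes = np.linspace(0.0, 1.0, 9)  # coarse mesh: ~1 element per wavelength
        h = nodes[1] - nodes[0]
        x = np.linspace(0.0, 1.0, 800)

        def hat(i):
            # Piecewise-linear shape function at node i (partition of unity).
            return np.clip(1.0 - np.abs(x - nodes[i]) / h, 0.0, None)

        cols = [hat(i) * f for i in range(len(nodes))
                for f in (np.ones_like(x), np.cos(k * x), np.sin(k * x))]
        A = np.stack(cols, axis=1)

        target = np.sin(k * x)            # highly oscillatory field
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        print("max interpolation error:", np.abs(A @ coef - target).max())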

  18. Gradient-based stochastic optimization methods in Bayesian experimental design

    OpenAIRE

    2012-01-01

    Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this context is the expected information gain in model parameters, which in general can only be estimated u...

  19. Understanding data better with Bayesian and global statistical methods

    CERN Document Server

    Press, W H

    1996-01-01

    To understand their data better, astronomers need to use statistical tools that are more advanced than traditional "freshman lab" statistics. As an illustration, the problem of combining apparently incompatible measurements of a quantity is presented from both the traditional, and a more sophisticated Bayesian, perspective. Explicit formulas are given for both treatments. Results are shown for the value of the Hubble Constant, and a 95% confidence interval of 66 < H0 < 82 (km/s/Mpc) is obtained.
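
    The following toy sketch (made-up measurements, not the paper's actual formulas or data) illustrates the kind of Bayesian combination described: each measurement is "good" with some prior probability, "bad" measurements are assigned a much broader error distribution, and the good/bad labels are marginalized out on a grid of candidate values.

        # Bayesian combination of possibly incompatible measurements.
        import numpy as np
        from scipy.stats import norm

        values = np.array([68.0, 55.0, 80.0, 72.0])  # hypothetical measurements
        sigmas = np.array([4.0, 3.0, 6.0, 5.0])      # quoted 1-sigma errors
        p_good = 0.5                                 # prior prob. a measurement is good

        grid = np.linspace(40.0, 110.0, 1401)        # candidate true values
        post = np.ones_like(grid)                    # flat prior on the grid
        for v, s in zip(values, sigmas):
            good = norm.pdf(grid, v, s)              # behaves as quoted
            bad = norm.pdf(grid, v, 10.0 * s)        # "bad": nearly uninformative
            post *= p_good * good + (1.0 - p_good) * bad
        post /= post.sum()                           # discrete posterior

        cdf = np.cumsum(post)
        lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
        print(f"posterior mean {(grid * post).sum():.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")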

  20. A probabilistic crack size quantification method using in-situ Lamb wave test and Bayesian updating

    Science.gov (United States)

    Yang, Jinsong; He, Jingjing; Guan, Xuefei; Wang, Dengjiang; Chen, Huipeng; Zhang, Weifang; Liu, Yongming

    2016-10-01

    This paper presents a new crack size quantification method based on in-situ Lamb wave testing and Bayesian updating. The proposed method uses coupon tests to develop a baseline quantification model between the crack size and damage-sensitive features. In-situ Lamb wave testing data on actual structures are then used to update the baseline model parameters by Bayesian inference to achieve more accurate crack size predictions. To demonstrate the proposed method, Lamb wave testing on simple plates with artificial cracks of different sizes is performed using surface-bonded piezoelectric wafers, and the data are used to obtain the baseline model. Two damage-sensitive features, namely the phase change and the normalized amplitude, are identified using signal processing techniques and used in the model. To validate the effectiveness of the method, the damage data from in-situ fatigue testing on a realistic lap-joint component are used to update the baseline model by Bayesian inference.
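
    A minimal sketch of the updating step under simplifying assumptions of ours (a linear baseline model with known noise; all numbers hypothetical): the coupon-test estimates define a Gaussian prior on the model coefficients, and a few in-situ measurements reweight that prior on a grid.

        # Grid-based Bayesian update of a baseline crack-size model a + b*feature.
        import numpy as np
        from scipy.stats import norm

        a0, b0, sd = 0.2, 5.0, 0.3              # baseline coupon-test estimates
        feat = np.array([0.11, 0.18, 0.25])     # in-situ damage-sensitive features
        size = np.array([0.9, 1.3, 1.6])        # corresponding crack sizes (mm)

        a_grid = np.linspace(-0.5, 1.0, 151)
        b_grid = np.linspace(2.0, 8.0, 151)
        A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

        log_post = norm.logpdf(A, a0, 0.3) + norm.logpdf(B, b0, 1.0)  # prior
        for f, y in zip(feat, size):
            log_post += norm.logpdf(y, A + B * f, sd)                 # likelihood
        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        print("posterior mean a:", (A * post).sum(), " b:", (B * post).sum())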

  1. Algebraic method for exact solution of canonical partition function in nuclear multifragmentation

    CERN Document Server

    Parvan, A S

    2002-01-01

    An algebraic method is derived for the exact recursion formula used to calculate the canonical partition function of non-interacting finite systems of particles obeying Bose-Einstein, Fermi-Dirac or Maxwell-Boltzmann statistics, or parastatistics. A new exactly solvable multifragmentation model with baryon and electric charge conservation laws is developed. Recursion relations for this model are presented that allow exact calculation of the canonical partition function for any statistics.
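
    For reference, the classic form of such a recursion for non-interacting particles reads Z(N) = (1/N) * sum_{k=1..N} eta^(k+1) S(k) Z(N-k), with S(k) = sum_i exp(-k*beta*eps_i) and eta = +1 (-1) for Bose-Einstein (Fermi-Dirac) statistics. The sketch below implements this textbook relation only; it does not include the paper's parastatistics or conservation-law extensions.

        import numpy as np

        def canonical_Z(eps, beta, N, eta=+1):
            # S[k] = sum_i exp(-k * beta * eps_i), the k-fold Boltzmann sum.
            S = [np.sum(np.exp(-k * beta * np.asarray(eps))) for k in range(N + 1)]
            Z = [1.0]  # Z(0) = 1
            for n in range(1, N + 1):
                Z.append(sum(eta ** (k + 1) * S[k] * Z[n - k]
                             for k in range(1, n + 1)) / n)
            return Z[N]

        levels = np.arange(10) * 1.0  # equally spaced single-particle spectrum
        print("bosons  :", canonical_Z(levels, beta=1.0, N=5, eta=+1))
        print("fermions:", canonical_Z(levels, beta=1.0, N=5, eta=-1))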

  2. Bayesian network modeling method based on case reasoning for emergency decision-making

    Directory of Open Access Journals (Sweden)

    XU Lei

    2013-06-01

    Full Text Available Bayesian networks have the abilities of probability expression, uncertainty management and multi-information fusion. They can support emergency decision-making and improve its efficiency. Emergency decision-making is highly time sensitive, which requires shortening the Bayesian network modeling time as far as possible. Traditional Bayesian network modeling methods are clearly unable to meet that requirement. Thus, a Bayesian network modeling method based on case reasoning for emergency decision-making is proposed. The method obtains optional cases through case matching with similarity degree and deviation degree functions. A new Bayesian network can then be built through case adjustment by case merging and pruning. An example is presented to illustrate and test the proposed method. The result shows that the method does not have a huge search space or need sample data; the only requirement is the collection of expert knowledge and historical case models. Compared with traditional methods, the proposed method can reuse historical case models, which reduces the modeling time and improves efficiency.

  3. Localized operator partitioning method for electronic excitation energies in the time-dependent density functional formalism

    CERN Document Server

    Nagesh, Jayashree; Brumer, Paul; Izmaylov, Artur F

    2016-01-01

    We extend the localized operator partitioning method (LOPM) [J. Nagesh, A.F. Izmaylov, and P. Brumer, J. Chem. Phys. 142, 084114 (2015)] to the time-dependent density functional theory (TD-DFT) framework to partition molecular electronic energies of excited states in a rigorous manner. A molecular fragment is defined as a collection of atoms using Stratmann-Scuseria-Frisch atomic partitioning. A numerically efficient scheme for evaluating the fragment excitation energy is derived employing a resolution of the identity to preserve standard one- and two-electron integrals in the final expressions. The utility of this partitioning approach is demonstrated by examining several excited states of two bichromophoric compounds: 9-((1-naphthyl)-methyl)-anthracene and 4-((2-naphthyl)-methyl)-benzaldehyde. The LOPM is found to provide nontrivial insights into the nature of electronic energy localization that are not accessible using simple density difference analysis.

  4. A method for partitioning cadmium bioaccumulated in small aquatic organisms

    Energy Technology Data Exchange (ETDEWEB)

    Siriwardena, S.N.; Rana, K.J.; Baird, D.J. [Univ. of Stirling (United Kingdom). Institute of Aquaculture

    1995-09-01

    A series of laboratory experiments was conducted to evaluate bioaccumulation and surface adsorption of aqueous cadmium (Cd) by sac-fry of the African tilapia Oreochromis niloticus. In the first experiment, the design consisted of two cadmium treatments: 15 µg Cd·L⁻¹ in dilution water and a Cd-ethylenediaminetetraacetic acid (Cd-EDTA) complex at 15 µg·L⁻¹, and a water-only control. There were five replicates per treatment and 40 fish per replicate. It was found that EDTA significantly reduced the bioaccumulation of cadmium by tilapia sac-fry by 34%. Based on the results, a second experiment was conducted to evaluate four procedures: a no-rinse control; rinsing in EDTA; rinsing in distilled water; and rinsing in 5% nitric acid, for removing surface-bound Cd from exposed sac-fry. In this experiment, 30 fish in each of five replicates were exposed to 15 µg Cd·L⁻¹ for 72 h, processed through the rinse procedures, and analyzed for total Cd. The EDTA rinse treatment significantly reduced (p<0.05) Cd concentrations of the exposed fish relative to those receiving no rinse. It was concluded that the EDTA rinse technique may be useful in studies evaluating the partitioning of surface-bound and accumulated cadmium in small aquatic organisms.

  5. Analyzing bioassay data using Bayesian methods -- A primer

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Schillaci, M.E.

    1997-10-16

    The classical statistics approach used in health physics for the interpretation of measurements is deficient in that it does not allow for the consideration of needle-in-a-haystack effects, where events that are rare in a population are being detected. In fact, this is often the case in health physics measurements, and the false positive fraction is often very large using the prescriptions of classical statistics. Bayesian statistics provides an objective methodology to ensure acceptably small false positive fractions. The authors present the basic methodology and a heuristic discussion. Examples are given using numerically generated and real bioassay data (tritium). Various analytical models are used to fit the prior probability distribution, in order to test the sensitivity to choice of model. Parametric studies show that the normalized Bayesian decision level k_α = L_c/σ_0, where σ_0 is the measurement uncertainty for zero true amount, is usually in the range from 3 to 5 depending on the true positive rate. Four times σ_0, rather than approximately two times σ_0 as in classical statistics, would often seem a better choice for the decision level.
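
    A toy numeric illustration (our simplified model, not the authors' exact prior or data) of why the decision level grows as true positives become rarer: measurements are Normal(mu, sigma_0), mu is zero unless a true positive occurs (here with an exponential prior on its size), and the decision level is the smallest measurement whose posterior probability of being a true positive reaches 95%.

        import numpy as np
        from scipy.stats import norm, expon

        sigma0, tau = 1.0, 5.0             # measurement noise, prior scale of positives
        mu = np.linspace(0.0, 50.0, 5001)  # true amounts for positive cases
        dmu = mu[1] - mu[0]

        def decision_level(phi):
            # Smallest x with posterior P(true positive | x) >= 0.95.
            for x in np.linspace(0.0, 10.0, 1001):
                like_pos = (norm.pdf(x, mu, sigma0) * expon.pdf(mu, scale=tau)).sum() * dmu
                like_zero = norm.pdf(x, 0.0, sigma0)
                p_pos = phi * like_pos / (phi * like_pos + (1.0 - phi) * like_zero)
                if p_pos >= 0.95:
                    return x
            return float("inf")

        for phi in (0.1, 0.01, 0.001):
            print(f"true positive rate {phi}: decision level ~ {decision_level(phi):.2f} sigma_0")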

  6. Bayesian methods for the conformational classification of eight-membered rings

    DEFF Research Database (Denmark)

    Pérez, J.; Nolsøe, Kim; Kessler, M.;

    2005-01-01

    Two methods for the classification of eight-membered rings based on a Bayesian analysis are presented. The two methods share the same probabilistic model for the measurement of torsion angles, but while the first method uses the canonical forms of cyclooctane and, given an empirical sequence of e...

  7. An efficient implementation of the localized operator partitioning method for electronic energy transfer

    CERN Document Server

    Nagesh, Jayashree; Brumer, Paul

    2014-01-01

    The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning, we reformulate the method in a computationally efficient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)-methyl)-anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH2-naphthyl groups as subsystems, and examine their electronic energies and populations for several excited states using the Configuration Interaction Singles method. The implemented approach shows a wide variety o...

  8. An overview of component qualification using Bayesian statistics and energy methods.

    Energy Technology Data Exchange (ETDEWEB)

    Dohner, Jeffrey Lynn

    2011-09-01

    The overview below is designed to give the reader a limited understanding of Bayesian and maximum likelihood (MLE) estimation; a basic understanding of some of the mathematical tools used to evaluate the quality of an estimate; an introduction to energy methods; and a limited discussion of damage potential. The discussion then goes on to present how energy methods and Bayesian estimation are used together to qualify components. Example problems with solutions have been supplied as a learning aid. Bold letters are used to represent random variables; un-bolded letters represent deterministic values. A concluding section presents a discussion of attributes and concerns.

  9. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    Energy Technology Data Exchange (ETDEWEB)

    Katrina M. Groth; Curtis L. Smith; Laura P. Swiler

    2014-08-01

    In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
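
    The core of such an update is conjugate and fits in a few lines. In this minimal sketch (all numbers hypothetical), the HEP assigned by an existing HRA method is encoded as a Beta prior whose strength expresses confidence in that method, and simulator error counts update it:

        from scipy.stats import beta

        hep_prior = 0.01         # HEP assigned by the existing method (assumed)
        strength = 50            # pseudo-observations of confidence in the prior
        a0, b0 = hep_prior * strength, (1 - hep_prior) * strength

        errors, trials = 3, 120  # hypothetical simulator data
        post = beta(a0 + errors, b0 + (trials - errors))
        print(f"updated HEP mean {post.mean():.4f}, "
              f"95% interval ({post.ppf(0.025):.4f}, {post.ppf(0.975):.4f})")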

  10. Remarkable phylogenetic resolution of the most complex clade of Cyprinidae (Teleostei: Cypriniformes): a proof of concept of homology assessment and partitioning sequence data integrated with mixed model Bayesian analyses.

    Science.gov (United States)

    Tao, Wenjing; Mayden, Richard L; He, Shunping

    2013-03-01

    Despite many efforts to resolve evolutionary relationships among major clades of Cyprinidae, some nodes have been especially problematic and remain unresolved. In this study, we employ four nuclear gene fragments (3.3 kb) to infer interrelationships of the Cyprinidae. A reconstruction of the phylogenetic relationships within the family using maximum parsimony, maximum likelihood, and Bayesian analyses is presented. Among the taxa within the monophyletic Cyprinidae, Rasborinae is the basal-most lineage; Cyprininae is sister to Leuciscinae. The monophyly of the subfamilies Gobioninae, Leuciscinae and Acheilognathinae was resolved with high nodal support. Although our results do not completely resolve relationships within Cyprinidae, this study presents novel and significant findings having major implications for a highly diverse and enigmatic clade of East-Asian cyprinids. Within this monophyletic group five closely related subgroups are identified. Tinca tinca, one of the most phylogenetically enigmatic genera in the family, is strongly supported as having evolutionary affinities with this East-Asian clade; an established yet remarkable association because of the natural variation in phenotypes and generalized ecological niches occupied by these taxa. Our results clearly argue that the choice of partitioning strategy has significant impacts on the phylogenetic reconstructions, especially when multiple genes are being considered. The most highly partitioned model (partitioned by codon positions within genes) extracts the strongest phylogenetic signal and performs better than any other partitioning scheme, supported by the strongest 2Δln Bayes factor. Future studies should include higher levels of taxon sampling and partitioned, model-based analyses.

  11. Efficient Inversion in Underwater Acoustics with Analytic, Iterative and Sequential Bayesian Methods

    Science.gov (United States)

    2015-09-30

    Zoi-Heleni Michalopoulou, Department of Mathematical Sciences, New Jersey Institute of Technology ... exploiting (fully or partially) the physics of the propagation medium. Algorithms are designed for inversion via the extraction of features of the ... statistical modeling. • Develop methods for passive localization and inversion of environmental parameters that select features of propagation that are

  12. Safety assessment of infrastructures using a new Bayesian Monte Carlo method

    NARCIS (Netherlands)

    Rajabalinejad, M.; Demirbilek, Z.

    2011-01-01

    A recently developed Bayesian Monte Carlo (BMC) method and its application to safety assessment of structures are described in this paper. We use a one-dimensional BMC method that was proposed in 2009 by Rajabalinejad in order to develop a weighted logical dependence between successive Monte Carlo s

  13. Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning

    NARCIS (Netherlands)

    Dimitrakakis, C.; Filipe, J.; Fred, A.; Sharp, B.

    2010-01-01

    There has been a lot of recent work on Bayesian methods for reinforcement learning exhibiting near-optimal online performance. The main obstacle facing such methods is that in most problems of interest, the optimal solution involves planning in an infinitely large tree. However, it is possible to ob

  14. OPTIMAL ERROR ESTIMATES OF THE PARTITION OF UNITY METHOD WITH LOCAL POLYNOMIAL APPROXIMATION SPACES

    Institute of Scientific and Technical Information of China (English)

    Yun-qing Huang; Wei Li; Fang Su

    2006-01-01

    In this paper, we provide a theoretical analysis of the partition of unity finite element method (PUFEM), which belongs to the family of meshfree methods. The usual error analysis only shows the order of the error estimate to be the same as that of the local approximations [12]. Using standard linear finite element basis functions as the partition of unity and polynomials as the local approximation space, we derive optimal-order error estimates for PUFEM interpolants in the 1-D case. Our analysis shows that the error estimate is one order higher than the local approximations. The interpolation error estimates yield optimal error estimates for PUFEM solutions of elliptic boundary value problems.

  15. Application of a data-mining method based on Bayesian networks to lesion-deficit analysis

    Science.gov (United States)

    Herskovits, Edward H.; Gerring, Joan P.

    2003-01-01

    Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.

  16. A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data.

    Science.gov (United States)

    Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P; Engel, Lawrence S; Kwok, Richard K; Blair, Aaron; Stewart, Patricia A

    2016-01-01

    Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean (GM), GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method's performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the
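
    A minimal sketch of the Bayesian treatment of left-censored lognormal data follows (simulated values, a grid posterior and a weakly informative prior; the methods compared in the paper are more general): each non-detect contributes the CDF at the detection limit to the likelihood rather than a substituted value.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        x = rng.lognormal(mean=0.5, sigma=0.8, size=50)   # simulated exposures
        lod = 1.0                                         # single detection limit
        detects, n_cens = x[x >= lod], int(np.sum(x < lod))

        mu_g = np.linspace(-1.0, 2.0, 201)
        sg_g = np.linspace(0.2, 2.0, 181)
        M, S = np.meshgrid(mu_g, sg_g, indexing="ij")

        logL = -np.log(S)                                 # weak prior ~ 1/sigma
        logL += n_cens * norm.logcdf(np.log(lod), M, S)   # censored part
        for d in np.log(detects):
            logL += norm.logpdf(d, M, S)                  # detected part
        post = np.exp(logL - logL.max()); post /= post.sum()

        am = (np.exp(M + S**2 / 2) * post).sum()          # posterior mean AM
        x95 = (np.exp(M + 1.645 * S) * post).sum()        # posterior mean X0.95
        print(f"AM ~ {am:.2f}, 95th percentile ~ {x95:.2f}")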

  17. A Tractable Method for Measuring Nanomaterial Risk Using Bayesian Networks

    Science.gov (United States)

    Murphy, Finbarr; Sheehan, Barry; Mullins, Martin; Bouwmeester, Hans; Marvin, Hans J. P.; Bouzembrak, Yamine; Costa, Anna L.; Das, Rasel; Stone, Vicki; Tofail, Syed A. M.

    2016-11-01

    While control banding has been identified as a suitable framework for the evaluation and determination of potential human health risks associated with exposure to nanomaterials (NMs), the approach currently lacks any implementation that enjoys widespread support. Large inconsistencies in characterisation data, toxicological measurements and exposure scenarios make it difficult to map and compare the risk associated with NMs based on physicochemical data, concentration and exposure route. Here we demonstrate the use of Bayesian networks as a reliable tool for NM risk estimation. This tool is tractable, accessible and scalable. Most importantly, it captures a broad span of data types, from complete, high quality data sets through to data sets with missing data and/or values with a relatively high spread of probability distribution. The tool is able to learn iteratively in order to further refine forecasts as the quality of data available improves. We demonstrate how this risk measurement approach works on NMs with varying degrees of risk potential, namely carbon nanotubes, silver and titanium dioxide. The results afford even non-experts an accurate picture of the occupational risk probabilities associated with these NMs and, in doing so, demonstrate how NM risk can be distilled into a tractable, quantitative risk comparator.

  18. Comparison of prediction and measurement methods for sound insulation of lightweight partitions

    Directory of Open Access Journals (Sweden)

    Praščević Momir

    2012-01-01

    Full Text Available It is important to know the sound insulation of partitions in order to be able to compare different constructions, calculate acoustic comfort in apartments or noise levels from outdoor sources such as road traffic, and find optimum engineering solutions to noise problems. The use of lightweight partitions as party walls between dwellings has become common because sound insulation requirements can be achieved with low overall surface weights. However, they need greater skill to design and construct, because the overall design is much more complex. It is also more difficult to predict and measure the sound transmission loss of lightweight partitions. There are various methods for predicting and measuring the sound insulation of partitions, and some of them are described in this paper. The paper also presents a comparison of experimental results for the sound insulation of lightweight partitions with results obtained using different theoretical models for single homogeneous panels and double panels with and without acoustic absorption in the cavity between the panels. [Projects of the Ministry of Science of the Republic of Serbia, No. TR-37020: Development of methodology and means for noise protection in urban areas, and No. III-43014: Improvement of the monitoring system and the assessment of long-term population exposure to pollutant substances in the environment using neural networks]

  19. AN INDOOR SPACE PARTITION METHOD AND ITS FINGERPRINT POSITIONING OPTIMIZATION CONSIDERING PEDESTRIAN ACCESSIBILITY

    Directory of Open Access Journals (Sweden)

    Y. Xu

    2016-06-01

    Full Text Available The fingerprint positioning method is generally the first choice in indoor navigation systems due to its high accuracy and low cost. The accuracy depends on the partition density of the indoor space: it is higher with a higher grid resolution. However, high grid resolution significantly increases the work of fingerprint data collection, processing and maintenance, and may decrease the performance, portability and robustness of the navigation system. Meanwhile, traditional fingerprint positioning methods partition the indoor space with a uniform grid, so that, when used for pedestrian navigation, a person can sometimes be located in an area that he or she cannot access. This paper studies these two issues and proposes a new indoor space partition method that considers pedestrian accessibility, which increases the accuracy of pedestrian positioning and decreases the volume of the fingerprint data. Based on this partition method, an optimized algorithm for fingerprint positioning is also designed, with a cross-linker structure used for fingerprint point indexing and matching. Experiments based on the proposed method and algorithm show that the workload of fingerprint collection and maintenance is effectively decreased, and positioning efficiency and accuracy are effectively increased.
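
    A minimal sketch of the accessibility idea (geometry and fingerprints made up): matching is restricted to grid cells a pedestrian can actually occupy, so an inaccessible cell can never be returned as the position estimate.

        import numpy as np

        rss_db = {                     # cell -> RSS fingerprint (hypothetical)
            (0, 0): [-40, -70, -62],
            (0, 1): [-45, -66, -60],
            (1, 1): [-55, -58, -64],
            (2, 1): [-63, -52, -70],
        }
        accessible = {(0, 0), (0, 1), (1, 1), (2, 1)}  # e.g. (1, 0) is a wall

        def locate(observed, k=2):
            # Weighted k-nearest-neighbour match over accessible cells only.
            obs = np.asarray(observed, dtype=float)
            nearest = sorted(
                (np.linalg.norm(obs - np.asarray(fp)), cell)
                for cell, fp in rss_db.items() if cell in accessible
            )[:k]
            w = np.array([1.0 / (d + 1e-9) for d, _ in nearest])
            cells = np.array([c for _, c in nearest], dtype=float)
            return tuple((w[:, None] * cells).sum(axis=0) / w.sum())

        print(locate([-47, -64, -61]))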

  20. An Indoor Space Partition Method and its Fingerprint Positioning Optimization Considering Pedestrian Accessibility

    Science.gov (United States)

    Xu, Yue; Shi, Yong; Zheng, Xingyu; Long, Yi

    2016-06-01

    The fingerprint positioning method is generally the first choice in indoor navigation systems due to its high accuracy and low cost. The accuracy depends on the partition density of the indoor space: it is higher with a higher grid resolution. However, high grid resolution significantly increases the work of fingerprint data collection, processing and maintenance, and may decrease the performance, portability and robustness of the navigation system. Meanwhile, traditional fingerprint positioning methods partition the indoor space with a uniform grid, so that, when used for pedestrian navigation, a person can sometimes be located in an area that he or she cannot access. This paper studies these two issues and proposes a new indoor space partition method that considers pedestrian accessibility, which increases the accuracy of pedestrian positioning and decreases the volume of the fingerprint data. Based on this partition method, an optimized algorithm for fingerprint positioning is also designed, with a cross-linker structure used for fingerprint point indexing and matching. Experiments based on the proposed method and algorithm show that the workload of fingerprint collection and maintenance is effectively decreased, and positioning efficiency and accuracy are effectively increased.

  1. A passive dosing method to determine fugacity capacities and partitioning properties of leaves

    DEFF Research Database (Denmark)

    Bolinius, Damien Johann; Macleod, Matthew; McLachlan, Michael S.;

    2016-01-01

    The capacity of leaves to take up chemicals from the atmosphere and water influences how contaminants are transferred into food webs and soil. We provide a proof of concept of a passive dosing method to measure leaf/polydimethylsiloxane partition ratios (Kleaf/PDMS) for intact leaves, using polychlorinated biphenyls (PCBs) as model chemicals. Rhododendron leaves held in contact with PCB-loaded PDMS reached between 76 and 99% of equilibrium within 4 days for PCBs 3, 4, 28, 52, 101, 118, 138 and 180. Equilibrium Kleaf/PDMS extrapolated from the uptake kinetics measured over 4 days ranged from 0.075 (PCB 180) to 0.371 (PCB 3). The Kleaf/PDMS data can readily be converted to fugacity capacities of leaves (Zleaf) and subsequently leaf/water or leaf/air partition ratios (Kleaf/water and Kleaf/air) using partitioning data from the literature. Results of our measurements are within the variability

  2. Modelling access to renal transplantation waiting list in a French healthcare network using a Bayesian method.

    Science.gov (United States)

    Bayat, Sahar; Cuggia, Marc; Kessler, Michel; Briançon, Serge; Le Beux, Pierre; Frimat, Luc

    2008-01-01

    Evaluation of adult candidates for kidney transplantation diverges from one centre to another. Our purpose was to assess the suitability of a Bayesian method for describing the factors associated with registration on the waiting list in a French healthcare network. We have found no published paper using a Bayesian method in this domain. Eight hundred and nine patients starting renal replacement therapy were included in the analysis. The data were extracted from the information system of the healthcare network. We performed conventional statistical analysis and data mining analysis using mainly Bayesian networks. The Bayesian model showed that the probability of registration on the waiting list is associated with age, cardiovascular disease, diabetes, serum albumin level, respiratory disease, physical impairment, follow-up in the department performing transplantation, and past history of malignancy. These results are similar to those of the conventional statistical method. The comparison between conventional analysis and data mining analysis showed us the contribution of the data mining method for sorting variables and obtaining a global view of the variables' associations. Moreover, these approaches constitute an essential step toward a decisional information system for healthcare networks.

  3. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models...

  4. Surveillance system and method having an operating mode partitioned fault classification model

    Science.gov (United States)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  5. Landslide hazards mapping using uncertain Naïve Bayesian classification method

    Institute of Scientific and Technical Information of China (English)

    毛伊敏; 张茂省; 王根龙; 孙萍萍

    2015-01-01

    Landslide hazard mapping is a fundamental tool for disaster management activities in loess terrains. A major issue with landslide hazard assessment methods based on the Naïve Bayesian classification technique is the difficulty of quantifying uncertain triggering factors; the main purpose of this work is therefore to evaluate the predictive power of landslide spatial models based on an uncertain Naïve Bayesian classification method in Baota district of Yan'an city in Shaanxi province, China. Firstly, thematic maps representing various factors related to landslide activity were generated. Secondly, by using field data and GIS techniques, a landslide hazard map was produced. To improve the accuracy of the resulting landslide hazard map, strategies were designed that quantify the uncertain triggering factors in landslide spatial models based on the uncertain Naïve Bayesian classification method, named the NBU algorithm. The accuracies of the area under the relative operating characteristic curve (AUC) for the NBU and Naïve Bayesian algorithms are 87.29% and 82.47%, respectively. Thus, the NBU algorithm can be used efficiently for landslide hazard analysis and might be widely used for the prediction of various spatial events based on uncertain classification techniques.

  6. Method for chemical amplification based on fluid partitioning in an immiscible liquid

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.

    2017-02-28

    A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.

  7. The evaluation of the equilibrium partitioning method using sensitivity distributions of species in water and soil or sediment

    NARCIS (Netherlands)

    Beelen P van; Verbruggen EMJ; Peijnenburg WJGM; ECO

    2002-01-01

    The equilibrium partitioning method (EqP-method) can be used to derive environmental quality standards (like the Maximum Permissible Concentration or the intervention value) for soil or sediment, from aquatic toxicity data and a soil/water or sediment/water partitioning coefficient. The validity of

  8. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  9. A Bayesian method to estimate the neutron response matrix of a single crystal CVD diamond detector

    Energy Technology Data Exchange (ETDEWEB)

    Reginatto, Marcel; Araque, Jorge Guerrero; Nolte, Ralf; Zbořil, Miroslav; Zimbal, Andreas [Physikalisch-Technische Bundesanstalt, D-38116 Braunschweig (Germany); Gagnon-Moisan, Francis [Paul Scherrer Institut, CH-5232 Villigen (Switzerland)

    2015-01-13

    Detectors made from artificial chemical vapor deposition (CVD) single crystal diamond are very promising candidates for applications where high resolution neutron spectrometry in very high neutron fluxes is required, for example in fusion research. We propose a Bayesian method to estimate the neutron response function of the detector for a continuous range of neutron energies (in our case, 10 MeV ≤ E_n ≤ 16 MeV) based on a few measurements with quasi-monoenergetic neutrons. This method is needed because a complete set of measurements is not available and the alternative approach of using responses based on Monte Carlo calculations is not feasible. Our approach uses Bayesian signal-background separation techniques and radial basis function interpolation methods. We present the analysis of data measured at the PTB accelerator facility PIAF. The method is quite general and it can be applied to other particle detectors with similar characteristics.
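
    As a small illustration of one ingredient of the approach, the sketch below (synthetic numbers, ours) interpolates a detector response across neutron energy with Gaussian radial basis functions, so that a few quasi-monoenergetic calibration points anchor a smooth estimate over the whole 10-16 MeV range:

        import numpy as np

        e_cal = np.array([10.0, 12.0, 14.0, 15.0, 16.0])  # calibration energies (MeV)
        r_cal = np.array([0.82, 0.90, 1.00, 1.03, 1.05])  # measured response (arb.)

        def rbf_fit(x, y, width=1.5):
            # Solve for Gaussian RBF coefficients centred on the data points.
            phi = np.exp(-((x[:, None] - x[None, :]) / width) ** 2)
            return np.linalg.solve(phi, y)

        def rbf_eval(xq, x, coeff, width=1.5):
            phi = np.exp(-((xq[:, None] - x[None, :]) / width) ** 2)
            return phi @ coeff

        coeff = rbf_fit(e_cal, r_cal)
        print(np.round(rbf_eval(np.linspace(10.0, 16.0, 7), e_cal, coeff), 3))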

  10. Bayesian Computation Methods for Inferring Regulatory Network Models Using Biomedical Data.

    Science.gov (United States)

    Tian, Tianhai

    2016-01-01

    The rapid advancement of high-throughput technologies provides huge amounts of information for gene expression and protein activity on the genome-wide scale. The availability of genomics, transcriptomics, proteomics, and metabolomics datasets gives an unprecedented opportunity to study detailed molecular regulation, which is very important to precision medicine. However, it is still a significant challenge to design effective and efficient methods to infer the network structure and dynamic properties of regulatory networks. In recent years a number of computing methods have been designed to explore the regulatory mechanisms and estimate unknown model parameters. Among them, the Bayesian inference method can combine both prior knowledge and experimental data to generate updated information regarding the regulatory mechanisms. This chapter gives a brief review of Bayesian statistical methods that are used to infer the network structure and estimate model parameters based on experimental data.

  11. Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

    Science.gov (United States)

    Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.

    2012-03-01

    In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and greatly degrade CT perfusion maps if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simple prior spatial constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.

  12. ClogP(alk): a method for predicting alkane/water partition coefficient.

    Science.gov (United States)

    Kenny, Peter W; Montanari, Carlos A; Prokopczyk, Igor M

    2013-05-01

    Alkane/water partition coefficients (P(alk)) are less familiar to the molecular design community than their 1-octanol/water equivalents, and access to both data and prediction tools is much more limited. A method for predicting the alkane/water partition coefficient from molecular structure is introduced. The basis for the ClogP(alk) model is the strong (R² = 0.987) relationship between the alkane/water partition coefficient and molecular surface area (MSA) that was observed for saturated hydrocarbons. The model treats a molecule as a perturbation of a saturated hydrocarbon molecule with the same MSA and uses increments defined for functional groups to quantify the extent to which logP(alk) is perturbed by the introduction of each functional group. Interactions between functional groups, such as intramolecular hydrogen bonds, are also parameterized within a perturbation framework. The functional groups and the interactions between them are specified substructurally in a transparent and reproducible manner using SMARTS notation. The ClogP(alk) model was parameterized using data measured for structurally prototypical compounds that dominate the literature on alkane/water partition coefficients, and then validated using an external test set of 100 alkane/water logP measurements, the majority of which were for drugs.
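
    The structure of such a model is easy to sketch. The illustration below uses placeholder coefficients and SMARTS patterns (NOT the published ClogP(alk) parameterization), lets RDKit's Labute approximate surface area stand in for the paper's MSA definition, and omits the interaction terms:

        from rdkit import Chem
        from rdkit.Chem import rdMolDescriptors

        SLOPE, INTERCEPT = 0.03, -0.5   # placeholder baseline, NOT published values
        INCREMENTS = {                  # placeholder functional-group increments
            "[OX2H]": -3.0,             # hydroxyl
            "[NX3;H2]": -3.5,           # primary amine
            "C(=O)[OX2H1]": -4.5,       # carboxylic acid
        }

        def clogp_alk_sketch(smiles):
            mol = Chem.MolFromSmiles(smiles)
            # Baseline: linear in (approximate) molecular surface area.
            logp = SLOPE * rdMolDescriptors.CalcLabuteASA(mol) + INTERCEPT
            # Perturbation: one increment per functional-group match.
            for smarts, inc in INCREMENTS.items():
                patt = Chem.MolFromSmarts(smarts)
                logp += inc * len(mol.GetSubstructMatches(patt))
            return logp

        print(clogp_alk_sketch("CCCCCCO"))  # 1-hexanol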

  13. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jin; Yu, Yaming [Department of Statistics, University of California, Irvine, Irvine, CA 92697-1250 (United States); Van Dyk, David A. [Statistics Section, Imperial College London, Huxley Building, South Kensington Campus, London SW7 2AZ (United Kingdom); Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Connors, Alanna; Meng, Xiao-Li, E-mail: jinx@uci.edu, E-mail: yamingy@ics.uci.edu, E-mail: dvandyk@imperial.ac.uk, E-mail: vkashyap@cfa.harvard.edu, E-mail: asiemiginowska@cfa.harvard.edu, E-mail: jdrake@cfa.harvard.edu, E-mail: pratzlaff@cfa.harvard.edu, E-mail: meng@stat.harvard.edu [Department of Statistics, Harvard University, 1 Oxford Street, Cambridge, MA 02138 (United States)

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  14. A Fully Bayesian Method for Jointly Fitting Instrumental Calibration and X-Ray Spectral Models

    Science.gov (United States)

    Xu, Jin; van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Connors, Alanna; Drake, Jeremy; Meng, Xiao-Li; Ratzlaff, Pete; Yu, Yaming

    2014-10-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is "pragmatic" in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
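
    A minimal sketch of the calibration representation described (synthetic curves, not real telescope calibration products): a library of plausible effective-area curves is compressed by a principal component analysis, so a handful of coefficients with standard normal priors can stand in for the full calibration uncertainty during spectral fitting.

        import numpy as np

        rng = np.random.default_rng(2)
        energies = np.linspace(0.3, 8.0, 200)               # keV grid
        base = 100.0 * np.exp(-0.2 * energies)              # nominal curve
        library = base[None, :] * (                         # synthetic replicates
            1.0 + 0.05 * rng.standard_normal((500, 1))
            + 0.03 * rng.standard_normal((500, 1)) * np.sin(energies)[None, :]
        )

        mean_curve = library.mean(axis=0)
        _, s, vt = np.linalg.svd(library - mean_curve, full_matrices=False)
        n_comp = 2
        basis = vt[:n_comp] * (s[:n_comp, None] / np.sqrt(len(library)))

        # Any standard-normal coefficient draw gives a plausible effective area:
        c = rng.standard_normal(n_comp)
        effective_area = mean_curve + c @ basis
        print(effective_area[:5])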

  15. Estimating Steatosis Prevalence in Overweight and Obese Children: Comparison of Bayesian Small Area and Direct Methods

    Directory of Open Access Journals (Sweden)

    Hamid Reza Khalkhali

    2016-09-01

    Full Text Available Background: Often there is no access to a sufficient sample size to estimate prevalence using the direct estimator method in all areas. The aim of this study was to compare the small area Bayesian method and the direct method in estimating the prevalence of steatosis in obese and overweight children. Materials and Methods: This cross-sectional study was conducted on 150 overweight and obese children aged 2 to 15 years referred to the children's digestive clinic of Urmia University of Medical Sciences, Iran, in 2013. After body mass index (BMI) calculation, overweight and obese children were assessed with primary obesity screening tests. Children with steatosis confirmed by abdominal ultrasonography were then referred to the laboratory for further tests. Steatosis prevalence was estimated by the direct and Bayesian methods, and their efficiency was evaluated using the jackknife mean-square error method. The study data were analyzed using OpenBUGS 3.1.2 and R 2.15.2 software. Results: The findings indicated that estimates of steatosis prevalence in children using the Bayesian and direct methods were between 0.3098 to 0.493 and 0.355 to 0.560, respectively, in health districts; 0.3098 to 0.502 and 0.355 to 0.550 in education districts; 0.321 to 0.582 and 0.357 to 0.615 in age groups; and 0.313 to 0.429 and 0.383 to 0.536 in sex groups. In general, according to the results, the mean-square error of the Bayesian estimation was smaller than that of the direct estimation (P

  16. Comparison of Automated Continuous Flow Method With Shake- Flask Method in Determining Partition Coefficients of Bidentate Hydroxypyridinone Ligands

    Directory of Open Access Journals (Sweden)

    Lotfollah Saghaie

    2003-08-01

    Full Text Available The partition coefficients (Kpart) in the octanol/water system of a range of bidentate ligands containing the 3-hydroxypyridin-4-one moiety were determined using shake-flask and automated continuous flow (filter probe) methods. The shake-flask method was used for extremely hydrophilic or hydrophobic compounds with Kpart values greater than 100 or less than 0.01. For other ligands, which possess moderate lipophilicity (Kpart values between 0.01 and 100), the filter probe method was used. The partition coefficients of four ligands with moderate lipophilicity were also determined by the shake-flask method in order to check the comparability of the two methods. While the shake-flask method was able to handle extremely hydrophilic or hydrophobic compounds efficiently, the filter probe method was unable to measure such Kpart values. Although determination of the Kpart values of all compounds is possible with the classical shake-flask method, the procedure is time consuming. In contrast, the filter probe method offers many advantages over the traditional shake-flask method in terms of speed, efficiency of separation and degree of automation. The shake-flask method is the method of choice for determination of partition coefficients of extremely hydrophilic and hydrophobic ligands.

  17. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    Science.gov (United States)

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness to detect genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.

  18. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for a Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...

  19. A novel method to augment extraction of mangiferin by application of microwave on three phase partitioning

    Directory of Open Access Journals (Sweden)

    Vrushali M. Kulkarni

    2015-06-01

    Full Text Available This work reports a novel approach in which three phase partitioning (TPP) was combined with microwave irradiation for the extraction of mangiferin from leaves of Mangifera indica. Soxhlet extraction was used as the reference method, which yielded 57 mg/g in 5 h. Under optimal conditions, namely microwave irradiation time 5 min, ammonium sulphate concentration 40% w/v, power 272 W, solute to solvent ratio 1:20, slurry to t-butanol ratio 1:1, soaking time 5 min and duty cycle 50%, the mangiferin yield obtained was 54 mg/g by microwave assisted three phase partitioning extraction (MTPP). The extraction method developed thus gave a comparable yield in a far shorter time, making it an interesting alternative prior to downstream processing.

  20. Using hierarchical Bayesian methods to examine the tools of decision-making

    Directory of Open Access Journals (Sweden)

    Michael D. Lee

    2011-12-01

    Full Text Available Hierarchical Bayesian methods offer a principled and comprehensive way to relate psychological models to data. Here we use them to model the patterns of information search, stopping and deciding in a simulated binary comparison judgment task. The simulation involves 20 subjects making 100 forced-choice comparisons about the relative magnitudes of two objects (which of two German cities has more inhabitants). Two worked examples show how hierarchical models can be developed to account for and explain the diversity of both search and stopping rules seen across the simulated individuals. We discuss how the results provide insight into current debates in the literature on heuristic decision making and argue that they demonstrate the power and flexibility of hierarchical Bayesian methods in modeling human decision-making.

  1. Bayesian methods for model uncertainty analysis with application to future sea level rise

    Energy Technology Data Exchange (ETDEWEB)

    Patwardhan, A.; Small, M.J. (Carnegie Mellon Univ., Pittsburgh, PA (United States))

    1992-12-01

    This paper addresses the use of data for identifying and characterizing uncertainties in model parameters and predictions. The Bayesian Monte Carlo method is formally presented and elaborated, and applied to the analysis of the uncertainty in a predictive model for global mean sea level change. The method uses observations of output variables, made with an assumed error structure, to determine a posterior distribution of model outputs. This is used to derive a posterior distribution for the model parameters. Results demonstrate the resolution of the uncertainty that is obtained as a result of the Bayesian analysis and also indicate the key contributors to the uncertainty in the sea level rise model. While the technique is illustrated with a simple, preliminary model, the analysis provides an iterative framework for model refinement. The methodology developed in this paper provides a mechanism for the incorporation of ongoing data collection and research in decision-making for problems involving uncertain environmental change.
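
    A minimal sketch of the Bayesian Monte Carlo idea on a toy sea-level model (all numbers illustrative): draw parameters from the prior, run the model, weight each run by the likelihood of the observations, and read posterior parameter and prediction distributions off the weighted sample.

        import numpy as np

        rng = np.random.default_rng(3)
        t_obs = np.array([10.0, 20.0, 30.0])      # years
        y_obs = np.array([2.1, 3.8, 6.2])         # observed rise (cm), made up
        sigma = 0.5                               # assumed observation error

        rate = rng.uniform(0.0, 0.5, 20000)       # prior: rise rate (cm/yr)
        accel = rng.uniform(0.0, 0.01, 20000)     # prior: acceleration (cm/yr^2)
        pred = rate[:, None] * t_obs + 0.5 * accel[:, None] * t_obs**2

        log_w = -0.5 * np.sum(((y_obs - pred) / sigma) ** 2, axis=1)
        w = np.exp(log_w - log_w.max()); w /= w.sum()

        print("posterior mean rate :", np.sum(w * rate))
        print("posterior mean accel:", np.sum(w * accel))
        print("rise at t=100       :", np.sum(w * (100 * rate + 0.5 * accel * 100**2)))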

  2. Bayesian methods for model choice and propagation of model uncertainty in groundwater transport modeling

    Science.gov (United States)

    Mendes, B. S.; Draper, D.

    2008-12-01

    The issue of model uncertainty and model choice is central in any groundwater modeling effort [Neuman and Wierenga, 2003]; among the several approaches to the problem, we favour using Bayesian statistics because it is a method that integrates uncertainties (arising from any source) and experimental data in a natural way. In this work, we experiment with several Bayesian approaches to model choice, focusing primarily on demonstrating the usefulness of the Reversible Jump Markov Chain Monte Carlo (RJMCMC) simulation method [Green, 1995], an extension of the now-common MCMC methods. Standard MCMC techniques approximate posterior distributions for quantities of interest, often by creating a random walk in parameter space; RJMCMC allows the random walk to take place between parameter spaces with different dimensionalities. This allows us to explore state spaces that are associated with different deterministic models for experimental data. Our work is exploratory in nature; we restrict our study to comparing two simple transport models applied to a data set gathered to estimate the breakthrough curve for a tracer compound in groundwater. One model has a mean surface based on a simple advection dispersion differential equation; the second model's mean surface is also governed by a differential equation, but in two dimensions. We focus on artificial data sets (in which truth is known) to see if model identification is done correctly, but we also address the issues of over- and under-parameterization, and we compare RJMCMC's performance with other traditional methods for model selection and propagation of model uncertainty, including Bayesian model averaging, BIC and DIC. References: Neuman and Wierenga (2003). A Comprehensive Strategy of Hydrogeologic Modeling and Uncertainty Analysis for Nuclear Facilities and Sites. NUREG/CR-6805, Division of Systems Analysis and Regulatory Effectiveness, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission.

  3. A note on the robustness of a full Bayesian method for nonignorable missing data analysis

    OpenAIRE

    Zhang, Zhiyong; Wang, Lijuan

    2012-01-01

    A full Bayesian method utilizing data augmentation and Gibbs sampling algorithms is presented for analyzing nonignorable missing data. The discussion focuses on a simplified selection model for regression analysis. Regardless of missing mechanisms, it is assumed that missingness only depends on the missing variable itself. Simulation results demonstrate that the simplified selection model can recover regression model parameters under both correctly specified situations and many misspecified s...

  4. An efficient implementation of the localized operator partitioning method for electronic energy transfer

    Energy Technology Data Exchange (ETDEWEB)

    Nagesh, Jayashree; Brumer, Paul [Chemical Physics Theory Group, Department of Chemistry, University of Toronto, Toronto, Ontario M5S 3H6 (Canada); Izmaylov, Artur F. [Chemical Physics Theory Group, Department of Chemistry, University of Toronto, Toronto, Ontario M5S 3H6 (Canada); Department of Physical and Environmental Sciences, University of Toronto, Scarborough, Toronto, Ontario M1C 1A4 (Canada)

    2015-02-28

    The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning, we reformulate the method in a computationally efficient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)-methyl)-anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH{sub 2}-naphthyl groups as subsystems, and examine their electronic energies and populations for several excited states using the configuration interaction singles method. The implemented approach shows a wide variety of different behaviors amongst the excited electronic states.

  5. C-Depth Method to Determine Diffusion Coefficient and Partition Coefficient of PCB in Building Materials.

    Science.gov (United States)

    Liu, Cong; Kolarik, Barbara; Gunnarsen, Lars; Zhang, Yinping

    2015-10-20

    Polychlorinated biphenyls (PCBs) have been found to be persistent in the environment and possibly harmful. Many buildings are characterized by high PCB concentrations. Knowledge about partitioning between primary sources and building materials is critical for exposure assessment and practical remediation of PCB contamination. This study develops a C-depth method to determine the diffusion coefficient (D) and partition coefficient (K), two key parameters governing the partitioning process. For concrete, the primary material studied here, relative standard deviations of results among five data sets are 5%-22% for K and 42%-66% for D. Compared with existing methods, the C-depth method overcomes the inability of nonlinear regression to provide unique estimates and does not require assumed correlations of D and K among congeners. Comparison with a more sophisticated two-term approach implies significant uncertainty for D and smaller uncertainty for K. However, considering the uncertainties associated with sampling and chemical analysis, and the impact of environmental factors, the results are acceptable for engineering applications. This was supported by good agreement between model predictions and measurements. Sensitivity analysis indicated that the effective diffusion distance, the contact time of materials with primary sources, and the depth of measured concentrations are critical for determining D, while the PCB concentration in primary sources is critical for K.

  6. Bayesian methods for multivariate modeling of pleiotropic SNP associations and genetic risk prediction

    Directory of Open Access Journals (Sweden)

    Stephen W Hartley

    2012-09-01

    Full Text Available Genome-wide association studies (GWAS) have identified numerous associations between genetic loci and individual phenotypes; however, relatively few GWAS have attempted to detect pleiotropic associations, in which loci are simultaneously associated with multiple distinct phenotypes. We show that pleiotropic associations can be directly modeled via the construction of simple Bayesian networks, and that these models can be applied to produce single or ensembles of Bayesian classifiers that leverage pleiotropy to improve genetic risk prediction. The proposed method includes two phases: (1) Bayesian model comparison, to identify SNPs associated with one or more traits; and (2) cross-validation feature selection, in which a final set of SNPs is selected to optimize prediction. To demonstrate the capabilities and limitations of the method, a total of 1600 case-control GWAS datasets with 2 dichotomous phenotypes were simulated under 16 scenarios, varying the association strengths of causal SNPs, the size of the discovery sets, the balance between cases and controls, and the number of pleiotropic causal SNPs. Across the 16 scenarios, prediction accuracy varied from 90% to 50%. In the 14 scenarios that included pleiotropically-associated SNPs, the pleiotropic model search and prediction methods consistently outperformed the naive model search and prediction. In the 2 scenarios in which there were no true pleiotropic SNPs, the differences between the pleiotropic and naive model searches were minimal.

  7. A generalized bayesian inference method for constraining the interiors of super Earths and sub-Neptunes

    CERN Document Server

    Dorn, C; Khan, A; Heng, K; Alibert, Y; Helled, R; Rivoldini, A; Benz, W

    2016-01-01

    We aim to present a generalized Bayesian inference method for constraining interiors of super Earths and sub-Neptunes. Our methodology succeeds in quantifying the degeneracy and correlation of structural parameters for high dimensional parameter spaces. Specifically, we identify what constraints can be placed on composition and thickness of core, mantle, ice, ocean, and atmospheric layers given observations of mass, radius, and bulk refractory abundance constraints (Fe, Mg, Si) from observations of the host star's photospheric composition. We employed a full probabilistic Bayesian inference analysis that formally accounts for observational and model uncertainties. Using a Markov chain Monte Carlo technique, we computed joint and marginal posterior probability distributions for all structural parameters of interest. We included state-of-the-art structural models based on self-consistent thermodynamics of core, mantle, high-pressure ice, and liquid water. Furthermore, we tested and compared two different atmosp...

  8. Bayesian Blocks, A New Method to Analyze Structure in Photon Counting Data

    CERN Document Server

    Scargle, J D

    1997-01-01

    I describe a new time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes, and generally characterizing intensity variations. The input is raw counting data, in any of three forms: time-tagged photon events (TTE), binned counts, or time-to-spill (TTS) data. The output is the most likely segmentation of the observation into time intervals during which the photon arrival rate is perceptibly constant -- i.e. has a fixed intensity without statistically significant variations. Since the analysis is based on Bayesian statistics, I call the resulting structures Bayesian Blocks. Unlike most, this method does not stipulate time bins -- instead the data themselves determine a piecewise constant representation. Therefore the analysis procedure itself does not impose a lower limit to the time scale on which variability can be detected. Locations, amplitudes, and rise and decay times of pulses within a time series can be estimated, independent of any pulse-shape model -- but only if they d...
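
    The Bayesian Blocks segmentation described above has a widely used implementation in the astropy library; a minimal usage sketch on synthetic time-tagged events (the event times and the p0 false-alarm setting below are invented for illustration) might look like:

```python
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(1)

# Synthetic time-tagged photon events: constant background plus a burst.
background = rng.uniform(0.0, 10.0, size=200)
burst = rng.uniform(4.0, 5.0, size=150)
t = np.sort(np.concatenate([background, burst]))

# The 'events' fitness is appropriate for time-tagged event data; the
# result is the set of change-point edges of the piecewise-constant
# (block) representation chosen by the data themselves, with no fixed bins.
edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(edges)
```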

  9. Finding roots of arbitrary high order polynomials based on neural network recursive partitioning method

    Institute of Scientific and Technical Information of China (English)

    HUANG Deshuang; CHI Zheru

    2004-01-01

    This paper proposes a novel recursive partitioning method based on constrained learning neural networks to find an arbitrary number (less than the order of the polynomial) of (real or complex) roots of arbitrary polynomials. Moreover, this paper also gives a BP network constrained learning algorithm (CLA) used in root-finders based on the constrained relations between the roots and the coefficients of polynomials. At the same time, an adaptive selection method for the parameter δP with the CLA is also given. The experimental results demonstrate that this method can more rapidly and effectively obtain the roots of arbitrary high order polynomials with higher precision than traditional root-finding approaches.

  10. A method for Bayesian estimation of the probability of local intensity for some cities in Japan

    Directory of Open Access Journals (Sweden)

    G. C. Koravos

    2002-06-01

    Full Text Available Seismic hazard, in terms of the probability of exceedance of a given intensity in a given time span, was assessed for 12 sites in Japan. The method does not use any attenuation law. Instead, the dependence of local intensity on epicentral intensity I0 is calculated directly from the data, using a Bayesian model. According to this model (Meroni et al., 1994), local intensity follows the binomial distribution with parameters (I0, p). The parameter p is considered as a random variable following the Beta distribution. In this manner, Bayesian estimates of p are assessed for various values of epicentral intensity and epicentral distance. In order to apply this model to the assessment of seismic hazard, the area under consideration is divided into seismic sources (zones) of known seismicity. The contribution of each source to the seismic hazard at every site is calculated according to the Bayesian model, and the result is the combined effect of all the sources. High probabilities of exceedance were calculated for the sites in the central part of the country, with hazard decreasing slightly towards the north and the south.
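
    As a hedged illustration of the Beta-Binomial machinery used in this kind of model (the prior parameters and counts below are invented, not taken from the paper): with a Beta prior on p and binomial observations, the posterior is again Beta, and exceedance probabilities follow from the Beta-Binomial posterior predictive distribution.

```python
from scipy import stats

# Hypothetical prior on p and hypothetical observations: out of n events of
# a given epicentral intensity, k produced local intensity above a threshold.
a, b = 2.0, 5.0          # Beta prior parameters (placeholder values)
n_obs, k_obs = 30, 9     # observed trials and exceedances (placeholders)

# Conjugate update: Beta prior + binomial likelihood -> Beta posterior.
a_post, b_post = a + k_obs, b + (n_obs - k_obs)

# Posterior predictive: the number of exceedances among n_future future
# events follows a Beta-Binomial distribution.
n_future = 10
pred = stats.betabinom(n_future, a_post, b_post)
print("P(at least 3 exceedances) =", 1 - pred.cdf(2))
```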

  11. Bayesian methods

    OpenAIRE

    Bauwens, Luc; Korobilis, Dimitris

    2011-01-01

    This comprehensive Handbook presents the current state of the art in the theory and methodology of macroeconomic data analysis. It is intended as a reference for graduate students and researchers interested in exploring new methodologies, but can also be employed as a graduate text. The Handbook concentrates on the most important issues, models and techniques for research in macroeconomics, and highlights the core methodologies and their empirical application in an accessible manner. Each chapter...

  12. Uncertainty estimation of a complex water quality model: The influence of Box-Cox transformation on Bayesian approaches and comparison with a non-Bayesian method

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio

    In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy, and only a few studies have been carried out. Therefore, several methodological aspects still need to be investigated and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding model residuals. Statistical transformations, such as the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to a wrong uncertainty estimation. The present paper aims to explore the influence of the Box-Cox equation for environmental water quality models. To this end, five cases were considered, one of which used the “real” residual distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by the residual transformation, and a wrong assumption may also affect the evaluation of model uncertainty. The use of less formal methods always provides an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the...
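
    A small sketch of the Box-Cox step discussed above (the residual series here is synthetic): scipy can both estimate the transformation parameter λ by maximum likelihood and apply the transform, after which the homoscedasticity assumption behind a Gaussian likelihood is easier to defend.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic positive, right-skewed data whose error variance grows with
# magnitude -- the situation the Box-Cox transformation is meant to fix.
y = rng.lognormal(mean=1.0, sigma=0.6, size=500)

# Estimate lambda by maximum likelihood and transform in one call.
y_bc, lam = stats.boxcox(y)
print(f"estimated lambda = {lam:.3f}")

# In a Bayesian calibration, the likelihood would then be defined on the
# transformed scale (e.g., transformed residuals ~ Normal(0, sigma^2)),
# remembering to include the Jacobian of the transformation.
```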

  13. Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning

    CERN Document Server

    Dimitrakakis, Christos

    2009-01-01

    There has been a lot of recent work on Bayesian methods for reinforcement learning exhibiting near-optimal online performance. The main obstacle facing such methods is that in most problems of interest, the optimal solution involves planning in an infinitely large tree. However, it is possible to obtain stochastic lower and upper bounds on the value of each tree node. This enables us to use stochastic branch and bound algorithms to search the tree efficiently. This paper proposes two such algorithms and examines their complexity in this setting.

  14. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    Energy Technology Data Exchange (ETDEWEB)

    Bhagwat, Nikhil V. [Iowa State Univ., Ames, IA (United States)

    2005-01-01

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases). Moreover, the execution time is quite short compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time of the samples could be used as another performance index. The ULINO method is known to have used a combination of bounds to arrive at good solutions. This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of significance to industries where the cost associated with creating a new station is not uniform; for such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion costs, or a combination thereof can be effectively used as alternative objective functions.

  15. Bayesian Rose Trees

    CERN Document Server

    Blundell, Charles; Heller, Katherine A

    2012-01-01

    Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms.

  16. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    2010-01-01

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Based on data from the Danish passenger railway operator DSB S-tog A/S, a solution method to the train driver recovery problem (TDRP) is developed. The TDRP is formulated as a set partitioning problem. We define a disruption neighbourhood by identifying a small set of drivers and train tasks directly affected by the disruption. Based on the disruption neighbourhood, the TDRP model is formed and solved. If the TDRP solution provides a feasible recovery for the drivers within...

  17. A Bayesian design space for analytical methods based on multivariate models and predictions.

    Science.gov (United States)

    Lebrun, Pierre; Boulanger, Bruno; Debrus, Benjamin; Lambert, Philippe; Hubert, Philippe

    2013-01-01

    The International Conference for Harmonization (ICH) has released regulatory guidelines for pharmaceutical development. In the document ICH Q8, the design space of a process is presented as the set of factor settings providing satisfactory results. However, ICH Q8 does not propose any practical methodology to define, derive, and compute the design space. In parallel, over the last decades, it has been observed that the diversity and quality of analytical methods have evolved exponentially, allowing substantial gains in selectivity and sensitivity. However, there is still a lack of a rationale toward the development of robust separation methods in a systematic way. Applying ICH Q8 to analytical methods provides a methodology for predicting a region of the space of factors in which results will be reliable. Combining design of experiments and Bayesian standard multivariate regression, the form of the predictive distribution of a new response vector has been identified and used, under noninformative as well as informative prior distributions of the parameters. From the responses and their predictive distribution, various critical quality attributes can be easily derived. This Bayesian framework was then extended to the multicriteria setting to estimate the predictive probability that several critical quality attributes will be jointly achieved in the future use of an analytical method. An example based on a high-performance liquid chromatography (HPLC) method is given. For this example, a constrained sampling scheme was applied to ensure that the modeled responses have desirable properties.

  18. Bayesian methods for uncertainty factor application for derivation of reference values.

    Science.gov (United States)

    Simon, Ted W; Zhu, Yiliang; Dourson, Michael L; Beck, Nancy B

    2016-10-01

    In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process, which considers methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and the Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for the application of uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw that Bayesian methods would be potentially useful for combining toxicity data from disparate sources: high-throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. The NRC suggested UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the methodology suggested by NRC that considers each individual UF as a distribution. From an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC increased from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology.
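
    As a rough illustration of the distributional-UF idea (all numbers below are invented placeholders, not values from any EPA assessment): represent each individual UF as a lognormal distribution, combine them multiplicatively by Monte Carlo together with an uncertain point of departure, and read a reference value off a lower percentile of the result.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical point of departure (mg/kg-day), lognormally uncertain.
pod = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)

# Three individual UFs, each lognormal and centered near a traditional value
# (e.g., interspecies, intraspecies, database); parameters are placeholders.
uf_inter = rng.lognormal(np.log(10.0), 0.4, n)
uf_intra = rng.lognormal(np.log(10.0), 0.4, n)
uf_db    = rng.lognormal(np.log(3.0),  0.4, n)

# Composite reference-value distribution and conservative summaries.
rfd = pod / (uf_inter * uf_intra * uf_db)
print("median RfD:", np.median(rfd))
print("5th percentile RfD:", np.percentile(rfd, 5))
```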

  19. Partition method for impact dynamics of flexible multibody systems based on contact constraint

    Institute of Scientific and Technical Information of China (English)

    段玥晨; 章定国; 洪嘉振

    2013-01-01

    The impact dynamics of a flexible multibody system is investigated. By using a partition method, the system is divided into two parts, the local impact region and the region away from the impact. The two parts are connected by specific boundary conditions, and the system after partition is equivalent to the original system. According to the rigid-flexible coupling dynamic theory of multibody systems, the system's rigid-flexible coupling dynamic equations without impact are derived. A local impulse method for establishing the initial impact conditions is proposed. It satisfies the compatibility conditions for contact constraints and the actual physical situation of the impact process of flexible bodies. Based on the contact constraint method, the system's impact dynamic equations are derived in a differential-algebraic form. The contact/separation criterion and the algorithm are given. An impact dynamic simulation is presented. The results show that the system's dynamic behaviors, including the energy, the deformations, the displacements, and the impact force during the impact process, change dramatically. The impact has great effects on the global dynamics of the system during and after impact.

  20. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance Voxel Machine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially clustered sets of voxels that are particularly suited for clinical interpretation. RVoxM automatically tunes all its free parameters during the training phase, and offers the additional advantage of producing probabilistic prediction outcomes. Experiments on age prediction from structural brain MRI indicate that RVoxM yields biologically meaningful models that provide excellent predictive accuracy.

  1. Cone Beam X-ray Luminescence Computed Tomography Based on Bayesian Method.

    Science.gov (United States)

    Zhang, Guanglei; Liu, Fei; Liu, Jie; Luo, Jianwen; Xie, Yaoqin; Bai, Jing; Xing, Lei

    2017-01-01

    X-ray luminescence computed tomography (XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. Combining the principles of X-ray excitation of luminescence-based probes and optical signal detection, XLCT naturally fuses functional and anatomical images and provides complementary information for a wide range of applications in biomedical research. In order to improve the data acquisition efficiency of the previously developed narrow-beam XLCT, a cone beam XLCT (CB-XLCT) mode is adopted here to take advantage of the useful geometric features of cone beam excitation. Practically, a major hurdle in using cone beam X-rays for XLCT is that the inverse problem is seriously ill-conditioned, hindering the achievement of good image quality. In this paper, we propose a novel Bayesian method to tackle this bottleneck in CB-XLCT reconstruction. The method utilizes a local regularization strategy based on a Gaussian Markov random field to mitigate the ill-conditioning of CB-XLCT. An alternating optimization scheme is then used to automatically calculate all the unknown hyperparameters, while an iterative coordinate descent algorithm is adopted to reconstruct the image with a voxel-based closed-form solution. Results of numerical simulations and mouse experiments show that the self-adaptive Bayesian method significantly improves CB-XLCT image quality compared with conventional methods.

  2. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference signal (noise signal) and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, the SPs that have high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method.

  3. A Study of New Method for Weapon System Effectiveness Evaluation Based on Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    YAN Dai-wei; GU Liang-xian; PAN Lei

    2008-01-01

    As weapon system effectiveness is affected by many factors, its evaluation is essentially a complex multi-criterion decision-making problem. The evaluation model of effectiveness is established on the basis of the metrics architecture of the effectiveness. The Bayesian network used to evaluate the effectiveness is built from the metrics architecture and the evaluation models. To obtain the weights of the metrics from the Bayesian network, subjective initial values of the weights are given, a gradient ascent algorithm is adopted, and reasonable values of the weights are achieved. The effectiveness of every candidate weapon system is then obtained, and the system whose effectiveness is the relative maximum is the optimal one. The research results show that this method can solve the problem of the AHP method, whose evaluation results are not always compatible with practical results, and overcome the shortcoming of neural networks in multilayer, multi-criterion decision making. The method offers a new approach for evaluating effectiveness.

  4. An Advanced Bayesian Method for Short-Term Probabilistic Forecasting of the Generation of Wind Power

    Directory of Open Access Journals (Sweden)

    Antonio Bracale

    2015-09-01

    Full Text Available Currently, among renewable distributed generation systems, wind generators are receiving a great deal of interest due to the significant economic, technological, and environmental incentives they involve. However, the uncertainties due to the intermittent nature of wind energy make it difficult to operate electrical power systems optimally and to make decisions that satisfy the needs of all the stakeholders of the electricity energy market. Thus, there is increasing interest in determining how to forecast wind power production accurately. Most of the methods published in the relevant literature provide deterministic forecasts, even though great interest has recently been focused on probabilistic forecast methods. In this paper, an advanced probabilistic method is proposed for short-term forecasting of wind power production. A mixture of two Weibull distributions was used as the probability function to model the uncertainties associated with wind speed. Then, a Bayesian inference approach with a particularly effective autoregressive integrated moving average model was used to determine the parameters of the mixture Weibull distribution. Numerical applications are also presented to provide evidence of the forecasting performance of the Bayesian-based approach.
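
    A hedged sketch of the two-component Weibull mixture mentioned above (all parameter values and the toy power curve are invented; the paper's Bayesian/ARIMA machinery for estimating the parameters is not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Mixture of two Weibull distributions for wind speed (placeholder params).
w = 0.6                  # weight of the first component
k1, lam1 = 2.0, 6.0      # shape/scale of component 1
k2, lam2 = 3.5, 11.0     # shape/scale of component 2

def mixture_pdf(v):
    return (w * stats.weibull_min.pdf(v, k1, scale=lam1)
            + (1 - w) * stats.weibull_min.pdf(v, k2, scale=lam2))

print("mixture pdf at 8 m/s:", mixture_pdf(8.0))

# Sampling: pick a component for each draw, then draw from it.
comp = rng.random(10_000) < w
v = np.where(comp,
             stats.weibull_min.rvs(k1, scale=lam1, size=10_000, random_state=rng),
             stats.weibull_min.rvs(k2, scale=lam2, size=10_000, random_state=rng))

# A probabilistic power forecast follows by pushing v through a turbine
# power curve; here a toy cubic curve capped at rated power (placeholder).
power = np.clip(0.5 * v**3 / 1000.0, 0.0, 2.0)   # MW, toy numbers
print("P(power > 1 MW) =", (power > 1.0).mean())
```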

  5. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    Science.gov (United States)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI): here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity...

  6. Overview of methods of reverse engineering of gene regulatory networks: Boolean and Bayesian networks

    Directory of Open Access Journals (Sweden)

    Frolova A. O.

    2012-06-01

    Full Text Available Reverse engineering of gene regulatory networks is an intensively studied topic in Systems Biology, as it reconstructs regulatory interactions between all genes in the genome in the most complete form. The extreme computational complexity of this problem and the lack of thorough reviews of reconstruction methods for gene regulatory networks are significant obstacles to further development of this area. In this article, the two most common methods for modeling gene regulatory networks are surveyed: Boolean and Bayesian networks. A mathematical description of each method is given, as well as several algorithmic approaches to modeling gene networks with these methods; the complexity of the algorithms and the problems that arise during their implementation are also noted.

  7. Estimation model of life insurance claims risk for cancer patients by using Bayesian method

    Science.gov (United States)

    Sukono; Suyudi, M.; Islamiyati, F.; Supian, S.

    2017-01-01

    This paper discusses the estimation model of the risk of life insurance claims for cancer patients using a Bayesian method. To estimate the risk of a claim, the insurance participant data are grouped into two parts: the number of policies issued and the number of claims incurred. Model estimation is done using a Bayesian approach. The estimator model was then used to estimate the risk value of life insurance claims for each age group and each sex. The estimation results indicate that the risk premium for insured males aged less than 30 years is 0.85; for ages 30 to 40 years it is 3.58; for ages 41 to 50 years, 1.71; for ages 51 to 60 years, 2.96; and for those aged over 60 years, 7.82. Meanwhile, for insured women aged less than 30 years it is 0.56; for ages 30 to 40 years, 3.21; for ages 41 to 50 years, 0.65; for ages 51 to 60 years, 3.12; and for those aged over 60 years, 9.99. This study is useful in determining the risk premium in homogeneous groups based on gender and age.
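
    A minimal sketch of the kind of conjugate Bayesian update that can back such per-group claim-risk estimates (the group counts and prior parameters below are fabricated for illustration, and a Beta-Binomial model is used as a stand-in for the paper's exact formulation): model claims per group as Binomial(policies, q) with a Beta prior on the claim rate q.

```python
from scipy import stats

# Hypothetical per-group data: (policies issued, claims incurred).
groups = {
    "male, <30":   (1200, 14),
    "male, 30-40": (800, 31),
    "female, <30": (1500, 9),
}

a0, b0 = 1.0, 1.0   # weakly informative Beta prior (placeholder)

for name, (n_pol, n_clm) in groups.items():
    # Conjugate update: Beta(a0, b0) prior + Binomial likelihood.
    post = stats.beta(a0 + n_clm, b0 + n_pol - n_clm)
    lo, hi = post.ppf([0.025, 0.975])
    print(f"{name}: posterior mean claim rate {post.mean():.4f} "
          f"(95% interval {lo:.4f}-{hi:.4f})")
```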

  8. OVarCall: Bayesian Mutation Calling Method Utilizing Overlapping Paired-End Reads.

    Science.gov (United States)

    Moriyama, Takuya; Shiraishi, Yuichi; Chiba, Kenichi; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-03-01

    Detection of somatic mutations from tumor and matched normal sequencing data has become a standard approach in cancer research. Although a number of mutation callers have been developed, it is still difficult to detect mutations with low allele frequency, even in exome sequencing. We expect that overlapping paired-end read information is effective for this purpose, but no mutation caller has statistically modeled overlapping information in a proper form for exome sequence data. Here, we develop a Bayesian hierarchical method, OVarCall (https://github.com/takumorizo/OVarCall), in which overlapping paired-end read information improves the accuracy of low allele frequency mutation detection. First, we construct two generative models: one for reads with somatic variants generated from tumor cells, and the other for reads that do not have somatic variants but potentially include sequence errors. Second, we calculate the marginal likelihood of each model using a variational Bayesian algorithm to compute the Bayes factor for the detection of somatic mutations. We empirically evaluated the performance of OVarCall and confirmed its better performance compared with other existing methods.

  9. Distinguishing real from fake ivory products by elemental analyses: A Bayesian hybrid classification method.

    Science.gov (United States)

    Buddhachat, Kittisak; Brown, Janine L; Thitaram, Chatchote; Klinhom, Sarisa; Nganvongpanit, Korakot

    2017-03-01

    As laws tighten to limit commercial ivory trading and protect threatened species like whales and elephants, increased sales of fake ivory products have become widespread. This study describes a method, handheld X-ray fluorescence (XRF) as a noninvasive technique for elemental analysis, to differentiate quickly between ivory (Asian and African elephant, mammoth) and non-ivory (bones, teeth, antler, horn, wood, synthetic resin, rock) materials. An equation consisting of 20 elements and light elements from a stepwise discriminant analysis was used to classify samples, followed by Bayesian binary regression to determine the probability of a sample being 'ivory', with complementary log-log analysis to identify the best-fit model for this purpose. This Bayesian hybrid classification model was 93% accurate, with 92% precision in discriminating ivory from non-ivory materials. The method was then validated by scanning additional ivory and non-ivory samples, correctly identifying bone as not ivory with >95% accuracy, except elephant bone, which was 72%. It was less accurate for wood and rock (25-85%); however, a preliminary screening to determine whether samples are not Ca-dominant could eliminate inorganic materials. In conclusion, elemental analysis by XRF can be used to identify several forms of fake ivory samples, which could have forensic application.

  10. Photoacoustic discrimination of vascular and pigmented lesions using classical and Bayesian methods

    Science.gov (United States)

    Swearingen, Jennifer A.; Holan, Scott H.; Feldman, Mary M.; Viator, John A.

    2010-01-01

    Discrimination of pigmented and vascular lesions in skin can be difficult due to factors such as size, subungual location, and the nature of lesions containing both melanin and vascularity. Misdiagnosis may lead to precancerous or cancerous lesions not receiving proper medical care. To aid in the rapid and accurate diagnosis of such pathologies, we develop a photoacoustic system to determine the nature of skin lesions in vivo. By irradiating skin with two laser wavelengths, 422 and 530 nm, we induce photoacoustic responses, and the relative response at these two wavelengths indicates whether the lesion is pigmented or vascular. This response is due to the distinct absorption spectra of melanin and hemoglobin. In particular, pigmented lesions have ratios of photoacoustic amplitudes of approximately 1.4 to 1 at the two wavelengths, while vascular lesions have ratios of about 4.0 to 1. Furthermore, we consider two statistical methods for conducting classification of lesions: standard multivariate analysis classification techniques and a Bayesian-model-based approach. We study 15 human subjects with eight vascular and seven pigmented lesions. Using the classical method, we achieve a perfect classification rate, while the Bayesian approach has an error rate of 20%.
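
    As a toy illustration of this kind of classification step (the class-conditional spreads and equal priors are invented; only the class means follow the reported ratios of roughly 1.4:1 for pigmented and 4.0:1 for vascular lesions): model the two-wavelength amplitude ratio as Gaussian within each class and classify by the larger posterior probability.

```python
from scipy import stats

# Class-conditional Gaussians over the 422/530 nm amplitude ratio.
# Means follow the reported ratios; spreads are invented placeholders.
classes = {
    "pigmented": stats.norm(loc=1.4, scale=0.3),
    "vascular":  stats.norm(loc=4.0, scale=0.8),
}
prior = {"pigmented": 0.5, "vascular": 0.5}   # assumed equal priors

def classify(ratio):
    # Unnormalized posteriors, then normalize to probabilities.
    post = {c: prior[c] * d.pdf(ratio) for c, d in classes.items()}
    z = sum(post.values())
    return max(post, key=post.get), {c: p / z for c, p in post.items()}

label, probs = classify(2.0)
print(label, probs)
```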

  11. Model Diagnostics for Bayesian Networks

    Science.gov (United States)

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments, primarily for learning about students' knowledge and skills. There is a lack of work on assessing the fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess the fit of simple Bayesian networks. A…

  12. Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning

    Science.gov (United States)

    Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri

    1991-01-01

    Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on an iPSC/860 to demonstrate the usefulness of our methods.

  13. Development of a full automation solid phase microextraction method for investigating the partition coefficient of organic pollutant in complex sample.

    Science.gov (United States)

    Jiang, Ruifen; Lin, Wei; Wen, Sijia; Zhu, Fang; Luan, Tiangang; Ouyang, Gangfeng

    2015-08-07

    A fully automated solid phase microextraction (SPME) depletion method was developed to study the partition coefficients of organic compounds between complex matrices and water. The SPME depletion process was conducted by pre-loading the fiber with a specific amount of organic compounds from a proposed standard gas generation vial, and then desorbing the fiber into the targeted samples. Based on the proposed method, the partition coefficients (Kmatrix) of 4 polyaromatic hydrocarbons (PAHs) between humic acid (HA)/hydroxypropyl-β-cyclodextrin (β-HPCD) and aqueous samples were determined. The results showed that the logKmatrix values of the 4 PAHs with HA and β-HPCD ranged from 3.19 to 4.08 and from 2.45 to 3.15, respectively. In addition, the logKmatrix values decreased by about 0.12-0.27 log units for the different PAHs for every 10°C increase in temperature. The effect of temperature on the partition coefficient followed the van't Hoff plot, and the partition coefficient at any temperature can be predicted from the plot. Furthermore, the proposed method was applied to real biological fluid analysis. The partition coefficients of 6 PAHs between the complex matrices in fetal bovine serum and water were determined and compared to those obtained from the SPME extraction method. The results demonstrated that the proposed method can be applied to determine the sorption coefficients of hydrophobic compounds between complex matrices and water in a variety of samples.
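
    A small sketch of the van't Hoff interpolation implied above (the measured log K values and temperatures are invented placeholders): fit log K linearly against 1/T, then predict K at any temperature of interest.

```python
import numpy as np

# Hypothetical measurements: partition coefficient at three temperatures.
T_K = np.array([288.15, 298.15, 308.15])   # temperatures in kelvin
logK = np.array([3.45, 3.25, 3.02])        # measured log10 K (placeholders)

# van't Hoff behaviour: log K is linear in 1/T.
slope, intercept = np.polyfit(1.0 / T_K, logK, deg=1)

def logK_at(T_celsius):
    T = T_celsius + 273.15
    return slope / T + intercept

print("predicted logK at 20 C:", round(logK_at(20.0), 3))
```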

  14. Evolutionary Analysis of Dengue Serotype 2 Viruses Using Phylogenetic and Bayesian Methods from New Delhi, India.

    Directory of Open Access Journals (Sweden)

    Nazia Afreen

    2016-03-01

    Full Text Available Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported a predominance of dengue virus serotype 2 during 2011-2014 in Delhi. In the present study, we report the molecular characterization and evolutionary analysis of dengue serotype 2 viruses detected in 2011-2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages: Lineages I, II and III. Lineage III replaced Lineage I during the dengue fever outbreak of 2013. Further, a novel mutation, Thr404Ile, was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. The nucleotide substitution rate and the time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in the effective population size of Indian DENV-2 viruses was investigated through a Bayesian skyline plot. The study will be a vital road map for investigating the epidemiology and evolutionary pattern of dengue viruses in India.

  15. Hierarchical Bayesian methods for estimation of parameters in a longitudinal HIV dynamic system.

    Science.gov (United States)

    Huang, Yangxin; Liu, Dacheng; Wu, Hulin

    2006-06-01

    HIV dynamics studies have significantly contributed to the understanding of HIV infection and antiviral treatment strategies. But most studies are limited to short-term viral dynamics due to the difficulty of establishing a relationship of antiviral response with multiple treatment factors such as drug exposure and drug susceptibility during long-term treatment. In this article, a mechanism-based dynamic model is proposed for characterizing long-term viral dynamics with antiretroviral therapy, described by a set of nonlinear differential equations without closed-form solutions. In this model we directly incorporate drug concentration, adherence, and drug susceptibility into a function of treatment efficacy, defined as an inhibition rate of virus replication. We investigate a Bayesian approach under the framework of hierarchical Bayesian (mixed-effects) models for estimating unknown dynamic parameters. In particular, interest focuses on estimating individual dynamic parameters. The proposed methods not only help to alleviate the difficulty in parameter identifiability, but also flexibly deal with sparse and unbalanced longitudinal data from individual subjects. For illustration purposes, we present one simulation example to implement the proposed approach and apply the methodology to a data set from an AIDS clinical trial. The basic concept of the longitudinal HIV dynamic systems and the proposed methodologies are generally applicable to any other biomedical dynamic systems.
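
    A hedged sketch of the kind of nonlinear viral dynamic system referred to above (this is a generic target-cell-limited model with placeholder parameters, not the paper's exact long-term model): a treatment efficacy eps in [0, 1] scales down the infection rate, and the equations are integrated numerically because no closed-form solution exists.

```python
from scipy.integrate import solve_ivp

# Generic HIV dynamics: target cells T, infected cells I, free virus V.
# All parameter values below are illustrative placeholders.
lam, d, k, delta, p, c = 1e4, 0.01, 2.4e-8, 1.0, 2000.0, 23.0
eps = 0.85   # assumed treatment efficacy, inhibiting new infections

def rhs(t, y):
    T, I, V = y
    dT = lam - d * T - (1 - eps) * k * T * V
    dI = (1 - eps) * k * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

# Integrate 100 days from illustrative initial conditions; LSODA copes
# with the stiffness of the early transient.
sol = solve_ivp(rhs, (0.0, 100.0), [1e6, 1e4, 1e5], method="LSODA")
print("viral load at day 100:", sol.y[2, -1])
```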

  16. A Bayesian method to incorporate hundreds of functional characteristics with association evidence to improve variant prioritization.

    Directory of Open Access Journals (Sweden)

    Sarah A Gagliano

    Full Text Available The increasing quantity and quality of functional genomic information motivate the assessment and integration of these data with association data, including data originating from genome-wide association studies (GWAS). We used previously described GWAS signals ("hits") to train a regularized logistic model in order to predict SNP causality on the basis of a large multivariate functional dataset. We show how this model can be used to derive Bayes factors for integrating functional and association data into a combined Bayesian analysis. Functional characteristics were obtained from the Encyclopedia of DNA Elements (ENCODE), from published expression quantitative trait loci (eQTL), and from other sources of genome-wide characteristics. We trained the model using all GWAS signals combined, and also using phenotype-specific signals for autoimmune, brain-related, cancer, and cardiovascular disorders. The non-phenotype-specific and the autoimmune GWAS signals gave the most reliable results. We found that SNPs with higher probabilities of causality derived from functional characteristics showed an enrichment of more significant p-values compared to all GWAS SNPs in three large GWAS studies of complex traits. We investigated the ability of our Bayesian method to improve the identification of true causal signals in a psoriasis GWAS dataset and found that combining functional data with association data improves the ability to prioritise novel hits. We used the predictions from the penalized logistic regression model to calculate Bayes factors relating to functional characteristics and supply these online alongside resources to integrate these data with association data.

  17. bcrm: Bayesian Continual Reassessment Method Designs for Phase I Dose-Finding Trials

    Directory of Open Access Journals (Sweden)

    Michael Sweeting

    2013-09-01

    Full Text Available This paper presents the R package bcrm for conducting and assessing Bayesian continual reassessment method (CRM) designs in Phase I dose-escalation trials. CRM designs are a class of adaptive design that select the dose to be given to the next recruited patient based on accumulating toxicity data from patients already recruited into the trial, often using Bayesian methodology. Despite the original CRM design being proposed in 1990, the methodology is still not widely implemented within oncology Phase I trials. The aim of this paper is to demonstrate, through the example of the bcrm package, how a variety of possible designs can be easily implemented within the R statistical software, and how properties of the designs can be communicated to trial investigators using simple textual and graphical output obtained from the package. This in turn should facilitate an iterative process that allows a design to be chosen that is suitable to the needs of the investigator. Our bcrm package is the first to offer a large comprehensive choice of CRM designs, priors and escalation procedures, which can be easily compared and contrasted within the package through the assessment of operating characteristics.

  19. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods, such as bagging, boosting, and random forest, have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and such a trend poses various challenges because these methods are not suited to direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is considered in dividing the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the other methods.
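
    A compact sketch of the ensemble scheme described above (the data are synthetic, and a random split of the feature indices stands in for the paper's redundancy-based partitioning): train one SVM per feature subset and combine predictions by majority vote.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=400, n_features=60, n_informative=15,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Stand-in partitioning: split the feature indices into k disjoint subsets.
# (The paper partitions by feature redundancy; a random split is a placeholder.)
k = 5
idx = rng.permutation(X.shape[1])
subsets = np.array_split(idx, k)

# One SVM trained per feature subset.
models = [SVC().fit(Xtr[:, s], ytr) for s in subsets]

# Majority voting over the k base classifiers.
votes = np.stack([m.predict(Xte[:, s]) for m, s in zip(models, subsets)])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("ensemble accuracy:", (y_pred == yte).mean())
```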

  20. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  1. Bayesian method for the analysis of the dust emission in the Far-Infrared and Submillimeter

    CERN Document Server

    Veneziani, M; Noriega-Crespo, A; Carey, S; Paladini, R; Paradis, D

    2013-01-01

    We present a method, based on Bayesian statistics, to fit the dust emission parameters in the far-infrared and submillimeter wavelengths. The method estimates the dust temperature and spectral emissivity index, plus their relationship, taking proper account of the statistical and systematic uncertainties. We test it on three sets of simulated sources detectable by the Herschel Space Observatory in the PACS and SPIRE spectral bands (70-500 micron), spanning a wide range of dust temperatures. The simulated observations are a one-component interstellar medium and two two-component sources, both warm (HII regions) and cold (cold clumps). We first define a procedure to identify the better model, then we recover the parameters of the model and measure their physical correlations by means of a Markov Chain Monte Carlo algorithm adopting multivariate Gaussian priors. In this process we assess the reliability of the model recovery and of the parameter estimation. We conclude that the model and parameters are ...

  2. Statistical analysis using the Bayesian nonparametric method for irradiation embrittlement of reactor pressure vessels

    Science.gov (United States)

    Takamizawa, Hisashi; Itoh, Hiroto; Nishiyama, Yutaka

    2016-10-01

    In order to understand neutron irradiation embrittlement in high fluence regions, statistical analysis using the Bayesian nonparametric (BNP) method was performed for the Japanese surveillance and material test reactor irradiation database. The BNP method is essentially expressed as an infinite summation of normal distributions, with input data being subdivided into clusters with identical statistical parameters, such as mean and standard deviation, for each cluster to estimate shifts in ductile-to-brittle transition temperature (DBTT). The clusters typically depend on chemical compositions, irradiation conditions, and the irradiation embrittlement. Specific variables contributing to the irradiation embrittlement include the content of Cu, Ni, P, Si, and Mn in the pressure vessel steels, neutron flux, neutron fluence, and irradiation temperatures. It was found that the measured shifts of DBTT correlated well with the calculated ones. Data associated with the same materials were subdivided into the same clusters even if neutron fluences were increased.
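
    A hedged illustration of the "infinite summation of normal distributions" idea (the one-dimensional data are synthetic, and scikit-learn's truncated Dirichlet-process mixture is used as a stand-in for the paper's BNP machinery): the model infers how many clusters the data support, with each cluster carrying its own mean and standard deviation.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(6)

# Synthetic "DBTT shift" data drawn from three hidden clusters.
X = np.concatenate([rng.normal(10, 2, 150),
                    rng.normal(40, 5, 100),
                    rng.normal(80, 8, 80)]).reshape(-1, 1)

# Truncated Dirichlet-process Gaussian mixture: up to 10 components are
# allowed, but unneeded components receive vanishing weights.
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

print("component weights:", np.round(bgm.weights_, 3))
print("effective clusters:", (bgm.weights_ > 0.01).sum())
```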

  3. A Fast Edge Preserving Bayesian Reconstruction Method for Parallel Imaging Applications in Cardiac MRI

    Science.gov (United States)

    Singh, Gurmeet; Raj, Ashish; Kressler, Bryan; Nguyen, Thanh D.; Spincemaille, Pascal; Zabih, Ramin; Wang, Yi

    2010-01-01

    Among recent parallel MR imaging reconstruction advances, a Bayesian method called Edge-preserving Parallel Imaging with GRAph cut Minimization (EPIGRAM) has been demonstrated to significantly improve the signal-to-noise ratio (SNR) compared to the conventional regularized sensitivity encoding (SENSE) method. However, EPIGRAM requires a large number of iterations in proportion to the number of intensity labels in the image, making it computationally expensive for high dynamic range images. The objective of this study is to develop a Fast EPIGRAM reconstruction based on the efficient binary jump move algorithm that provides a logarithmic reduction in reconstruction time while maintaining image quality. Preliminary in vivo validation of the proposed algorithm is presented for 2D cardiac cine MR imaging and 3D coronary MR angiography at acceleration factors of 2-4. Fast EPIGRAM was found to provide similar image quality to EPIGRAM and maintain the previously reported SNR improvement over regularized SENSE, while reducing EPIGRAM reconstruction time by 25-50 times. PMID:20939095

  4. Analytical Classification of Multimedia Index Structures by Using a Partitioning Method-Based Framework

    CERN Document Server

    Keyvanpour, Mohammadreza

    2011-01-01

    Due to advances in hardware technology and the increasing production of multimedia data in many applications, multimedia databases have become increasingly important during the last decades. Content-based multimedia retrieval is an important research area in the field of multimedia databases. Extensive research in this field has led to the proposition of different kinds of index structures to support fast and efficient similarity search for retrieving multimedia data from these databases. Given the variety and abundance of proposed index structures, we suggest a systematic framework, based on the partitioning method used in these structures, to classify multimedia index structures, and we then evaluate these structures against important functional measures. We hope this proposed framework will lead to empirical and technical comparisons of multimedia index structures and to the development of more efficient structures in the future.

  5. Fuzzy Partition Models for Fitting a Set of Partitions.

    Science.gov (United States)

    Gordon, A. D.; Vichi, M.

    2001-01-01

    Describes methods for fitting a fuzzy consensus partition to a set of partitions of the same set of objects. Describes and illustrates three models defining median partitions and compares these methods to an alternative approach to obtaining a consensus fuzzy partition. Discusses interesting differences in the results. (SLD)

  6. Differences between fully Bayesian and pragmatic methods to assess predictive uncertainty and optimal monitoring designs

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Gosses, Moritz; Nowak, Wolfgang

    2015-04-01

    Data acquisition for monitoring the state in different compartments of complex, coupled environmental systems is often time consuming and expensive. Therefore, experimental monitoring strategies are ideally designed such that most can be learned about the system at minimal costs. Bayesian methods for uncertainty quantification and optimal design (OD) of monitoring strategies are well suited to handle the non-linearity exhibited by most coupled environmental systems. However, their high computational demand restricts their applicability to models with comparatively low run-times. Therefore, pragmatic approaches have been used predominantly in the past where data worth and OD analyses have been restricted to linear or linearised problems and methods. Bayesian (nonlinear) and pragmatic (linear) OD approaches are founded on different assumptions and typically follow different steps in the modelling chain of 1) model calibration, 2) uncertainty quantification, and 3) optimal design analysis. The goal of this study is to follow through these steps for a Bayesian and a pragmatic approach and to discuss the impact of different assumptions (prior uncertainty), calibration strategies, and OD analysis methods on the proposed monitoring designs and their reliability to reduce predictive uncertainty. The OD framework PreDIA (Leube et al. 2012) is used for the nonlinear assessment with a conditional model ensemble obtained with Markov-chain Monte Carlo simulation representing the initial predictive uncertainty. PreDIA can consider any kind of uncertainties and non-linear (statistical) dependencies in data, models, parameters and system drivers during the OD process. In the pragmatic OD approach, the parameter calibration was performed with a non-linear global search and the initial predictive uncertainty was estimated using the PREDUNC utility (Moore and Doherty 2005) of PEST. PREDUNC was also used for the linear OD analysis. We applied PreDIA and PREDUNC for uncertainty

  7. From "weight of evidence" to quantitative data integration using multicriteria decision analysis and Bayesian methods.

    Science.gov (United States)

    Linkov, Igor; Massey, Olivia; Keisler, Jeff; Rusyn, Ivan; Hartung, Thomas

    2015-01-01

    "Weighing" available evidence in the process of decision-making is unavoidable, yet it is one step that routinely raises suspicions: what evidence should be used, how much does it weigh, and whose thumb may be tipping the scales? This commentary aims to evaluate the current state and future roles of various types of evidence for hazard assessment as it applies to environmental health. In its recent evaluation of the US Environmental Protection Agency's Integrated Risk Information System assessment process, the National Research Council committee singled out the term "weight of evidence" (WoE) for critique, deeming the process too vague and detractive to the practice of evaluating human health risks of chemicals. Moving the methodology away from qualitative, vague and controversial methods towards generalizable, quantitative and transparent methods for appropriately managing diverse lines of evidence is paramount for both regulatory and public acceptance of the hazard assessments. The choice of terminology notwithstanding, a number of recent Bayesian WoE-based methods, the emergence of multi criteria decision analysis for WoE applications, as well as the general principles behind the foundational concepts of WoE, show promise in how to move forward and regain trust in the data integration step of the assessments. We offer our thoughts on the current state of WoE as a whole and while we acknowledge that many WoE applications have been largely qualitative and subjective in nature, we see this as an opportunity to turn WoE towards a quantitative direction that includes Bayesian and multi criteria decision analysis.

  8. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    The train driver recovery problem (TDRP) is formulated as a set partitioning problem. The LP relaxation of the set partitioning formulation of the TDRP possesses strong integer properties. The proposed model is therefore solved via the LP relaxation and Branch & Price. Starting with a small set of drivers and train tasks assigned to the drivers within...

  9. Partition of unity finite element method for quantum mechanical materials calculations

    CERN Document Server

    Pask, John E

    2016-01-01

    The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite differences and finite elements have partially addressed both the resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: the excessive number of degrees of freedom needed to achieve the required accuracies. We present a real-space partition of unity finite element (PUFE) method to solve the Kohn-Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution proc...

  10. Bayesian analysis of general failure data from an ageing distribution: advances in numerical methods

    Energy Technology Data Exchange (ETDEWEB)

    Procaccia, H.; Villain, B. [Electricite de France (EDF), 93 - Saint-Denis (France); Clarotti, C.A. [ENEA, Casaccia (Italy)

    1996-12-31

    EDF and ENEA carried out a joint research program for developing the numerical methods and computer codes needed for Bayesian analysis of component lives in the case of ageing. Early results of this study were presented at ESREL'94. Since then, the following further steps have been taken: input data have been generalized to the case in which observed lives are censored both on the right and on the left; the allowable life distributions are Weibull and gamma, whose parameters are both unknown and can be statistically dependent; the allowable priors are histograms relative to different parametrizations of the life distribution of concern; and first- and second-order moments of the posterior distributions can be computed. In particular, the covariance gives important information about the degree of statistical dependence between the parameters of interest. An application of the code to the appearance of stress corrosion cracking in a tube of a PWR steam generator system is presented. (authors). 10 refs.
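
    With flat or histogram priors and a two-parameter life distribution, the posterior moments described above can be computed directly on a grid, with censored lives entering through the survival and distribution functions. The sketch below does this for Weibull data with right- and left-censored observations; the data, flat prior, and grid bounds are illustrative assumptions, not the EDF/ENEA code.

        import numpy as np
        from scipy.stats import weibull_min

        lives = np.array([3.1, 7.4, 9.8])      # observed failure times
        right = np.array([12.0, 15.0])         # right-censored (still running)
        left = np.array([2.0])                 # left-censored (failed before)

        shape = np.linspace(0.5, 5, 200)       # Weibull shape (> 1 means ageing)
        scale = np.linspace(1, 30, 200)
        K, S = np.meshgrid(shape, scale, indexing="ij")

        loglik = np.zeros_like(K)
        for t in lives:
            loglik += weibull_min.logpdf(t, K, scale=S)
        for t in right:
            loglik += weibull_min.logsf(t, K, scale=S)    # P(T > t)
        for t in left:
            loglik += weibull_min.logcdf(t, K, scale=S)   # P(T < t)

        post = np.exp(loglik - loglik.max())
        post /= post.sum()                     # flat prior -> normalized posterior

        mk, ms = (post * K).sum(), (post * S).sum()
        cov = (post * (K - mk) * (S - ms)).sum()
        print(f"E[shape]={mk:.2f}  E[scale]={ms:.2f}  cov={cov:.3f}")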

  11. Bayesian methods for model uncertainty analysis with application to future sea level rise

    Energy Technology Data Exchange (ETDEWEB)

    Patwardhan, A.; Small, M.J.

    1992-01-01

    In no other area is the need for effective analysis of uncertainty more evident than in the problem of evaluating the consequences of increasing atmospheric concentrations of radiatively active gases. The major consequence of concern is global warming, with related environmental effects that include changes in local patterns of precipitation, soil moisture, forest and agricultural productivity, and a potential increase in global mean sea level. In order to identify an optimum set of responses to sea level change, a full characterization of the uncertainties associated with the predictions of future sea level rise is essential. The paper addresses the use of data for identifying and characterizing uncertainties in model parameters and predictions. The Bayesian Monte Carlo method is formally presented and elaborated, and applied to the analysis of the uncertainty in a predictive model for global mean sea level change.
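
    Bayesian Monte Carlo reduces to three steps: draw parameter sets from the prior, weight each run by the likelihood of the historical observations, and use the weights to form posterior estimates and predictions. The toy polynomial "sea-level model", the observation noise, and all numbers below are hypothetical stand-ins for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        years_obs = np.arange(1900, 2001, 10)
        obs = 0.0018 * (years_obs - 1900) + rng.normal(0, 0.01, years_obs.size)  # m

        def model(rate, accel, years):
            t = years - 1900
            return rate * t + accel * t**2

        # Step 1: prior samples for the uncertain parameters
        rate = rng.uniform(0.0, 0.005, 100_000)      # m / yr
        accel = rng.uniform(-1e-6, 1e-5, 100_000)    # m / yr^2

        # Step 2: likelihood weights from the fit to historical data
        resid = obs[None, :] - model(rate[:, None], accel[:, None], years_obs[None, :])
        loglik = -0.5 * np.sum((resid / 0.01) ** 2, axis=1)
        w = np.exp(loglik - loglik.max())
        w /= w.sum()

        # Step 3: posterior-weighted prediction of the rise at 2100
        pred = model(rate, accel, np.array([2100.0])).ravel()
        mean = np.sum(w * pred)
        sd = np.sqrt(np.sum(w * (pred - mean) ** 2))
        print(f"2100 rise: {mean:.2f} +/- {sd:.2f} m")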

  12. Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods

    CERN Document Server

    Ferraioli, Luigi; Plagnol, Eric

    2012-01-01

    We present a parameter estimation procedure based on a Bayesian framework, applying a Markov chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space-based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher information matrix. The algorithm proposing jumps in the eigen-space of the Fisher information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...

  13. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing, in which a radionuclide is represented as a decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval discriminator on the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS either is or is not identified as the target radionuclide; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
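
    The decision step described above is a classical Wald sequential probability ratio test: each new event updates a log-likelihood ratio until it crosses one of two thresholds. A minimal sketch follows, with exponential interarrival models standing in for the patent's full physics-based channel processing; the rates and error probabilities are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        lam_target, lam_bg = 5.0, 2.0       # event rates under the two hypotheses
        alpha, beta = 0.01, 0.01            # tolerated error probabilities
        A = np.log((1 - beta) / alpha)      # Wald's upper threshold
        B = np.log(beta / (1 - alpha))      # Wald's lower threshold

        def loglr(dt):
            # log-likelihood ratio of one interarrival time under Exp(lam) models
            return (np.log(lam_target) - lam_target * dt) - (np.log(lam_bg) - lam_bg * dt)

        llr, n = 0.0, 0
        while B < llr < A:                  # keep observing until a threshold is hit
            dt = rng.exponential(1 / lam_target)   # simulate events from the target
            llr += loglr(dt)
            n += 1

        print("decision after", n, "events:",
              "target identified" if llr >= A else "not the target")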

  14. Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring

    Science.gov (United States)

    Huff, Daniel W.

    Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real time

  15. Liver segmentation in MRI: A fully automatic method based on stochastic partitions.

    Science.gov (United States)

    López-Mir, F; Naranjo, V; Angulo, J; Alcañiz, M; Luna, L

    2014-04-01

    There are few fully automated methods for liver segmentation in magnetic resonance images (MRI), despite the benefits of this type of acquisition in comparison to other radiology techniques such as computed tomography (CT). Motivated by these medical requirements, we have carried out liver segmentation in MRI. For this purpose, we present a new method for liver segmentation based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced by using a marker-controlled algorithm. To improve the accuracy of the selected contours, the gradient of the original image is enhanced by applying a new variant of the stochastic watershed. Moreover, a final classification step is applied in order to obtain the final liver mask. The optimal parameters of the method are tuned using a training dataset and are then applied to the remaining studies (17 datasets). The results obtained (a Jaccard coefficient of 0.91 ± 0.02), in comparison to other methods, demonstrate that the new variant of the stochastic watershed is a robust tool for automatic segmentation of the liver in MRI.
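
    The pipeline the paper builds on, a gradient image flooded from markers, is easy to show in miniature with scikit-image. The sketch below segments a synthetic bright region; the image, the intensity-based marker rules, and the thresholds are stand-ins, not the MRI method or its stochastic-watershed gradient enhancement.

        import numpy as np
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        rng = np.random.default_rng(0)
        img = np.zeros((128, 128))
        img[32:96, 32:96] = 1.0                  # a bright "organ" on a dark background
        img += rng.normal(0, 0.1, img.shape)

        gradient = sobel(img)                    # edges guide the flooding

        # Markers: confident background (1) and confident foreground (2) seeds;
        # seeding suppresses the classical watershed over-segmentation.
        markers = np.zeros_like(img, dtype=int)
        markers[img < 0.2] = 1
        markers[img > 0.8] = 2

        labels = watershed(gradient, markers)
        mask = labels == 2
        print("segmented area:", int(mask.sum()), "pixels")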

  16. Bayesian Methods for Analyzing Structural Equation Models with Covariates, Interaction, and Quadratic Latent Variables

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng

    2007-01-01

    The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same optimal statistical properties as a…

  17. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Full Text Available Abstract Background We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationships among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.
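
    The ranking statistic used above, the marginal posterior inclusion probability of each marker, can be computed exactly for a small predictor set by enumerating all 2^p candidate models rather than sampling them. The sketch below does this under a Zellner g-prior with g = n; the simulated "marker" data and the prior choices are illustrative assumptions, not the workshop data or the SSVS sampler itself.

        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 120, 8
        X = rng.normal(size=(n, p))          # stand-in "marker" predictors
        y = 0.8 * X[:, 2] + 0.5 * X[:, 5] + rng.normal(size=n)
        y = y - y.mean()
        g = float(n)

        def log_bf(cols):
            # Bayes factor of model {cols} against the null (g-prior, Liang et al. form)
            if not cols:
                return 0.0
            Xm = X[:, cols]
            coef, *_ = np.linalg.lstsq(Xm, y, rcond=None)
            r2 = 1 - np.sum((y - Xm @ coef) ** 2) / np.sum(y**2)
            k = len(cols)
            return 0.5 * (n - 1 - k) * np.log1p(g) - 0.5 * (n - 1) * np.log1p(g * (1 - r2))

        models = [list(c) for r in range(p + 1) for c in itertools.combinations(range(p), r)]
        logp = np.array([log_bf(m) for m in models])     # uniform prior over models
        prob = np.exp(logp - logp.max())
        prob /= prob.sum()

        incl = np.zeros(p)
        for pm, m in zip(prob, models):
            incl[m] += pm        # accumulate mass of every model containing each marker
        print("posterior inclusion probabilities:", np.round(incl, 3))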

  18. A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.

    Directory of Open Access Journals (Sweden)

    Kai Zhang

    Full Text Available In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid method, for history matching in reservoir models. History matching is one of the processes of solving an inverse problem by calibrating reservoir models to dynamic behaviour of the reservoir in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application using a combination of the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating the objective function, the approximate gradient of which can guarantee convergence. At each iteration step, we obtain the relatively 'important' elements of the gradient, which are subsequently substituted by the values from the Finite Difference method through comparing the magnitude of the components of the stochastic gradient, which forms a new gradient, and we subsequently iterate with the new gradient. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number numerical simulations in this paper that show that the method is accurate and computationally efficient.

  19. A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.

    Science.gov (United States)

    Zhang, Kai; Wang, Zengfei; Zhang, Liming; Yao, Jun; Yan, Xia

    2015-01-01

    In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid) method, for history matching in reservoir models. History matching is the process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application using a combination of the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively 'important' elements of the gradient by comparing the magnitudes of the components of the stochastic gradient; these elements are then substituted by values from the finite difference method, which forms a new gradient, and we iterate with this new gradient. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations in this paper showing that the method is accurate and computationally efficient.
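
    A minimal sketch of the hybrid gradient idea: estimate the full gradient cheaply with a simultaneous-perturbation (SPSA-style) stochastic estimate, recompute only its largest-magnitude components by central finite differences, and take a descent step with the corrected gradient. The quadratic toy misfit and all tuning constants are assumptions standing in for the reservoir history-matching objective.

        import numpy as np

        rng = np.random.default_rng(0)
        target = rng.normal(size=20)
        f = lambda x: 0.5 * np.sum((x - target) ** 2)   # toy "misfit" objective

        def spsa_grad(x, c=1e-2):
            # one simultaneous random perturbation estimates every component at once
            delta = rng.choice([-1.0, 1.0], size=x.size)
            return (f(x + c * delta) - f(x - c * delta)) / (2 * c) * (1.0 / delta)

        def fd_component(x, j, h=1e-4):
            e = np.zeros_like(x)
            e[j] = h
            return (f(x + e) - f(x - e)) / (2 * h)      # accurate central difference

        x = np.zeros(20)
        for _ in range(200):
            grad = spsa_grad(x)
            # replace the few 'important' (largest-magnitude) components by FD values
            for j in np.argsort(-np.abs(grad))[:3]:
                grad[j] = fd_component(x, j)
            x -= 0.1 * grad
        print("final misfit:", f(x))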

  20. Estimation of octanol/water partition coefficient and aqueous solubility of environmental chemicals using molecular fingerprints and machine learning methods

    Science.gov (United States)

    Octanol/water partition coefficient (logP) and aqueous solubility (logS) are two important parameters in pharmacology and toxicology studies, and experimental measurements are usually time-consuming and expensive. In the present research, novel methods are presented for the estim...

  1. Experimental Method Development for Estimating Solid-phase Diffusion Coefficients and Material/Air Partition Coefficients of SVOCs

    Science.gov (United States)

    The solid-phase diffusion coefficient (Dm) and material-air partition coefficient (Kma) are key parameters for characterizing the sources and transport of semivolatile organic compounds (SVOCs) in the indoor environment. In this work, a new experimental method was developed to es...

  2. Self-Organizing Genetic Algorithm Based Method for Constructing Bayesian Networks from Databases

    Institute of Scientific and Technical Information of China (English)

    郑建军; 刘玉树; 陈立潮

    2003-01-01

    The typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of the others, and the learning of BN structures by general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. This method makes use of a self-organizing mechanism to develop a genetic algorithm that extends the crossover operator from one to two, provides mutual competition between them, and even adjusts the number of parents in recombination (crossover/recomposition) schemes. With the K2 algorithm, this method also optimizes the genetic operators and adequately utilizes domain knowledge. As a result, the method is able to find a global optimum of the BN topology, avoiding premature convergence to a local extremum. Experimental results are presented, and the convergence of the SGA is discussed.

  3. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical spatial trends of PM concentration based on landuse regression, (b) the spatiotemporal dependence among PM observations, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.

  4. A generalized Bayesian inference method for constraining the interiors of super Earths and sub-Neptunes

    Science.gov (United States)

    Dorn, Caroline; Venturini, Julia; Khan, Amir; Heng, Kevin; Alibert, Yann; Helled, Ravit; Rivoldini, Attilio; Benz, Willy

    2017-01-01

    Aims: We aim to present a generalized Bayesian inference method for constraining interiors of super Earths and sub-Neptunes. Our methodology succeeds in quantifying the degeneracy and correlation of structural parameters for high dimensional parameter spaces. Specifically, we identify what constraints can be placed on composition and thickness of core, mantle, ice, ocean, and atmospheric layers given observations of mass, radius, and bulk refractory abundance constraints (Fe, Mg, Si) from observations of the host star's photospheric composition. Methods: We employed a full probabilistic Bayesian inference analysis that formally accounts for observational and model uncertainties. Using a Markov chain Monte Carlo technique, we computed joint and marginal posterior probability distributions for all structural parameters of interest. We included state-of-the-art structural models based on self-consistent thermodynamics of core, mantle, high-pressure ice, and liquid water. Furthermore, we tested and compared two different atmospheric models that are tailored for modeling thick and thin atmospheres, respectively. Results: First, we validate our method against Neptune. Second, we apply it to synthetic exoplanets of fixed mass and determine the effect on interior structure and composition when (1) radius; (2) atmospheric model; (3) data uncertainties; (4) semi-major axes; (5) atmospheric composition (i.e., a priori assumption of enriched envelopes versus pure H/He envelopes); and (6) prior distributions are varied. Conclusions: Our main conclusions are: (1) given available data, the range of possible interior structures is large; quantification of the degeneracy of possible interiors is therefore indispensable for meaningful planet characterization. (2) Our method predicts models that agree with independent estimates of Neptune's interior. (3) Increasing the precision in mass and radius leads to much improved constraints on ice mass fraction, size of rocky interior, but

  5. DRABAL: novel method to mine large high-throughput screening assays using Bayesian active learning

    KAUST Repository

    Soufan, Othman

    2016-11-10

    Background Mining high-throughput screening (HTS) assays is key for enhancing decisions in the area of drug repositioning and drug discovery. However, many challenges are encountered in the process of developing suitable and accurate methods for extracting useful information from these assays. Virtual screening and the wide variety of databases, methods and solutions proposed to date did not completely overcome these challenges. This study is based on a multi-label classification (MLC) technique for modeling correlations between several HTS assays, meaning that a single prediction represents a subset of assigned correlated labels instead of one label. Thus, the devised method provides an increased probability for more accurate predictions of compounds that were not tested in particular assays. Results Here we present DRABAL, a novel MLC solution that incorporates structure learning of a Bayesian network as a step to model dependency between the HTS assays. In this study, DRABAL was used to process more than 1.4 million interactions of over 400,000 compounds and analyze the existing relationships between five large HTS assays from the PubChem BioAssay Database. Compared to different MLC methods, DRABAL significantly improves the F1Score by about 22%, on average. We further illustrated the usefulness and utility of DRABAL by screening FDA approved drugs and reporting ones that have a high probability to interact with several targets, thus enabling drug-multi-target repositioning. Specifically, DRABAL suggests the Thiabendazole drug as a common activator of the NCP1 and Rab-9A proteins, both of which are designed to identify treatment modalities for the Niemann–Pick type C disease. Conclusion We developed a novel MLC solution based on a Bayesian active learning framework to overcome the challenge of lacking fully labeled training data and exploit actual dependencies between the HTS assays. The solution is motivated by the need to model dependencies between existing

  6. Bayesian zero-failure reliability modeling and assessment method for multiple numerical control (NC) machine tools

    Institute of Scientific and Technical Information of China (English)

    阚英男; 杨兆军; 李国发; 何佳龙; 王彦鹍; 李洪洲

    2016-01-01

    A new problem that classical statistical methods are incapable of solving is reliability modeling and assessment when multiple numerical control machine tools (NCMTs) reveal zero failures after a reliability test. Thus, the zero-failure data form and corresponding Bayesian model are developed to solve the zero-failure problem of NCMTs, for which no previous suitable statistical model has been developed. An expert-judgment process that incorporates prior information is presented to solve the difficulty in obtaining reliable prior distributions of Weibull parameters. The equations for the posterior distribution of the parameter vector and the Markov chain Monte Carlo (MCMC) algorithm are derived to solve the difficulty of calculating high-dimensional integration and to obtain parameter estimators. The proposed method is applied to a real case; a corresponding programming code and trick are developed to implement an MCMC simulation in WinBUGS, and a mean time between failures (MTBF) of 1057.9 h is obtained. Given its ability to combine expert judgment, prior information, and data, the proposed reliability modeling and assessment method under the zero failure of NCMTs is validated.

  7. Using Bayesian methods to predict climate impacts on groundwater availability and agricultural production in Punjab, India

    Science.gov (United States)

    Russo, T. A.; Devineni, N.; Lall, U.

    2015-12-01

    The lasting success of the Green Revolution in Punjab, India, relies on the continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; therefore, we use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state and the resolution of the input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.

  8. A multiple criteria-based spectral partitioning method for remotely sensed hyperspectral image classification

    Science.gov (United States)

    Liu, Yi; Li, Jun; Plaza, Antonio; Sun, Yanli

    2016-10-01

    Hyperspectral remote sensing offers a powerful tool in many different application contexts. The imbalance between the high dimensionality of the data and the limited availability of training samples calls for the need to perform dimensionality reduction in practice. Among traditional dimensionality reduction techniques, feature extraction is one of the most widely used approaches due to its flexibility to transform the original spectral information into a subspace. In turn, band selection is important when the application requires preserving the original spectral information (especially the physically meaningful information) for the interpretation of the hyperspectral scene. In the case of hyperspectral image classification, both techniques need to discard most of the original features/bands in order to perform the classification using a feature set with much lower dimensionality. However, the discriminative information that allows a classifier to provide good performance is usually class-dependent, and the relevant information may live in weak features/bands that are usually discarded or lost through subspace transformation or band selection. As a result, in practice, it is challenging to use either feature extraction or band selection for classification purposes. Relevant lines of attack to address this problem have focused on multiple feature selection aiming at a suitable fusion of diverse features in order to provide relevant information to the classifier. In this paper, we present a new dimensionality reduction technique, called multiple criteria-based spectral partitioning, which is embedded in an ensemble learning framework to perform advanced hyperspectral image classification. Driven by the use of multiple band-priority criteria derived from classic band selection techniques, we obtain multiple spectral partitions from the original hyperspectral data that correspond to several band subgroups with much lower spectral dimensionality as compared with

  9. Numerical modeling of undersea acoustics using a partition of unity method with plane waves enrichment

    Science.gov (United States)

    Hospital-Bravo, Raúl; Sarrate, Josep; Díez, Pedro

    2016-05-01

    A new 2D numerical model to predict underwater acoustic propagation is obtained by exploring the potential of the Partition of Unity Method (PUM) enriched with plane waves. The aim of the work is to obtain sound pressure level distributions when multiple operational noise sources are present, in order to assess the acoustic impact on the marine fauna. The model takes advantage of the suitability of the PUM for solving the Helmholtz equation, especially for the practical case of large domains and medium frequencies. The seawater acoustic absorption and the acoustic reflectance of the sea surface and sea bottom are explicitly considered, and perfectly matched layers (PML) are placed at the lateral artificial boundaries to avoid spurious reflections. The model includes semi-analytical integration rules which are adapted to highly oscillatory integrands with the aim of reducing the computational cost of the integration step. In addition, we develop a novel strategy to mitigate the ill-conditioning of the elemental and global system matrices. Specifically, we compute a low-rank approximation of the local space of solutions, which in turn reduces the number of degrees of freedom, the CPU time and the memory footprint. Numerical examples are presented to illustrate the capabilities of the model and to assess its accuracy.

  10. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    Science.gov (United States)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults of PV panels. The PV array is divided into 16x16 blocks, which are numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used for calculating the fault weight factor. A fault threshold is defined to determine faults, and shading is considered in order to reduce the probability of misjudgment. A fault diagnosis system is designed and implemented with LabVIEW, providing functions that include real-time data display, online checking, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and that the system works well. The validity and feasibility of the system are thus verified. The developed system will benefit the maintenance and management of large-scale PV arrays.
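
    The block-screening step lends itself to a very small sketch: compare each block's mean current with the array-wide mean to form a fault weight factor, then flag blocks whose factor exceeds the threshold. The 16x16 layout follows the record; the current values, the weight definition, and the threshold are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        I = rng.normal(8.0, 0.2, size=(16, 16))   # block mean currents (A)
        I[4, 11] = 5.1                            # an injected faulty block

        weight = (I.mean() - I) / I.mean()        # relative shortfall per block
        THRESH = 0.15                             # fault threshold; shading would
        faults = np.argwhere(weight > THRESH)     # call for a larger margin
        print("suspected faulty blocks (row, col):", faults.tolist())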

  11. Bias in diet determination: incorporating traditional methods in Bayesian mixing models.

    Science.gov (United States)

    Franco-Trecu, Valentina; Drago, Massimiliano; Riet-Sapriza, Federico G; Parnell, Andrew; Frau, Rosina; Inchausti, Pablo

    2013-01-01

    There are no "universal methods" to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e. proportional diet composition) that may improve the precision of the estimated dietary composition. However, few studies have assessed the performance of traditional methods and of SIMMs with and without informative priors in studying predators' diets. Here we compare the diet compositions of the South American fur seal and sea lions obtained by scat analysis and by SIMMs-UP (uninformative priors), and assess whether informative priors (SIMMs-IP) from the scat analysis improved the estimated diet composition compared to SIMMs-UP. According to the SIMM-UP, pelagic species dominated the fur seal's diet, whereas the sea lion's diet showed no clear dominance of any prey. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced SIMMs' estimates, incorporating informative priors improved the precision of the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow prey contributions to diets to be discriminated, informative priors should lead to more precise yet unbiased estimates of diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMMs-IP. The best approach to obtain a near-complete view of predators' diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of the natural history of the predator species, so as to reliably ascertain and

  12. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    Science.gov (United States)

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.

  13. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2016-01-01

    Full Text Available This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.
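
    The common core of these change-point models can be shown in a few lines for a univariate series with known noise level: score every candidate boundary by the marginal likelihood of "one mean before t, another after t" and normalize over t. The simulated series, the Gaussian mean prior, and the known-variance assumption are illustrative simplifications of what the fMRI models do.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0.0, 1, 60), rng.normal(1.5, 1, 40)])
        n = len(x)

        def seg_logml(seg, sigma=1.0, tau=10.0):
            # marginal log-likelihood of one segment, mean ~ N(0, tau^2) integrated out
            m, s = len(seg), seg.sum()
            v = 1.0 / (m / sigma**2 + 1.0 / tau**2)   # posterior variance of the mean
            return (-0.5 * m * np.log(2 * np.pi * sigma**2)
                    - 0.5 * np.sum(seg**2) / sigma**2
                    + 0.5 * s**2 * v / sigma**4
                    + 0.5 * np.log(v / tau**2))

        ts = range(5, n - 5)                          # candidate boundaries
        logpost = np.array([seg_logml(x[:t]) + seg_logml(x[t:]) for t in ts])
        post = np.exp(logpost - logpost.max())
        post /= post.sum()
        print("MAP change point:", 5 + int(np.argmax(post)))   # true boundary is 60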

  14. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters. I. Statistical and Computational Methods

    Science.gov (United States)

    Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.

    2016-07-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).

  15. Introduction to the Restoration of Astrophysical Images by Multiscale Transforms and Bayesian Methods

    Science.gov (United States)

    Bijaoui, A.

    2013-03-01

    Image restoration is today an important part of astrophysical data analysis. Denoising and deblurring can be performed efficiently using multiscale transforms. The multiresolution analysis constitutes the fundamental pillar of these transforms. The discrete wavelet transform is introduced from the theory of approximation by translated functions. The continuous wavelet transform generalizes multiscale representations from translated and dilated wavelets. The à trous algorithm furnishes its discrete redundant transform. Image denoising is first considered without any hypothesis on the signal distribution, on the basis of a contrario detection. Different softening functions are introduced. The introduction of a regularization constraint may improve the results. The application of Bayesian methods leads to an automated adaptation of the softening function to the signal distribution. The MAP principle leads to basis pursuit, a sparse decomposition on redundant dictionaries. Nevertheless, the posterior expectation minimizes the quadratic error scale by scale. The proposed deconvolution algorithm is based on a coupling of wavelet denoising with an iterative inversion algorithm. The different methods are illustrated by numerical experiments on a simulated image similar to images of the deep sky. Stationary white Gaussian noise was added at three levels. In the conclusion, several important related problems are tackled.
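
    The "transform, soften, invert" recipe is compact enough to sketch. The version below uses an orthogonal discrete wavelet transform from PyWavelets with a plain k-sigma soft threshold, rather than the redundant à trous transform and the Bayesian softening functions discussed above; the toy image and the assumed known noise level are illustrative.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        img = np.zeros((64, 64))
        img[24:40, 24:40] = 4.0                     # a toy "source"
        noisy = img + rng.normal(0, 1.0, img.shape)

        coeffs = pywt.wavedec2(noisy, "db2", level=3)
        sigma = 1.0                                 # assumed known noise level
        thr = 3 * sigma                             # a simple k-sigma rule

        den = [coeffs[0]] + [
            tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        restored = pywt.waverec2(den, "db2")[:64, :64]
        print("residual rms:", float(np.sqrt(np.mean((restored - img) ** 2))))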

  16. The Method of Oilfield Development Risk Forecasting and Early Warning Using Revised Bayesian Network

    Directory of Open Access Journals (Sweden)

    Yihua Zhong

    2016-01-01

    Full Text Available Oilfield development aiming at crude oil production is an extremely complex process that involves many uncertain risk factors affecting oil output. Thus, risk forecasting and early warning for oilfield development can ensure that oilfields are operated and managed efficiently, so as to meet the national oil production plan and the sustainable development of oilfields. However, scholars and practitioners worldwide have seldom been concerned with the risk problems of oilfield block development. An early-warning index system for block development, which includes monitoring indexes and planning indexes, was refined and formulated on the basis of researching and analyzing the theory of risk forecasting and early warning as well as oilfield development. Based on the warning-situation indexes predicted by a neural network, a method for dividing the intervals of warning degrees using the "3σ" rule is presented, and a new method for risk forecasting and early warning is proposed by introducing the neural network into Bayesian networks. A case study shows that the results obtained in this paper are correct and helpful for the management of oilfield development risk.

  17. A Bayesian Chance-Constrained Method for Hydraulic Barrier Design Under Model Structure Uncertainty

    Science.gov (United States)

    Chitsazan, N.; Pham, H. V.; Tsai, F. T. C.

    2014-12-01

    The groundwater community has widely recognized model structure uncertainty as the major source of model uncertainty in groundwater modeling. Previous studies in aquifer remediation design, however, rarely discuss the impact of model structure uncertainty. This study combines chance-constrained (CC) programming with Bayesian model averaging (BMA) in a BMA-CC framework to assess the effect of model structure uncertainty on remediation design. To investigate the impact of model structure uncertainty on the remediation design, we compare the BMA-CC method with traditional CC programming, which considers only model parameter uncertainty. The BMA-CC method is employed to design a hydraulic barrier to protect public supply wells of the Government St. pump station from saltwater intrusion in the "1,500-foot" sand and the "1,700-foot" sand of the Baton Rouge area, southeastern Louisiana. To address the model structure uncertainty, we develop three conceptual groundwater models based on three different hydrostratigraphic structures. The results show that using traditional CC programming overestimates design reliability. The results also show that at least five additional connector wells are needed to achieve a design reliability level of more than 90%. The total amount of injected water from the connector wells is higher than the total pumpage of the protected public supply wells. While the injection rate can be reduced by lowering the reliability level, the study finds that the hydraulic barrier design to protect the Government St. pump station is not economically attractive.
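
    The contrast drawn above, a chance constraint evaluated under one model versus under a model average, fits in a few lines: pool the constraint-satisfaction probabilities from the competing conceptual models with their BMA weights and accept the design only if the averaged reliability clears the target. All numbers below are hypothetical; the three "models" stand in for the alternative hydrostratigraphic structures.

        import numpy as np

        rng = np.random.default_rng(0)
        weights = np.array([0.5, 0.3, 0.2])    # BMA posterior model weights (assumed)
        # Simulated performance metric at the protected wells under each model
        preds = [rng.normal(mu, sd, 5000)
                 for mu, sd in [(1.2, 0.4), (0.9, 0.5), (1.6, 0.3)]]

        LIMIT = 0.8                            # the design must keep the metric above this
        p_ok = np.array([np.mean(p > LIMIT) for p in preds])

        p_bma = float(np.sum(weights * p_ok))  # model-averaged reliability
        p_single = float(p_ok[0])              # "best model only", as in traditional CC
        print(f"reliability, single model: {p_single:.2f}  BMA: {p_bma:.2f}")
        print("design accepted at the 90% level?", p_bma >= 0.90)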

  18. Overlapping community detection in weighted networks via a Bayesian approach

    Science.gov (United States)

    Chen, Yi; Wang, Xiaolong; Xiang, Xin; Tang, Buzhou; Chen, Qingcai; Fan, Shixi; Bu, Junzhao

    2017-02-01

    Complex networks, as a powerful way to represent complex systems, have been widely studied during the past several years. One of the most important tasks of complex network analysis is to detect communities embedded in networks. In the real world, weighted networks are very common and may contain overlapping communities where a node is allowed to belong to multiple communities. In this paper, we propose a novel Bayesian approach, called the Bayesian mixture network (BMN) model, to detect overlapping communities in weighted networks. The advantages of our method are (i) providing soft-partition solutions in weighted networks; (ii) providing soft memberships, which quantify 'how strongly' a node belongs to a community. Experiments on a large number of real and synthetic networks show that our model is able to detect overlapping communities in weighted networks and is competitive with other state-of-the-art models in shedding light on community partitioning.

  19. Bayesian network reconstruction using systems genetics data: comparison of MCMC methods.

    Science.gov (United States)

    Tasaki, Shinya; Sauerwine, Ben; Hoff, Bruce; Toyoshiba, Hiroyoshi; Gaiteri, Chris; Chaibub Neto, Elias

    2015-04-01

    Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis-Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data.

  20. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee's book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as well...

  1. Bayesian approach to inverse statistical mechanics.

    Science.gov (United States)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
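
    The first of the listed applications, estimating a temperature, can be worked exactly for a toy system because the partition function of a short Ising chain is enumerable. The sketch below computes a grid posterior over the inverse temperature from simulated configurations; the chain length, sample size, and flat prior are arbitrary choices, and the grid replaces the article's sequential Monte Carlo machinery.

        import itertools
        import numpy as np

        N = 10
        states = np.array(list(itertools.product([-1, 1], repeat=N)))
        E = -np.sum(states[:, :-1] * states[:, 1:], axis=1)    # 1D chain energies

        rng = np.random.default_rng(0)
        beta_true = 0.7
        p = np.exp(-beta_true * E)
        p /= p.sum()
        data = states[rng.choice(len(states), size=50, p=p)]   # "observed" ensemble

        betas = np.linspace(0.05, 2.0, 300)
        logZ = np.array([np.log(np.sum(np.exp(-b * E))) for b in betas])  # exact Z
        E_data = -np.sum(data[:, :-1] * data[:, 1:], axis=1)
        loglik = -betas * E_data.sum() - len(data) * logZ      # log p(data | beta)

        post = np.exp(loglik - loglik.max())
        post /= post.sum()
        print("posterior mean beta:", round(float(np.sum(post * betas)), 3), "(true 0.7)")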

  2. A novel Bayesian DNA motif comparison method for clustering and retrieval.

    Directory of Open Access Journals (Sweden)

    Naomi Habib

    2008-02-01

    Full Text Available Characterizing the DNA-binding specificities of transcription factors is a key problem in computational biology that has been addressed by multiple algorithms. These usually take as input sequences that are putatively bound by the same factor and output one or more DNA motifs. A common practice is to apply several such algorithms simultaneously to improve coverage at the price of redundancy. In interpreting such results, two tasks are crucial: clustering of redundant motifs, and attributing the motifs to transcription factors by retrieval of similar motifs from previously characterized motif libraries. Both tasks inherently involve motif comparison. Here we present a novel method for comparing and merging motifs, based on Bayesian probabilistic principles. This method takes into account both the similarity in positional nucleotide distributions of the two motifs and their dissimilarity to the background distribution. We demonstrate the use of the new comparison method as a basis for motif clustering and retrieval procedures, and compare it to several commonly used alternatives. Our results show that the new method outperforms other available methods in accuracy and sensitivity. We incorporated the resulting motif clustering and retrieval procedures in a large-scale automated pipeline for analyzing DNA motifs. This pipeline integrates the results of various DNA motif discovery algorithms and automatically merges redundant motifs from multiple training sets into a coherent annotated library of motifs. Application of this pipeline to recent genome-wide transcription factor location data in S. cerevisiae successfully identified DNA motifs in a manner that is as good as semi-automated analysis reported in the literature. Moreover, we show how this analysis elucidates the mechanisms of condition-specific preferences of transcription factors.
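
    Any such comparison bottoms out in a column-by-column score between two position weight matrices scanned over relative offsets. The sketch below uses a plain Jensen-Shannon column divergence as a baseline version of that idea; the Bayesian score above, which additionally discounts similarity that the background distribution already explains, is not reproduced here, and the toy matrices are made up.

        import numpy as np

        def jsd(p, q):
            # Jensen-Shannon divergence between two nucleotide distributions
            m = 0.5 * (p + q)
            kl = lambda a, b: np.sum(a * np.log2(a / b))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        def motif_distance(A, B, min_overlap=3):
            # A, B: (length, 4) PWMs; slide B over A, keep the best mean column score
            best = np.inf
            for off in range(-(len(B) - 1), len(A)):
                cols = [(i, i - off) for i in range(len(A)) if 0 <= i - off < len(B)]
                if len(cols) >= min_overlap:
                    best = min(best, np.mean([jsd(A[i], B[j]) for i, j in cols]))
            return best

        A = np.array([[.70, .10, .10, .10],
                      [.10, .70, .10, .10],
                      [.10, .10, .70, .10],
                      [.25, .25, .25, .25]])
        B = np.array([[.65, .15, .10, .10],
                      [.10, .70, .10, .10],
                      [.10, .10, .65, .15]])
        print("distance:", round(float(motif_distance(A, B)), 4))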

  3. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data.

    Science.gov (United States)

    Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G

    2015-11-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm(-1) were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from
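
    The comparison idea can be sketched with scikit-learn on synthetic data (the study itself used WinISI for PLS/MPLS and the R package BGLR for the Bayesian models; everything below is illustrative):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import BayesianRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n, p = 500, 300                                  # samples x "wavenumbers"
    X = rng.standard_normal((n, p)).cumsum(axis=1)   # smooth, correlated spectra
    beta = np.zeros(p)
    beta[[40, 120, 250]] = [0.8, -0.5, 0.3]          # a few informative bands
    y = X @ beta + 0.5 * rng.standard_normal(n)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

    for name, model in [("PLS (10 comps)", PLSRegression(n_components=10)),
                        ("Bayesian ridge", BayesianRidge())]:
        model.fit(Xtr, ytr)
        pred = np.ravel(model.predict(Xte))
        print(name, "r =", np.corrcoef(pred, yte)[0, 1].round(3))
    ```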

  4. Uncertainty analysis of strain modal parameters by Bayesian method using frequency response function

    Institute of Scientific and Technical Information of China (English)

    Xu Li; Yi Weijian; Zhihua Yi

    2007-01-01

    Structural strain modes are able to detect changes in local structural performance, but errors are inevitably intermixed in the measured data. In this paper, strain modal parameters are considered as random variables, and their uncertainty is analyzed by a Bayesian method based on the structural frequency response function (FRF). The estimates of strain modal parameters with maximal posterior probability are determined. Several independent measurements of the FRF of a four-story reinforced concrete frame structural model were performed in the laboratory. The ability to identify the stiffness change in a concrete column using the strain mode was verified. It is shown that the uncertainty of the natural frequency is very small. Compared with the displacement mode shape, the variations of strain mode shapes at each point are quite different. The damping ratios are more affected by the types of test systems. Except for the case where a high order strain mode does not identify local damage, the first order strain mode can provide an exact indication of the damage location.

  5. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    Science.gov (United States)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the origin of atmospheric events at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals can be taken exactly and represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.

  6. Alternative method of highway traffic safety analysis for developing countries using delphi technique and Bayesian network.

    Science.gov (United States)

    Mbakwe, Anthony C; Saka, Anthony A; Choi, Keechoo; Lee, Young-Jae

    2016-08-01

    Highway traffic accidents all over the world result in more than 1.3 million fatalities annually. An alarming number of these fatalities occur in developing countries. There are many risk factors that are associated with frequent accidents, heavy loss of lives, and property damage in developing countries. Unfortunately, poor record-keeping practices are a very difficult obstacle to overcome in striving to obtain near-accurate casualty and safety data. In light of the fact that there are numerous accident causes, any attempt to curb the escalating death and injury rates in developing countries must include the identification of the primary accident causes. This paper, therefore, seeks to show that the Delphi Technique is a suitable alternative method that can be exploited in generating highway traffic accident data through which the major accident causes can be identified. In order to authenticate the technique used, Korea, a country that underwent similar problems when it was in its early stages of development, and which in addition has excellent highway safety records in its database, is chosen and utilized for this purpose. Validation of the methodology confirms the technique is suitable for application in developing countries. Furthermore, the Delphi Technique, in combination with the Bayesian Network Model, is utilized in modeling highway traffic accidents and forecasting accident rates in the countries of research.
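
    A toy illustration of the Bayesian network side of such a workflow, assuming the pgmpy library (node names and probabilities are invented; in the paper's setting the conditional probability tables would be filled from Delphi-elicited expert data):

    ```python
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    model = BayesianNetwork([("RoadCondition", "Accident"),
                             ("DriverError", "Accident")])
    cpd_road = TabularCPD("RoadCondition", 2, [[0.7], [0.3]])   # good/poor
    cpd_driver = TabularCPD("DriverError", 2, [[0.8], [0.2]])   # no/yes
    cpd_acc = TabularCPD("Accident", 2,
                         # P(accident | road, driver); columns run over parent
                         # state combinations, last parent varying fastest
                         [[0.99, 0.90, 0.85, 0.60],   # no accident
                          [0.01, 0.10, 0.15, 0.40]],  # accident
                         evidence=["RoadCondition", "DriverError"],
                         evidence_card=[2, 2])
    model.add_cpds(cpd_road, cpd_driver, cpd_acc)

    infer = VariableElimination(model)
    print(infer.query(["Accident"], evidence={"RoadCondition": 1}))
    ```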

  7. A Laplace method for under-determined Bayesian optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-12-17

    In Long et al. (2013), a new method based on the Laplace approximation was developed to accelerate the estimation of the post-experimental expected information gains (Kullback–Leibler divergence) in model parameters and predictive quantities of interest in the Bayesian framework. A closed-form asymptotic approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general case where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the Jacobian matrix of the data model with respect to the parameters, so that the information gain can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the posterior covariance matrix projected over the aforementioned orthogonal directions. To deal with the issue of dimensionality in a complex problem, we use either Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under-determined test cases. They include the designs of the scalar parameter in a one dimensional cubic polynomial function with two unidentifiable parameters forming a linear manifold, and the boundary source locations for impedance tomography in a square domain, where the unknown parameter is the conductivity, which is represented as a random field.
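
    The intuition can be illustrated in the linear-Gaussian special case, where the Laplace approximation is exact and the expected information gain has a closed form; directions in the null space of the Jacobian contribute nothing, which is exactly the under-determined situation the paper treats. A hedged numpy sketch (not the paper's algorithm):

    ```python
    import numpy as np

    def expected_info_gain(G, sigma_noise, Sigma_prior):
        # For y = G @ theta + noise with Gaussian prior and noise, the
        # expected KL divergence from prior to posterior is
        # 0.5*logdet(Sigma_prior) - 0.5*logdet(Sigma_post).
        precision_post = (G.T @ G) / sigma_noise**2 + np.linalg.inv(Sigma_prior)
        Sigma_post = np.linalg.inv(precision_post)
        return 0.5 * (np.linalg.slogdet(Sigma_prior)[1]
                      - np.linalg.slogdet(Sigma_post)[1])

    Sigma_prior = np.eye(3)
    G_full = np.eye(3)                                       # all identifiable
    G_rank2 = np.array([[1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0]])                    # theta_3 unidentified
    print(expected_info_gain(G_full, 0.1, Sigma_prior))      # larger gain
    print(expected_info_gain(G_rank2, 0.1, Sigma_prior))     # gain in 2 directions only
    ```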

  8. Modeling attainment of steady state of drug concentration in plasma by means of a Bayesian approach using MCMC methods.

    Science.gov (United States)

    Jordan, Paul; Brunschwig, Hadassa; Luedin, Eric

    2008-01-01

    The approach of Bayesian mixed effects modeling is an appropriate method for estimating both population-specific as well as subject-specific times to steady state. In addition to pure estimation, the approach allows to determine the time until a certain fraction of individuals of a population has reached steady state with a pre-specified certainty. In this paper a mixed effects model for the parameters of a nonlinear pharmacokinetic model is used within a Bayesian framework. Model fitting by means of Markov Chain Monte Carlo methods as implemented in the Gibbs sampler as well as the extraction of estimates and probability statements of interest are described. Finally, the proposed approach is illustrated by application to trough data from a multiple dose clinical trial.
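
    A much-simplified sketch of the ingredients (a single-subject model rather than the paper's mixed-effects model, and random-walk Metropolis rather than Gibbs): posterior draws for an accumulation model of trough concentrations, C_n = Css*(1 - exp(-k*n*tau)), yield a posterior for the time to reach 90% of steady state, t90 = -ln(0.1)/k. All constants are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    tau = 24.0                                 # dosing interval (h)
    n_dose = np.arange(1, 11)
    k_true, css_true, sigma = 0.01, 100.0, 3.0
    y = css_true * (1 - np.exp(-k_true * n_dose * tau)) \
        + sigma * rng.standard_normal(10)

    def log_post(theta):
        k, css = theta
        if k <= 0 or css <= 0:
            return -np.inf                     # flat prior on positive values
        mu = css * (1 - np.exp(-k * n_dose * tau))
        return -0.5 * np.sum((y - mu) ** 2) / sigma**2

    theta, draws = np.array([0.02, 80.0]), []
    for _ in range(20000):
        prop = theta + rng.standard_normal(2) * [0.002, 2.0]
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        draws.append(theta.copy())
    k_draws = np.array(draws)[5000:, 0]
    print("posterior median t90 (h):", np.median(-np.log(0.1) / k_draws))
    ```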

  9. Simple Method to Determine the Partition Coefficient of Naphthenic Acid in Oil/Water

    DEFF Research Database (Denmark)

    Bitsch-Larsen, Anders; Andersen, Simon Ivar

    2008-01-01

    The partition coefficient for technical grade naphthenic acid in water/n-decane at 295 K has been determined (K_wo = 2.1 · 10⁻⁴) using a simple experimental technique with large extraction volumes (0.09 m³ of water). Furthermore, nonequilibrium values at different pH values are presented.
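
    The mass-balance arithmetic behind such a measurement can be sketched in a few lines (numbers below are illustrative, not the paper's): with a small oil volume equilibrated against a large water volume, the acid lost from the oil phase fixes the aqueous concentration.

    ```python
    V_oil, V_water = 1e-4, 0.09                 # m^3
    C_oil_initial, C_oil_final = 50.0, 49.0     # mol/m^3, measured in oil phase
    C_water = (C_oil_initial - C_oil_final) * V_oil / V_water
    K_wo = C_water / C_oil_final                # water/oil partition coefficient
    print(f"K_wo = {K_wo:.2e}")
    ```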

  10. THREE-DIMENSIONAL LOADING VEHICLE ROUTING PROBLEM SOLUTION WITH SET-PARTITIONING-BASED METHOD

    OpenAIRE

    2013-01-01

    The article considers the optimization problem of vehicle routing with three-dimensional loading constraints. Several practical loading constraints encountered in freight transportation are formalized. The efficiency of using the set-partitioning approach to improve heuristic solution is shown by means of computational experiment.
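
    An illustrative brute-force sketch of the set-partitioning idea (real instances use column generation or integer programming): given candidate routes, each a feasible customer subset with a cost, e.g. already checked against the three-dimensional loading constraints, pick the cheapest subset of routes that serves every customer exactly once.

    ```python
    from itertools import combinations

    customers = frozenset(range(5))
    routes = [  # (set of customers served, route cost) -- invented instance
        (frozenset({0, 1}), 7.0), (frozenset({2, 3}), 6.5),
        (frozenset({4}), 3.0), (frozenset({0, 1, 2}), 10.0),
        (frozenset({3, 4}), 5.0), (frozenset({2}), 4.0),
    ]

    best = None
    for r in range(1, len(routes) + 1):
        for combo in combinations(routes, r):
            sets = [s for s, _ in combo]
            union = frozenset.union(*sets)
            if (len(union) == sum(len(s) for s in sets)   # pairwise disjoint
                    and union == customers):               # covers everyone
                cost = sum(c for _, c in combo)
                if best is None or cost < best[0]:
                    best = (cost, combo)
    print(best)
    ```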

  11. PM2: A Partitioning-Mining-Measuring Method for Identifying Progressive Changes in Older Adults’ Sleeping Activity

    Directory of Open Access Journals (Sweden)

    Qiang Lin

    2014-01-01

    Full Text Available As people age, their health typically declines, resulting in difficulty in performing daily activities. Sleep-related problems are common issues with older adults, including shifts in circadian rhythms. A detection method is proposed to identify progressive changes in sleeping activity using a three-step process: partitioning, mining, and measuring. Specifically, the original spatiotemporal representation of each sleeping activity instance was first transformed into a sequence of equal-sized segments, or symbols, via a partitioning process. A data-mining-based algorithm was proposed to find symbols that are not present in all instances of a sleeping activity. Finally, a measuring process was responsible for evaluating the changes in these symbols. Experimental evaluation conducted on a group of datasets of older adults showed that the proposed method is able to identify progressive changes in sleeping activity.
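
    A sketch of the partitioning step under stated assumptions (segment count, bin edges and the activity traces are all invented; the mining step would then look for symbols absent from some instances):

    ```python
    import numpy as np

    def symbolize(trace, n_segments=8, bins=(-0.5, 0.5)):
        # piecewise aggregate approximation: equal-sized segments -> symbols
        segments = np.array_split(np.asarray(trace, float), n_segments)
        means = np.array([s.mean() for s in segments])
        z = (means - means.mean()) / (means.std() + 1e-12)
        return "".join("abc"[i] for i in np.digitize(z, bins))

    night1 = np.sin(np.linspace(0, np.pi, 96))          # one night's activity proxy
    night2 = np.sin(np.linspace(0, np.pi, 96)) ** 3     # a changed pattern
    print(symbolize(night1), symbolize(night2))
    # symbols present in one instance but not the other flag candidate changes
    ```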

  12. Method for Building a Medical Training Simulator with Bayesian Networks: SimDeCS.

    Science.gov (United States)

    Flores, Cecilia Dias; Fonseca, João Marcelo; Bez, Marta Rosecler; Respício, Ana; Coelho, Helder

    2014-01-01

    Distance education has grown in importance with the advent of the internet. An adequate evaluation of students in this mode is still difficult. Distance tests or occasional on-site exams do not meet the needs of evaluating the learning process in distance education. Bayesian networks are adequate for simulating several aspects of clinical reasoning. The possibility of integrating them in distance education student evaluation has not yet been explored much. The present work describes a simulator based on probabilistic networks built to represent knowledge of clinical practice guidelines in Family and Community Medicine. The Bayesian network, the basis of the simulator, was modeled to be playable by the student, to give immediate feedback according to pedagogical strategies adapted to the student's past performance, and to give a broad evaluation of performance at the end of the game. Simulators structured by Bayesian networks may become alternatives in the evaluation of students in Medical Distance Education.

  13. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic. While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data. Emphasizing probability as an alternative to Boolean

  14. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the ``unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
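
    Since coding partitions recover unique decipherability (UD) classwise, a natural executable companion is the classical Sardinas-Patterson test for whether a finite code is UD; a sketch:

    ```python
    def is_uniquely_decipherable(code):
        code = set(code)

        def dangling(a, b):
            # suffixes left when a word of `a` properly prefixes a word of `b`
            return {y[len(x):] for x in a for y in b
                    if y.startswith(x) and len(y) > len(x)}

        s = dangling(code, code)             # S_1
        seen = set()
        while s and not (s & code):          # dangling suffix in code => not UD
            if frozenset(s) in seen:         # cycle: no codeword will ever appear
                return True
            seen.add(frozenset(s))
            s = dangling(s, code) | dangling(code, s)
        return not s                         # empty S_i means UD

    print(is_uniquely_decipherable({"0", "01", "11"}))   # True
    print(is_uniquely_decipherable({"0", "01", "10"}))   # False ("010" ambiguous)
    ```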

  15. A method of spherical harmonic analysis in the geosciences via hierarchical Bayesian inference

    Science.gov (United States)

    Muir, J. B.; Tkalčić, H.

    2015-11-01

    The problem of decomposing irregular data on the sphere into a set of spherical harmonics is common in many fields of geosciences where it is necessary to build a quantitative understanding of a globally varying field. For example, in global seismology, a compressional or shear wave speed that emerges from tomographic images is used to interpret current state and composition of the mantle, and in geomagnetism, secular variation of magnetic field intensity measured at the surface is studied to better understand the changes in the Earth's core. Optimization methods are widely used for spherical harmonic analysis of irregular data, but they typically do not treat the dependence of the uncertainty estimates on the imposed regularization. This can cause significant difficulties in interpretation, especially when the best-fit model requires more variables as a result of underestimating data noise. Here, with the above limitations in mind, the problem of spherical harmonic expansion of irregular data is treated within the hierarchical Bayesian framework. The hierarchical approach significantly simplifies the problem by removing the need for regularization terms and user-supplied noise estimates. The use of the corrected Akaike Information Criterion for picking the optimal maximum degree of spherical harmonic expansion and the resulting spherical harmonic analyses are first illustrated on a noisy synthetic data set. Subsequently, the method is applied to two global data sets sensitive to the Earth's inner core and lowermost mantle, consisting of PKPab-df and PcP-P differential traveltime residuals relative to a spherically symmetric Earth model. The posterior probability distributions for each spherical harmonic coefficient are calculated via Markov Chain Monte Carlo sampling; the uncertainty obtained for the coefficients thus reflects the noise present in the real data and the imperfections in the spherical harmonic expansion.

  16. Probabilistic Inferences in Bayesian Networks

    OpenAIRE

    Ding, Jianguo

    2010-01-01

    This chapter summarizes the popular inference methods in Bayesian networks. The results demonstrate that evidence can be propagated across a Bayesian network through any links, whether in a forward, backward, or intercausal style. The belief updating of Bayesian networks can be obtained by various available inference techniques. Theoretically, exact inference in Bayesian networks is feasible and manageable. However, the computation and inference are NP-hard. That means, in applications, in ...

  17. A simple method to optimize the HSCCC two-phase solvent system by predicting the partition coefficient for target compound.

    Science.gov (United States)

    Han, Quan-Bin; Wong, Lina; Yang, Nian-Yun; Song, Jing-Zheng; Qiao, Chun-Feng; Yiu, Hillary; Ito, Yoichiro; Xu, Hong-Xi

    2008-04-01

    A simple method was developed to optimize the solvent ratio of the two-phase solvent system used in high-speed counter-current chromatography (HSCCC) separation. Mathematical equations, such as exponential and power equations, were established to describe the relationship between the solvent ratio and the partition coefficient. Using this new method, the two-phase solvent system was easily optimized to obtain a proper partition coefficient for the CCC separation of the target compound. Furthermore, this method was satisfactorily applied in determining the two-phase solvent system for the HSCCC preparation of pseudolaric acid B from the Chinese herb Pseudolarix kaempferi Gordon (Pinaceae). The two-phase solvent system of n-hexane/EtOAc/MeOH/H₂O (5:5:5:5 by volume) was used, with a good partition coefficient K = 1.08. As a result, 232.05 mg of pseudolaric acid B was obtained from 0.5 g of the crude extract with a purity of 97.26% by HPLC analysis.
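
    The fitting idea can be sketched as follows (all measurements invented): fit a simple exponential relation between solvent ratio and K, then invert it to choose the ratio giving a target K near 1, which suits HSCCC.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    ratio = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # e.g. EtOAc fraction
    K = np.array([0.15, 0.33, 0.52, 0.88, 1.60])  # measured partition coefficients

    f = lambda x, a, b: a * np.exp(b * x)         # exponential relation
    (a, b), _ = curve_fit(f, ratio, K, p0=(0.1, 2.0))

    K_target = 1.0
    ratio_opt = np.log(K_target / a) / b          # invert the fitted relation
    print(f"fit: K = {a:.3f}*exp({b:.3f}*x); ratio for K=1: {ratio_opt:.2f}")
    ```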

  18. A Robust Computational Method for Coupled Liquid-liquid Phase Separation and Gas-particle Partitioning Predictions of Multicomponent Aerosols

    Science.gov (United States)

    Zuend, A.; Di Stefano, A.

    2014-12-01

    Providing efficient and reliable model predictions for the partitioning of atmospheric aerosol components between different phases (gas, liquids, solids) is a challenging problem. The partitioning of water, various semivolatile organic components, inorganic acids, bases, and salts, depends simultaneously on the chemical properties and interaction effects among all constituents of a gas + aerosol system. The effects of hygroscopic particle growth on the water contents and physical states of potentially two or more liquid and/or solid aerosol phases in turn may significantly affect multiphase chemistry, the direct effect of aerosols on climate, and the ability of specific particles to act as cloud condensation or ice nuclei. Considering the presence of a liquid-liquid phase separation in aerosol particles, which typically leads to one phase being enriched in rather hydrophobic compounds and the other phase enriched in water and dissolved electrolytes, adds a high degree of complexity to the goal of predicting the gas-particle partitioning of all components. Coupled gas-particle partitioning and phase separation methods are required to correctly account for the phase behaviour of aerosols exposed to varying environmental conditions, such as changes to relative humidity. We present new theoretical insights and a substantially improved algorithm for the reliable prediction of gas-particle partitioning at thermodynamic equilibrium based on the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) model. We introduce a new approach for the accurate prediction of the phase distribution of multiple inorganic ions between two liquid phases, constrained by charge balance, and the coupling of the liquid-liquid equilibrium model to a robust gas-particle partitioning algorithm. Such coupled models are useful for exploring the range of environmental conditions leading to complete or incomplete miscibility of aerosol constituents which will affect

  19. Quantifying habitat requirements of tree-living species in fragmented boreal forests with Bayesian methods.

    Science.gov (United States)

    Berglund, Håkan; O'Hara, Robert B; Jonsson, Bengt Gunnar

    2009-10-01

    Quantitative conservation objectives require detailed consideration of the habitat requirements of target species. Tree-living bryophytes, lichens, and fungi are a critical and declining biodiversity component of boreal forests. To understand their requirements, Bayesian methods were used to analyze the relationships between the occurrence of individual species and habitat factors at the tree and the stand scale in a naturally fragmented boreal forest landscape. The importance of unexplained between-stand variation in occurrence of species was estimated, and the ability of derived models to predict species' occurrence was tested. The occurrence of species was affected by quality of individual trees. Furthermore, the relationships between occurrence of species at the tree level and size and shape of stands indicated edge effects, implying that some species were restricted to interior habitats of large, regular stands. Yet for the habitat factors studied, requirements of many species appeared similar. Species occurrence also varied between stands; most of the seemingly suitable trees in some stands were unoccupied. The models captured most variation in species occurrence at tree level. They also successfully accounted for between-stand variation in species occurrence, thus providing realistic simulations of stand-level occupancy of species. Important unexplained between-stand variation in species occurrence warns against a simplified view that only local habitat factors influence species' occurrence. Apparently, similar stands will host populations of different sizes due to historical, spatial, and stochastic factors. Thus, habitat suitability cannot be assessed simply by population sizes, and stands lacking a species may still provide suitable habitat and merit protection.

  20. Estimates of European emissions of methyl chloroform using a Bayesian inversion method

    Directory of Open Access Journals (Sweden)

    M. Maione

    2014-03-01

    Full Text Available Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the Montreal Protocol on Substances that Deplete the Ozone Layer. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline in the background mixing ratios of MCF. However, we observe persistent, non-negligible mixing ratio enhancements of MCF in pollution episodes, suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to estimate the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified south-eastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an eleven-year data set, from January 2002 to December 2012. Overall emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr⁻¹ in 2002 to 0.32 Gg yr⁻¹ in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr⁻¹ in 2002 and 0.20 Gg yr⁻¹ in 2012. The European estimates are a significant fraction of the total semi-hemispheric (30–90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible for a minimum of 2.6% (in 2003) to a maximum of 10.3% (in 2009) of global MCF emissions.
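
    The linear-Gaussian Bayesian inversion underlying such studies can be sketched compactly: observations y = H x + noise, where x holds regional emissions and H is a transport (source-receptor) sensitivity matrix; the posterior follows from the standard Gaussian update. All numbers below are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_obs, n_regions = 40, 6
    H = rng.random((n_obs, n_regions))                    # transport sensitivities
    x_true = np.array([1.1, 0.1, 0.2, 0.05, 0.3, 0.1])    # "true" emissions, Gg/yr
    y = H @ x_true + 0.05 * rng.standard_normal(n_obs)

    x_prior = np.full(n_regions, 0.5)
    B = np.eye(n_regions) * 0.5**2                        # prior covariance
    R = np.eye(n_obs) * 0.05**2                           # observation-error covariance

    # Posterior mean and covariance (standard Gaussian update):
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(n_regions) - K @ H) @ B
    print(np.round(x_post, 2), np.round(np.sqrt(np.diag(P_post)), 3))
    ```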

  1. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming;

    2010-01-01

    Modular neural network is a popular neural network model which has many successful applications. In this paper, a sequential Bayesian learning (SBL) is proposed for modular neural networks aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight ...

  2. A Bayesian Method for Evaluating Passing Scores: The PPoP Curve

    Science.gov (United States)

    Wainer, Howard; Wang, X. A.; Skorupski, William P.; Bradlow, Eric T.

    2005-01-01

    In this note, we demonstrate an interesting use of the posterior distributions (and corresponding posterior samples of proficiency) that are yielded by fitting a fully Bayesian test scoring model to a complex assessment. Specifically, we examine the efficacy of the test in combination with the specific passing score that was chosen through expert…

  3. The estimation of lower refractivity uncertainty from radar sea clutter using the Bayesian-MCMC method

    Institute of Scientific and Technical Information of China (English)

    Sheng Zheng

    2013-01-01

    The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian-MCMC approach can obtain not only approximate solutions, but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data. Reference data are taken from simulations, and refractivity profiles are obtained using a helicopter. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation and the helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.

  4. Optimum Inductive Methods. A study in Inductive Probability, Bayesian Statistics, and Verisimilitude.

    NARCIS (Netherlands)

    Festa, Roberto

    1992-01-01

    According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes' theorem. One of the most important

  5. A Hierarchical Bayesian M/EEG Imaging Method Correcting for Incomplete Spatio-Temporal Priors

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Attias, Hagai T.; Sekihara, Kensuke;

    2013-01-01

    In this paper we present a hierarchical Bayesian model to tackle the highly ill-posed problem that arises in MEG and EEG source imaging. Our model promotes spatiotemporal patterns through the use of both spatial and temporal basis functions. While in contrast to most previous spatio-temporal ...

  6. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB = 1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l− operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B → K*γ, B → K(*)l+l−, and Bs → μ+μ− decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit

  7. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  8. Combinatorics of set partitions

    CERN Document Server

    Mansour, Toufik

    2012-01-01

    Focusing on a very active area of mathematical research in the last decade, Combinatorics of Set Partitions presents methods used in the combinatorics of pattern avoidance and pattern enumeration in set partitions. Designed for students and researchers in discrete mathematics, the book is a one-stop reference on the results and research activities of set partitions from 1500 A.D. to today. Each chapter gives historical perspectives and contrasts different approaches, including generating functions, kernel method, block decomposition method, generating tree, and Wilf equivalences. Methods and d

  9. A FORTRAN program for the statistical analysis of incomplete time series data sets by a method of partition.

    Science.gov (United States)

    Patel, M K; Waterhouse, J P

    1993-03-01

    A program written in FORTRAN-77 which executes an analysis for periodicity of a time series data set is presented. Time series analysis now has applicability and use in a wide range of biomedical studies. The analytical method termed here a method of partition is derived from periodogram analysis, but uses the principle of analysis of variance (ANOVA). It is effective when used on incomplete data sets. An example in which a data set is made progressively more incomplete by the random removal of values demonstrates this, and a listing of the program and a sample output in both an abbreviated and a full version are given.
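
    The partition idea translates directly to Python (a sketch, not the published FORTRAN program): for a candidate period, fold the series by phase and test with one-way ANOVA whether the phase-group means differ; missing values simply leave some phase groups with fewer points, which is why the approach tolerates incomplete series.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(4)
    t = np.arange(240.0)
    y = 2.0 * np.sin(2 * np.pi * t / 24) + rng.standard_normal(t.size)
    keep = rng.random(t.size) > 0.3          # drop ~30% of points at random
    t, y = t[keep], y[keep]

    def periodicity_F(t, y, period, n_bins=8):
        phase_bin = ((t % period) / period * n_bins).astype(int)
        groups = [y[phase_bin == b] for b in range(n_bins)]
        groups = [g for g in groups if g.size > 1]   # tolerate sparse bins
        return f_oneway(*groups)

    print(periodicity_F(t, y, 24))   # strongly significant
    print(periodicity_F(t, y, 17))   # not significant
    ```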

  10. Fault Localization Method by Partitioning Memory Using Memory Map and the Stack for Automotive ECU Software Testing

    Directory of Open Access Journals (Sweden)

    Kwanhyo Kim

    2016-09-01

    Full Text Available Recently, the usage of the automotive Electronic Control Unit (ECU) and its software in cars is increasing. As the functional complexity of such software increases, so does the likelihood of software-related faults. It is therefore important to ensure the reliability of ECU software in order to ensure automobile safety. For this reason, systematic testing methods are required that can guarantee software quality. However, it is difficult to locate a fault during testing with the current ECU development system because a tester performs black-box testing using a Hardware-in-the-Loop (HiL) simulator. Consequently, developers consume a large amount of money and time on debugging because they perform debugging without any information about the location of the fault. In this paper, we propose a method for localizing faults utilizing memory information during black-box testing. This is likely to be of use to developers who debug automotive software. In order to observe whether symbols stored in the memory have been updated, the memory is partitioned by a memory map and the stack, and thus the fault candidate region is reduced. The memory map method has the advantage of being able to finely partition the memory, and the stack method can partition the memory without a memory map. We validated these methods by applying them to HiL testing of the ECU for a body control system. The preliminary results indicate that a memory map and the stack reduce the possible fault locations to 22% and 19% of the updated memory, respectively.

  11. The Power of Principled Bayesian Methods in the Study of Stellar Evolution

    CERN Document Server

    von Hippel, Ted; Stenning, David C; Robinson, Elliot; Jeffery, Elizabeth; Stein, Nathan; Jefferys, William H; O'Malley, Erin

    2016-01-01

    It takes years of effort employing the best telescopes and instruments to obtain high-quality stellar photometry, astrometry, and spectroscopy. Stellar evolution models contain the experience of lifetimes of theoretical calculations and testing. Yet most astronomers fit these valuable models to these precious datasets by eye. We show that a principled Bayesian approach to fitting models to stellar data yields substantially more information over a range of stellar astrophysics. We highlight advances in determining the ages of star clusters, mass ratios of binary stars, limitations in the accuracy of stellar models, post-main-sequence mass loss, and the ages of individual white dwarfs. We also outline a number of unsolved problems that would benefit from principled Bayesian analyses.

  12. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Full Text Available Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problems of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID, we evaluate the accuracy of the Bayesian-modified (posterior) probabilities. However, in most cases, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
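
    The Bayesian update described amounts to reweighting photo-match likelihoods by site-specific sighting priors; a minimal sketch with invented numbers:

    ```python
    import numpy as np

    candidates = ["A", "B", "C"]
    match_likelihood = np.array([0.50, 0.30, 0.20])  # from photo similarity
    site_prior = np.array([0.10, 0.60, 0.30])        # sighting history at this site

    posterior = match_likelihood * site_prior        # Bayes' rule, then normalize
    posterior /= posterior.sum()
    for c, p in zip(candidates, posterior):
        print(c, round(p, 3))
    # the site prior can overturn the raw photo ranking (here B now leads)
    ```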

  13. Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre

    2007-05-01

    Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.

  14. Application of Bayesian least absolute shrinkage and selection operator (LASSO) and BayesCπ methods for genomic selection in French Holstein and Montbéliarde breeds.

    Science.gov (United States)

    Colombani, C; Legarra, A; Fritz, S; Guillaume, F; Croiseau, P; Ducrocq, V; Robert-Granié, C

    2013-01-01

    Recently, the amount of available single nucleotide polymorphism (SNP) marker data has considerably increased in dairy cattle breeds, both for research purposes and for application in commercial breeding and selection programs. Bayesian methods are currently used in the genomic evaluation of dairy cattle to handle very large sets of explanatory variables with a limited number of observations. In this study, we applied 2 bayesian methods, BayesCπ and bayesian least absolute shrinkage and selection operator (LASSO), to 2 genotyped and phenotyped reference populations consisting of 3,940 Holstein bulls and 1,172 Montbéliarde bulls with approximately 40,000 polymorphic SNP. We compared the accuracy of the bayesian methods for the prediction of 3 traits (milk yield, fat content, and conception rate) with pedigree-based BLUP, genomic BLUP, partial least squares (PLS) regression, and sparse PLS regression, a variable selection PLS variant. The results showed that the correlations between observed and predicted phenotypes were similar in BayesCπ (including or not pedigree information) and bayesian LASSO for most of the traits and whatever the breed. In the Holstein breed, bayesian methods led to higher correlations than other approaches for fat content and were similar to genomic BLUP for milk yield and to genomic BLUP and PLS regression for the conception rate. In the Montbéliarde breed, no method dominated the others, except BayesCπ for fat content. The better performances of the bayesian methods for fat content in Holstein and Montbéliarde breeds are probably due to the effect of the DGAT1 gene. The SNP identified by the BayesCπ, bayesian LASSO, and sparse PLS regression methods, based on their effect on the different traits of interest, were located at almost the same position on the genome. As the bayesian methods resulted in regressions of direct genomic values on daughter trait deviations closer to 1 than for the other methods tested in this study, bayesian

  15. The evolutionary relationships and age of Homo naledi: An assessment using dated Bayesian phylogenetic methods.

    Science.gov (United States)

    Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Chares M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark

    2016-08-01

    Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed.

  16. Bayesian Network Assessment Method for Civil Aviation Safety Based on Flight Delays

    OpenAIRE

    2013-01-01

    Flight delays and safety are the principal contradictions in the sound development of civil aviation. Flight delays occur frequently and simultaneously induce civil aviation safety risk. Based on flight delays, the random characteristics of civil aviation safety risk are analyzed. Flight delays have been deemed a potential safety hazard. The change rules and characteristics of civil aviation safety risk based on flight delays have been analyzed. Bayesian networks (BN) have been used to build ...

  17. Bayesian psychometric scaling

    NARCIS (Netherlands)

    Fox, G.J.A.; Berg, van den S.M.; Veldkamp, B.P.; Irwing, P.; Booth, T.; Hughes, D.

    2015-01-01

    In educational and psychological studies, psychometric methods are involved in the measurement of constructs, and in constructing and validating measurement instruments. Assessment results are typically used to measure student proficiency levels and test characteristics. Recently, Bayesian item response

  18. Practical Bayesian Tomography

    CERN Document Server

    Granade, Christopher; Cory, D G

    2015-01-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  19. A new 3-D ray tracing method based on LTI using successive partitioning of cell interfaces and traveltime gradients

    Science.gov (United States)

    Zhang, Dong; Zhang, Ting-Ting; Zhang, Xiao-Lei; Yang, Yan; Hu, Ying; Qin, Qian-Qing

    2013-05-01

    We present a new method of three-dimensional (3-D) seismic ray tracing, based on an improvement to the linear traveltime interpolation (LTI) ray tracing algorithm. This new technique involves two separate steps. The first is a forward calculation based on the LTI method and the dynamic successive partitioning scheme, which is applied to calculate traveltimes on cell boundaries and assumes a wavefront that expands from the source to all grid nodes in the computational domain. We locate several dynamic successive partition points on a cell's surface, the traveltimes of which can be calculated by linear interpolation between the vertices of the cell's boundary. The second is a backward step that uses Fermat's principle and the fact that the ray path is always perpendicular to the wavefront and follows the negative traveltime gradient. In this process, the first-arriving ray path can be traced from the receiver to the source along the negative traveltime gradient, which can be calculated by reconstructing the continuous traveltime field with cubic B-spline interpolation. This new 3-D ray tracing method is compared with the LTI method and the shortest path method (SPM) through a number of numerical experiments. These comparisons show obvious improvements to computed traveltimes and ray paths, both in precision and computational efficiency.

  20. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    Science.gov (United States)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin2 field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories and with secured redshift. The algorithm retrieved 91% of the galaxies with only 9% false detection. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method is when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  1. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the gPC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases, while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D (in random space) elliptic stochastic partial differential equations.

  2. Novel Method for Calculating a Nonsubjective Informative Prior for a Bayesian Model in Toxicology Screening: A Theoretical Framework.

    Science.gov (United States)

    Woldegebriel, Michael

    2015-11-17

    In toxicology screening (forensic, food-safety), due to several analytical errors (e.g., retention time shift, lack of repeatability in m/z scans, etc.), the ability to confidently identify/confirm a compound remains a challenge. Due to these uncertainties, a probabilistic approach is currently preferred. However, if a probabilistic approach is followed, the only statistical method that is capable of estimating the probability of whether the compound of interest (COI) is present/absent in a given sample is Bayesian statistics. Bayes' theorem can combine prior information (prior probability) with data (likelihood) to give an optimal probability (posterior probability) reflecting the presence/absence of the COI. In this work, a novel method for calculating an informative prior probability for a Bayesian model in targeted toxicology screening is introduced. In contrast to earlier proposals making use of literature citation rates and the prior knowledge of the analyst, this method presents a thorough and nonsubjective approach. The formulation approaches the probability calculation as a clustering and random draw problem that incorporates few analytical method parameters meticulously estimated to reflect sensitivity and specificity of the system. The practicality of the method has been demonstrated and validated using real data and simulated analytical techniques.

  3. Control Method of Three-level Neutral-point-clamped Inverter Based on Voltage Vector Diagram Partition

    Institute of Scientific and Technical Information of China (English)

    SONG Wen-xiang; YAO Gang; CHEN Chen; CHEN Guo-cheng

    2008-01-01

    A new modulation approach is presented for the control of neutral-point (NP) voltage variation in the three-level NP-clamped voltage source inverter, and an average NP current model is established based on vector diagram partition, thus providing a theoretical basis for balancing control of the NP potential. Theoretical analysis and experimental results indicate that the proposed vector-synthesis-based method for NP balancing control can make the average NP current zero without influencing the NP potential within each sample period. The effectiveness of the proposed approach was verified by simulation and experimental results.

  4. Risk Assessment Framework and Algorithm of Power Systems Based on the Partitioned Multi-objective Risk Method

    Institute of Scientific and Technical Information of China (English)

    XIE Shaoyu; WANG Xiuli; WANG Xifan

    2011-01-01

    The average risk indices, such as the loss of load expectation (LOLE) and expected demand not supplied (EDNS), have been widely used in risk assessment of power systems. However, the average indices cannot distinguish between events of low probability but high damage and events of high probability but low damage. In order to overcome these shortcomings, this paper proposes an extended risk analysis framework for the power system based on the partitioned multi-objective risk method (PMRM).
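
    The partitioned-risk idea can be sketched as follows: rather than a single expectation such as EDNS, partition the damage axis into severity ranges and report, for each range, its probability and the conditional expected damage (distribution and partition edges invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    damage = rng.lognormal(mean=1.0, sigma=1.2, size=100_000)  # MW not supplied
    edges = [0, 1, 5, 20, np.inf]                              # severity partitions

    print("EDNS (average) =", damage.mean().round(2))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (damage >= lo) & (damage < hi)
        if mask.any():
            print(f"[{lo}, {hi}): P = {mask.mean():.3f}, "
                  f"E[damage | range] = {damage[mask].mean():.2f}")
    ```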

  5. The Partition of Unity Method for High-Order Finite Volume Schemes Using Radial Basis Functions Reconstruction

    Institute of Scientific and Technical Information of China (English)

    Serena Morigi; Fiorella Sgallari

    2009-01-01

    This paper introduces the use of the partition of unity method for the development of a high order finite volume discretization scheme on unstructured grids for solving diffusion models based on partial differential equations. The unknown function and its gradient can be accurately reconstructed using high order optimal recovery based on radial basis functions. The methodology proposed is applied to the noise removal problem in functional surfaces and images. Numerical results demonstrate the effectiveness of the new numerical approach and provide experimental order of convergence.
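
    A one-dimensional sketch of the two ingredients, assuming SciPy's legacy Rbf interpolator: local RBF interpolants on overlapping patches are blended by compactly supported weights normalized to sum to one (the partition of unity). Patch layout and data are invented.

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    x = np.linspace(0, 4, 41)
    y = np.sin(2 * x) + 0.05 * np.random.default_rng(6).standard_normal(x.size)

    patches = [(0.0, 1.8), (1.2, 3.0), (2.4, 4.0)]    # overlapping subdomains
    locals_, centers, radii = [], [], []
    for lo, hi in patches:
        m = (x >= lo) & (x <= hi)
        locals_.append(Rbf(x[m], y[m], function="multiquadric"))
        centers.append(0.5 * (lo + hi))
        radii.append(0.5 * (hi - lo))

    def pu_eval(xq):
        # compact bump weights, zero outside each patch, normalized to sum to 1
        w = np.array([np.maximum(0, 1 - ((xq - c) / r) ** 2) ** 2
                      for c, r in zip(centers, radii)])
        w /= w.sum(axis=0)
        return sum(wi * f(xq) for wi, f in zip(w, locals_))

    xq = np.linspace(0.1, 3.9, 200)
    print(np.abs(pu_eval(xq) - np.sin(2 * xq)).max())  # small reconstruction error
    ```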

  6. Medical diagnosis aboard submarines. Use of a computer-based Bayesian method of analysis in an abdominal pain diagnostic program.

    Science.gov (United States)

    Osborne, S F

    1984-02-01

    The medical issues that arise in the isolated environment of a submarine can occasionally be grave. While crewmembers are carefully screened for health problems, they are still susceptible to serious acute illness. Currently, the submarine medical department representative, the hospital corpsman, utilizes a history and physical examination, clinical acumen, and limited laboratory testing in diagnosis. The application of a Bayesian method of analysis to an abdominal pain diagnostic system utilizing an onboard microcomputer is described herein. Early results from sea trials show an appropriate diagnosis in eight of 10 cases of abdominal pain, but the program should still be viewed as an extended "laboratory test" until proved effective at sea.
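
    A toy sketch of the kind of Bayesian calculation such a program performs (a naive-Bayes simplification with invented numbers, not the program's actual knowledge base):

    ```python
    import numpy as np

    diseases = ["appendicitis", "gastroenteritis", "renal colic"]
    prior = np.array([0.15, 0.70, 0.15])
    # P(finding | disease); columns follow the order of `diseases`
    lik = {
        "rlq_pain": np.array([0.80, 0.15, 0.10]),
        "vomiting": np.array([0.60, 0.70, 0.40]),
        "no_fever": np.array([0.30, 0.60, 0.90]),
    }

    post = prior.copy()
    for finding in ["rlq_pain", "vomiting", "no_fever"]:
        post *= lik[finding]          # conditional independence assumed
    post /= post.sum()
    for d, p in zip(diseases, post):
        print(f"{d}: {p:.2f}")
    ```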

  7. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  8. A hybrid Bayesian-SVD based method to detect false alarms in PERSIANN precipitation estimation product using related physical parameters

    Science.gov (United States)

    Ghajarnia, Navid; Arasteh, Peyman D.; Araghinejad, Shahab; Liaghat, Majid A.

    2016-07-01

    Incorrect estimation of rainfall occurrence, the so-called false alarm (FA), is one of the major sources of bias error in satellite-based precipitation estimation products and may even cause problems during the bias reduction and calibration processes. In this paper, a hybrid statistical method is introduced to detect FA events in the PERSIANN dataset over the Urmia Lake basin in northwestern Iran. The main FA detection model is based on Bayes' theorem, with four predictor parameters as input: PERSIANN rainfall estimates, brightness temperature (Tb), precipitable water (PW) and near-surface air temperature (Tair). To reduce the dimensionality of the input dataset by summarizing its most important modes of variability and its correlations with the reference dataset, singular value decomposition (SVD) is used. Applying the Bayesian-SVD method to the Urmia Lake basin resulted in a trade-off between FA detection and the loss of Hit events: the proposed method detected about 30% of FA events at the cost of losing about 12% of Hit events, and performed better in cold seasons.
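
    To make the pipeline concrete, the following is a minimal Python sketch of the record's two-stage idea: standardize the four predictors, project them onto a few SVD modes, and apply Bayes' theorem with Gaussian class-conditional densities. The Gaussian form, the two-class setup and all names are assumptions for illustration; the paper's actual densities and decision thresholds are not given in the abstract.

```python
import numpy as np

# Hypothetical sketch: classify PERSIANN rain detections as Hit vs False Alarm.
# X: (n_samples, 4) predictors [rain_est, Tb, PW, Tair]; y: 1 = FA, 0 = Hit.

def svd_reduce(X, k=2):
    """Project standardized predictors onto the top-k right singular vectors."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:k].T, Vt[:k]

def gaussian_logpdf(z, mu, var):
    """Log-density of independent Gaussians, summed over dimensions."""
    return -0.5 * (np.log(2 * np.pi * var) + (z - mu) ** 2 / var).sum(axis=1)

def fit_bayes(Z, y):
    """Per-class means, variances and priors in the reduced space."""
    return {c: (Z[y == c].mean(axis=0), Z[y == c].var(axis=0), np.mean(y == c))
            for c in (0, 1)}

def posterior_fa(Z, params):
    """P(FA | z) via Bayes' theorem with Gaussian class-conditional densities."""
    log_p = {c: gaussian_logpdf(Z, m, v) + np.log(pi)
             for c, (m, v, pi) in params.items()}
    return 1.0 / (1.0 + np.exp(log_p[0] - log_p[1]))
```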

  9. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guannan [ORNL; Webster, Clayton G [ORNL; Gunzburger, Max D [ORNL

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional…

  10. Applications of PDMS partitioning methods in the study of biodegradation of pyrene in the

    DEFF Research Database (Denmark)

    Tejeda-Agredano, MC; Gouliarmou, Varvara; Ortega-Calvo, JJ

    Although there are reports on the inhibition of anthropogenic organic chemicals biodegradation due to binding to dissolved humic substances (HS), there is an increasing body of evidence pointing to an enhancing effect in the case of hydrophobic chemicals, like pyrene. The addition of humic… to the physical association of bacteria and HS. Here, we propose the use of partitioning techniques using poly(dimethylsiloxane) (PDMS) to study the effect of binding of pyrene to a dissolved humic acid isolated from soil on biodegradation of this PAH by a representative soil bacterium. The application of these techniques in biodegradation studies may solve many questions about enhancements in diffusive mass transfer, in capacity/speciation and in dissolution. Therefore, our study may provide new insights into the effects of HS on microbial degradation of polycyclic aromatic hydrocarbons (PAHs).

  11. Bayesian Lensing Shear Measurement

    CERN Document Server

    Bernstein, Gary M

    2013-01-01

    We derive an estimator of weak gravitational lensing shear from background galaxy images that avoids noise-induced biases through a rigorous Bayesian treatment of the measurement. The Bayesian formalism requires a prior describing the (noiseless) distribution of the target galaxy population over some parameter space; this prior can be constructed from low-noise images of a subsample of the target population, attainable from long integrations of a fraction of the survey field. We find two ways to combine this exact treatment of noise with rigorous treatment of the effects of the instrumental point-spread function and sampling. The Bayesian model fitting (BMF) method assigns a likelihood of the pixel data to galaxy models (e.g. Sersic ellipses), and requires the unlensed distribution of galaxies over the model parameters as a prior. The Bayesian Fourier domain (BFD) method compresses galaxies to a small set of weighted moments calculated after PSF correction in Fourier space. It requires the unlensed distributi...

  12. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir

    2013-05-01

    A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimates of block-sparse signals even when the distribution of active blocks is non-Gaussian or unknown. It is agnostic to the distribution of active blocks in the signal and utilizes a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data and no user intervention is required. The method requires a priori knowledge of block partition and utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator. © 2013 IEEE.
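
    The abstract describes a greedy, order-recursive search over blocks with a known block partition. A generic block matching pursuit sketch is shown below; the ridge-regularized least-squares update stands in for the paper's MMSE metrics, and the fixed block size encodes the a priori block partition. This is a simplified stand-in, not the authors' algorithm.

```python
import numpy as np

def block_omp(y, A, block_size, n_blocks_max, noise_var=0.0):
    """Greedy block-sparse recovery sketch: pick the block most correlated
    with the residual, re-solve least squares over the selected blocks, repeat.
    With noise_var > 0 the ridge term mimics MMSE-style shrinkage."""
    n = A.shape[1]
    blocks = [np.arange(b, min(b + block_size, n)) for b in range(0, n, block_size)]
    support = []
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_blocks_max):
        scores = [np.linalg.norm(A[:, b].T @ r) for b in blocks]
        support.append(blocks.pop(int(np.argmax(scores))))
        idx = np.concatenate(support)
        As = A[:, idx]
        xs = np.linalg.solve(As.T @ As + noise_var * np.eye(len(idx)), As.T @ y)
        x = np.zeros(n)
        x[idx] = xs
        r = y - As @ xs
    return x
```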

  13. How about a Bayesian M/EEG imaging method correcting for incomplete spatio-temporal priors

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Attias, Hagai T.; Sekihara, Kensuke;

    2013-01-01

    In this contribution we present a hierarchical Bayesian model, sAquavit, to tackle the highly ill-posed problem that comes with MEG and EEG source imaging. Our model facilitates spatio-temporal patterns through the use of both spatial and temporal basis functions. In contrast to most previous spatio-temporal inverse M/EEG models, the proposed model benefits from consisting of two source terms, namely a spatio-temporal pattern term limiting the source configuration to a spatio-temporal subspace and a source correcting term to pick up source activity not covered by the spatio…

  14. Dose-Response Modeling Under Simple Order Restrictions Using Bayesian Variable Selection Methods

    OpenAIRE

    Otava, Martin; Shkedy, Ziv; Lin, Dan; Goehlmann, Hinrich W. H.; Bijnens, Luc; Talloen, Willem; Kasim, Adetayo

    2014-01-01

    Bayesian modeling of dose–response data offers the possibility to establish the relationship between a clinical or a genomic response and increasing doses of a therapeutic compound and to determine the nature of the relationship wherever it exists. In this article, we focus on an order-restricted one-way ANOVA model which can be used to test the null hypothesis of no dose effect against an ordered alternative. Within the framework of the dose–response modeling, a model uncertainty can be addr...

  15. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updates to virus information using the normal pattern matching method are increasingly unable to keep up with the speed at which viruses appear, since it takes time to extract the characteristic patterns for each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code from the obfuscated viruses. This paper therefore proposes a method that unpacks compressed computer viruses automatically, independent of the compression format. The proposed method unpacks common compression formats accurately 80% of the time, and unknown compression formats can also be unpacked. The proposed method is effective against unknown viruses when combined with an existing known-virus detection system such as Paul Graham's Bayesian virus filter.

  16. Bayesian methods for the design and interpretation of clinical trials in very rare diseases.

    Science.gov (United States)

    Hampson, Lisa V; Whitehead, John; Eleftheriou, Despina; Brogan, Paul

    2014-10-30

    This paper considers the design and interpretation of clinical trials comparing treatments for conditions so rare that worldwide recruitment efforts are likely to yield total sample sizes of 50 or fewer, even when patients are recruited over several years. For such studies, the sample size needed to meet a conventional frequentist power requirement is clearly infeasible. Rather, the expectation of any such trial has to be limited to the generation of an improved understanding of treatment options. We propose a Bayesian approach for the conduct of rare-disease trials comparing an experimental treatment with a control where patient responses are classified as a success or failure. A systematic elicitation from clinicians of their beliefs concerning treatment efficacy is used to establish Bayesian priors for unknown model parameters. The process of determining the prior is described, including the possibility of formally considering results from related trials. As sample sizes are small, it is possible to compute all possible posterior distributions of the two success rates. A number of allocation ratios between the two treatment groups can be considered with a view to maximising the prior probability that the trial concludes recommending the new treatment when in fact it is non-inferior to control. Consideration of the extent to which opinion can be changed, even by data from the best feasible design, can help to determine whether such a trial is worthwhile.
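
    The remark that all posterior distributions can be computed follows from conjugacy: with Beta priors and binomial success/failure data, each arm's posterior is again a Beta distribution. A sketch with invented numbers (the elicited priors, counts and non-inferiority margin below are hypothetical, not taken from the paper):

```python
import numpy as np
from scipy import stats

# Conjugate Beta-binomial sketch for a two-arm rare-disease trial.
prior_ctrl = stats.beta(4, 6)   # elicited belief: control succeeds ~40% of the time
prior_expt = stats.beta(3, 3)   # vaguer belief about the experimental arm

def posterior(prior, successes, failures):
    a, b = prior.args
    return stats.beta(a + successes, b + failures)

post_ctrl = posterior(prior_ctrl, successes=5, failures=9)   # 14 control patients
post_expt = posterior(prior_expt, successes=9, failures=7)   # 16 experimental patients

# Posterior probability that the new treatment is non-inferior within margin delta.
rng = np.random.default_rng(1)
p_c = post_ctrl.rvs(100_000, random_state=rng)
p_e = post_expt.rvs(100_000, random_state=rng)
delta = 0.10
print("P(experimental non-inferior):", np.mean(p_e > p_c - delta))
```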

  17. [Determination of partition coefficient of dissolved gases in transformer oil using phase ratio variation method and static headspace gas chromatography].

    Science.gov (United States)

    Zhao, Jinghong; Wang, Hailong; Liu, Wenmin; Zhou, Yansheng; Guan, Yafeng

    2004-05-01

    The partition coefficients of dissolved gases in transformer oil were determined using a phase ratio variation method and static headspace gas chromatography (GC). A pressure balancing and gas volume-metering device was connected to the vent of a sample loop on a six-port injection valve of the GC. The gas phase sample from the 25 mL headspace vial was transferred to an 80 microL sample loop through a fused silica capillary of 0.53 mm i.d., and then separated and determined quantitatively by GC. A 2 m x 1 mm i.d. GDX502 micro-packed column was used for the separation. Five different gas-liquid volume ratios in the headspace vials were measured at different equilibrium concentrations. The partition coefficients of hydrocarbon gases including methane, acetylene, ethylene, ethane and propane dissolved in transformer oil were determined using linear regression analysis at 20 degrees C and 50 degrees C separately. The errors between the real values and the regression values from the experimental data were less than 4.14%, except for methane. Fundamental data for on-line measurement of dissolved gases in transformer oil are provided by GC.
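
    In the standard phase-ratio-variation formulation, the reciprocal peak area is linear in the phase ratio β (the gas-to-liquid volume ratio), 1/A = a + bβ, and the partition coefficient follows as K = a/b. A minimal sketch of that regression with invented peak areas (the paper's raw data are not given in the abstract):

```python
import numpy as np

# Phase ratio variation (PRV) sketch: 1/A = a + b*beta, so the gas-liquid
# partition coefficient is K = a/b (intercept over slope). Data are invented.
beta = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # headspace/oil volume ratios
area = np.array([812., 745., 633., 480., 320.])   # GC peak areas at equilibrium

b, a = np.polyfit(beta, 1.0 / area, 1)            # slope b, intercept a
K = a / b
print(f"K = {K:.1f}")
```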

  18. Removal of radionuclides from partitioning waste solutions by adsorption and catalytic oxidation methods

    Energy Technology Data Exchange (ETDEWEB)

    Yamagishi, Isao; Yamaguchi, Isoo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kubota, Masumitsu [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2000-09-01

    Adsorption of radionuclides with inorganic ion exchangers and catalytic oxidation of a complexant were studied for the decontamination of waste solutions generated in past partitioning tests with high-level liquid waste. Granulated ferrocyanide and titanic acid were used for adsorption of Cs and Sr, respectively, from an alkaline solution resulting from direct neutralization of an acidic waste solution. Both Na and Ba inhibited the adsorption of Sr, but Na did not inhibit that of Cs. These exchangers adsorbed Cs and Sr at low concentration with distribution coefficients of more than 10{sup 4}ml/g from a 2M Na solution of pH11. Overall decontamination factors (DFs) of Cs and total {beta} nuclides exceeded 10{sup 5} and 10{sup 3}, respectively, at the neutralization-adsorption step for actual waste solutions free from a complexant. The DF of total {alpha} nuclides was less than 10{sup 3} for a waste solution containing diethylenetriaminepentaacetic acid (DTPA). DTPA was rapidly oxidized by nitric acid in the presence of a platinum catalyst, and radionuclides were removed as precipitates by neutralization of the resultant solution. The DF of {alpha} nuclides increased to 8x10{sup 4} by addition of the oxidation step. The DFs of Sb and Co were quite low through the adsorption step. A synthesized Ti-base exchanger (PTC) could remove Sb with a DF of more than 4x10{sup 3}. (author)

  19. Assessing Vermont's stream health and biological integrity using artificial neural networks and Bayesian methods

    Science.gov (United States)

    Rizzo, D. M.; Fytilis, N.; Stevens, L.

    2012-12-01

    Environmental managers are increasingly required to monitor and forecast long-term effects and the vulnerability of biophysical systems to human-generated stresses. Ideally, a study involving both physical and biological assessments conducted concurrently (in space and time) could provide a better understanding of the mechanisms and complex relationships. However, the costs and resources associated with monitoring the complex linkages between the physical, geomorphic and habitat conditions and the biological integrity of stream reaches are prohibitive. Researchers have used classification techniques to place individual streams and rivers into a broader spatial context (hydrologic or health condition). Such efforts require environmental managers to gather multiple forms of information: quantitative, qualitative and subjective. We research and develop a novel classification tool that combines self-organizing maps with a Naïve Bayesian classifier to direct resources to the stream reaches most in need. The Vermont Agency of Natural Resources has developed and adopted protocols for physical stream geomorphic and habitat assessments throughout the state of Vermont. Separately from these assessments, the Vermont Department of Environmental Conservation monitors the biological communities and water quality in streams. Our initial hypothesis is that the geomorphic reach assessments and water quality data may be leveraged to reduce the error and uncertainty associated with predictions of biological integrity and stream health. We test our hypothesis using over 2500 Vermont stream reaches (~1371 stream miles) assessed by the two agencies. In this work, we combine a Naïve Bayesian classifier with a modified Kohonen Self-Organizing Map (SOM). The SOM is an unsupervised artificial neural network that autonomously analyzes inherent dataset properties using input data only; it is typically used to cluster data into similar categories when a priori classes do not exist.

  20. 43 genes support the lungfish-coelacanth grouping related to the closest living relative of tetrapods with the Bayesian method under the coalescence model

    Directory of Open Access Journals (Sweden)

    Gras Robin

    2011-03-01

    Background: Since the discovery of the "living fossil" in 1938, the coelacanth (Latimeria chalumnae) has generally been considered to be the closest living relative of the land vertebrates, and this is still the prevailing opinion in most general biology textbooks. However, the origin of tetrapods has not been resolved for decades. Three principal hypotheses (lungfish-tetrapod, coelacanth-tetrapod, or lungfish-coelacanth sister group) have been proposed. Findings: We used the Bayesian method under the coalescence model with the latest published program (Bayesian Estimation of Species Trees, or BEST) to perform a phylogenetic analysis for seven relevant taxa and 43 nuclear protein-coding genes, with the jackknife method for taxon sub-sampling. The lungfish-coelacanth sister group was consistently reconstructed with the Bayesian method under the coalescence model in 17 out of 21 taxon sets, with a Bayesian posterior probability as high as 99%. Lungfish-tetrapod was only inferred from BCLS and BACLS. Neither coelacanth-tetrapod nor lungfish-coelacanth-tetrapod was recovered in any of the 21 taxon sets. Conclusions: Our results provide strong evidence in favor of accepting the hypothesis that lungfishes and coelacanths form a monophyletic sister group that is the closest living relative of tetrapods. This clade was supported by high Bayesian posterior probabilities of the branch (a lungfish-coelacanth clade) and high taxon jackknife supports.

  1. A heuristic Bayesian method for segmenting DNA sequence alignments and detecting evidence for recombination and gene conversion.

    Science.gov (United States)

    Kedzierska, Anna; Husmeier, Dirk

    2006-01-01

    We propose a heuristic approach to the detection of evidence for recombination and gene conversion in multiple DNA sequence alignments. The proposed method consists of two stages. In the first stage, a sliding window is moved along the DNA sequence alignment, and phylogenetic trees are sampled from the conditional posterior distribution with MCMC. To reduce the noise intrinsic to inference from the limited amount of data available in the typically short sliding window, a clustering algorithm based on the Robinson-Foulds distance is applied to the trees thus sampled, and the posterior distribution over tree clusters is obtained for each window position. While changes in this posterior distribution are indicative of recombination or gene conversion events, it is difficult to decide when such a change is statistically significant. This problem is addressed in the second stage of the proposed algorithm, where the distributions obtained in the first stage are post-processed with a Bayesian hidden Markov model (HMM). The emission states of the HMM are associated with posterior distributions over phylogenetic tree topology clusters. The hidden states of the HMM indicate putative recombinant segments. Inference is done in a Bayesian sense, sampling parameters from the posterior distribution with MCMC. Of particular interest is the determination of the number of hidden states as an indication of the number of putative recombinant regions. To this end, we apply reversible jump MCMC, and sample the number of hidden states from the respective posterior distribution.
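
    The second-stage machinery rests on standard HMM recursions. Below is a minimal log-space forward pass, assuming a fixed number of hidden states and precomputed per-window emission probabilities; the paper instead samples parameters, and the number of states, with (reversible jump) MCMC, so this is only the deterministic core such a model relies on.

```python
import numpy as np

def forward_log(log_emit, log_trans, log_init):
    """Forward algorithm in log space.
    log_emit:  (n_windows, n_states) log P(window obs | hidden state)
    log_trans: (n_states, n_states)  log transition matrix
    log_init:  (n_states,)           log initial distribution
    Returns per-window filtered state probabilities and total log-likelihood."""
    n, k = log_emit.shape
    alpha = np.zeros((n, k))
    alpha[0] = log_init + log_emit[0]
    for t in range(1, n):
        # log-sum-exp over previous states for each current state
        m = alpha[t - 1][:, None] + log_trans
        alpha[t] = log_emit[t] + np.logaddexp.reduce(m, axis=0)
    loglik = np.logaddexp.reduce(alpha[-1])
    filtered = np.exp(alpha - np.logaddexp.reduce(alpha, axis=1, keepdims=True))
    return filtered, loglik
```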

  2. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Directory of Open Access Journals (Sweden)

    Corsaro Enrico

    2015-01-01

    The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role in studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, and hence of free parameters, that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes of single oscillation modes, together with acoustic glitches, are presented.
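
    The core of nested sampling is easy to sketch: keep a set of live points, repeatedly replace the lowest-likelihood point under a rising likelihood floor, and accumulate evidence from the geometrically shrinking prior volume. The toy below uses naive rejection from the prior for the replacement step (real codes such as the one described use far more efficient moves); for this example the log-evidence should come out near log(1/10), since a unit-mass Gaussian is spread over a prior of width 10.

```python
import numpy as np

def nested_sampling(loglike, prior_draw, n_live=200, n_iter=1000, rng=None):
    """Toy nested sampling: Z ~ sum of L_i * dX_i with prior volume
    X_i ~ exp(-i / n_live). Remaining live-point mass is omitted for brevity."""
    rng = rng or np.random.default_rng(0)
    live = prior_draw(n_live, rng)
    live_ll = np.array([loglike(p) for p in live])
    log_z, log_x = -np.inf, 0.0
    for i in range(n_iter):
        worst = np.argmin(live_ll)
        log_x_new = -(i + 1) / n_live
        log_w = live_ll[worst] + np.log(np.exp(log_x) - np.exp(log_x_new))
        log_z = np.logaddexp(log_z, log_w)
        # replace the worst point with a prior draw above the likelihood floor
        while True:
            cand = prior_draw(1, rng)[0]
            cand_ll = loglike(cand)
            if cand_ll > live_ll[worst]:
                break
        live[worst], live_ll[worst] = cand, cand_ll
        log_x = log_x_new
    return log_z

# usage: standard-normal likelihood on a uniform prior over [-5, 5]
log_z = nested_sampling(
    loglike=lambda p: -0.5 * p ** 2 - 0.5 * np.log(2 * np.pi),
    prior_draw=lambda n, rng: rng.uniform(-5, 5, size=n),
)
print(log_z)  # expected near log(0.1) ~ -2.3
```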

  3. Evaluation of the antibacterial residue surveillance programme in Danish pigs using Bayesian methods

    DEFF Research Database (Denmark)

    Freitas de Matos Baptista, Filipa; Alban, L.; Olsen, A. M.;

    2012-01-01

    Residues of pharmacologically active substances or their metabolites might be found in food products from food-producing animals. Maximum Residue Limits for pharmacologically active substances in foodstuffs of animal origin are established to assure high food safety standards. Each year, more than 20,000 samples are analysed for the presence of antibacterial residues in Danish pigs. This corresponds to 0.1% of the size of the slaughter pig population and more than 1% of the sows slaughtered. In this study, a Bayesian model was used to evaluate the accuracy of the Danish surveillance system and to investigate… increasing or maintaining the probability of detection. Hence, the antibacterial residue surveillance programme in Danish pigs would be more cost-effective than today.

  4. Bayesian methods for the physical sciences learning from examples in astronomy and physics

    CERN Document Server

    Andreon, Stefano

    2015-01-01

    Statistical literacy is critical for the modern researcher in Physics and Astronomy. This book empowers researchers in these disciplines by providing the tools they will need to analyze their own data. Chapters in this book provide a statistical base from which to approach new problems, including numerical advice and a profusion of examples. The examples are engaging analyses of real-world problems taken from modern astronomical research. The examples are intended to be starting points for readers as they learn to approach their own data and research questions. Acknowledging that scientific progress now hinges on the availability of data and the possibility to improve previous analyses, data and code are distributed throughout the book. The JAGS symbolic language used throughout the book makes it easy to perform Bayesian analysis and is particularly valuable as readers may use it in a myriad of scenarios through slight modifications.

  5. Development of partitioning method. Adsorption of cesium with mordenite in acidic media

    Energy Technology Data Exchange (ETDEWEB)

    Donnet, L.; Morita, Yasuji; Yamagishi, Isao; Kubota, Masumitsu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-10-01

    Adsorption of cesium with mordenite from an acidic solution, typically a 0.5 mol/L nitric acid solution, was studied to examine the possibility of designing a new separation scheme for partitioning of high-level liquid waste. Batch adsorption experiments showed that the three mordenites examined (natural mordenite and two synthetic mordenites, Zeolon 900Na and 900H) behave very similarly in terms of adsorption kinetics, saturation capacity from the Langmuir equation, distribution coefficient of Cs, and adsorption of other elements. In the Cs adsorption with the natural mordenite at 0.5 mol/L nitric acid, the distribution coefficient was 1150 ml/g and the saturation capacity was 0.64 mmol/g. In column adsorption of Cs using the natural mordenite, the flow rate of the Cs solution affected only the 5% breakthrough point and had no influence on the total Cs capacity. Column experiments with a mixed solution of Cs, Rb, Na, Ba, Sr, Cr, Ni, Ru, Rh and Pd showed that cesium was adsorbed very selectively; only about 4% of rubidium in molar ratio was retained in the column. The total quantity of Cs and Rb adsorbed was 0.51 mmol/g at 0.5 mol/L nitric acid. Elution of Cs (and Rb) with 4 mol/L nitric acid was performed on the column of loaded natural mordenite. The adsorbed Cs and Rb were well eluted, and a good mass balance was obtained between the quantity adsorbed according to the breakthrough curves and the quantity found in the eluate. (author)
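
    The saturation capacity quoted above comes from a Langmuir fit. A sketch of the linearized Langmuir regression, C/q = C/q_max + 1/(q_max*K_L), with invented batch data (the paper's raw isotherm data are not given in the abstract):

```python
import numpy as np

# Langmuir isotherm sketch: q = qmax*K*C/(1 + K*C). The linearized form
# C/q = C/qmax + 1/(qmax*K) recovers qmax and K from a straight-line fit.
C = np.array([0.05, 0.1, 0.3, 0.6, 1.0, 2.0])       # equilibrium Cs conc. (mmol/L)
q = np.array([0.22, 0.33, 0.48, 0.55, 0.59, 0.62])  # adsorbed Cs (mmol/g), invented

slope, intercept = np.polyfit(C, C / q, 1)
qmax = 1.0 / slope           # saturation capacity (mmol/g)
K = slope / intercept        # Langmuir constant (L/mmol)
print(f"qmax = {qmax:.2f} mmol/g, K = {K:.1f} L/mmol")
```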

  6. Carbon partitioning in photosynthesis.

    Science.gov (United States)

    Melis, Anastasios

    2013-06-01

    The work seeks to raise awareness of a fundamental problem that impacts the renewable generation of fuels and chemicals via (photo)synthetic biology. At issue is regulation of the endogenous cellular carbon partitioning between different biosynthetic pathways, over which the living cell exerts stringent control. The regulation of carbon partitioning in photosynthesis is not understood. In plants, microalgae and cyanobacteria, methods need to be devised to alter photosynthetic carbon partitioning between the sugar, terpenoid, and fatty acid biosynthetic pathways, to lower the prevalence of sugar biosynthesis and correspondingly upregulate terpenoid and fatty acid hydrocarbon production in the cell. Insight from unusual but naturally occurring carbon-partitioning processes can help in the design of blueprints for improved photosynthetic fuels and chemicals production.

  7. Bayesian Face Sketch Synthesis.

    Science.gov (United States)

    Wang, Nannan; Gao, Xinbo; Sun, Leiyu; Li, Jie

    2017-03-01

    Exemplar-based face sketch synthesis has been widely applied to both digital entertainment and law enforcement. In this paper, we propose a Bayesian framework for face sketch synthesis, which provides a systematic interpretation for understanding the common properties and intrinsic difference in different methods from the perspective of probabilistic graphical models. The proposed Bayesian framework consists of two parts: the neighbor selection model and the weight computation model. Within the proposed framework, we further propose a Bayesian face sketch synthesis method. The essential rationale behind the proposed Bayesian method is that we take the spatial neighboring constraint between adjacent image patches into consideration for both aforementioned models, while the state-of-the-art methods neglect the constraint either in the neighbor selection model or in the weight computation model. Extensive experiments on the Chinese University of Hong Kong face sketch database demonstrate that the proposed Bayesian method could achieve superior performance compared with the state-of-the-art methods in terms of both subjective perceptions and objective evaluations.

  8. A Bayesian Method for Short-Term Probabilistic Forecasting of Photovoltaic Generation in Smart Grid Operation and Control

    Directory of Open Access Journals (Sweden)

    Gabriella Ferruzzi

    2013-02-01

    A new short-term probabilistic forecasting method is proposed to predict the probability density function of the hourly active power generated by a photovoltaic system. First, the probability density function of the hourly clearness index is forecast using a Bayesian autoregressive time series model; the model takes into account the dependence of the solar radiation on meteorological variables such as cloud cover and humidity. Then, a Monte Carlo simulation procedure is used to evaluate the predictive probability density function of the hourly active power by applying the photovoltaic system model to random samples from the clearness index distribution. A numerical application demonstrates the effectiveness and advantages of the proposed forecasting method.
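
    The second stage is straightforward to sketch: draw clearness-index samples from the forecast distribution and push them through the PV system model. In the sketch below, a Beta distribution and a toy linear PV model stand in for the paper's Bayesian autoregressive forecast and full system model; all constants are illustrative.

```python
import numpy as np

# Monte Carlo sketch of the predictive power distribution for one hour.
rng = np.random.default_rng(42)

def pv_power(kt, extraterrestrial_wm2=900.0, area_m2=50.0, efficiency=0.16):
    """Toy PV model: power from global irradiance = kt * extraterrestrial."""
    return kt * extraterrestrial_wm2 * area_m2 * efficiency / 1000.0  # kW

kt_draws = rng.beta(a=6.0, b=3.0, size=20_000)   # stand-in clearness-index forecast
power_draws = pv_power(kt_draws)

quantiles = np.percentile(power_draws, [5, 50, 95])
print("5%/50%/95% predictive power (kW):", np.round(quantiles, 2))
```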

  9. Partitioning characteristics of gas channel of coal-rock mass in mining space and gas orientation method

    Institute of Scientific and Technical Information of China (English)

    Zhao Zhiqiang; Ma Nianjie; Jia Housheng; Cheng Yuanping

    2013-01-01

    In order to research the influence of the coal-rock mass morphology of the mining space on the flow law of gas, laboratory physical modelling and numerical computation methods were adopted to simulate coal mining activities. The simulation results indicate that, after coal seam mining, three bodies form from bottom to top in the mining space: a loose rock accumulation body from free caving, an ordered rock arrangement body of plate damage rich in longitudinal and transverse fractures, and a horizontal fissure body formed by rock mass deformation imbalance. These three types of accumulation bodies differ essentially in accumulation state, rock size and gas breakover characteristics. Accordingly, the coal-rock mass in the mining space is classified into a gas turbulence channel area, a gas transitional flow channel area and a gas seepage channel area. In the turbulence channel area, the gas is distributed transversely and longitudinally and diffuses by convection, with Reynolds number Re greater than 100; in the transitional flow channel area, one-way or two-way gas channels criss-cross and the gas is in a transitional flow regime with Re between 10 and 100; in the seepage channel area, there are a few vertical gas channels with Re less than 10. Further research on gas orientation methods in the different partitions was carried out: gas orientation methods of low-level pipe burying, middle-level interception and high-level extraction were determined, and an on-site industrial test was conducted, achieving effective diversion of gas and verifying the reasonableness of the gas channel partition.

  10. Bayesian statistics

    OpenAIRE

    新家, 健精

    2013-01-01

    Article outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.

  11. Classification using Bayesian neural nets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)

    1995-01-01

    Recently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach such as overfitting. However, an implementation of the full Bayesian approach to neura…

  12. [Determination of six main components in compound theophylline tablet by convolution curve method after prior separation by column partition chromatography].

    Science.gov (United States)

    Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)

    1993-01-01

    On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of the compound theophylline tablet were simultaneously eluted by chloroform and the three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined separately by a computer-aided convolution curve method. The corresponding average recoveries (%) and relative standard deviations of the six components were as follows: 101.6, 1.46% for caffeine; 99.7, 0.10% for phenacetin; 100.9, 1.31% for phenobarbitone; 100.2, 0.81% for theophylline; 99.9, 0.81% for theobromine; and 100.8, 0.48% for aminopyrine.

  13. Construction and Analysis of Multi-Rate Partitioned Runge-Kutta Methods

    Science.gov (United States)

    2012-06-01

    ABp: Adams-Bashforth method of order p; BDFp: backwards differentiation formula of order p. … Within the Adams family of multi-step methods, the two most commonly used are Adams-Bashforth of order p (ABp) and Adams-Moulton of order p (AMp); the text looks at ABp, as these methods are explicit in time, whereas the AMp are all implicit in time. The general formula for the Adams-Bashforth method…
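
    For reference, a minimal implementation of the explicit second-order Adams-Bashforth scheme mentioned in the excerpt; the Euler start-up step and the test problem are standard choices, not taken from the thesis.

```python
import numpy as np

def ab2(f, y0, t0, t1, n_steps):
    """Second-order Adams-Bashforth:
    y_{n+1} = y_n + h * (3/2 f(t_n, y_n) - 1/2 f(t_{n-1}, y_{n-1})),
    bootstrapped with one forward-Euler step."""
    h = (t1 - t0) / n_steps
    t, y = t0, np.asarray(y0, dtype=float)
    f_prev = f(t, y)
    y = y + h * f_prev          # Euler start-up step
    t += h
    for _ in range(n_steps - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev, t = f_curr, t + h
    return y

# usage: dy/dt = -y, exact solution exp(-1) ~ 0.3679
print(ab2(lambda t, y: -y, y0=1.0, t0=0.0, t1=1.0, n_steps=100))
```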

  14. 40 CFR 799.6755 - TSCA partition coefficient (n-octanol/water), shake flask method.

    Science.gov (United States)

    2010-07-01

    ... organometallic compounds. (4) Alternative methods. High-pressure liquid chromatography (HPLC) methods described... coefficient by high pressure liquid chromatography. Journal of Medicinal Chemistry 19:615 (1976). (6)...

  15. Applications of Bayesian Phylodynamic Methods in a Recent U.S. Porcine Reproductive and Respiratory Syndrome Virus Outbreak.

    Science.gov (United States)

    Alkhamis, Mohammad A; Perez, Andres M; Murtaugh, Michael P; Wang, Xiong; Morrison, Robert B

    2016-01-01

    Classical phylogenetic methods such as neighbor-joining or maximum likelihood trees, provide limited inferences about the evolution of important pathogens and ignore important evolutionary parameters and uncertainties, which in turn limits decision making related to surveillance, control, and prevention resources. Bayesian phylodynamic models have recently been used to test research hypotheses related to evolution of infectious agents. However, few studies have attempted to model the evolutionary dynamics of porcine reproductive and respiratory syndrome virus (PRRSV) and, to the authors' knowledge, no attempt has been made to use large volumes of routinely collected data, sometimes referred to as big data, in the context of animal disease surveillance. The objective of this study was to explore and discuss the applications of Bayesian phylodynamic methods for modeling the evolution and spread of a notable 1-7-4 RFLP-type PRRSV between 2014 and 2015. A convenience sample of 288 ORF5 sequences was collected from 5 swine production systems in the United States between September 2003 and March 2015. Using coalescence and discrete trait phylodynamic models, we were able to infer population growth and demographic history of the virus, identified the most likely ancestral system (root state posterior probability = 0.95) and revealed significant dispersal routes (Bayes factor > 6) of viral exchange among systems. Results indicate that currently circulating viruses are evolving rapidly, and show a higher level of relative genetic diversity over time, when compared to earlier relatives. Biological soundness of model results is supported by the finding that sow farms were responsible for PRRSV spread within the systems. Such results cannot be obtained by traditional phylogenetic methods, and therefore, our results provide a methodological framework for molecular epidemiological modeling of new PRRSV outbreaks and demonstrate the prospects of phylodynamic models to inform

  16. Applications of Bayesian Phylodynamic Methods in a Recent U.S. Porcine Reproductive and Respiratory Syndrome Virus Outbreak

    Directory of Open Access Journals (Sweden)

    Mohammad A. Alkhamis

    2016-02-01

    Classical phylogenetic methods, such as neighbor-joining or maximum likelihood trees, provide limited inferences about the evolution of important pathogens and ignore important evolutionary parameters and uncertainties, which in turn limits decision making related to surveillance, control and prevention resources. Bayesian phylodynamic models have recently been used to test research hypotheses related to the evolution of infectious agents. However, few studies have attempted to model the evolutionary dynamics of porcine reproductive and respiratory syndrome virus (PRRSV) and, to the authors' knowledge, no attempt has been made to use large volumes of routinely collected data, sometimes referred to as big data, in the context of animal disease surveillance. The objective of this study was to explore and discuss the applications of Bayesian phylodynamic methods for modeling the evolution and spread of a notable 1-7-4 RFLP-type PRRSV between 2014 and 2015. A convenience sample of 288 ORF5 sequences was collected from 5 swine production systems in the United States between September 2003 and March 2015. Using coalescence and discrete trait phylodynamic models, we were able to infer population growth and demographic history of the virus, identify the most likely ancestral system (root state posterior probability = 0.95) and reveal significant dispersal routes (Bayes factor > 6) of viral exchange among systems. Results indicate that currently circulating viruses are evolving rapidly, and show a higher level of relative genetic diversity over time, when compared to earlier relatives. The biological soundness of the model results is supported by the finding that sow farms were responsible for PRRSV spread within the systems. Such results cannot be obtained by traditional phylogenetic methods, and therefore our results provide a methodological framework for molecular epidemiological modeling of new PRRSV outbreaks and demonstrate the prospects of phylodynamic…

  17. Inferring genetic architecture of complex traits using Bayesian integrative analysis of genome and transcriptiome data

    DEFF Research Database (Denmark)

    Ehsani, Alireza; Sørensen, Peter; Pomp, Daniel;

    2012-01-01

    Background: To understand the genetic architecture of complex traits and bridge the genotype-phenotype gap, it is useful to study intermediate -omics data, e.g. the transcriptome. The present study introduces a method for simultaneous quantification of the contributions from single nucleotide polymorphisms (SNPs) and transcript abundances in explaining phenotypic variance, using Bayesian whole-omics models. Bayesian mixed models and variable selection models were used and, based on parameter samples from the model posterior distributions, explained variances were further partitioned at the level… the -modal distribution of genomic values collapses when gene expressions are added to the model. Conclusions: With increased availability of various -omics data, integrative approaches are promising tools for understanding the genetic architecture of complex traits. Partitioning of explained variances at the chromosome…

  18. A Bayesian Calibration-Prediction Method for Reducing Model-Form Uncertainties with Application in RANS Simulations

    CERN Document Server

    Wu, J -L; Xiao, H

    2015-01-01

    Model-form uncertainties in complex mechanics systems are a major obstacle for predictive simulations. Reducing these uncertainties is critical for stake-holders to make risk-informed decisions based on numerical simulations. For example, Reynolds-Averaged Navier-Stokes (RANS) simulations are increasingly used in mission-critical systems involving turbulent flows. However, for many practical flows the RANS predictions have large model-form uncertainties originating from the uncertainty in the modeled Reynolds stresses. Recently, a physics-informed Bayesian framework has been proposed to quantify and reduce model-form uncertainties in RANS simulations by utilizing sparse observation data. However, in the design stage of engineering systems, measurement data are usually not available. In the present work we extend the original framework to scenarios where there are no available data on the flow to be predicted. In the proposed method, we first calibrate the model discrepancy on a related flow with available dat...

  19. Efficient FM Algorithm for VLSI Circuit Partitioning

    Directory of Open Access Journals (Sweden)

    M.RAJESH

    2013-04-01

    In the FM algorithm, the initial partitioning matrix of the given circuit is assigned randomly; as a result, a larger circuit with a hundred or more nodes will take a long time to arrive at the final partition. If the initial partitioning matrix is close to the final partitioning, then the computation time (iterations required) is small. Here we propose a novel approach to arrive at the initial partitioning by using a spectral factorization method; the results were verified using several circuits.
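
    A common way to realize such a spectral initial partition is to split vertices by the sign of the Fiedler vector of the graph Laplacian; whether this matches the authors' exact spectral factorization is not stated in the abstract, so the sketch below shows the classic form.

```python
import numpy as np

def spectral_bisection(adj):
    """Spectral sketch for an initial two-way partition: split vertices by
    the sign of the Fiedler vector (eigenvector of the second-smallest
    eigenvalue of the graph Laplacian L = D - A)."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    fiedler = eigvecs[:, 1]
    return fiedler >= 0                            # boolean side per vertex

# usage: two triangles joined by one edge split cleanly into the two triangles
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
print(spectral_bisection(adj))
```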

  20. "K"-Balance Partitioning: An Exact Method with Applications to Generalized Structural Balance and Other Psychological Contexts

    Science.gov (United States)

    Brusco, Michael; Steinley, Douglas

    2010-01-01

    Structural balance theory (SBT) has maintained a venerable status in the psychological literature for more than 5 decades. One important problem pertaining to SBT is the approximation of structural or generalized balance via the partitioning of the vertices of a signed graph into "K" clusters. This "K"-balance partitioning problem also has more…

  1. A Bayesian Target Predictor Method based on Molecular Pairing Energies estimation.

    Science.gov (United States)

    Oliver, Antoni; Canals, Vincent; Rosselló, Josep L

    2017-03-06

    Virtual screening (VS) is applied in the early drug discovery phases for the quick inspection of huge molecular databases to identify those compounds that most likely bind to a given drug target. In this context, compact molecular models are needed for database screening and precise target prediction in reasonable times. In this work we present a new compact energy-based model that is tested for its application to virtual screening and target prediction. The model can be used to quickly identify active compounds in huge databases based on the estimation of the molecule's pairing energies. The largest molecular polar regions, along with their geometrical distribution, are considered by using a short set of smart energy vectors. The model is tested using similarity searches within the Directory of Useful Decoys (DUD) database, and the results obtained are considerably better than previously published models. As a target prediction methodology, we propose the use of a Bayesian classifier that uses a combination of different active compounds to build an energy-dependent probability distribution function for each target.
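
    As an illustration of the target-prediction step, the sketch below fits a simple per-target Gaussian model over compact energy descriptors of known actives and ranks targets by log-likelihood for a query compound. The Gaussian naive-Bayes form and all descriptor values are assumptions; the paper's energy-dependent probability distribution function is more elaborate.

```python
import numpy as np

# Hypothetical sketch: each compound is a short vector of pairing-energy
# descriptors; each target gets an independent-Gaussian model over its actives.
def fit_target_models(actives_by_target):
    """actives_by_target: dict target -> (n_actives, d) descriptor array."""
    return {t: (X.mean(axis=0), X.var(axis=0) + 1e-6)   # variance floor
            for t, X in actives_by_target.items()}

def predict_target(query, models):
    """Rank targets by log-likelihood (equals the log-posterior ranking
    under a uniform prior over targets)."""
    def loglike(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (query - mu) ** 2 / var)
    scores = {t: loglike(mu, var) for t, (mu, var) in models.items()}
    return max(scores, key=scores.get), scores

# usage with invented descriptor vectors
rng = np.random.default_rng(7)
models = fit_target_models({
    "kinase_A":   rng.normal(-5.0, 1.0, size=(30, 4)),
    "protease_B": rng.normal(-2.0, 1.5, size=(25, 4)),
})
best, _ = predict_target(rng.normal(-4.8, 1.0, size=4), models)
print(best)  # expected: kinase_A
```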

  2. Bayesian Methods for Reconstructing Sunspot Numbers Before and During the Maunder Minimum

    Science.gov (United States)

    Travaglini, Guido

    2017-01-01

    The Maunder Minimum (MM) was an extended period of reduced solar activity in terms of yearly sunspot numbers (SSN) during 1610 - 1715. The reality of this "grand minimum" is generally accepted in the scientific community, but the statistics of the SSN record suggest a need for data reconstruction. The MM data show a nonstandard distribution compared with the entire SSN signal (1610 - 2014). The pattern does not satisfy the weakly stationary solar dynamo approximation, which characterizes many natural events spanning centuries or even millennia, including the Sun and the stars. Over the entire observation period (1610 - 2014), the reported SSN exhibits statistically significant regime switches, departures from autoregressive stationarity, and growing trends. Reconstruction of the SSN during the pre-MM and MM periods is performed using five novel statistical procedures in support of signal analysis. A Bayesian-Monte Carlo backcast technique is found to be most reliable and produces an SSN signal that meets the weak-stationarity requirement. The computed MM signal for this reconstruction does not show a "grand" minimum or even a "semi-grand" minimum.

  3. Uncertainty in Ecohydrological Modeling in an Arid Region Determined with Bayesian Methods.

    Directory of Open Access Journals (Sweden)

    Junjun Yang

    In arid regions, water resources are a key forcing factor in ecosystem circulation, and soil moisture is the critical link that constrains plant and animal life on the soil surface and underground. Simulation of soil moisture in arid ecosystems is inherently difficult due to high variability. We assessed the applicability of the process-oriented CoupModel for forecasting soil water relations in arid regions, using vertical soil moisture profiling for model calibration. We determined that model-structural uncertainty constituted the largest error; the model did not capture the extremes of low soil moisture in the desert-oasis ecotone (DOE), particularly below 40 cm soil depth. Our results showed that total uncertainty in soil moisture prediction was improved when input and output data, parameter value array, and structure errors were characterized explicitly. Bayesian analysis was applied with prior information to reduce uncertainty. The need to provide independent descriptions of uncertainty analysis (UA) in the input and output data was demonstrated. Application of soil moisture simulation in arid regions will be useful for dune-stabilization and revegetation efforts in the DOE.

  4. Using Bayesian network and AHP method as a marketing approach tools in defining tourists’ preferences

    Directory of Open Access Journals (Sweden)

    Nataša Papić-Blagojević

    2012-04-01

    A marketing approach is associated with market conditions and with achieving long-term profitability by satisfying consumers' needs. In tourism, this approach need not relate only to promoting a tourist destination; it also concerns the relationship between a travel agency and its clients, in the sense that travel agencies adjust their offers to their clients' needs. It is therefore important to analyze the behavior of tourists in earlier periods with consideration of their preferences. Using a Bayesian network, the connections between tourists with similar tastes, and the relationships between them, can be displayed graphically. On the other hand, the analytic hierarchy process (AHP) is used to rank tourist attractions, also relying on past experience. In this paper we examine possible applications of these two models in tourism in Serbia. The example is hypothetical, but it will serve as a base for future research. Three types of tourism are chosen as representative for Vojvodina: cultural, rural and business tourism, because they are the bright spots of tourism development in this area. Applied to these forms, the analytic hierarchy process has shown its strength in predicting tourists' preferences.
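
    The AHP step can be sketched compactly: given a pairwise-comparison matrix over the three tourism types, the priority weights are the normalized principal eigenvector, and the consistency ratio checks the judgments. The comparison values below are invented for illustration, not taken from the paper.

```python
import numpy as np

# AHP sketch: 3x3 pairwise comparisons among Cultural, Rural, Business tourism.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # Saaty random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```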

  5. Bayesian Network Assessment Method for Civil Aviation Safety Based on Flight Delays

    Directory of Open Access Journals (Sweden)

    Huawei Wang

    2013-01-01

    Flight delays and safety are the principal contradiction in the sound development of civil aviation: flight delays occur frequently and simultaneously induce civil aviation safety risk. Based on flight delays, the random characteristics of civil aviation safety risk are analyzed, and flight delay is treated as a potential safety hazard. The change rules and characteristics of civil aviation safety risk based on flight delays are analyzed. Bayesian networks (BN) are used to build an aviation operation safety assessment model based on flight delay, and structure and parameter learning of the model are investigated. Using the BN model, an airline in China was selected and its civil aviation safety risk assessed with the GeNIe software. The research results show that flight delay, which increases the safety risk of civil aviation, can be seen as incremental safety risk. The effectiveness and correctness of the model have been tested and verified.

  6. A Bayesian Target Predictor Method based on Molecular Pairing Energies estimation

    Science.gov (United States)

    Oliver, Antoni; Canals, Vincent; Rosselló, Josep L.

    2017-03-01

    Virtual screening (VS) is applied in the early drug discovery phases for the quick inspection of huge molecular databases to identify those compounds that most likely bind to a given drug target. In this context, compact molecular models are needed for database screening and precise target prediction in reasonable times. In this work we present a new compact energy-based model that is tested for its application to virtual screening and target prediction. The model can be used to quickly identify active compounds in huge databases based on the estimation of the molecule's pairing energies. The largest molecular polar regions, along with their geometrical distribution, are considered by using a short set of smart energy vectors. The model is tested using similarity searches within the Directory of Useful Decoys (DUD) database, and the results obtained are considerably better than previously published models. As a target prediction methodology, we propose the use of a Bayesian classifier that uses a combination of different active compounds to build an energy-dependent probability distribution function for each target.

  7. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study

    Science.gov (United States)

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang

    2016-01-01

    Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
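
    The averaging step of BMA can be sketched with the common approximation in which posterior model probabilities are proportional to exp(-BIC/2) under equal model priors; the effect estimates, standard errors and BIC values below are invented, not the study's.

```python
import numpy as np

# BMA sketch: average an effect estimate across candidate models with
# weights proportional to exp(-BIC/2). All numbers are illustrative.
models = {
    "PM10 only":       {"beta": 1.39, "se": 1.28, "bic": 5102.4},
    "PM10 + NOx":      {"beta": 1.83, "se": 1.52, "bic": 5100.1},
    "PM10 + NOx + CO": {"beta": 1.61, "se": 1.47, "bic": 5101.3},
}

bics = np.array([m["bic"] for m in models.values()])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                   # posterior model probabilities

beta = np.array([m["beta"] for m in models.values()])
se = np.array([m["se"] for m in models.values()])
beta_bma = np.sum(w * beta)
# total variance = within-model variance + between-model variance
var_bma = np.sum(w * (se ** 2 + (beta - beta_bma) ** 2))
print(f"BMA effect = {beta_bma:.2f} (SE {np.sqrt(var_bma):.2f}), "
      f"weights = {np.round(w, 2)}")
```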

  8. The Partition of Unity Finite Element Method for the simulation of waves in air and poroelastic media.

    Science.gov (United States)

    Chazot, Jean-Daniel; Perrey-Debain, Emmanuel; Nennig, Benoit

    2014-02-01

    Recently Chazot et al. [J. Sound Vib. 332, 1918-1929 (2013)] applied the Partition of Unity Finite Element Method for the analysis of interior sound fields with absorbing materials. The method was shown to allow a substantial reduction of the number of degrees of freedom compared to the standard Finite Element Method. The work is however restricted to a certain class of absorbing materials that react like an equivalent fluid. This paper presents an extension of the method to the numerical simulation of Biot's waves in poroelastic materials. The technique relies mainly on expanding the elastic displacement as well as the fluid phase pressure using sets of plane waves which are solutions to the governing partial differential equations. To show the interest of the method for tackling problems of practical interests, poroelastic-acoustic coupling conditions as well as fixed or sliding edge conditions are presented and numerically tested. It is shown that the technique is a good candidate for solving noise control problems at medium and high frequency.

  9. A novel method for measuring the diffusion, partition and convective mass transfer coefficients of formaldehyde and VOC in building materials.

    Directory of Open Access Journals (Sweden)

    Jianyin Xiong

    The diffusion coefficient (D_m) and the material/air partition coefficient (K) are two key parameters characterizing the sorption behavior of formaldehyde and volatile organic compounds (VOC) in building materials. By virtue of the sorption process in an airtight chamber, this paper proposes a novel method to measure the two key parameters, as well as the convective mass transfer coefficient (h_m). Compared to traditional methods, it has the following merits: (1) K, D_m and h_m can be obtained simultaneously, so the method is convenient to use; (2) it is time-saving, as just one sorption process in an airtight chamber is required; (3) the determination of h_m is based on the formaldehyde and VOC concentration data in the test chamber rather than the generally used empirical correlations obtained from the heat and mass transfer analogy, and thus it is more accurate and can be regarded as a significant improvement. The present method is applied to measure the three parameters by treating experimental data from the literature, and good results are obtained, which validates the effectiveness of the method. Our new method also provides a potential pathway for measuring h_m of semi-volatile organic compounds (SVOC) by using that of VOC.

  10. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    The Bayesian approach is becoming increasingly popular in the astrophysics data analysis community. However, the Pakistani statistics community is largely unaware of this fertile interaction between the two disciplines. Bayesian methods have been used to address astronomical problems since the very birth of Bayes' probability in the eighteenth century. Today, Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds and strong promise for realistic applications. This article aims to introduce the Pakistani statistics community to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration through a couple of simplified examples.

  11. A pseudo-statistical approach to treat choice uncertainty : the example of partitioning allocation methods

    NARCIS (Netherlands)

    Mendoza, Beltran M.A.; Heijungs, R.; Guinée, J.B.; Tukker, A.

    2016-01-01

    Purpose: Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA), such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagates this source of uncertainty for all relevant processes s…

  12. Bayesian reconstruction of P(r) directly from two-dimensional detector images via a Markov chain Monte Carlo method.

    Science.gov (United States)

    Paul, Sudeshna; Friedman, Alan M; Bailey-Kellogg, Chris; Craig, Bruce A

    2013-04-01

    The interatomic distance distribution, P(r), is a valuable tool for evaluating the structure of a molecule in solution and represents the maximum structural information that can be derived from solution scattering data without further assumptions. Most current instrumentation for scattering experiments (typically CCD detectors) generates a finely pixelated two-dimensional image. In continuation of the standard practice with earlier one-dimensional detectors, these images are typically reduced to a one-dimensional profile of scattering intensities, I(q), by circular averaging of the two-dimensional image. Indirect Fourier transformation methods are then used to reconstruct P(r) from I(q). Substantial advantages in data analysis, however, could be achieved by directly estimating the P(r) curve from the two-dimensional images. This article describes a Bayesian framework, using a Markov chain Monte Carlo method, for estimating the parameters of the indirect transform, and thus P(r), directly from the two-dimensional images. Using simulated detector images, it is demonstrated that this method yields P(r) curves nearly identical to the reference P(r). Furthermore, an approach for evaluating spatially correlated errors (such as those that arise from a detector point spread function) is evaluated. Accounting for these errors further improves the precision of the P(r) estimation. Experimental scattering data, where no ground truth reference P(r) is available, are used to demonstrate that this method yields a scattering and detector model that more closely reflects the two-dimensional data, as judged by smaller residuals in cross-validation, than P(r) obtained by indirect transformation of a one-dimensional profile. Finally, the method allows concurrent estimation of the beam center and Dmax, the longest interatomic distance in P(r), as part of the Bayesian Markov chain Monte Carlo method, reducing experimental effort and providing a well defined protocol for these

  13. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Full Text Available Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers' route choice preferences and decomposes the urban road network effectively into multiple layers. Utilizing generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we compare the proposed path search method with current state-of-the-art optimal path algorithms. The results demonstrate that the proposed method increases road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers. The method therefore represents a substantial advance in urban road network route guidance.

  14. STUDY ON AUDIO INFORMATION HIDING METHOD BASED ON MODIFIED PHASE PARTITION

    Institute of Scientific and Technical Information of China (English)

    Tong Ming; Hao Chongyang; Liu Xiaojun; Chen Yanpu

    2005-01-01

    The hiding efficiency of traditional audio information hiding methods is low because perceptual similarity cannot be guaranteed. A new audio information hiding method is proposed in this letter which exploits the auditory system's insensitivity to audio phase and hides information by modifying the local phase within the limits of auditory perception. The algorithm introduces "set 1" and "set 0" operations for each phase vector, so that every modified phase lies on the boundary of a phase region: a phase on a "1" boundary results from a set-1 operation, and a phase on a "0" boundary results from a set-0 operation. The results show that, compared with legacy methods, the proposed method offers better auditory similarity, larger information embedding capacity, and a lower code error rate. As a blind detection method, it is suitable for application scenarios without channel interference.

  15. Evaluation of the antibacterial residue surveillance programme in Danish pigs using Bayesian methods.

    Science.gov (United States)

    Baptista, F M; Alban, L; Olsen, A M; Petersen, J V; Toft, N

    2012-10-01

    Residues of pharmacologically active substances or their metabolites may be found in food products from food-producing animals. Maximum Residue Limits for pharmacologically active substances in foodstuffs of animal origin are established to assure high food safety standards. Each year, more than 20,000 samples are analysed for the presence of antibacterial residues in Danish pigs. This corresponds to 0.1% of the slaughter pig population and more than 1% of the sows slaughtered. In this study, a Bayesian model was used to evaluate the accuracy of the Danish surveillance system and to investigate the impact of a potential risk-based sampling approach to the residue surveillance programme in Danish slaughter pigs. Danish surveillance data from 2005 to 2009 and limited knowledge about the true prevalence and test sensitivity and specificity were included in the model. According to the model, the true antibacterial residue prevalence in Danish pigs is very low in both sows (~0.20%) and slaughter pigs (~0.01%). Despite data constraints, the results suggest that the current screening test used in Denmark has high sensitivity (85-99%) and very high specificity (>99%) for the most relevant antibacterial classes used in Danish pigs. If high-risk slaughter pigs could be identified by taking into account antibacterial use or meat inspection risk factors, a risk-based sampling approach to antibacterial residue surveillance in slaughter pigs would allow the sample size to be reduced substantially while the probability of detection is maintained or increased. Hence, the antibacterial residue surveillance programme in Danish pigs would be more cost-effective than today.

  16. Application of a Bayesian method to data-poor stock assessment by using Indian Ocean albacore (Thunnus alalunga) stock assessment as an example

    Institute of Scientific and Technical Information of China (English)

    GUAN Wenjiang; TANG Lin; ZHU Jiangfeng; TIAN Siquan; XU Liuxiong

    2016-01-01

    It is widely recognized that assessing the status of data-poor fish stocks is challenging and that Bayesian analysis is one method that can improve the reliability of stock assessments in data-poor situations by borrowing strength from prior information deduced from species with good-quality data or other known information. Because considerable uncertainty remains in the stock assessment of albacore tuna (Thunnus alalunga) in the Indian Ocean due to the limited and low-quality data, we investigate the advantages of a Bayesian method in data-poor stock assessment by using the Indian Ocean albacore stock assessment as an example. Eight Bayesian biomass dynamics models with different prior assumptions and catch data series were developed to assess the stock. The results show that (1) the rationality of the choice of catch data series and of the parameter assumptions can be enhanced by analyzing the posterior distribution of the parameters; and (2) the reliability of the stock assessment can be improved by using demographic methods to construct a prior for the intrinsic rate of increase (r). Because the Bayesian framework allows more information to be brought to bear in data-poor situations than traditional statistical methods, through informative priors and analysis of the posterior distribution, improving both the rationality of parameter estimation and the reliability of the assessment, we suggest the Bayesian method as an alternative for data-poor stock assessments, such as that of Indian Ocean albacore.

  17. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
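
    The core computation here is standard Gaussian-process regression applied to the multiline model v = W z + noise, where W is the line-pattern matrix and z the common LSD profile. A minimal numpy sketch under that reading (the toy weights matrix, kernel amplitude, length scale, and noise level below are invented for illustration, not taken from the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        n_obs, n_vel = 400, 40
        vel = np.linspace(-20.0, 20.0, n_vel)        # velocity grid of the LSD profile

        # Toy line-pattern matrix W: each spectral pixel samples one velocity bin
        # of the common profile with a random line weight (purely illustrative).
        W = np.zeros((n_obs, n_vel))
        W[np.arange(n_obs), rng.integers(0, n_vel, n_obs)] = rng.uniform(0.2, 1.0, n_obs)

        true_profile = 0.1 * np.exp(-0.5 * (vel / 4.0) ** 2)
        sigma = 0.02                                  # per-pixel noise (assumed)
        v_obs = W @ true_profile + sigma * rng.normal(size=n_obs)

        # Squared-exponential GP prior on the profile (hyperparameters assumed).
        amp, ell = 0.1, 3.0
        K = amp**2 * np.exp(-0.5 * (vel[:, None] - vel[None, :]) ** 2 / ell**2)

        # GP posterior mean and covariance of the LSD profile given the spectrum.
        S = W @ K @ W.T + sigma**2 * np.eye(n_obs)
        post_mean = K @ W.T @ np.linalg.solve(S, v_obs)
        post_cov = K - K @ W.T @ np.linalg.solve(S, W @ K)
        err_bars = np.sqrt(np.diag(post_cov))         # per-velocity-bin uncertainty

    The diagonal of the posterior covariance supplies the per-bin uncertainty the abstract refers to; the linear-algebra identities mentioned there serve to avoid forming the large matrix S explicitly.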

  18. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  19. Bayesian Visual Odometry

    Science.gov (United States)

    Center, Julian L.; Knuth, Kevin H.

    2011-03-01

    Visual odometry refers to tracking the motion of a body using an onboard vision system. Practical visual odometry systems combine the complementary accuracy characteristics of vision and inertial measurement units. The Mars Exploration Rovers, Spirit and Opportunity, used this type of visual odometry. The visual odometry algorithms in Spirit and Opportunity were based on Bayesian methods, but a number of simplifying approximations were needed to deal with onboard computer limitations. Furthermore, the allowable motion of the rover had to be severely limited so that computations could keep up. Recent advances in computer technology make it feasible to implement a fully Bayesian approach to visual odometry. This approach combines dense stereo vision, dense optical flow, and inertial measurements. As with all true Bayesian methods, it also determines error bars for all estimates. This approach also offers the possibility of using Micro-Electro Mechanical Systems (MEMS) inertial components, which are more economical, weigh less, and consume less power than conventional inertial components.

  20. Partitioned semi-implicit methods for simulation of biomechanical fluid-structure interaction problems

    Science.gov (United States)

    Naseri, A.; Lehmkuhl, O.; Gonzalez, I.; Oliva, A.

    2016-09-01

    This paper presents a numerical simulation of a fluid-structure interaction (FSI) system involving an incompressible viscous fluid and a lightweight elastic structure. We follow a semi-implicit approach in which the added-mass term (pressure stress) of the fluid is coupled implicitly to the structure, while the other terms are coupled explicitly. This significantly reduces the computational cost of the simulations while showing adequate stability. Several coupling schemes are tested, including the fixed-point method with different static and dynamic relaxation strategies, as well as a Newton-Krylov method with an approximated Jacobian. Numerical tests are conducted in the context of a biomechanical problem. Results indicate that the Newton-Krylov solver outperforms the fixed-point ones while introducing more complexity due to the evaluation of the Jacobian. The fixed-point solver with Aitken's relaxation method also proved to be a simple, yet efficient method for FSI simulations; a sketch of that scheme follows.
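
    Aitken's dynamic relaxation, singled out above as simple yet efficient, adapts the relaxation factor from successive interface residuals. A generic sketch, with toy linear operators standing in for the real fluid and structure solvers (not the paper's implementation):

        import numpy as np

        def coupled_fsi(fluid, solid, u0, omega0=0.5, tol=1e-8, max_it=50):
            """Fixed-point interface iteration with Aitken's dynamic relaxation.
            'fluid' maps an interface displacement to a load; 'solid' maps the
            load back to a displacement (hypothetical stand-ins for field solvers)."""
            u, omega, r_old = u0.astype(float), omega0, None
            for k in range(max_it):
                r = solid(fluid(u)) - u              # interface residual
                if np.linalg.norm(r) < tol:
                    return u, k
                if r_old is not None:                # Aitken update of the factor
                    dr = r - r_old
                    omega = -omega * (r_old @ dr) / (dr @ dr)
                u, r_old = u + omega * r, r
            return u, max_it

        # Toy linear stand-ins for the two field solvers.
        A = np.array([[0.5, 0.2], [0.1, 0.6]])
        fluid = lambda u: A @ u + 1.0
        solid = lambda f: 0.8 * f
        u, iters = coupled_fsi(fluid, solid, np.zeros(2))
        print(u, "converged in", iters, "iterations")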

  1. Bayesian Theory

    CERN Document Server

    Bernardo, Jose M

    2000-01-01

    This highly acclaimed text, now available in paperback, provides a thorough account of key concepts and theoretical results, with particular emphasis on viewing statistical inference as a special case of decision theory. Information-theoretic concepts play a central role in the development of the theory, which provides, in particular, a detailed discussion of the problem of specification of so-called "prior ignorance". The work is written from the authors' committed Bayesian perspective, but an overview of non-Bayesian theories is also provided, and each chapter contains a wide-ranging critica

  2. Towards an SDP-based Approach to Spectral Methods: A Nearly-Linear-Time Algorithm for Graph Partitioning and Decomposition

    CERN Document Server

    Orecchia, Lorenzo

    2010-01-01

    In this paper, we consider the following graph partitioning problem: The input is an undirected graph $G=(V,E)$, a balance parameter $b \in (0,1/2]$ and a target conductance value $\gamma \in (0,1)$. The output is a cut which, if non-empty, is of conductance at most $O(f)$, for some function $f(G, \gamma)$, and which is either balanced or well correlated with all cuts of conductance at most $\gamma$. Spielman and Teng gave an $\tilde{O}(|E|/\gamma^{2})$-time algorithm for $f= \sqrt{\gamma \log^{3}|V|}$ and used it to decompose graphs into a collection of near-expanders. We present a new spectral algorithm for this problem which runs in time $\tilde{O}(|E|/\gamma)$ for $f=\sqrt{\gamma}$. Our result yields the first nearly-linear time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite-programming (SDP) approach. We first conside...
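
    For orientation, the classical spectral approach that this algorithm improves upon splits a graph by the sign pattern of the Laplacian's second eigenvector and scores the cut by its conductance. A toy numpy sketch of that baseline (dense eigendecomposition, so nowhere near the paper's nearly-linear running time):

        import numpy as np

        def spectral_bisection(adj):
            """Split a graph by the sign of the Fiedler vector; report conductance."""
            deg = adj.sum(axis=1)
            lap = np.diag(deg) - adj                 # combinatorial Laplacian
            _, vecs = np.linalg.eigh(lap)            # eigenvalues in ascending order
            side = vecs[:, 1] >= 0                   # Fiedler vector sign pattern
            cut = adj[side][:, ~side].sum()          # weight of edges crossing the cut
            return side, cut / min(deg[side].sum(), deg[~side].sum())

        # Two 4-cliques joined by a single bridge edge.
        A = np.zeros((8, 8))
        for block in (range(0, 4), range(4, 8)):
            for i in block:
                for j in block:
                    if i != j:
                        A[i, j] = 1.0
        A[3, 4] = A[4, 3] = 1.0
        side, phi = spectral_bisection(A)
        print(side, phi)                             # recovers the cliques, phi = 1/13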

  3. K-balance partitioning: an exact method with applications to generalized structural balance and other psychological contexts.

    Science.gov (United States)

    Brusco, Michael; Steinley, Douglas

    2010-06-01

    Structural balance theory (SBT) has maintained a venerable status in the psychological literature for more than 5 decades. One important problem pertaining to SBT is the approximation of structural or generalized balance via the partitioning of the vertices of a signed graph into K clusters. This K-balance partitioning problem also has more general psychological applications associated with the analysis of similarity/dissimilarity relationships among stimuli. Accordingly, K-balance partitioning can be gainfully used in a wide variety of SBT applications, such as attraction and child development, evaluation of group membership, marketing and consumer issues, and other psychological contexts not necessarily related to SBT. We present a branch-and-bound algorithm for the K-balance partitioning problem. This new algorithm is applied to 2 synthetic numerical examples as well as to several real-world data sets from the behavioral sciences literature.
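
    The objective being optimized can be stated compactly: count the positive edges between clusters plus the negative edges within clusters. The sketch below evaluates it by brute force over all K-labelings as a reference point; the paper's branch-and-bound prunes this exponential search space to reach exact solutions on realistic sizes:

        import numpy as np
        from itertools import product

        def imbalance(signs, labels):
            """Violations of generalized balance: negative edges within clusters
            plus positive edges between clusters (signs: symmetric +1/-1/0 matrix)."""
            n = len(labels)
            return sum(1 for i in range(n) for j in range(i + 1, n)
                       if (signs[i, j] > 0 and labels[i] != labels[j])
                       or (signs[i, j] < 0 and labels[i] == labels[j]))

        def best_k_partition(signs, k):
            """Exhaustive search over label vectors -- exponential, toy sizes only."""
            labelings = product(range(k), repeat=signs.shape[0])
            best = min(labelings, key=lambda lab: imbalance(signs, lab))
            return best, imbalance(signs, best)

        S = np.array([[ 0,  1, -1, -1],
                      [ 1,  0, -1, -1],
                      [-1, -1,  0,  1],
                      [-1, -1,  1,  0]])
        print(best_k_partition(S, 2))                # {0,1} vs {2,3}, zero violations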

  4. Parallel implementation of electronic structure eigensolver using a partitioned folded spectrum method

    CERN Document Server

    Briggs, E L; Bernholc, J

    2015-01-01

    A parallel implementation of an eigensolver designed for electronic structure calculations is presented. The method is applicable to computational tasks that solve a sequence of eigenvalue problems where the solution for a particular iteration is similar but not identical to the solution from the previous iteration. Such problems occur frequently when performing electronic structure calculations in which the eigenvectors are solutions to the Kohn-Sham equations. The eigenvectors are represented in some type of basis but the problem sizes are normally too large for direct diagonalization in that basis. Instead a subspace diagonalization procedure is employed in which matrix elements of the Hamiltonian operator are generated and the eigenvalues and eigenvectors of the resulting reduced matrix are obtained using a standard eigensolver from a package such as LAPACK or SCALAPACK. While this method works well and is widely used, the standard eigensolvers scale poorly on massively parallel computer systems for the m...

  5. Distributed Memory Compiler Methods for Irregular Problems - Data Copy Reuse and Runtime Partitioning

    Science.gov (United States)

    1991-09-01

    In addition, support for Saltz was provided by NSF Grant ASC-8819374. 1. Introduction. Over the past few years, we have developed methods needed to...

  6. A Bayesian Network Method for Quantitative Evaluation of Defects in Multilayered Structures from Eddy Current NDT Signals

    Directory of Open Access Journals (Sweden)

    Bo Ye

    2014-01-01

    Full Text Available Accurate evaluation and characterization of defects in multilayered structures from eddy current nondestructive testing (NDT) signals is a difficult inverse problem. There is scope for improving the current methods used for solving the inverse problem by incorporating information about uncertainty in the inspection process. Here, we propose to evaluate defects quantitatively from eddy current NDT signals using Bayesian networks (BNs). BNs are a useful method for handling uncertainty in the inspection process, ultimately leading to more accurate results. The domain knowledge and the experimental data are used to generate the BN models. The models are applied to predict the signals corresponding to different defect characteristic parameters, or to estimate defect characteristic parameters from eddy current signals in real time. Finally, the estimation results are analyzed. Compared to the least squares regression method, BNs are more robust, with higher accuracy, and have the advantage of being a bidirectional inferential mechanism. This approach allows results to be obtained in the form of full marginal conditional probability distributions, providing more information on the defect. The feasibility of the BN approach presented and discussed in this paper has been validated.
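
    At its core, the bidirectional inference mentioned above is Bayes' rule on a network of conditional probability tables. A deliberately tiny discrete sketch (the depth states, amplitude bins, and all probabilities below are invented, not the paper's learned model):

        import numpy as np

        # Hypothetical discretization: three defect depths, three amplitude bins.
        depths = ["shallow", "medium", "deep"]
        prior = np.array([0.5, 0.3, 0.2])            # P(depth), assumed

        # P(amplitude bin | depth); rows = depth, cols = low/mid/high amplitude.
        likelihood = np.array([[0.7, 0.2, 0.1],
                               [0.2, 0.6, 0.2],
                               [0.1, 0.2, 0.7]])

        def posterior(amp_bin):
            """Backward inference: P(depth | observed amplitude) via Bayes' rule."""
            joint = prior * likelihood[:, amp_bin]
            return joint / joint.sum()

        def predict(depth_idx):
            """Forward inference: expected signal distribution for a given depth."""
            return likelihood[depth_idx]

        print(dict(zip(depths, np.round(posterior(2), 3))))  # high amplitude -> 'deep'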

  7. A BAYESIAN METHOD FOR THE ANALYSIS OF THE DUST EMISSION IN THE FAR-INFRARED AND SUBMILLIMETER

    Energy Technology Data Exchange (ETDEWEB)

    Veneziani, M.; Noriega-Crespo, A.; Carey, S.; Paladini, R. [Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125 (United States); Piacentini, F. [Dipartimento di Fisica, Università di Roma "La Sapienza", I-00185 Rome (Italy)]; Paradis, D., E-mail: marcella.veneziani@ipac.caltech.edu [Université de Toulouse, UPS-OMP, IRAP, F-31062 Toulouse (France)]

    2013-07-20

    We present a method, based on Bayesian statistics, to fit the dust emission parameters in the far-infrared and submillimeter wavelengths. The method estimates the dust temperature and spectral emissivity index, plus their relationship, properly taking into account the statistical and systematic uncertainties. We test it on three sets of simulated sources detectable by the Herschel Space Observatory in the PACS and SPIRE spectral bands (70-500 µm), spanning a wide range of dust temperatures. The simulated observations are a one-component interstellar medium and two two-component sources, both warm (H II regions) and cold (cold clumps (CCs)). We first define a procedure to identify the better model, then we recover the parameters of the model and measure their physical correlations by means of a Markov chain Monte Carlo algorithm adopting multivariate Gaussian priors. In this process, we assess the reliability of the model recovery and of parameter estimation. We conclude that the model and parameters are properly recovered only under certain circumstances and that false models may be derived in some cases. We applied the method to a set of 91 starless CCs in an interarm region of the Galactic plane with low star formation activity, observed by Herschel in the Hi-GAL survey. Our results are consistent with a temperature-independent spectral index.
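
    A stripped-down version of such a fit: a Metropolis sampler for the temperature and spectral emissivity index of a single-component modified blackbody, S_nu proportional to A nu^beta B_nu(T), on simulated photometry in the five Herschel bands. The priors, step sizes, noise level, and source parameters below are illustrative assumptions, not values from the paper:

        import numpy as np

        h, kB, c = 6.626e-34, 1.381e-23, 2.998e8     # SI constants
        lam = np.array([70., 160., 250., 350., 500.]) * 1e-6   # band centres (m)
        nu = c / lam
        rng = np.random.default_rng(1)

        def model(theta):
            T, beta, logA = theta                    # temperature, index, amplitude
            planck = 2 * h * nu**3 / c**2 / (np.exp(h * nu / (kB * T)) - 1.0)
            return 10**logA * nu**beta * planck

        def log_post(theta, data, err):
            T, beta, _ = theta
            if not (5.0 < T < 60.0 and 0.0 < beta < 4.0):   # broad flat priors
                return -np.inf
            return -0.5 * np.sum(((data - model(theta)) / err) ** 2)

        truth = np.array([20.0, 1.8, -7.0])          # invented source parameters
        data = model(truth)
        err = 0.05 * data                            # 5% photometric errors (assumed)
        data = data + err * rng.normal(size=data.size)

        theta = np.array([15.0, 1.5, -7.0])          # starting point
        step = np.array([0.5, 0.05, 0.02])           # random-walk step sizes
        lp, chain = log_post(theta, data, err), []
        for _ in range(20000):                       # Metropolis sampler
            prop = theta + step * rng.normal(size=3)
            lp_prop = log_post(prop, data, err)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta)
        samples = np.array(chain)[5000:]             # discard burn-in
        print("T = %.1f K, beta = %.2f" % tuple(samples.mean(axis=0)[:2]))

    A scatter plot of the chain's (T, beta) samples exhibits the anticorrelation between the two parameters that motivates estimating their relationship jointly.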

  8. Bayesian SPLDA

    OpenAIRE

    Villalba, Jesús

    2015-01-01

    In this document we derive the equations needed to implement a Variational Bayes estimation of the parameters of the simplified probabilistic linear discriminant analysis (SPLDA) model. This can be used to adapt SPLDA from one database to another with little development data, or to implement the fully Bayesian recipe. Our approach is similar to Bishop's VB PPCA.

  9. A new high-throughput method utilizing porous silica-based nano-composites for the determination of partition coefficients of drug candidates.

    Science.gov (United States)

    Yu, Chih H; Tam, Kin; Tsang, Shik C

    2011-09-01

    We show that highly porous silica-based nanoparticles prepared via micro-emulsion and sol-gel techniques form stable colloids in aqueous solution. By incorporating a magnetic core into the porous silica nano-composite, the material can be rapidly separated (precipitated) upon exposure to an external magnetic field. Alternatively, porous silica nanoparticles without magnetic cores can be separated from solution by high-speed centrifugation. Using these silica-based nanostructures, a new high-throughput method for the determination of the water/n-octanol partition coefficient is hereby described. First, a tiny quantity of the n-octanol phase is pre-absorbed into the porous silica nano-composite colloids, which establishes a nano-scale interface between the adsorbed n-octanol and the bulk aqueous phase. Organic compounds added to the mixture therefore undergo a rapid partition between the two phases; the sketch after this record illustrates the mass balance involved. The concentration of the drug compound in the supernatant in a small vial can be determined by UV-visible absorption spectroscopy. With the adaptation of a robotic liquid handler, this high-throughput technology for determining the partition coefficients of drug candidates can be employed for drug screening in industry, based on these nano-separation techniques. The experimental results clearly suggest that the new method provides partition coefficient values of potential drug candidates comparable to those of the conventional shake-flask method, but requires a much shorter analysis time and smaller quantities of chemicals.
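
    The partition coefficient itself follows from a simple mass balance between the bulk aqueous phase and the pre-absorbed n-octanol nano-phase. A sketch with invented numbers (none of the volumes or concentrations are from the paper):

        import math

        # All numbers invented for illustration.
        v_aq = 1.0e-3       # bulk aqueous phase volume (L)
        v_oct = 5.0e-6      # n-octanol pre-absorbed in the silica pores (L)
        c_init = 50.0       # initial aqueous drug concentration (uM)
        c_aq = 8.0          # supernatant concentration after equilibration (uM)

        # Whatever left the water must now sit in the octanol nano-phase.
        c_oct = (c_init - c_aq) * v_aq / v_oct
        print(f"P = {c_oct / c_aq:.0f}, logP = {math.log10(c_oct / c_aq):.2f}")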

  10. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  11. A multivariate nonlinear mixed effects method for analyzing energy partitioning in growing pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Danfær, Allan Christian; Chwalibog, André

    2010-01-01

    Simultaneous equations have become increasingly popular for describing the effects of nutrition on the utilization of ME for protein (PD) and lipid deposition (LD) in animals. The study developed a multivariate nonlinear mixed effects (MNLME) framework and compared it with an alternative method for estimating parameters in simultaneous equations that described energy metabolism in growing pigs, and then proposed new PD and LD equations. The general statistical framework was implemented in the NLMIXED procedure in SAS. Alternative PD and LD equations were also developed, which assumed that the instantaneous response curve of an animal to varying energy supply follows the law of diminishing returns. The Michaelis-Menten function was adopted to represent a biological relationship in which the affinity constant (k) represents the sensitivity of PD to ME above maintenance. The approach
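
    The Michaelis-Menten response referred to above is easy to state and fit. A sketch with invented data, using an ordinary nonlinear least-squares fit rather than the paper's MNLME machinery:

        import numpy as np
        from scipy.optimize import curve_fit

        def pd_response(me, pd_max, k):
            """Michaelis-Menten protein deposition: diminishing returns in ME supply;
            k is the affinity constant, the ME level at which PD reaches pd_max/2."""
            return pd_max * me / (k + me)

        # Invented observations: ME above maintenance (MJ/d) vs. PD (g/d).
        me = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
        pd = np.array([62.0, 101.0, 124.0, 139.0, 150.0, 158.0])

        (pd_max, k), _ = curve_fit(pd_response, me, pd, p0=(200.0, 10.0))
        print(f"PDmax ~ {pd_max:.0f} g/d, k ~ {k:.1f} MJ/d")
        # Lipid deposition then takes the remaining retained energy, schematically:
        # LD_energy ~ ME_above_maintenance - PD * energy_content_of_protein.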

  12. On free fermions and plane partitions

    CERN Document Server

    Foda, O; Zuparic, M

    2008-01-01

    We use free fermion methods to re-derive a result of Okounkov and Reshetikhin relating charged fermions to random plane partitions, and to extend it to relate neutral fermions to strict plane partitions.

  13. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental...

  14. Sentiment analysis. An example of application and evaluation of RID dictionary and Bayesian classification methods in qualitative data analysis approach

    Directory of Open Access Journals (Sweden)

    Krzysztof Tomanek

    2014-05-01

    Full Text Available The purpose of this article is to present basic methods for classifying text data. These methods make use of achievements from areas such as natural language processing and the analysis of unstructured data. I introduce and compare two analytical techniques applied to text data. The first analysis makes use of a thematic vocabulary tool (sentiment analysis). The second technique uses the idea of Bayesian classification and applies the so-called naive Bayes algorithm. The comparison grades the efficiency of these two analytical techniques. I emphasize the solutions to be used to build a dictionary accurate enough for the task of text classification. Then, I compare the effectiveness of supervised classification with that of automated unsupervised analysis. These results reinforce the conclusion that a dictionary which has received a good evaluation as a classification tool should be subjected to review and modification procedures if it is to be applied to new empirical material. Adaptation procedures for the analytical dictionary become, in my proposed approach, the basic step in the methodology of textual data analysis.
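
    For reference, the naive Bayes classifier used in the comparison reduces to class-conditional word counts with smoothing. A self-contained toy sketch (the four training "documents" are invented):

        import math
        from collections import Counter, defaultdict

        train = [("what a wonderful inspiring day", "pos"),
                 ("great joyful news and warm feelings", "pos"),
                 ("a sad and gloomy failure", "neg"),
                 ("terrible dark news full of anger", "neg")]

        # Word frequencies per class (multinomial model with Laplace smoothing).
        word_counts, class_counts = defaultdict(Counter), Counter()
        for text, label in train:
            class_counts[label] += 1
            word_counts[label].update(text.split())
        vocab = {w for counts in word_counts.values() for w in counts}

        def classify(text):
            scores = {}
            for label in class_counts:
                total = sum(word_counts[label].values())
                logp = math.log(class_counts[label] / sum(class_counts.values()))
                for w in text.split():
                    logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
                scores[label] = logp
            return max(scores, key=scores.get)

        print(classify("joyful wonderful day"))      # -> 'pos'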

  15. Variable volume loading method: a convenient and rapid method for measuring the initial emittable concentration and partition coefficient of formaldehyde and other aldehydes in building materials.

    Science.gov (United States)

    Xiong, Jianyin; Yan, Wei; Zhang, Yinping

    2011-12-01

    The initial emittable formaldehyde and VOC concentration in building materials (C(0)) is a key parameter for characterizing and classifying these materials. Various methods have been developed to measure this parameter, but these generally require a long test time. In this paper we develop a convenient and rapid method, the variable volume loading (VVL) method, to simultaneously measure C(0) and the material/air partition coefficient (K). This method has the following features: (a) it requires a relatively short experimental time (less than 24 h for the cases studied); and (b) it is convenient for routine measurement. Using this method, we determined C(0) and K of formaldehyde, propanal, and hexanal in one kind of medium-density fiberboard, and repeated experiments were performed to reduce measurement error. In addition, an extended C-history method is proposed to determine the diffusion coefficient and the convective mass transfer coefficient. The VVL method is validated by comparing model-predicted results based on the determined parameters with experimental data. The C(0) of formaldehyde obtained via this method is less than 10% of the total concentration measured using the perforator method recommended by the Chinese National Standard, suggesting that the total concentration may be appropriate neither for predicting emission characteristics nor for material classification.

  16. Partition density functional theory

    Science.gov (United States)

    Nafziger, Jonathan

    Partition density functional theory (PDFT) is a method for dividing a molecular electronic structure calculation into fragment calculations. The molecular density and energy corresponding to Kohn-Sham density-functional theory (KS-DFT) may be exactly recovered from these fragments. Each fragment acts as an isolated system except for the influence of a global one-body "partition" potential which deforms the fragment densities. In this work, the developments of PDFT are put into the context of other fragment-based density functional methods. We developed three numerical implementations of PDFT: one within the NWChem computational chemistry package using basis sets, and the other two developed from scratch using real-space grids. It is shown that all three of these programs can exactly reproduce a KS-DFT calculation via fragment calculations. The first of our in-house codes handles non-interacting electrons in arbitrary one-dimensional potentials with any number of fragments. This code is used to explore how the exact partition potential changes for different partitionings of the same system, and also to study the features which determine which systems yield non-integer PDFT occupations and which systems are locked into integer PDFT occupations. The second in-house code, CADMium, performs real-space calculations of diatomic molecules. Features of the exact partition potential are studied for a variety of cases, and an analytical formula determining singularities in the partition potential is derived. We introduce an approximation for the non-additive kinetic energy and show how this quantity can be computed exactly. Finally, a PDFT functional is developed to address the issues of static correlation and delocalization errors in approximations within DFT. The functional is applied to the dissociation of H2+ and H2.

  17. Bayesian Approach for Inconsistent Information.

    Science.gov (United States)

    Stein, M; Beer, M; Kreinovich, V

    2013-10-01

    In engineering situations, we usually have a large amount of prior knowledge that needs to be taken into account when processing data. Traditionally, the Bayesian approach is used to process data in the presence of prior knowledge. Sometimes, when we apply traditional Bayesian techniques to engineering data, we get inconsistencies between the data and the prior knowledge. These inconsistencies are usually caused by the fact that in the traditional approach, we assume that we know the exact sample values, that the prior distribution is exactly known, etc. In reality, the data are imprecise due to measurement errors, the prior knowledge is only approximately known, etc. So, a natural way to deal with the seemingly inconsistent information is to take this imprecision into account in the Bayesian approach, e.g., by using fuzzy techniques. In this paper, we describe several possible scenarios for fuzzifying the Bayesian approach. Particular attention is paid to the interaction between the estimated imprecise parameters. To implement the corresponding fuzzy versions of the Bayesian formulas, we use straightforward computations of the related expressions, which makes our computations reasonably time-consuming. Computations in the traditional (non-fuzzy) Bayesian approach are much faster because they use algorithmically efficient reformulations of the Bayesian formulas. We expect that similar reformulations of the fuzzy Bayesian formulas will also drastically decrease the computation time and, thus, enhance the practical use of the proposed methods.

  18. How to combine correlated data sets -- A Bayesian hyperparameter matrix method

    CERN Document Server

    Ma, Yin-Zhe

    2013-01-01

    We construct a statistical method for performing joint analyses of multiple correlated astronomical data sets, in which the weights of the data sets are determined by their own statistical properties. This method is a generalization of the hyperparameter method constructed by Lahav et al. (2000) and Hobson et al. (2002), which was designed to combine independent data sets. The hyperparameter matrix method we present here includes the relevant weights of multiple data sets and their mutual correlations, and when the hyperparameters are marginalized over, the parameters of interest are recovered. We define a new "element-wise" product, which greatly simplifies the likelihood function with the hyperparameter matrix. We rigorously prove the simplified formula of the joint likelihood and show that it recovers the original hyperparameter method in the limit of no covariance between data sets. We then illustrate the method by applying a classic model of fitting a straight line to two sets of data. We show that the hyperparameter matrix ...

  19. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In this paper we explain the successful bias field correction properties of N3 by showing that it implicitly uses the same generative models and computational strategies as expectation maximization (EM) based bias field correction methods. We demonstrate experimentally that purely EM-based methods are capable of producing bias field correction results comparable to those of N3 in less computation time.

  20. A Bayesian nonrigid registration method to enhance intraoperative target definition in image-guided prostate procedures through uncertainty characterization

    Science.gov (United States)

    Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy; Tuncali, Kemal; Fennessy, Fiona M.; Wells, William M.; Tempany, Clare M.; Cormack, Robert A.

    2012-01-01

    Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. Results: The authors observed

  1. Extraction of Active Regions and Coronal Holes from EUV Images Using the Unsupervised Segmentation Method in the Bayesian Framework

    CERN Document Server

    Arish, Saeid; Safari, Hossein; Amiri, Ali

    2016-01-01

    The solar corona is the origin of very dynamic events that are mostly produced in active regions (ARs) and coronal holes (CHs). The exact location of these large-scale features can be determined by applying image-processing approaches to extreme-ultraviolet (EUV) data. We here investigate the problem of segmenting solar EUV images into ARs, CHs, and quiet-Sun (QS) regions in a firm Bayesian way. On the basis of Bayes' rule, we need to obtain both prior and likelihood models. To find the prior model of an image, we used a Potts model in non-local mode. To construct the likelihood model, we combined a mixture of a Markov-Gauss model and non-local means. After estimating labels and hyperparameters with the Gibbs estimator, cellular learning automata were employed to determine the label of each pixel. We applied the proposed method to a Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) dataset recorded during 2011 and found that the mean value of the filling factor of ARs is 0.032 and 0.057 for...

  2. Bayesian signaling

    OpenAIRE

    Hedlund, Jonas

    2014-01-01

    This paper introduces private sender information into a sender-receiver game of Bayesian persuasion with monotonic sender preferences. I derive properties of increasing differences related to the precision of signals and use these to fully characterize the set of equilibria robust to the intuitive criterion. In particular, all such equilibria are either separating, i.e., the sender's choice of signal reveals his private information to the receiver, or fully disclosing, i.e., the outcome of th...

  3. Bayesian Monitoring.

    OpenAIRE

    Kirstein, Roland

    2005-01-01

    This paper presents a modification of the inspection game: the "Bayesian Monitoring" model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...

  4. How to combine correlated data sets-A Bayesian hyperparameter matrix method

    Science.gov (United States)

    Ma, Y.-Z.; Berndsen, A.

    2014-07-01

    We construct a “hyperparameter matrix” statistical method for performing joint analyses of multiple correlated astronomical data sets, in which the weights of the data sets are determined by their own statistical properties. This method is a generalization of the hyperparameter method constructed by Lahav et al. (2000) and Hobson et al. (2002), which was designed to combine independent data sets. The advantage of our method is that it treats correlations between multiple data sets and gives each data set an appropriate weight in the presence of mutual correlations. We define a new “element-wise” product, which greatly simplifies the likelihood function with the hyperparameter matrix. We rigorously prove the simplified formula of the joint likelihood and show that it recovers the original hyperparameter method in the limit of no covariance between data sets. We then illustrate the method by applying it to a demonstrative toy model of fitting a straight line to two sets of data; a sketch of that example follows. We show that the hyperparameter matrix method can detect unaccounted-for systematic errors or underestimated errors in the data sets. Additionally, the ratio of Bayes factors provides a distinct indicator of the necessity of including hyperparameters. Our example shows that the likelihood we construct for joint analyses of correlated data sets can be widely applied to many astrophysical systems.
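
    In the scalar, uncorrelated special case, the hyperparameter method reduces to maximizing a marginalized joint likelihood in which each data set contributes a (N_i/2) ln chi2_i term, and the recovered effective weight alpha_i = N_i / chi2_i flags underestimated errors. A sketch of the straight-line example under those simplifications (the paper generalizes this to a hyperparameter matrix with correlations; all data below are invented):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 10.0, 20)
        yA = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)   # set A: honest sigma = 0.5
        yB = 2.0 * x + 1.0 + rng.normal(0, 1.5, x.size)   # set B: true scatter 1.5...
        sA = sB = 0.5                                     # ...but both quote sigma = 0.5

        def neg_log_evidence(p):
            # With Jeffreys priors, marginalizing the scalar hyperparameters leaves
            # a sum of (N_i/2) ln chi2_i terms (uncorrelated special case).
            a, b = p
            chi2A = np.sum(((yA - a * x - b) / sA) ** 2)
            chi2B = np.sum(((yB - a * x - b) / sB) ** 2)
            return 0.5 * x.size * (np.log(chi2A) + np.log(chi2B))

        a, b = minimize(neg_log_evidence, x0=[1.0, 0.0]).x
        chi2B = np.sum(((yB - a * x - b) / sB) ** 2)
        print(f"slope {a:.2f}, intercept {b:.2f}")
        print(f"effective weight of set B: alpha_B ~ {x.size / chi2B:.2f}")   # << 1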

  5. Evaluation of a simple method for crop evapotranspiration partitioning and comparison of different water use efficiency approaches

    Science.gov (United States)

    Tallec, T.; Rivalland, V.; Jarosz, N.; Boulet, G.; Gentine, P.; Ceschia, E.

    2012-04-01

    In the current context of climate change, intra- and inter-annual variability of precipitation can lead to major modifications of water budgets and water use efficiencies (WUE). Obtaining greater insight into how climatic variability and agricultural practices affect water budgets and their components in croplands is, thus, important for adapting crop management and limiting water losses. The principal aims of this study were 1) to assess the contribution of different components to the agro-ecosystem water budget and 2) to analyze and compare the WUE calculated from ecophysiological (WUEplt), environmental (WUEeco) and agronomical (WUEagro) points of view for various crops during the growing season and for the annual time scale. Eddy covariance (EC) measurements of CO2 and water flux were performed on winter wheat, maize and sunflower crops at two sites in southwest France: Auradé and Lamasquère. To infer WUEplt, an estimation of plant transpiration (TR) is needed. We then tested a new method for partitioning evapotranspiration (ETR), measured by means of the EC method, into soil evaporation (E) and plant transpiration (TR) based on marginal distribution sampling (MDS). We compared these estimations with calibrated simulations of the ICARE-SVAT double source mechanistic model. The two partitioning methods showed good agreement, demonstrating that MDS is a convenient, simple and robust tool for estimating E with reasonable associated uncertainties. During the growing season, the proportion of E in ETR was approximately one-third and varied mainly with crop leaf area. When calculated on an annual time scale, the proportion of E in ETR reached more than 50%, depending on crop leaf area and the duration and distribution of bare soil within the year. WUEplt values ranged between -4.1 and -5.6 g C kg-1 H2O for maize and winter wheat, respectively, and were strongly dependent on meteorological conditions at the half-hourly, daily and seasonal time scales. When

  6. A Bayesian multilocus association method: allowing for higher-order interaction in association studies

    DEFF Research Database (Denmark)

    Albrechtsen, Anders; Castella, Sofie; Andersen, Gitte;

    2007-01-01

    We present a new powerful statistical model for analyzing and interpreting genomic data that influence multifactorial phenotypic traits with a complex and likely polygenic inheritance. The new method is based on Markov chain Monte Carlo (MCMC) and allows for the identification of sets of SNPs and environmental factors that, when combined, increase disease risk or change the distribution of a quantitative trait. Using simulations, we show that the MCMC method can detect disease association when multiple, interacting SNPs are present in the data. When applying the method on real large-scale data from

  7. Overview of methods of reverse engineering of gene regulatory networks: Boolean and Bayesian networks

    OpenAIRE

    Frolova A. O.

    2012-01-01

    Reverse engineering of gene regulatory networks is an intensively studied topic in systems biology, as it reconstructs regulatory interactions between all genes in the genome in the most complete form. The extreme computational complexity of this problem and the lack of thorough reviews of reconstruction methods are significant obstacles to further development of this area. In this article, the two most common methods for modeling gene regulatory networks are surveyed: Boolean and Bayesian networks.

  8. Adaptive partition method for large power system restoration

    Institute of Scientific and Technical Information of China (English)

    刘玉田; 谭冰雪

    2013-01-01

    Reasonable restoration subsystem partitioning can increase restoration efficiency and ensure reliability, so it is necessary to analyze the characteristics of the power system restoration problem and improve the subsystem partition method. To address the subsystem partitioning problem in the parallel restoration of large power grids, an adaptive partition method is proposed. Based on community detection theory from complex networks, the partitioning process is divided into an agglomeration stage and a splitting stage. The agglomeration stage handles the special requirements that black start imposes on partitioning, aggregating each black-start source with nearby important power plants and restoration nodes to form initial subsystems; the splitting stage introduces an edge-based cohesion index and a betweenness-based connectivity index, uses them to find the key tie-lines connecting the partitions, and disconnects those lines, finally forming multiple restoration subsystems. The method not only forms feasible and reliable restoration partitions, guaranteeing that each partition can gradually restore its own supply, but also respects the layering and zoning control principles of actual dispatch, reducing conflicts between the restoration partitions and dispatch practice. Simulation of the Shandong Power Grid shows that the proposed method handles the post-blackout restoration partitioning problem of large grids well and can guide the allocation of black-start units at the regional level.

  9. Orbits for the Impatient: A Bayesian Rejection Sampling Method for Quickly Fitting the Orbits of Long-Period Exoplanets

    Science.gov (United States)

    Blunt, Sarah Caroline; Nielsen, Eric; De Rosa, Robert J.; Konopacky, Quinn M.; Ryan, Dominic; Wang, Jason; Pueyo, Laurent; Rameau, Julien; Marois, Christian; Marchis, Franck; Macintosh, Bruce; Graham, James R.; GPIES Collaboration

    2017-01-01

    Direct imaging planet-finders like the Gemini Planet Imager (GPI) allow for direct imaging of exoplanets with orbital periods beyond ~10 years that are still close enough to their host stars to undergo detectable orbital motion on year or multi-year timescales, creating a need for methods that rapidly characterize newly discovered planets using relative astrometry covering a short fraction of an orbital period. We address this problem with Orbits for the Impatient (OFTI), a statistically robust and computationally efficient Bayesian rejection sampling method for fitting orbits to astrometric datasets covering small orbital fractions from directly imaged exoplanets, brown dwarfs, and wide-orbit stellar binaries. We demonstrate that OFTI produces valid orbital solutions by directly comparing its outputs with those of two Markov Chain Monte Carlo (MCMC) implementations, and compare the computational speeds of OFTI and MCMC as a function of orbital fraction spanned by input astrometry. We find that for well-sampled orbits with astrometry covering less than 15% of the total orbital period, OFTI converges on the correct orbital solution in orders of magnitude less CPU time than MCMC. Exoplanet observations with space missions such as the WFIRST coronagraph present a similar problem of sparse sampling, and we show how these methods can efficiently constrain the orbital inclination, phase, and separation of a planet such as 47 UMa c. Finally, we present some of the first orbital fits to astrometry from directly imaged exoplanets and brown dwarfs in the literature, including GJ 504 b, CD-35 2722 B, kappa And b, and HR 3549 B.
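
    The heart of OFTI is plain rejection sampling: draw orbital parameters from their priors, compare to the sparse astrometry, and accept each trial with probability exp(-dchi2/2). A one-dimensional toy sketch of that loop (a sinusoid stands in for the real Keplerian model, and the scale-and-rotate step that makes OFTI efficient is omitted; all numbers are invented):

        import numpy as np

        rng = np.random.default_rng(3)

        def sep(t, P, phi, A):
            # Stand-in "orbit": projected separation as a sinusoid (not Keplerian).
            return A * np.sin(2 * np.pi * t / P + phi)

        # Three epochs of fake astrometry spanning ~2.5% of the true 80 yr period.
        t_obs = np.array([0.0, 1.0, 2.0])
        y_obs = sep(t_obs, 80.0, 0.3, 500.0) + rng.normal(0, 2.0, 3)
        err = 2.0

        # Draw trials from the priors, accept with probability exp(-dchi2/2).
        n = 100_000
        P = 10 ** rng.uniform(1.0, 3.0, n)           # log-uniform period prior
        phi = rng.uniform(0.0, 2 * np.pi, n)
        A = rng.uniform(100.0, 1000.0, n)
        resid = y_obs[None, :] - sep(t_obs[None, :], P[:, None], phi[:, None], A[:, None])
        chi2 = np.sum((resid / err) ** 2, axis=1)
        keep = rng.uniform(size=n) < np.exp(-(chi2 - chi2.min()) / 2)
        print(f"accepted {keep.sum()}; period 68% interval:",
              np.percentile(P[keep], [16, 84]))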

  10. Comparison of Three Statistical Downscaling Methods and Ensemble Downscaling Method Based on Bayesian Model Averaging in Upper Hanjiang River Basin, China

    Directory of Open Access Journals (Sweden)

    Jiaming Liu

    2016-01-01

    Full Text Available Many downscaling techniques have been developed in the past few years for the projection of station-scale hydrological variables from large-scale atmospheric variables to assess the hydrological impacts of climate change. To improve the simulation accuracy of downscaling methods, a Bayesian Model Averaging (BMA) method combined with three statistical downscaling methods, namely support vector machine (SVM), BCC/RCG-Weather Generators (BCC/RCG-WG), and the Statistical DownScaling Model (SDSM), is proposed in this study, based on the statistical relationship between large-scale climate predictors and observed precipitation in the upper Hanjiang River Basin (HRB). The statistical analysis of three performance criteria (the Nash-Sutcliffe coefficient of efficiency, the coefficient of correlation, and the relative error) shows that the performance of the ensemble downscaling method based on BMA for rainfall is better than that of each single statistical downscaling method. Moreover, the performance of the runoff modelled by the SWAT rainfall-runoff model using the daily rainfall downscaled by the four methods is also compared, and the ensemble downscaling method again has better simulation accuracy. The ensemble downscaling technology based on BMA can provide a scientific basis for the study of runoff response to climate change.
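
    Once the member models are trained, the BMA combination itself is a posterior-weighted average. A schematic sketch in which the weights come from a simple Gaussian likelihood of training-period residuals (the actual BMA procedure estimates weights and predictive variances jointly, e.g. by EM; the data here are invented):

        import numpy as np

        rng = np.random.default_rng(4)
        obs = rng.gamma(2.0, 3.0, 365)               # a year of "observed" daily rainfall
        # Three fake downscaling models with different error levels.
        preds = np.stack([obs + rng.normal(0, s, obs.size) for s in (2.0, 3.0, 5.0)])

        # Posterior model weights from a profiled Gaussian likelihood of residuals.
        sig2 = ((preds - obs) ** 2).mean(axis=1)     # per-model residual variance
        loglik = -0.5 * obs.size * np.log(sig2)      # log-likelihood up to a constant
        w = np.exp(loglik - loglik.max())
        w /= w.sum()

        bma_mean = w @ preds                         # BMA ensemble forecast
        print("weights:", np.round(w, 3))

    With long training records such likelihood weights concentrate on the best-performing member, which is one reason the ensemble tends to match or beat each single downscaling method.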

  11. Optimal and scalable methods to approximate the solutions of large-scale Bayesian problems: Theory and application to atmospheric inversions and data assimilation

    CERN Document Server

    Bousserez, Nicolas

    2016-01-01

    This paper provides a detailed theoretical analysis of methods to approximate the solutions of high-dimensional (>10^6) linear Bayesian problems. An optimal low-rank projection that maximizes the information content of the Bayesian inversion is proposed and efficiently constructed using a scalable randomized SVD algorithm. Useful optimality results are established for the associated posterior error covariance matrix and posterior mean approximations, which are further investigated in a numerical experiment consisting of a large-scale atmospheric tracer transport source-inversion problem. This method proves to be a robust and efficient approach to dimension reduction, as well as a natural framework to analyze the information content of the inversion. Possible extensions of this approach to the non-linear framework in the context of operational numerical weather forecast data assimilation systems based on the incremental 4D-Var technique are also discussed, and a detailed implementation of a new Randomized Incr...
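
    The scalable randomized SVD at the center of this construction is well established (Halko, Martinsson, and Tropp 2011) and short to sketch: sample the range of the matrix with a Gaussian sketch, then take a small exact SVD in the sampled subspace. A generic numpy version (not the paper's implementation):

        import numpy as np

        def randomized_svd(A, rank, n_oversample=10, n_power=2, seed=0):
            """Randomized SVD: project onto a random sample of the range of A,
            then compute a small exact SVD of the projected matrix."""
            rng = np.random.default_rng(seed)
            Y = A @ rng.normal(size=(A.shape[1], rank + n_oversample))
            for _ in range(n_power):                 # power iterations sharpen the basis
                Y = A @ (A.T @ Y)
            Q, _ = np.linalg.qr(Y)
            U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

        rng = np.random.default_rng(5)
        A = rng.normal(size=(2000, 20)) @ rng.normal(size=(20, 500))   # rank-20 matrix
        A += 0.01 * rng.normal(size=A.shape)                           # plus noise
        U, s, Vt = randomized_svd(A, rank=20)
        print("relative error:",
              np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))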

  12. Single-cycle method for partitioning of trivalent actinides using completely incinerable reagents from nitric acid medium

    Energy Technology Data Exchange (ETDEWEB)

    Ravi, Jammu; Venkatesan, K.A.; Antony, M.P.; Srinivasan, T.G.; Rao, P.R. Vasudeva [Indira Gandhi Centre for Atomic Research, Kalpakkam (India). Fuel Chemistry Div.

    2014-10-01

    A new approach, namely the "Single-cycle method for partitioning of Minor Actinides using completely incinerable ReagenTs" (SMART), has been explored for the separation of Am(III) from Eu(III) in nitric acid medium. The extraction behavior of Am(III) and Eu(III) in a solution of an unsymmetrical diglycolamide, N,N-didodecyl-N',N'-dioctyl-3-oxapentane-1,5-diamide (D³DODGA), and an acidic extractant, N,N-di-2-ethylhexyl diglycolamic acid (HDEHDGA), in n-dodecane was studied. The distribution ratio of both metal ions in D³DODGA-HDEHDGA/n-dodecane initially decreased with increasing nitric acid concentration, reached a minimum at 0.1 M nitric acid, and then increased. Synergic extraction of Am(III) and Eu(III) was observed at nitric acid concentrations above 0.1 M and antagonism at lower acidities. The contrasting behavior at different acidities was probed by slope analysis of the extraction data. The study revealed the involvement of both D³DODGA and HDEHDGA during synergism and the increased participation of HDEHDGA during antagonism. The stripping behavior of Am(III) and Eu(III) from the loaded organic phase was studied as a function of nitric acid, DTPA, and citric acid concentrations. The conditions needed for the mutual separation of Am(III) and Eu(III) from the loaded organic phase were optimized. Our studies revealed the possibility of separating trivalent actinides from HLLW using these completely incinerable reagents. (orig.)

  13. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    Energy Technology Data Exchange (ETDEWEB)

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task, and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and Qbox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite differences (FD) and finite elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is

  14. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    OpenAIRE

    Owens Chantelle J; Owusu-Edusei Kwame

    2009-01-01

    Abstract Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, g...

  15. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context of multiple genetic markers measured in multiple studies, based on the analysis of individual participant data. The methods give an overall estimate of the causal relationship between the phenotype and the outcome, and an assessment of its heterogeneity across studies; a classical sketch of the pooling step follows. As an example, we estimate the causal relationship of blood concentrations of C-reactive protein on fibrinogen levels using data from 11 studies. These methods provide a flexible framework for efficient estimation of causal relationships derived from multiple studies. Issues discussed include weak instrument bias, analysis of binary outcome data such as disease risk, missing genetic data, and the use of haplotypes.
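
    A classical, non-Bayesian stand-in for the core of such an analysis: per-study ratio (Wald) estimates with delta-method standard errors, pooled by inverse-variance weighting, with Cochran's Q as the heterogeneity check (all summary statistics below are invented):

        import numpy as np

        # Invented per-study summaries: genotype-phenotype (bx) and
        # genotype-outcome (by) associations with standard errors.
        bx = np.array([0.30, 0.25, 0.35]); se_bx = np.array([0.05, 0.06, 0.05])
        by = np.array([0.09, 0.07, 0.12]); se_by = np.array([0.03, 0.03, 0.04])

        # Ratio (Wald) estimate per study, with a delta-method standard error.
        beta = by / bx
        se = np.sqrt(se_by**2 / bx**2 + by**2 * se_bx**2 / bx**4)

        # Fixed-effect inverse-variance pooling and Cochran's Q for heterogeneity.
        w = 1.0 / se**2
        pooled = np.sum(w * beta) / np.sum(w)
        pooled_se = np.sqrt(1.0 / np.sum(w))
        Q = np.sum(w * (beta - pooled) ** 2)
        print(f"causal estimate {pooled:.3f} +/- {pooled_se:.3f}, Q = {Q:.2f}")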

  16. Development of a Bayesian method for the analysis of inertial confinement fusion experiments on the NIF

    CERN Document Server

    Gaffney, Jim A; Sonnad, Vijay; Libby, Stephen B

    2013-01-01

    The complex nature of inertial confinement fusion (ICF) experiments results in a very large number of experimental parameters that are only known with limited reliability. These parameters, combined with the myriad physical models that govern target evolution, make the reliable extraction of physics from experimental campaigns very difficult. We develop an inference method that allows all important experimental parameters, and previous knowledge, to be taken into account when investigating underlying microphysics models. The result is framed as a modified $\chi^{2}$ analysis which is easy to implement in existing analyses, and quite portable. We present a first application to a recent convergent ablator experiment performed at the NIF, and investigate the effect of variations in all physical dimensions of the target (very difficult to do using other methods). We show that for well characterised targets in which dimensions vary at the 0.5% level there is little effect, but 3% variations change the results of i...

  17. Nonlinear tracking in a diffusion process with a Bayesian filter and the finite element method

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Thygesen, Uffe Høgsbro; Madsen, Henrik

    2011-01-01

    A new approach to nonlinear state estimation and object tracking from indirect observations of a continuous time process is examined. Stochastic differential equations (SDEs) are employed to model the dynamics of the unobservable state. Tracking problems in the plane subject to boundaries...... become complicated using SMC because Monte Carlo randomness is introduced. The finite element (FE) method solves the Kolmogorov equations of the SDE numerically on a triangular unstructured mesh for which boundary conditions to the state-space are simple to incorporate. The FE approach to nonlinear state...... estimation is suited for off-line data analysis because the computed smoothed state densities, maximum a posteriori parameter estimates and state sequence are deterministic conditional on the finite element mesh and the observations. The proposed method is conceptually similar to existing point...

  18. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Thompson, Grahame

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context...... of multiple genetic markers measured in multiple studies, based on the analysis of individual participant data. First, for a single genetic marker in one study, we show that the usual ratio of coefficients approach can be reformulated as a regression with heterogeneous error in the explanatory variable...

  19. STATISTICAL BAYESIAN ANALYSIS OF EXPERIMENTAL DATA.

    Directory of Open Access Journals (Sweden)

    AHLAM LABDAOUI

    2012-12-01

    Full Text Available The Bayesian researcher should know the basic ideas underlying Bayesian methodology and the computational tools used in modern Bayesian econometrics. Some of the most important methods of posterior simulation are Monte Carlo integration, importance sampling, Gibbs sampling and the Metropolis-Hastings algorithm. The Bayesian should also be able to put the theory and computational tools together in the context of substantive empirical problems. We focus primarily on recent developments in Bayesian computation. Then we focus on particular models. Inevitably, we combine theory and computation in the context of particular models. Although we have tried to be reasonably complete in terms of covering the basic ideas of Bayesian theory and the computational tools most commonly used by the Bayesian, there is no way we can cover all the classes of models used in econometrics. We present the analysis of variance and the linear regression model to the user.
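
    As a concrete illustration of one of the posterior simulation tools named above, here is a minimal random-walk Metropolis-Hastings sampler (a generic sketch, not code from the article; the target density, step size, seed, and number of draws are arbitrary choices):

```python
# Minimal random-walk Metropolis-Hastings sketch for a one-dimensional
# posterior known only up to a normalizing constant (here a standard
# normal, purely for illustration).
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    return -0.5 * theta**2          # log N(0,1), up to an additive constant

def metropolis_hastings(n_draws=10_000, step=1.0, theta0=0.0):
    draws = np.empty(n_draws)
    theta, logp = theta0, log_target(theta0)
    for i in range(n_draws):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        logp_prop = log_target(prop)
        if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject step
            theta, logp = prop, logp_prop
        draws[i] = theta
    return draws

draws = metropolis_hastings()
print(draws.mean(), draws.std())    # should approach 0 and 1
```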

  20. A Bayesian method for identifying missing enzymes in predicted metabolic pathway databases

    Directory of Open Access Journals (Sweden)

    Karp Peter D

    2004-06-01

    Full Text Available Abstract Background The PathoLogic program constructs Pathway/Genome databases by using a genome's annotation to predict the set of metabolic pathways present in an organism. PathoLogic determines the set of reactions composing those pathways from the enzymes annotated in the organism's genome. Most annotation efforts fail to assign function to 40–60% of sequences. In addition, large numbers of sequences may have non-specific annotations (e.g., thiolase family protein). Pathway holes occur when a genome appears to lack the enzymes needed to catalyze reactions in a pathway. If a protein has not been assigned a specific function during the annotation process, any reaction catalyzed by that protein will appear as a missing enzyme or pathway hole in a Pathway/Genome database. Results We have developed a method that efficiently combines homology and pathway-based evidence to identify candidates for filling pathway holes in Pathway/Genome databases. Our program not only identifies potential candidate sequences for pathway holes, but combines data from multiple, heterogeneous sources to assess the likelihood that a candidate has the required function. Our algorithm emulates the manual sequence annotation process, considering not only evidence from homology searches, but also considering evidence from genomic context (i.e., is the gene part of an operon?) and functional context (e.g., are there functionally-related genes nearby in the genome?) to determine the posterior belief that a candidate has the required function. The method can be applied across an entire metabolic pathway network and is generally applicable to any pathway database. The program uses a set of sequences encoding the required activity in other genomes to identify candidate proteins in the genome of interest, and then evaluates each candidate by using a simple Bayes classifier to determine the probability that the candidate has the desired function. We achieved 71% precision at a
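
    The "simple Bayes classifier" step can be sketched compactly: independent evidence sources each contribute a likelihood ratio to the posterior odds that a candidate has the required function. The following is a hedged illustration only; the evidence types and all probabilities are invented placeholders, not PathoLogic's actual values:

```python
# Hedged sketch of a naive ("simple") Bayes combination of heterogeneous
# evidence. All evidence names and conditional probabilities are made up.
import numpy as np

prior = 0.05                       # P(candidate has the required function)

# (P(evidence | has function), P(evidence | lacks function)) -- hypothetical
likelihoods = {
    "strong_homology_hit": (0.80, 0.10),
    "in_same_operon":      (0.40, 0.05),
    "functional_neighbor": (0.60, 0.20),
}

def posterior(observed):
    """Posterior belief given a dict of evidence -> bool, assuming the
    evidence sources are conditionally independent."""
    log_odds = np.log(prior / (1.0 - prior))
    for name, present in observed.items():
        p_pos, p_neg = likelihoods[name]
        if not present:
            p_pos, p_neg = 1.0 - p_pos, 1.0 - p_neg
        log_odds += np.log(p_pos / p_neg)      # add the log likelihood ratio
    return 1.0 / (1.0 + np.exp(-log_odds))

print(posterior({"strong_homology_hit": True,
                 "in_same_operon": True,
                 "functional_neighbor": False}))
```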

  1. CONTROL BASED ON NUMERICAL METHODS AND RECURSIVE BAYESIAN ESTIMATION IN A CONTINUOUS ALCOHOLIC FERMENTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Olga L. Quintero

    Full Text Available Biotechnological processes represent a challenge in the control field, due to their high nonlinearity. In particular, continuous alcoholic fermentation from Zymomonas mobilis (Z.m.) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits an oscillatory behavior in process variables due to the influence of inhibition dynamics (rate of ethanol concentration) over biomass, substrate, and product concentrations. In this work a new solution for control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator in Z.m. and without the use of complex calculations.

  2. Subliminal or not? Comparing null-hypothesis and Bayesian methods for testing subliminal priming.

    Science.gov (United States)

    Sand, Anders; Nilsson, Mats E

    2016-08-01

    A difficulty for reports of subliminal priming is demonstrating that participants who actually perceived the prime are not driving the priming effects. There are two conventional methods for testing this. One is to test whether a direct measure of stimulus perception is not significantly above chance on a group level. The other is to use regression to test if an indirect measure of stimulus processing is significantly above zero when the direct measure is at chance. Here we simulated samples in which we assumed that only participants who perceived the primes were primed by it. Conventional analyses applied to these samples had a very large error rate of falsely supporting subliminal priming. Calculating a Bayes factor for the samples very seldom falsely supported subliminal priming. We conclude that conventional tests are not reliable diagnostics of subliminal priming. Instead, we recommend that experimenters calculate a Bayes factor when investigating subliminal priming.
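
    A minimal version of the recommended Bayes factor computation for detection performance is shown below, assuming (our choice, not necessarily the authors') a point null of chance performance p = 0.5 against a Beta(1,1) alternative for k correct responses in n prime-detection trials:

```python
# Sketch of a Bayes factor for "is prime detection at chance?". The
# priors (point null p = 0.5 vs. Beta(a, b) alternative) are our own
# illustrative choices.
from math import comb, exp, lgamma, log

def betaln(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf01(k, n, a=1.0, b=1.0):
    """Bayes factor for H0: p = 0.5 versus H1: p ~ Beta(a, b)."""
    log_m0 = log(comb(n, k)) + n * log(0.5)                       # H0 marginal
    log_m1 = log(comb(n, k)) + betaln(k + a, n - k + b) - betaln(a, b)
    return exp(log_m0 - log_m1)

# 52 correct detections in 100 trials: BF01 > 1 favours chance performance.
print(bf01(52, 100))
```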

  3. Calculation Method for Horizontal Partition Coefficient of Simply Supported T-shaped Beam

    Institute of Scientific and Technical Information of China (English)

    孙立刚

    2012-01-01

    For simply supported T-shaped beam bridges, the horizontal partition coefficient is calculated with the G-M method, the rigid cross beam method, and the rigid connected beam method; suitable calculation methods are summarized, providing a useful reference for design work.

  4. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    Science.gov (United States)

    Owusu-Edusei, Kwame; Owens, Chantelle J

    2009-01-01

    Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, gender and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly (p < 0.05) higher smoothed rates (> 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173% – 139 to 379). The relative change in smoothed chlamydia rates in Newton county was significantly (p < 0.05) higher than its contiguous neighbors. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time. PMID:19245686
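
    One common variant of such empirical Bayesian rate smoothing, Marshall-style shrinkage of raw county rates toward the pooled rate, can be sketched as follows (the county counts and populations here are invented stand-ins, not the Texas data):

```python
# Marshall-style empirical Bayes rate smoothing (one common variant of
# the smoothing described above). Counts and populations are invented.
import numpy as np

cases = np.array([12, 150, 3, 80, 40])         # hypothetical county counts
pop   = np.array([4_000, 60_000, 1_500, 25_000, 18_000])

rate = cases / pop
m = cases.sum() / pop.sum()                    # pooled (global) rate

# Method-of-moments estimate of between-county variance of true rates
s2 = np.average((rate - m) ** 2, weights=pop)
phi = max(s2 - m / pop.mean(), 0.0)

w = phi / (phi + m / pop)                      # shrinkage weights
smoothed = w * rate + (1 - w) * m              # small counties shrink most

for c, r, s in zip(cases, rate * 1e5, smoothed * 1e5):
    print(f"cases={c:4d}  raw={r:8.1f}  EB={s:8.1f}  (per 100,000)")
```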

  5. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    Directory of Open Access Journals (Sweden)

    Owens Chantelle J

    2009-02-01

    Full Text Available Abstract Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, gender and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly (p < 0.05) higher smoothed rates (> 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173% – 139 to 379). The relative change in smoothed chlamydia rates in Newton county was significantly (p < 0.05) higher than its contiguous neighbors. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time.

  6. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational–rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane

    Energy Technology Data Exchange (ETDEWEB)

    Mielke, Steven L., E-mail: slmielke@gmail.com; Truhlar, Donald G., E-mail: truhlar@umn.edu [Department of Chemistry, Chemical Theory Center, and Supercomputing Institute, University of Minnesota, 207 Pleasant St. S.E., Minneapolis, Minnesota 55455-0431 (United States)

    2015-01-28

    We present an improved version of our “path-by-path” enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P^-6) to O(P^-12), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational–rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan–Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300–3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  7. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational-rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane.

    Science.gov (United States)

    Mielke, Steven L; Truhlar, Donald G

    2015-01-28

    We present an improved version of our "path-by-path" enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P(-6)) to O(P(-12)), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational-rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan-Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300-3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.
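
    The basic discretized-path-integral mechanism behind such calculations can be illustrated on a much simpler system. The sketch below (our own toy example, not the authors' code) evaluates the Trotter-discretized partition function of a 1-D harmonic oscillator with a position-grid transfer matrix, so that convergence with the number of path points P can be watched directly against the exact result:

```python
# Toy Feynman-path-integral partition function for a 1-D harmonic
# oscillator, with hbar = m = omega = 1 (illustrative grid and beta).
import numpy as np

beta = 2.0                                    # inverse temperature
x = np.linspace(-6, 6, 401)
dx = x[1] - x[0]
V = 0.5 * x**2

def fpi_partition_function(P):
    """Trotter-discretized Z with P imaginary-time slices."""
    kin = np.sqrt(P / (2 * np.pi * beta)) * np.exp(
        -P * (x[:, None] - x[None, :]) ** 2 / (2 * beta))
    pot = np.exp(-beta * (V[:, None] + V[None, :]) / (2 * P))
    T = kin * pot * dx                        # transfer matrix for one slice
    return np.trace(np.linalg.matrix_power(T, P))

exact = 1.0 / (2.0 * np.sinh(beta / 2.0))     # exact harmonic oscillator Z
for P in (4, 8, 16, 32):
    print(P, fpi_partition_function(P), exact)
```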

  8. Bayesian demography 250 years after Bayes.

    Science.gov (United States)

    Bijak, Jakub; Bryant, John

    2016-01-01

    Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms.

  9. Empirical Bayesian Method for the Estimation of Literacy Rate at Sub-district Level Case Study: Sumenep District of East Java Province

    Directory of Open Access Journals (Sweden)

    A.Tuti Rumiati

    2012-02-01

    Full Text Available This paper discusses a Bayesian method of Small Area Estimation (SAE) based on a Binomial response variable. SAE methods are being developed to estimate parameters in small areas where samples are insufficient. The case study is literacy rate estimation at the sub-district level in Sumenep district, East Java Province. Literacy rate is measured by the proportion of people who are able to read and write, among the population aged 10 years or more. In the case study we used Social Economic Survey (Susenas) data collected by BPS. The SAE approach was applied since the Susenas data are not representative enough to estimate the parameters at the sub-district level, because the survey is designed to estimate parameters at the regional level (a district/city at minimum). In this research, the response variable used was the logit transformation of pi (the parameter of the Binomial distribution). We applied direct and indirect approaches for parameter estimation, both using Empirical Bayes methods. For direct estimation we used a Beta prior distribution and a Normal prior distribution for the logit function of pi, and estimated the parameter by a numerical method, i.e., Monte Carlo integration. For the indirect approach, we used auxiliary variables which are combinations of sex and age (the latter divided into five categories). Penalized Quasi Likelihood (PQL) was used to obtain parameter estimates of the SAE model, and the Restricted Maximum Likelihood method (REML) for MSE estimation. In addition to the Bayesian approach, we also conducted direct estimation using the classical approach in order to evaluate the quality of the estimators. This research gives some findings: the Bayesian approach to the SAE model gives the best estimates, having the lowest MSE values compared to the other methods. For direct estimation, the Bayesian approach using the Beta and logit Normal prior distributions gives results very similar to direct estimation with the classical approach since the weight of is too

  10. Application of TLSER method in predicting the aqueous solubility and n-octanol/water partition coefficient of PCBs, PCDDs and PCDFs

    Institute of Scientific and Technical Information of China (English)

    HUANG Jun; YU Gang; ZHANG Zu-lin; WANG Yi-lei; ZHU Wei-hua; WU Guo-shi

    2004-01-01

    The theoretical linear solvation energy relationship (TLSER) approach was adopted to predict the aqueous solubility and n-octanol/water partition coefficient of three groups of environmentally important chemicals: polychlorinated biphenyls (PCBs) and polychlorinated dibenzodioxins and dibenzofurans (PCDDs and PCDFs). For each compound, five quantum parameters were calculated using the AM1 semiempirical molecular orbital method and used as structure descriptors: average molecular polarizability (α), energy of the lowest unoccupied molecular orbital (E_LUMO), energy of the highest occupied molecular orbital (E_HOMO), the most positive charge on a hydrogen atom (q+), and the most negative atomic partial charge (q−) in the solute molecule. The standard independent variables in the TLSER equation were then extracted, and two series of quantitative equations relating these quantum parameters to the aqueous solubility and n-octanol/water partition coefficient were obtained by the stepwise multiple linear regression (MLR) method. The developed equations have both high accuracy and explicit physical meaning, and the cross-validation test illustrated the good predictive power and stability of the established models. The results showed that TLSER could be used as a promising approach in the estimation of partition and solubility properties of macromolecular chemicals, such as persistent organic pollutants.
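
    The regression step of such a TLSER analysis amounts to an ordinary least-squares fit of the property on the quantum descriptors (stepwise selection omitted here for brevity). A schematic sketch with randomly generated stand-in data, not the study's AM1 values:

```python
# OLS fit of a property (e.g., log Kow) on quantum-chemical descriptors.
# All data here are random placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 30
# Columns stand in for: alpha, E_LUMO, E_HOMO, q+, q-
X = rng.normal(size=(n, 5))
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * rng.standard_normal(n)

A = np.column_stack([np.ones(n), X])          # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares coefficients

fitted = A @ coef
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", coef.round(3))
print("R^2:", round(r2, 3))
```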

  11. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

     Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification......, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended...

  12. Bayesian Inference in Polling Technique: 1992 Presidential Polls.

    Science.gov (United States)

    Satake, Eiki

    1994-01-01

    Explores the potential utility of Bayesian statistical methods in determining the predictability of multiple polls. Compares Bayesian techniques to the classical statistical method employed by pollsters. Considers these questions in the context of the 1992 presidential elections. (HB)

  13. Rigorous Approach in Investigation of Seismic Structure and Source Characteristics in Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    Science.gov (United States)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.

  14. Kullback-Leibler Divergence Approach to Partitioned Update Kalman Filter

    OpenAIRE

    Raitoharju, Matti; García-Fernández, Ángel F.; Piché, Robert

    2016-01-01

    Kalman filtering is a widely used framework for Bayesian estimation. The partitioned update Kalman filter applies a Kalman filter update in parts so that the most linear parts of measurements are applied first. In this paper, we generalize the partitioned update Kalman filter, which previously required the use of the second order extended Kalman filter, so that it can be used with any Kalman filter extension. To do so, we use a Kullback-Leibler divergence approach to measure the nonlinearity of the measure...
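
    The underlying mechanism of applying a Kalman measurement update "in parts" is easy to show: with independent measurement noise, processing one scalar measurement component at a time is algebraically equivalent to the joint update. The sketch below shows only that mechanism, not the paper's nonlinearity ordering or its Kullback-Leibler divergence measure; all matrices and values are invented:

```python
# Sequential (partitioned) Kalman measurement update: one row of H at a
# time, valid here because R is diagonal. Illustrative numbers only.
import numpy as np

x = np.array([0.0, 1.0])                      # prior mean
P = np.array([[1.0, 0.3], [0.3, 2.0]])        # prior covariance
H = np.array([[1.0, 0.0], [1.0, 1.0]])        # linear measurement model
R = np.diag([0.5, 0.8])                       # independent measurement noise
z = np.array([0.2, 1.9])                      # observed measurement

for i in range(len(z)):                       # one partition per row of H
    h = H[i]
    s = h @ P @ h + R[i, i]                   # innovation variance
    k = P @ h / s                             # Kalman gain
    x = x + k * (z[i] - h @ x)                # updated mean
    P = P - np.outer(k, h @ P)                # updated covariance

print(x)
print(P)
```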

  15. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    Science.gov (United States)

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
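
    For k = 1 the Bayesian machinery reduces to independent Dirichlet posteriors over the rows of the transition matrix, with a closed-form evidence term. A minimal sketch under a symmetric Dirichlet(1) prior of our choosing (not necessarily the paper's settings):

```python
# Dirichlet-multinomial inference for a first-order Markov chain:
# posterior mean transition matrix and log evidence. The data string
# and the prior strength alpha are illustrative choices.
import numpy as np
from math import lgamma

data = "AABABBBABAABABBB"
states = sorted(set(data))
idx = {s: i for i, s in enumerate(states)}

counts = np.zeros((len(states), len(states)))
for a, b in zip(data, data[1:]):
    counts[idx[a], idx[b]] += 1               # transition counts

alpha = 1.0                                   # symmetric Dirichlet prior
post_mean = (counts + alpha) / (counts + alpha).sum(axis=1, keepdims=True)

def log_evidence(counts, alpha=1.0):
    """log P(sequence | first-order model); rows are independent."""
    k = counts.shape[1]
    total = 0.0
    for row in counts:
        total += lgamma(k * alpha) - k * lgamma(alpha)
        total += sum(lgamma(n + alpha) for n in row)
        total -= lgamma(row.sum() + k * alpha)
    return total

print(post_mean)
print("log evidence:", log_evidence(counts))
```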

  16. Strength Reduction Method for Stability Analysis of Local Discontinuous Rock Mass with Iterative Method of Partitioned Finite Element and Interface Boundary Element

    Directory of Open Access Journals (Sweden)

    Tongchun Li

    2015-01-01

    An iterative method of partitioned finite element (PFE) and interface boundary element is proposed to solve for the safety factor of local discontinuous rock mass. The slope system is divided into several continuous bodies and local discontinuous interface boundaries. Each block is treated as a partition of the system and contacted by discontinuous joints. The displacements of blocks are chosen as basic variables, and the rigid displacements in the centroid of blocks are chosen as motion variables. The contact forces on interface boundaries and the rigid displacements to the centroid of each body are chosen as mixed variables and solved iteratively using the interface boundary equations. The flexibility matrix is formed through PFE according to the contact states of nodal pairs, and spring flexibility is used to reflect the influence of weak structural planes, so that nonlinear iteration is limited to the possible contact region. With cohesion and friction coefficient reduced gradually, the state in which all nodal pairs first reach the open or slip state is regarded as the failure criterion, which can decrease the effect of subjectivity in determining the safety factor. Examples are used to verify the validity of the proposed method.

  17. Unique Path Partitions

    DEFF Research Database (Denmark)

    Bessenrodt, Christine; Olsson, Jørn Børling; Sellers, James A.

    2013-01-01

    We give a complete classification of the unique path partitions and study congruence properties of the function which enumerates such partitions.

  18. A cluster partitioning method: determination of density matrices of solids and comparison with X-ray experiments

    CERN Document Server

    Ragot, Sebastien; Gillet, Jean-Michel; Becker, Pierre J

    2001-01-01

    In this paper we show that 1-electron properties such as Compton profiles and structure factors of crystals can be asymptotically retrieved through cluster-based calculations, followed by an appropriate partition of the 1-electron reduced density matrix (1RDM). This approach, conceptually simple, is checked with respect to both position and momentum spaces simultaneously for insulators and a covalent crystal. Restricting the calculations to small clusters further enables a fair description of local correlation effects in ionic compounds, which improves both Compton profiles and structure factors vs. their experimentally determined counterparts.

  19. Acoustic wave propagation simulation in a poroelastic medium saturated by two immiscible fluids using a staggered finite-difference with a time partition method

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Based on the three-phase theory proposed by Santos, acoustic wave propagation in a poroelastic medium saturated by two immiscible fluids was simulated using a staggered high-order finite-difference algorithm with a time partition method, which is firstly applied to such a three-phase medium. The partition method was used to solve the stiffness problem of the differential equations in the three-phase theory. Considering the effects of capillary pressure, reference pressure and coupling drag of two fluids in pores, three compressional waves and one shear wave predicted by Santos have been correctly simulated. Influences of the parameters, porosity, permeability and gas saturation on the velocities and amplitude of three compressional waves were discussed in detail. Also, a perfectly matched layer (PML) absorbing boundary condition was firstly implemented in the three-phase equations with a staggered-grid high-order finite-difference. Comparisons between the proposed PML method and a commonly used damping method were made to validate the efficiency of the proposed boundary absorption scheme. It was shown that the PML works more efficiently than the damping method in this complex medium. Additionally, the three-phase theory is reduced to the Biot’s theory when there is only one fluid left in the pores, which is shown in Appendix. This reduction makes clear that three-phase equation systems are identical to the typical Biot’s equations if the fluid saturation for either of the two fluids in the pores approaches to zero.

  20. Acoustic wave propagation simulation in a poroelastic medium saturated by two immiscible fluids using a staggered finite-difference with a time partition method

    Institute of Scientific and Technical Information of China (English)

    ZHAO HaiBo; WANG XiuMing

    2008-01-01

    Based on the three-phase theory proposed by Santos, acoustic wave propagation in a poroelastic medium saturated by two immiscible fluids was simulated using a staggered high-order finite-difference algorithm with a time partition method, which is firstly applied to such a three-phase medium. The partition method was used to solve the stiffness problem of the differential equations in the three-phase theory. Considering the effects of capillary pressure, reference pressure and coupling drag of two fluids in pores, three compressional waves and one shear wave predicted by Santos have been correctly simulated. Influences of the parameters, porosity, permeability and gas saturation on the velocities and amplitude of three compressional waves were discussed in detail. Also, a perfectly matched layer (PML) absorbing boundary condition was firstly implemented in the three-phase equations with a staggered-grid high-order finite-difference. Comparisons between the proposed PML method and a commonly used damping method were made to validate the efficiency of the proposed boundary absorption scheme. It was shown that the PML works more efficiently than the damping method in this complex medium. Additionally, the three-phase theory is reduced to the Biot's theory when there is only one fluid left in the pores, which is shown in Appendix. This reduction makes clear that three-phase equation systems are identical to the typical Biot's equations if the fluid saturation for either of the two fluids in the pores approaches to zero.

  1. Evaluation of Bayesian source estimation methods with Prairie Grass observations and Gaussian plume model: A comparison of likelihood functions and distance measures

    Science.gov (United States)

    Wang, Yan; Huang, Hong; Huang, Lida; Ristic, Branko

    2017-03-01

    Source term estimation for atmospheric dispersion deals with estimation of the emission strength and location of an emitting source using all available information, including site description, meteorological data, concentration observations and prior information. In this paper, Bayesian methods for source term estimation are evaluated using Prairie Grass field observations. The methods include those that require the specification of the likelihood function and those which are likelihood free, also known as approximate Bayesian computation (ABC) methods. The performances of five different likelihood functions in the former and six different distance measures in the latter case are compared for each component of the source parameter vector based on the Nemenyi test over all the 68 data sets available in the Prairie Grass field experiment. Several likelihood functions and distance measures are introduced to source term estimation for the first time. Also, the ABC method is improved in many aspects. Results show that discrepancy measures, which refer collectively to likelihood functions and distance measures, have a significant influence on source estimation. There is no single winning algorithm, but these methods can be used collectively to provide more robust estimates.
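
    The likelihood-based branch of such an analysis can be sketched as a grid posterior over source position, with a forward plume model mapping a candidate source to predicted sensor concentrations. Everything below, the simplified crosswind-Gaussian plume, the sensor geometry, the Gaussian likelihood and the noise level, is an invented illustration, not the paper's configuration:

```python
# Grid posterior over source location with a toy Gaussian plume forward
# model and an i.i.d. Gaussian likelihood (one of many possible choices).
import numpy as np

def plume(src, sensors, q=1.0, u=2.0, spread=0.3):
    """Toy crosswind-Gaussian plume, wind along +x (illustration only)."""
    dx = np.maximum(sensors[:, 0] - src[0], 1e-9)   # downwind distance
    dy = sensors[:, 1] - src[1]                     # crosswind offset
    sigma_y = spread * dx
    c = q / (u * dx) * np.exp(-dy**2 / (2 * sigma_y**2))
    return np.where(sensors[:, 0] > src[0], c, 0.0) # nothing upwind

rng = np.random.default_rng(3)
sensors = rng.uniform(0, 10, size=(20, 2))          # invented sensor network
true_src = np.array([2.0, 5.0])
noise_sd = 0.01
obs = plume(true_src, sensors) + noise_sd * rng.standard_normal(20)

xs = ys = np.linspace(0, 10, 101)
logpost = np.empty((101, 101))
for i, sx in enumerate(xs):                          # flat prior over the grid
    for j, sy in enumerate(ys):
        r = obs - plume(np.array([sx, sy]), sensors)
        logpost[i, j] = -0.5 * np.sum(r**2) / noise_sd**2

i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
print("MAP source estimate:", xs[i], ys[j], "true:", true_src)
```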

  2. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  3. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  4. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  5. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  6. A SAS Interface for Bayesian Analysis with WinBUGS

    Science.gov (United States)

    Zhang, Zhiyong; McArdle, John J.; Wang, Lijuan; Hamagami, Fumiaki

    2008-01-01

    Bayesian methods are becoming very popular despite some practical difficulties in implementation. To assist in the practical application of Bayesian methods, we show how to implement Bayesian analysis with WinBUGS as part of a standard set of SAS routines. This implementation procedure is first illustrated by fitting a multiple regression model…

  7. A Threat Assessment Method Based on Cloud Parameters Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    王巍

    2016-01-01

    To address the lack of sample data in threat assessment and the large workload experts face when building Bayesian networks, a threat assessment method based on a cloud-parameter Bayesian network is proposed. The method combines the expressive ability of the cloud model with the inference ability of the Bayesian network: the cloud model's expressive ability is used to build the Bayesian network parameters, and the Bayesian network's inference ability is used to calculate posterior probabilities. First, expert knowledge is used to generate membership cloud models, with state-combination weights as the medium, and the membership clouds are converted to conditional probability tables using the uncertainty of the state-combination weights, so that the assessment model can be built with less expert effort. Then the expert-built threat assessment Bayesian network and the generated conditional probability tables are used for threat assessment reasoning, yielding the final evaluation results. Experiments show that the method's output is in line with expert expectations and that it can be effectively applied to threat assessment.

  8. An Efficient Partitioning Method in Quadratic Placement

    Institute of Scientific and Technical Information of China (English)

    吕勇强; 洪先龙; 侯文婷; 吴为民; 蔡懿慈

    2004-01-01

    A quadratic placement algorithm combining MFFC clustering with hMETIS partitioning is proposed. Experimental results show that it achieves good placement results but consumes a long running time. In order to cut down the running time, a Q-place algorithm based on an improved MFFC clustering method (IMFFC) is proposed. Compared with the combined clustering-and-partitioning method, it is much faster, with only a small increase in total wire length.

  9. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2003-01-01

    As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning technology and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.

  10. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente

  11. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    Science.gov (United States)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm input data are the on-line arriving concentrations of the released substance registered by the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using the data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the contamination source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). The on-line-arriving new concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.

  12. New phiomorph rodents from the latest Eocene of Egypt, and the impact of Bayesian “clock”-based phylogenetic methods on estimates of basal hystricognath relationships and biochronology

    Directory of Open Access Journals (Sweden)

    Hesham M. Sallam

    2016-03-01

    Full Text Available The Fayum Depression of Egypt has yielded fossils of hystricognathous rodents from multiple Eocene and Oligocene horizons that range in age from ∼37 to ∼30 Ma and document several phases in the early evolution of crown Hystricognathi and one of its major subclades, Phiomorpha. Here we describe two new genera and species of basal phiomorphs, Birkamys korai and Mubhammys vadumensis, based on rostra and maxillary and mandibular remains from the terminal Eocene (∼34 Ma) Fayum Locality 41 (L-41). Birkamys is the smallest known Paleogene hystricognath, has very simple molars, and, like derived Oligocene-to-Recent phiomorphs (but unlike contemporaneous and older taxa) apparently retained dP4∕4 late into life, with no evidence for P4∕4 eruption or formation. Mubhammys is very similar in dental morphology to Birkamys, and also shows no evidence for P4∕4 formation or eruption, but is considerably larger. Though parsimony analysis with all characters equally weighted places Birkamys and Mubhammys as sister taxa of extant Thryonomys to the exclusion of much younger relatives of that genus, all other methods (standard Bayesian inference, Bayesian “tip-dating,” and parsimony analysis with scaled transitions between “fixed” and polymorphic states) place these species in more basal positions within Hystricognathi, as sister taxa of Oligocene-to-Recent phiomorphs. We also employ tip-dating as a means for estimating the ages of early hystricognath-bearing localities, many of which are not well-constrained by geological, geochronological, or biostratigraphic evidence. By simultaneously taking into account phylogeny, evolutionary rates, and uniform priors that appropriately encompass the range of possible ages for fossil localities, dating of tips in this Bayesian framework allows paleontologists to move beyond vague and assumption-laden “stage of evolution” arguments in biochronology to provide relatively rigorous age assessments of poorly

  13. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  14. [A Simultaneous Determination Method with Acetonitrile-n-Hexane Partitioning and Solid-Phase Extraction for Pesticide Residues in Livestock and Marine Products by GC-MS].

    Science.gov (United States)

    Yoshizaki, Mayuko; Kobayashi, Yukari; Shimizu, Masanori; Maruyama, Kouichi

    2015-01-01

    A simultaneous determination method was examined for 312 pesticides (including isomers) in muscle of livestock and marine products by GC-MS. The pesticide residues extracted from samples with acetone and n-hexane were purified by acetonitrile-n-hexane partitioning, and C18 and SAX/PSA solid-phase extraction without using GPC. Matrix components such as cholesterol were effectively removed. In recovery tests performed by this method using pork, beef, chicken and shrimp, 237-257 pesticides showed recoveries within the range of 70-120% in each sample. Validity was confirmed for 214 of the target pesticides by means of a validation test using pork. In comparison with the Japanese official method using GPC, the treatment time of samples and the quantity of solvent were reduced substantially.

  15. Data Partitioning View of Mining Big Data

    OpenAIRE

    Zhang, Shichao

    2016-01-01

    There are two main approaches to mining big data in memory. One is to partition a big dataset into several subsets, so as to mine each subset in memory. In this way, global patterns can be obtained by synthesizing all local patterns discovered from these subsets. The other is the statistical sampling method. This indicates that data partitioning should be an important strategy for mining big data. This paper recalls our work on mining big data with data partitioning and shows some interesti...
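
    The first approach, partition, mine locally, synthesize globally, can be sketched in a few lines; here the "pattern" is simply global item frequency, and the chunk size and support threshold are arbitrary illustrative values:

```python
# Partition a dataset into memory-sized chunks, mine each chunk locally,
# then synthesize the local results into global patterns.
from collections import Counter
from itertools import islice

def chunks(iterable, size):
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

transactions = [["a", "b"], ["b", "c"], ["a"], ["a", "b", "c"]] * 1000

global_counts = Counter()
for part in chunks(transactions, 500):        # each partition fits in memory
    local = Counter(item for t in part for item in t)   # local pattern mining
    global_counts.update(local)               # synthesis step

min_support = 2000
print([item for item, n in global_counts.items() if n >= min_support])
```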

  16. [On the partition of acupuncture academic schools].

    Science.gov (United States)

    Yang, Pengyan; Luo, Xi; Xia, Youbing

    2016-05-01

    Nowadays extensive attention has been paid to the research of acupuncture academic schools; however, a widely accepted method for partitioning acupuncture academic schools is still lacking. In this paper, the historical methods of partitioning acupuncture academic schools are reviewed, and three typical methods, "partition of five schools", "partition of eighteen schools" and "two-stage based partition", are summarized. After a deep analysis of the advantages and disadvantages of these three methods, a new method of partitioning acupuncture academic schools, called "three-stage based partition", is proposed. In this method, the overall acupuncture academic schools are first divided into an ancient stage, a modern stage and a contemporary stage, and each school is then divided into its sub-school categories. It is believed that this method of partition can not only remedy the weaknesses of current methods, but also explore a new model of inheritance and development through the differentiation and interaction of acupuncture academic schools at the three stages.

  17. Bayesian data analysis

    CERN Document Server

    Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B

    2013-01-01

    FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear

  18. Polymers as reference partitioning phase: polymer calibration for an analytically operational approach to quantify multimedia phase partitioning

    DEFF Research Database (Denmark)

    Gilbert, Dorothea; Witt, Gesine; Smedes, Foppe;

    2016-01-01

    Polymers are increasingly applied for the enrichment of hydrophobic organic chemicals (HOCs) from various types of samples and media in many analytical partitioning-based measuring techniques. We propose using polymers as a reference partitioning phase and introduce polymer-polymer partitioning......-air) and multimedia partition coefficients (lipid-water, air-water) were calculated by applying the new concept of a polymer as reference partitioning phase and by using polymer-polymer partition coefficients as conversion factors. The present study encourages the use of polymer-polymer partition coefficients...... as the basis for a deeper insight into partitioning differences of HOCs between polymers, calibrating analytical methods, and consistency checking of existing and calculation of new partition coefficients. Polymer-polymer partition coefficients were determined for polychlorinated biphenyls (PCBs), polycyclic...

  19. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  20. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    Wetzels, R.; Grasman, R.P.P.P.; Wagenmakers, E.J.

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA desig

  1. A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research

    NARCIS (Netherlands)

    Van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, t

  2. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
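
    A minimal ABC rejection sampler makes the idea concrete: draw parameters from the prior, simulate data, and keep the draws whose simulated data land within a tolerance of the observation. The model (binomial), tolerance, and prior below are illustrative choices only:

```python
# ABC rejection sketch: estimate a binomial success probability without
# ever evaluating the likelihood function.
import numpy as np

rng = np.random.default_rng(7)
n_trials, observed_successes = 100, 37

accepted = []
for _ in range(100_000):
    p = rng.uniform()                          # draw from the prior
    sim = rng.binomial(n_trials, p)            # simulate the model
    if abs(sim - observed_successes) <= 2:     # distance within tolerance
        accepted.append(p)

post = np.array(accepted)                      # approximate posterior sample
print(post.mean(), np.quantile(post, [0.025, 0.975]))
```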

  3. Investigation of model based beamforming and Bayesian inversion signal processing methods for seismic localization of underground sources

    DEFF Research Database (Denmark)

    Oh, Geok Lian; Brunskog, Jonas

    2014-01-01

    as an optimization problem to estimate the unknown position variable using the described physical forward models. The proposed four methodologies are demonstrated and compared using seismic signals recorded by geophones set up on ground surface generated by a surface seismic excitation. The examples show...... that for field data, inversion for localization is most advantageous when the forward model completely describes all the elastic wave components as is the case of the FDTD 3D elastic model.......) elastic wave model to represent the received seismic signal. Two localization algorithms, beamforming and Bayesian inversion, are developed for each physical model. The beam-forming algorithms implemented are the modified time-and-delay beamformer and the F-K beamformer. Inversion is posed

  4. Maximum margin Bayesian network classifiers.

    Science.gov (United States)

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian

    2012-03-01

    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  5. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335–1353; Mach. Learn. 66 (2007) 209–242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and can therefore result in a higher rate of convergence for a given regression function. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  6. Three-phase partitioning as a rapid and easy method for the purification and recovery of catalase from sweet potato tubers (Solanum tuberosum).

    Science.gov (United States)

    Duman, Yonca Avcı; Kaya, Erdem

    2013-07-01

    Three-phase partitioning (TPP) was used to purify and recover catalase from potato crude extract. The method consists of ammonium sulfate saturation, t-butanol addition, and pH adjustment, respectively. The best catalase recovery (262 %) and a 14.1-fold purification were obtained in the interfacial phase in the presence of 40 % (w/v) ammonium sulfate saturation with a 1.0:1.0 crude extract/t-butanol ratio (v/v) at pH 7 in a single step. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) analysis of the enzyme confirmed the purification, and the molecular weight of the protein was found to be approximately 56 kDa. This study shows that TPP is a simple, economical, and quick method for the recovery of catalase and can be used in the purification process.

  7. GPstuff: Bayesian Modeling with Gaussian Processes

    NARCIS (Netherlands)

    Vanhatalo, J.; Riihimaki, J.; Hartikainen, J.; Jylänki, P.P.; Tolvanen, V.; Vehtari, A.

    2013-01-01

    The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.
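
    GPstuff itself is a MATLAB/Octave toolbox, so the sketch below is only a language-neutral illustration (in Python/NumPy) of the core computation such toolboxes automate: the exact GP regression posterior under a squared-exponential kernel. All hyperparameter values are assumptions.

    ```python
    import numpy as np

    def sq_exp_kernel(a, b, ell=0.5, sigma_f=1.0):
        """Squared-exponential covariance k(a, b)."""
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2)

    # Noisy training data from an assumed latent function
    rng = np.random.default_rng(1)
    x = rng.uniform(-3, 3, 20)
    y = np.sin(x) + 0.1 * rng.standard_normal(20)
    xs = np.linspace(-3, 3, 100)                        # test inputs

    K = sq_exp_kernel(x, x) + 0.1**2 * np.eye(len(x))   # add noise variance
    Ks = sq_exp_kernel(x, xs)
    Kss = sq_exp_kernel(xs, xs)

    L = np.linalg.cholesky(K)                           # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                                 # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)           # posterior variance
    print(mean[:3], var[:3])
    ```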

  8. Combinatorial set theory partition relations for cardinals

    CERN Document Server

    Erdös, P; Hajnal, A; Rado, R

    2011-01-01

    This work presents the most important combinatorial ideas in partition calculus and discusses ordinary partition relations for cardinals without the assumption of the generalized continuum hypothesis. A separate section of the book describes the main partition symbols scattered in the literature. A chapter on the applications of combinatorial methods in partition calculus includes a section on topology, with Arhangel'skii's famous result that a first countable compact Hausdorff space has cardinality at most the continuum. Several sections on set mappings are included, as well as an account of

  9. Uncertainty reduction and characterization for complex environmental fate and transport models: An empirical Bayesian framework incorporating the stochastic response surface method

    Science.gov (United States)

    Balakrishnan, Suhrid; Roy, Amit; Ierapetritou, Marianthi G.; Flach, Gregory P.; Georgopoulos, Panos G.

    2003-12-01

    In this work, a computationally efficient Bayesian framework for the reduction and characterization of parametric uncertainty in computationally demanding environmental 3-D numerical models has been developed. The framework is based on the combined application of the Stochastic Response Surface Method (SRSM, which generates accurate and computationally efficient statistically equivalent reduced models) and the Markov chain Monte Carlo method. The application selected to demonstrate this framework involves steady state groundwater flow at the U.S. Department of Energy Savannah River Site General Separations Area, modeled using the Subsurface Flow And Contaminant Transport (FACT) code. Input parameter uncertainty, based initially on expert opinion, was found to decrease in all variables of the posterior distribution. The joint posterior distribution obtained was then further used for the final uncertainty analysis of the stream base flows and well location hydraulic head values.
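
    The economy of this framework comes from running MCMC on the cheap SRSM surrogate rather than on the 3-D code itself. The random-walk Metropolis sketch below illustrates that idea with a hypothetical one-parameter polynomial surrogate standing in for the reduced model; every number in it is assumed.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def surrogate(theta):
        """Hypothetical SRSM-style polynomial surrogate of the expensive model."""
        return 1.0 + 0.8 * theta - 0.3 * theta**2

    obs, noise_sd = 1.4, 0.1                 # assumed observation and error

    def log_post(theta):
        """Gaussian prior (expert opinion) plus Gaussian log-likelihood."""
        log_prior = -0.5 * (theta / 2.0) ** 2
        log_lik = -0.5 * ((surrogate(theta) - obs) / noise_sd) ** 2
        return log_prior + log_lik

    samples, theta = [], 0.0
    for _ in range(20_000):                  # random-walk Metropolis
        prop = theta + 0.3 * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                     # accept the proposal
        samples.append(theta)

    print("posterior mean:", np.mean(samples[5_000:]))  # drop burn-in
    ```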

  10. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre; C. R. Martins

    2006-11-01

    Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors from a normative standpoint, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability in the way that has been observed.

  11. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    Nonparametric Bayesian (BNP) approaches play an ever-expanding role in biostatistical inference, from proteomics to clinical trials, and, as the chapters in this book demonstrate, have important uses in the clinical sciences and in inference for issues like unknown partitions in genomics. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like the arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed both to review and to introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  12. Human error probability quantification method based on Bayesian information fusion

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 谢红卫; 宫二玲

    2011-01-01

    The quantification of human error probability is studied. First, the data sources available for quantifying human error probability are introduced, including generic data, expert data, simulation data, and field data, and their characteristics are analyzed. Second, the basic idea of Bayesian information fusion is analyzed, with emphasis on its two key problems: the construction of prior distributions and the determination of fusion weights. Finally, a method for quantifying human error probability based on Bayesian information fusion is constructed: the first three kinds of data are treated as prior information and fused into a prior distribution, the Bayesian method is then used to combine this prior with the field data, and the resulting posterior distribution yields the human error probability. A worked example demonstrates the procedure and its validity.
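
    A minimal sketch of the fusion idea, under stated assumptions: each prior source (generic, expert, simulation) is encoded as a Beta distribution over the error probability, the sources are mixed with assumed weights, and the mixture is updated with field data. All numbers are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import betabinom

    # Three prior sources encoded as Beta(a, b) over the human error probability
    priors = [(2, 198), (1, 99), (3, 147)]     # generic, expert, simulation (assumed)
    w = np.array([0.5, 0.3, 0.2])              # assumed fusion weights

    k, n = 4, 250                              # assumed field data: 4 errors in 250 trials

    # The posterior of a Beta mixture under binomial data is again a Beta mixture:
    # component i becomes Beta(a + k, b + n - k), reweighted by w_i times the
    # beta-binomial marginal likelihood of the data under component i.
    marg = np.array([betabinom.pmf(k, n, a, b) for a, b in priors])
    w_post = w * marg
    w_post /= w_post.sum()

    post_means = np.array([(a + k) / (a + b + n) for a, b in priors])
    print("fused posterior mean HEP:", float(w_post @ post_means))
    ```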

  13. Fast Domain Partitioning Method for dynamic boundary integral equations applicable to non-planar faults dipping in 3-D elastic half-space

    Science.gov (United States)

    Ando, Ryosuke

    2016-11-01

    The elastodynamic boundary integral equation method (BIEM) in real space and in the temporal domain is an accurate semi-analytical tool to investigate earthquake rupture dynamics on non-planar faults. However, its heavy computational demand for the historic integral generally grows with a time complexity of O(MN³) for N time steps and M elements, due to volume integration in the causality cone. In this study, we introduce an efficient BIEM, termed the 'Fast Domain Partitioning Method' (FDPM), which enables us to reduce the computation time to the order of the surface integral, O(MN²), without degrading the accuracy. The memory requirement is also reduced from O(M²N) to O(M²). FDPM uses the physical nature of Green's function for stress to partition the causality cone into the domains of the P and S wave fronts, the domain in between the P and S wave fronts, and the domain of static equilibrium, where the latter two domains exhibit simpler dependences on time and/or space. The scalability of this method is demonstrated on large-scale parallel computing environments with distributed memory systems. It is also shown that FDPM enables an efficient use of memory storage, which makes it possible to reduce computation times to a previously unprecedented level. We thus present FDPM as a powerful tool to break through the current fundamental difficulties in running dynamic simulations of coseismic ruptures and earthquake cycles under realistic conditions of fault geometries.

  14. Bayesian analysis of rare events

    Energy Technology Data Exchange (ETDEWEB)

    Straub, Daniel, E-mail: straub@tum.de; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
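
    The core of BUS is the rejection-sampling reinterpretation: a prior sample θ is accepted when u ≤ L(θ)/c for a constant c no smaller than the maximum of the likelihood L, which turns Bayesian updating into a rare-event estimation problem that FORM, IS, or SuS can then attack. A toy sketch with an assumed Gaussian prior and likelihood, using plain Monte Carlo in place of those structural reliability methods:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    obs, sd = 1.2, 0.3                        # assumed measurement and noise

    def likelihood(theta):
        """Unnormalized Gaussian likelihood, maximum value 1."""
        return np.exp(-0.5 * ((theta - obs) / sd) ** 2)

    c = 1.0                                   # any c >= max likelihood works

    theta = rng.standard_normal(200_000)      # standard normal prior samples
    u = rng.uniform(size=theta.size)
    accept = u <= likelihood(theta) / c       # the BUS acceptance ("failure") event

    posterior = theta[accept]
    print(f"acceptance rate {accept.mean():.4f}, posterior mean {posterior.mean():.3f}")
    ```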

  15. Network Threat Partition Method Based on the Communication Trace

    Institute of Scientific and Technical Information of China (English)

    甄姬娜; 盛光磊

    2014-01-01

    Current network attack detection methods do not fully consider the direct relationships between attacks and pay little attention to the intrinsic correlations among them, so the accuracy of attack classification and network threat partitioning is low. To solve this problem, this paper proposes a network threat partition method based on communication traces. By extracting the principal-component features intrinsic to network threats, the communication trace left by the previous attack is constructed; the current attack is then assigned to a partition according to feedback from this trace, which guarantees that attacks within the same region have similar characteristics and lays the foundation for later construction of attack graphs. Computer simulation experiments show that the method overcomes the lack of correlation in network threat detection and improves intrusion detection accuracy.

  16. Naive Bayesian for Email Filtering

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The paper presents an email filtering method based on naive Bayesian theory that can effectively filter junk mail and illegal mail. Furthermore, the keys to its implementation are discussed in detail. The filtering model is obtained from a training set of email, and filtering can be done without the user's specification of filtering rules.
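
    A minimal sketch of the word-frequency training and log-posterior scoring such a filter performs; the tiny corpus and tokenization are made up for illustration:

    ```python
    import math
    from collections import Counter

    # Tiny made-up training corpus: (tokens, label)
    train = [
        ("win cash prize now".split(), "spam"),
        ("cheap pills online".split(), "spam"),
        ("meeting agenda attached".split(), "ham"),
        ("lunch tomorrow with the team".split(), "ham"),
    ]

    counts = {"spam": Counter(), "ham": Counter()}
    docs = Counter()
    for words, label in train:
        counts[label].update(words)
        docs[label] += 1

    vocab = {w for c in counts.values() for w in c}

    def score(words, label):
        """log P(label) + sum of log P(word|label), with Laplace smoothing."""
        logp = math.log(docs[label] / sum(docs.values()))
        total = sum(counts[label].values())
        for w in words:
            logp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        return logp

    msg = "win a cheap prize".split()
    label = max(("spam", "ham"), key=lambda l: score(msg, l))
    print(msg, "->", label)
    ```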

  17. Bayesian NL interpretation and learning

    NARCIS (Netherlands)

    Zeevat, H.

    2011-01-01

    Everyday natural language communication is normally successful, even though contemporary computational linguistics has shown that NL is characterised by a very high degree of ambiguity, and the results of stochastic methods are not good enough to explain the high success rate. Bayesian natural language

  18. Hepatitis disease detection using Bayesian theory

    Science.gov (United States)

    Maseleno, Andino; Hidayati, Rohmah Zahroh

    2017-02-01

    This paper presents hepatitis disease diagnosis using Bayesian theory, for a better understanding of the theory. In this research, we used Bayesian theory to detect hepatitis disease and display the result of the diagnosis process. Bayesian theory, rediscovered and perfected by Laplace, rests on a basic idea: use known prior probabilities and conditional probability densities to calculate the corresponding posterior probability via Bayes' theorem, and then use that posterior probability to draw inferences and make decisions. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever, and headache, and we compute the probability of hepatitis given the presence of these symptoms. The results show that Bayesian theory successfully identified the presence of hepatitis disease.
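
    The posterior computation described here reduces to Bayes' theorem; a short sketch, assuming conditional independence of the symptoms and purely hypothetical prior and likelihood values:

    ```python
    # Bayes' theorem for P(hepatitis | malaise, fever, headache), assuming the
    # symptoms are conditionally independent given disease status (naive Bayes).
    prior = 0.05                      # assumed prevalence of hepatitis

    # Hypothetical likelihoods P(symptom | disease) and P(symptom | no disease)
    p_given_d = {"malaise": 0.80, "fever": 0.70, "headache": 0.60}
    p_given_nd = {"malaise": 0.20, "fever": 0.10, "headache": 0.25}

    num = prior
    den = 1.0 - prior
    for s in ("malaise", "fever", "headache"):
        num *= p_given_d[s]
        den *= p_given_nd[s]

    posterior = num / (num + den)
    print(f"P(hepatitis | all three symptoms) = {posterior:.3f}")
    ```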

  19. Stability criteria for T-S fuzzy systems with interval time-varying delays and nonlinear perturbations based on geometric progression delay partitioning method.

    Science.gov (United States)

    Chen, Hao; Zhong, Shouming; Li, Min; Liu, Xingwen; Adu-Gyamfi, Fehrs

    2016-07-01

    In this paper, a novel delay partitioning method is proposed by introducing the theory of geometric progression for the stability analysis of T-S fuzzy systems with interval time-varying delays and nonlinear perturbations. Based on the common ratio α, the delay interval is unequally separated into multiple subintervals. A newly modified Lyapunov-Krasovskii functional (LKF) is established which includes triple-integral terms and augmented factors with respect to the length of every related proportional subintervals. In addition, a recently developed free-matrix-based integral inequality is employed to avoid the overabundance of the enlargement when dealing with the derivative of the LKF. This innovative development can dramatically enhance the efficiency of obtaining the maximum upper bound of the time delay. Finally, much less conservative stability criteria are presented. Numerical examples are conducted to demonstrate the significant improvements of this proposed approach.
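
    The partition itself is easy to state: with common ratio α, the delay interval [h1, h2] is split into m subintervals whose lengths grow geometrically, so the subdivision is finer near one end. A sketch of the boundary computation, with all values assumed:

    ```python
    def geometric_delay_partition(h1, h2, alpha, m):
        """Split [h1, h2] into m subintervals whose lengths follow ratio alpha."""
        if alpha == 1.0:
            lengths = [(h2 - h1) / m] * m            # degenerate: equal split
        else:
            # First length from the geometric series sum: l1*(alpha^m - 1)/(alpha - 1)
            first = (h2 - h1) * (alpha - 1.0) / (alpha**m - 1.0)
            lengths = [first * alpha**i for i in range(m)]
        bounds = [h1]
        for ell in lengths:
            bounds.append(bounds[-1] + ell)
        return bounds

    # Assumed example: delay in [0.1, 1.0], common ratio 2, four subintervals
    print(geometric_delay_partition(0.1, 1.0, 2.0, 4))
    ```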

  20. An alternative method to isolate protease and phospholipase A2 toxins from snake venoms based on partitioning of aqueous two-phase systems

    Directory of Open Access Journals (Sweden)

    GN Gómez

    2012-01-01

    Full Text Available Snake venoms are rich sources of active proteins that have been employed in the diagnosis and treatment of health disorders and antivenom therapy. Developing countries demand fast economical downstream processes for the purification of this biomolecule type without requiring sophisticated equipment. We developed an alternative, simple and easy to scale-up method, able to purify simultaneously protease and phospholipase A2 toxins from Bothrops alternatus venom. It comprises a multiple-step partition procedure with polyethylene-glycol/phosphate aqueous two-phase systems followed by a gel filtration chromatographic step. Two single bands in SDS-polyacrylamide gel electrophoresis and increased proteolytic and phospholipase A2 specific activities evidence the homogeneity of the isolated proteins.

  1. Study of the prevalence of tuberculosis using Bayesian methods

    Directory of Open Access Journals (Sweden)

    Jorge Alberto Achcar

    2003-12-01

    Full Text Available In this paper, we present Bayesian estimators for the prevalence of tuberculosis using computational methods to simulate samples from the posterior distribution of interest. In particular, we consider the Gibbs sampling algorithm to simulate samples of the posterior distribution, from which we obtain, in a simple way, accurate inferences for the prevalence of tuberculosis. In an application, we analyze the results of chest X-ray exams in the diagnosis of tuberculosis. With this application, we verify that the Bayesian estimators are simple to obtain and highly accurate. Computational methods for simulating samples, such as the Gibbs sampling algorithm, have recently been widely used for the Bayesian analysis of models in biostatistics. These simulation techniques using the Gibbs sampling algorithm are easily implemented, do not require much computational knowledge, and can be programmed in any available software. Moreover, these techniques can be applied to study the prevalence of other diseases.
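
    A minimal Gibbs-sampler sketch in the spirit of this approach: the prevalence is estimated from imperfect test results by alternately sampling each subject's latent disease status and the prevalence itself. The counts, sensitivity, and specificity below are made up, not taken from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Assumed data: 120 of 1000 chest X-rays read as positive
    n, n_pos = 1000, 120
    se, sp = 0.85, 0.95            # assumed test sensitivity and specificity
    a, b = 1.0, 1.0                # Beta(1, 1) prior on the prevalence

    tests = np.zeros(n, dtype=int)
    tests[:n_pos] = 1

    p, draws = 0.1, []
    for it in range(5000):
        # 1) Sample each latent true disease status given the test result and p
        post_pos = (p * se) / (p * se + (1 - p) * (1 - sp))        # P(D=1 | T=1)
        post_neg = (p * (1 - se)) / (p * (1 - se) + (1 - p) * sp)  # P(D=1 | T=0)
        prob = np.where(tests == 1, post_pos, post_neg)
        d = rng.uniform(size=n) < prob
        # 2) Sample the prevalence from its Beta full conditional
        p = rng.beta(a + d.sum(), b + n - d.sum())
        draws.append(p)

    print("posterior mean prevalence:", np.mean(draws[1000:]))  # drop burn-in
    ```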

  2. Application of integrated Bayesian modeling and Markov chain Monte Carlo methods to the conservation of a harvested species

    Directory of Open Access Journals (Sweden)

    Fonnesbeck, C. J.

    2004-06-01

    Full Text Available When endeavoring to make informed decisions, conservation biologists must frequently contend with disparate sources of data and competing hypotheses about the likely impacts of proposed decisions on the resource status. Frequently, statistical analyses, modeling (e.g., for population projection), and optimization or simulation are conducted as separate exercises. For example, a population model might be constructed, whose parameters are then estimated from data (e.g., ringing studies, population surveys). This model might then be used to predict future population states, from current population estimates, under a particular management regime. Finally, the parameterized model might also be used to evaluate alternative candidate management decisions, via simulation, optimization, or both. This approach, while effective, does not take full advantage of the integration of data and model components for prediction and updating; we propose a hierarchical Bayesian context for this integration. In the case of American black ducks (Anas rubripes), managers are simultaneously faced with trying to extract a sustainable harvest from the species, while maintaining individual stocks above acceptable thresholds. The problem is complicated by spatial heterogeneity in the growth rates and carrying capacity of black duck stocks, movement between stocks, regional differences in the intensity of harvest pressure, and heterogeneity in the degree of competition from a close congener, mallards (Anas platyrhynchos), among stocks. We have constructed a population life cycle model that takes these components into account and simultaneously performs parameter estimation and population prediction in a Bayesian framework. Ringing data are used to develop posterior predictive distributions for harvest mortality rates, given as input decisions about harvest regulations. Population surveys of black ducks and mallards are used to obtain stock-specific estimates of population size for

  3. Gentile statistics and restricted partitions

    Indian Academy of Sciences (India)

    C S Srivatsan; M V N Murthy; R K Bhaduri

    2006-03-01

    In a recent paper (Tran et al, Ann. Phys. 311, 204 (2004)), some asymptotic number-theoretical results on the partitioning of an integer were derived by exploiting its connection to the quantum density of states of a many-particle system. We generalise these results to obtain an asymptotic formula for the restricted or coloured partitions $p_{k}^{s}(n)$, the number of partitions of an integer n into summands that are s-th powers of integers, such that each power of a given integer may occur at most k times. While the method is not rigorous, it reproduces the well-known asymptotic results for s = 1, apart from yielding more general results for arbitrary values of k and s.
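
    For small n, the restricted partition count p_k^s(n) can be computed exactly by dynamic programming over the generating function ∏_m (1 + x^(m^s) + ... + x^(k·m^s)), which gives a useful check on any asymptotic formula. A short sketch:

    ```python
    def restricted_partitions(n, s=1, k=1):
        """Count partitions of n into s-th powers, each part used at most k times."""
        counts = [1] + [0] * n                 # counts[j] = ways to build j so far
        m = 1
        while m**s <= n:
            part = m**s
            new = counts[:]                    # the "1" term of the factor
            for times in range(1, k + 1):      # add x^(times*part) terms
                step = times * part
                if step > n:
                    break
                for j in range(step, n + 1):
                    new[j] += counts[j - step]
            counts = new
            m += 1
        return counts[n]

    # With s = 1, each part of 5 can repeat at most 5 times anyway, so this
    # matches the ordinary partition function: p(5) = 7.
    print(restricted_partitions(5, s=1, k=5))   # -> 7
    print(restricted_partitions(10, s=2, k=1))  # distinct squares: only 1 + 9
    ```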

  4. Comparison of Bayesian and classical methods in the analysis of cluster randomized controlled trials with a binary outcome: The Community Hypertension Assessment Trial (CHAT

    Directory of Open Access Journals (Sweden)

    Dolovich Lisa

    2009-06-01

    Full Text Available Abstract Background Cluster randomized trials (CRTs) are increasingly used to assess the effectiveness of interventions to improve health outcomes or prevent diseases. However, the efficiency and consistency of using different analytical methods in the analysis of binary outcomes have received little attention. We described and compared various statistical approaches in the analysis of CRTs using the Community Hypertension Assessment Trial (CHAT) as an example. The CHAT study was a cluster randomized controlled trial aimed at investigating the effectiveness of pharmacy-based blood pressure clinics led by peer health educators, with feedback to family physicians (the CHAT intervention), against a Usual Practice model (Control), on the monitoring and management of BP among older adults. Methods We compared three cluster-level and six individual-level statistical analysis methods in the analysis of binary outcomes from the CHAT study. The three cluster-level analysis methods were: (i) un-weighted linear regression, (ii) weighted linear regression, and (iii) random-effects meta-regression. The six individual-level analysis methods were: (i) standard logistic regression, (ii) robust standard errors approach, (iii) generalized estimating equations, (iv) random-effects meta-analytic approach, (v) random-effects logistic regression, and (vi) Bayesian random-effects regression. We also investigated the robustness of the estimates after adjustment for the cluster- and individual-level covariates. Results Among all the statistical methods assessed, the Bayesian random-effects logistic regression method yielded the widest 95% interval estimate for the odds ratio and consequently led to the most conservative conclusion. However, the results remained robust under all methods, showing sufficient evidence in support of the hypothesis of no effect of the CHAT intervention, against the Usual Practice control model, for management of blood pressure among seniors in primary care. The

  5. A Bayesian method for characterizing distributed micro-releases: II. inference under model uncertainty with short time-series data.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef; Fast P. (Lawrence Livermore National Laboratory, Livermore, CA); Kraus, M. (Peterson AFB, CO); Ray, J. P.

    2006-01-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error, i.e., situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than a random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.

  6. Bayesian networks: a new method for the modeling of bibliographic knowledge: application to fall risk assessment in geriatric patients.

    Science.gov (United States)

    Lalande, Laure; Bourguignon, Laurent; Carlier, Chloé; Ducher, Michel

    2013-06-01

    Falls in geriatric patients are associated with significant morbidity, mortality, and high healthcare costs. Because of the large number of variables related to the risk of falling, identifying patients at risk is a difficult challenge. The aim of this work was to validate a tool to detect patients with a high risk of falls using only bibliographic knowledge. Thirty articles corresponding to 160 studies were used to model fall risk. A retrospective case-control cohort including 288 patients (88 ± 7 years) and a prospective cohort including 106 patients (89 ± 6 years) from two geriatric hospitals were used to validate the performance of our model. We identified 26 variables associated with an increased risk of falls; these variables were split into illnesses, medications, and environment. The combination of the three associated scores gives a global fall score. The sensitivity and specificity were 31.4% and 81.6% for the retrospective cohort, and 38.5% and 90% for the prospective cohort, respectively. The performance of the model is similar to results observed with existing prediction tools that use model adjustment to data from numerous cohort studies. This work demonstrates that knowledge from the literature can be synthesized with Bayesian networks.

  7. An optimized Method to Identify RR Lyrae stars in the SDSS X Pan-STARRS1 Overlapping Area Using a Bayesian Generative Technique

    CERN Document Server

    Abbas, M A; Martin, N F; Kaiser, N; Burgett, W S; Huber, M E; Waters, C

    2014-01-01

    We present a method for selecting RR Lyrae (RRL) stars (or other types of variable stars) in the absence of a large number of multi-epoch data and light curve analyses. Our method uses color and variability selection cuts that are defined by applying a Gaussian Mixture Bayesian Generative Method (GMM) on 636 pre-identified RRL stars instead of applying the commonly used rectangular cuts. Specifically, our method selects 8,115 RRL candidates (heliocentric distances < 70 kpc) using GMM color cuts from the Sloan Digital Sky Survey (SDSS) and GMM variability cuts from the Panoramic Survey Telescope and Rapid Response System 1 3π survey (PS1). Comparing our method with the Stripe 82 catalog of RRL stars shows that the efficiency and completeness levels of our method are ~77% and ~52%, respectively. Most contaminants are either non-variable main-sequence stars or stars in eclipsing systems. The method described here efficiently recovers known stellar halo substructures. It is expected that the current completene...

  8. Mobile Application Partition Method Based on Tag in Cloud Environment

    Institute of Scientific and Technical Information of China (English)

    樊新; 高曙

    2015-01-01

    To address the resource constraints of mobile devices and reduce their energy consumption, a tag-based mobile application partition method is proposed. The mobile application is partitioned in advance according to its functional structure, and the application modules that can be transferred are tagged. Exploiting the abundant resources and strong information-processing capability of cloud computing, and combining this with a transfer energy-consumption model, the method decides whether a tagged module should be transferred to the cloud for remote execution; the cloud execution results are returned over the wireless network, thereby extending the device's resources and saving energy.
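
    A transfer energy-consumption model of this kind typically reduces to comparing the device's local execution energy against its transmission-plus-idle energy while the cloud computes. The sketch below uses entirely hypothetical module and hardware parameters:

    ```python
    def should_offload(cycles, data_bytes,
                       p_compute=0.9, f_local=1e9,      # W, cycles/s (assumed)
                       p_radio=1.2, bandwidth=1e6,      # W, bytes/s (assumed)
                       p_idle=0.05, cloud_speedup=10.0):
        """Offload a tagged module iff offloading saves device energy."""
        e_local = p_compute * cycles / f_local           # run on the device
        t_tx = data_bytes / bandwidth                    # ship state to the cloud
        t_cloud = cycles / (cloud_speedup * f_local)     # wait for the remote result
        e_offload = p_radio * t_tx + p_idle * t_cloud
        return e_offload < e_local

    # Hypothetical module: 2e9 CPU cycles, 200 kB of state to transfer
    print(should_offload(2e9, 200_000))
    ```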

  9. Bayesian analysis for kaon photoproduction

    Energy Technology Data Exchange (ETDEWEB)

    Marsainy, T., E-mail: tmart@fisika.ui.ac.id; Mart, T., E-mail: tmart@fisika.ui.ac.id [Department Fisika, FMIPA, Universitas Indonesia, Depok 16424 (Indonesia)

    2014-09-25

    We have investigated the contribution of nucleon resonances to the kaon photoproduction process by using an established statistical decision-making method, i.e., the Bayesian method. This method not only evaluates the model over its entire parameter space, but also takes the prior information and experimental data into account. The result indicates that certain resonances have larger probabilities of contributing to the process.

  10. Bayesian Games with Intentions

    Directory of Open Access Journals (Sweden)

    Adam Bjorndahl

    2016-06-01

    Full Text Available We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

  11. Performance of the 'material Failure Forecast Method' in real-time situations: A Bayesian approach applied on effusive and explosive eruptions

    Science.gov (United States)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.; Arámbula-Mendoza, R.; Budi-Santoso, A.

    2016-11-01

    Most attempts of deterministic eruption forecasting are based on the material Failure Forecast Method (FFM). This method assumes that a precursory observable, such as the rate of seismic activity, can be described by a simple power law which presents a singularity at a time close to the eruption onset. Until now, this method has been applied only in a small number of cases, generally for forecasts in hindsight. In this paper, a rigorous Bayesian approach of the FFM designed for real-time applications is applied. Using an automatic recognition system, seismo-volcanic events are detected and classified according to their physical mechanism and time series of probability distributions of the rates of events are calculated. At each time of observation, a Bayesian inversion provides estimations of the exponent of the power law and of the time of eruption, together with their probability density functions. Two criteria are defined in order to evaluate the quality and reliability of the forecasts. Our automated procedure has allowed the analysis of long, continuous seismic time series: 13 years from Volcán de Colima, Mexico, 10 years from Piton de la Fournaise, Reunion Island, France, and several months from Merapi volcano, Java, Indonesia. The new forecasting approach has been applied to 64 pre-eruptive sequences which present various types of dominant seismic activity (volcano-tectonic or long-period events) and patterns of seismicity with different level of complexity. This has allowed us to test the FFM assumptions, to determine in which conditions the method can be applied, and to quantify the success rate of the forecasts. 62% of the precursory sequences analysed are suitable for the application of FFM and half of the total number of eruptions are successfully forecast in hindsight. In real-time, the method allows for the successful forecast of 36% of all the eruptions considered. Nevertheless, real-time forecasts are successful for 83% of the cases that fulfil the
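
    In its simplest deterministic form (power-law exponent equal to 2), the FFM predicts that the inverse precursor rate decreases linearly in time and crosses zero at the failure time, so a forecast reduces to a straight-line fit; the paper's Bayesian treatment replaces this least-squares step with full posterior inference. A sketch on synthetic rates:

    ```python
    import numpy as np

    # Synthetic precursory sequence: rate accelerating as 1/(t_f - t), t_f = 10 days
    t = np.linspace(0.0, 8.0, 40)
    t_f_true = 10.0
    rate = 50.0 / (t_f_true - t)
    rate *= 1.0 + 0.05 * np.random.default_rng(5).standard_normal(t.size)  # noise

    inv_rate = 1.0 / rate                     # FFM with exponent 2: linear in t
    slope, intercept = np.polyfit(t, inv_rate, 1)
    t_forecast = -intercept / slope           # time where 1/rate reaches zero

    print(f"forecast failure time: {t_forecast:.2f} days (true {t_f_true})")
    ```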

  12. Bayesian hierarchical clustering for studying cancer gene expression data with unknown statistics.

    Directory of Open Access Journals (Sweden)

    Korsuk Sirinukunwattana

    Full Text Available Clustering analysis is an important tool in studying gene expression data. The Bayesian hierarchical clustering (BHC) algorithm can automatically infer the number of clusters and uses Bayesian model selection to improve clustering quality. In this paper, we present an extension of the BHC algorithm. Our Gaussian BHC (GBHC) algorithm represents data as a mixture of Gaussian distributions. It uses the normal-gamma distribution as a conjugate prior on the mean and precision of each of the Gaussian components. We tested GBHC over 11 cancer and 3 synthetic datasets. The results on cancer datasets show that in sample clustering, GBHC on average produces a clustering partition that is more concordant with the ground truth than those obtained from other commonly used algorithms. Furthermore, GBHC frequently infers a number of clusters that is often close to the ground truth. In gene clustering, GBHC also produces a clustering partition that is more biologically plausible than several other state-of-the-art methods. This suggests GBHC as an alternative tool for studying gene expression data. The implementation of GBHC is available at https://sites.google.com/site/gaussianbhc/

  13. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  14. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Science.gov (United States)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-11-01

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, difficulties remain in the analysis when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study compares a conventional analysis based on the "iterative peak fitting deconvolution" method with a "nonparametric Bayesian deconvolution" approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fit deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex cases of spectra are studied from IAEA benchmark protocol tests and with measured spectra. The SINBAD code shows promising deconvolution capabilities compared to the conventional method, without any expert parameter fine tuning.

  15. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  16. Partitive descriptions in Korean

    Directory of Open Access Journals (Sweden)

    Keun Young Shin

    2017-02-01

    Full Text Available This paper examines Korean partitive constructions to investigate the typology of the partitive structure. In Korean, a quantifier precedes the nominal in a non-partitive, but it follows the nominal in a partitive. The relative order between a quantifier and its associated nominal indicates that a quantifier in the Korean partitive does not function as an NP adjunct but takes a DP as its argument. I argue that Korean postnominal (floating) quantifier constructions can be interpreted as partitives or pseudo-partitives/quantitatives, because a postnominal (floating) quantifier denoting a part-of relation can occur with a kind-denoting DP as well as a definite DP. I also propose that a quantifier denoting a part-of relation is associated with the argument of a verb via composition with a verbal predicate in the floating quantifier construction. This approach can account for several idiosyncratic properties of floating quantifier constructions, which are difficult to capture under the assumption that a floating quantifier construction is derived by moving a quantifier away from its associated nominal. This article is part of the Special Collection: Partitives

  17. A Study on the Quantitative Assessment Method of Software Requirement Documents Using Software Engineering Measures and Bayesian Belief Networks

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kang, Hyun Gook; Park, Ki Hong; Kwon, Kee Choon; Chang, Seung Cheol [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    One of the major challenges in using digital systems in an NPP is the reliability estimation of the safety-critical software embedded in the digital safety systems. Precise quantitative assessment of the reliability of safety-critical software is nearly impossible, since many of the aspects to be considered are of a qualitative nature and not directly measurable, but they have to be estimated for practical use. Therefore, expert judgment plays an important role in estimating the reliability of the software embedded in safety-critical systems in practice, because experts can deal with all the diverse evidence relevant to the reliability and can perform an inference based on that evidence. In general, however, the experts' way of combining the diverse evidence and performing an inference is informal and qualitative, which is hard to discuss and will eventually lead to a debate about the conclusion. We have been carrying out research on the quantitative assessment of the reliability of safety-critical software using Bayesian Belief Networks (BBN). BBN has proven to be a useful modeling formalism because a user can represent a complex set of events and relationships in a fashion that can easily be interpreted by others. In previous work we assessed a software requirement specification of a reactor protection system by using our BBN-based assessment model, which mainly employed an expert's subjective probabilities as inputs. In the process of assessing the software requirement documents we found that the BBN model was, to a large extent, excessively dependent on experts' subjective judgments. Therefore, to overcome this weakness of our methodology, we incorporated conventional software engineering measures into the BBN model, as shown in this paper. The quantitative relationship between conventional software measures and the reliability of software was not well identified in the past. Then recently there appeared a few

  18. Bayesian priors and nuisance parameters

    CERN Document Server

    Gupta, Sourendu

    2016-01-01

    Bayesian techniques are widely used to obtain spectral functions from correlators. We suggest a technique to rid the results of nuisance parameters, i.e., parameters which are needed for the regularization but cannot be determined from the data. We give examples where the method works, including a pion mass extraction with two flavours of staggered quarks at a lattice spacing of about 0.07 fm. We also give an example where the method does not work.

  19. ISAR Imaging Method Based on Bayesian Group-sparse Modeling

    Institute of Scientific and Technical Information of China (English)

    吴称光; 邓彬; 苏伍各; 王宏强; 秦玉亮

    2015-01-01

    Traditional sparse ISAR imaging mainly addresses the recovery of the scattering coefficients of independent scatterers. In practice, however, target scatterers do not exist in isolation but appear in regions or blocks, so the commonly used sparse reconstruction algorithms cannot fully characterize the true structure of a block target; group-sparse reconstruction algorithms are therefore adopted to recover the target scattering coefficients. The reconstruction method based on Bayesian group-sparse modeling and variational inference (VBGS) inherits the parameter-learning advantages of sparse Bayesian learning (SBL), using hierarchical prior distributions to represent the sparse block information of the unknown signal, and can therefore reconstruct block-sparse signals better than existing recovery algorithms. Based on variational Bayesian inference, the method estimates the unknown signal parameters automatically from the observations, without manual parameter tuning. For sparse block targets, this paper combines VBGS with compressed sensing (CS) theory for ISAR imaging; simulation results show that the method outperforms traditional imaging results and is well suited to imaging ISAR targets with block structure.

  20. Power System Fault Diagnosis Method Based on Bayesian Petri Nets

    Institute of Scientific and Technical Information of China (English)

    佘维; 叶阳东

    2011-01-01

    Because Petri nets tolerate faults poorly and have difficulty adapting to changes in network topology when analyzing complex power systems, a Bayesian Petri net (BPN) model is proposed, together with a power-grid fault diagnosis method based on it. The method determines the outage area through topology analysis of the power system network, then builds BPN models for each component in the outage area along the fault propagation directions, determines the failed component by Petri net inference and Bayesian probability calculation, and finally fuses the analysis results from each direction by averaging. Diagnosis analysis shows that the method has good fault tolerance under incomplete information and remains adaptable after the network topology changes. Because BPN inference obtains component failure probabilities from statistics-based prior probabilities, it avoids the subjectivity of setting the calculation parameters directly.

  1. Privacy Preserving Multiview Point Based BAT Clustering Algorithm and Graph Kernel Method for Data Disambiguation on Horizontally Partitioned Data

    Directory of Open Access Journals (Sweden)

    J. Anitha

    2015-06-01

    Full Text Available Data mining has been a popular research area for more than a decade due to its vast spectrum of applications. However, the popularity and wide availability of data mining tools have also raised concerns about the privacy of individuals. The burden of data privacy protection thus falls on the shoulders of the data holder, and when a data disambiguation problem occurs in the data matrix, the anonymized data become less secure. Existing privacy-preserving clustering methods perform clustering based on a single viewpoint, the origin, whereas a multiviewpoint approach utilizes many different viewpoints, i.e., objects assumed not to be in the same cluster as the two objects being measured. To solve the problems mentioned above, this study presents a multiviewpoint-based clustering method for anonymized data. First, the data disambiguation problem is solved using the Ramon-Gärtner Subtree Graph Kernel (RGSGK), in which weight values are assigned and the kernel value is determined for the disambiguated data. Privacy is obtained by anonymization, with the data encrypted under a secure key obtained by Ring-Based Fully Homomorphic Encryption (RBFHE). To group the anonymized data, a BAT clustering method based on multiviewpoint similarity measurement, called MVBAT, is proposed, in which a distance matrix is first calculated and used to form similarity and dissimilarity matrices. Experimental results of the proposed MVBAT clustering algorithm are compared with conventional methods in terms of F-measure, running time, privacy loss, and utility loss, and the RBFHE encryption results are compared with existing methods in terms of communication cost, on UCI machine learning datasets such as the Adult and Housing datasets.

  2. Bayesian Causal Induction

    CERN Document Server

    Ortega, Pedro A

    2011-01-01

    Discovering causal relationships is a hard task, often hindered by the need for intervention, and often requiring large amounts of data to resolve statistical uncertainty. However, humans quickly arrive at useful causal relationships. One possible reason is that humans use strong prior knowledge; and rather than encoding hard causal relationships, they encode beliefs over causal structures, allowing for sound generalization from the observations they obtain from directly acting in the world. In this work we propose a Bayesian approach to causal induction which allows modeling beliefs over multiple causal hypotheses and predicting the behavior of the world under causal interventions. We then illustrate how this method extracts causal information from data containing interventions and observations.

  3. Bayesian inference in geomagnetism

    Science.gov (United States)

    Backus, George E.

    1988-01-01

    The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.

  4. Prediction of the air-water partition coefficient for perfluoro-2-methyl-3-pentanone using high-level Gaussian-4 composite theoretical methods.

    Science.gov (United States)

    Rayne, Sierra; Forest, Kaya

    2014-09-19

    The air-water partition coefficient (Kaw) of perfluoro-2-methyl-3-pentanone (PFMP) was estimated using the G4MP2/G4 levels of theory and the SMD solvation model. A suite of 31 fluorinated compounds was employed to calibrate the theoretical method. Excellent agreement between experimental and directly calculated Kaw values was obtained for the calibration compounds. The PCM solvation model was found to yield unsatisfactory Kaw estimates for fluorinated compounds at both levels of theory. The HENRYWIN Kaw estimation program also exhibited poor Kaw prediction performance on the training set. Based on the resulting regression equation for the calibration compounds, the G4MP2-SMD method constrained the estimated Kaw of PFMP to the range 5-8 × 10⁻⁶ M atm⁻¹. The magnitude of this Kaw range indicates almost all PFMP released into the atmosphere or near the land-atmosphere interface will reside in the gas phase, with only minor quantities dissolved in the aqueous phase as the parent compound and/or its hydrate/hydrate conjugate base. Following discharge into aqueous systems not at equilibrium with the atmosphere, significant quantities of PFMP will be present as the dissolved parent compound and/or its hydrate/hydrate conjugate base.
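
    The calibration step amounts to regressing the experimental values for the training compounds on their directly calculated counterparts and then applying the fitted line to the compound of interest. The sketch below illustrates this with made-up log Kaw values; none of the numbers come from the paper:

    ```python
    import numpy as np

    # Hypothetical calibration set: directly calculated vs experimental log Kaw
    calc = np.array([-5.1, -3.8, -2.2, -4.5, -1.9, -3.1])
    expt = np.array([-5.3, -3.6, -2.4, -4.4, -2.0, -3.3])

    slope, intercept = np.polyfit(calc, expt, 1)      # linear calibration
    resid = expt - (slope * calc + intercept)
    rmse = np.sqrt(np.mean(resid**2))                 # spread around the fit

    calc_target = -5.2                                # hypothetical raw value
    log_kaw = slope * calc_target + intercept
    print(f"calibrated log Kaw = {log_kaw:.2f} +/- {rmse:.2f}")
    ```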

  5. Study of Massive Data Delaunay Triangulation Based on Grid Partition Method

    Institute of Scientific and Technical Information of China (English)

    李小丽; 陈花竹

    2011-01-01

    To raise the speed of constructing Delaunay triangulations for massive data, this paper uses a grid partition method. First, the data are divided into grid tiles in a linear quadtree fashion and a sub-triangulation is constructed within each tile; the tiles are then merged bottom-up to form the global Delaunay triangulation. On this basis, a constraint angle is introduced to optimize the triangular mesh and avoid excessively acute angles.

  6. An optimized method to identify RR Lyrae stars in the SDSS×Pan-STARRS1 overlapping area using a bayesian generative technique

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Mohamad; Grebel, Eva K. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Martin, N. F. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Kaiser, N.; Burgett, W. S.; Huber, M. E.; Waters, C., E-mail: mabbas@ari.uni-heidelberg.de [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States)

    2014-07-01

    We present a method for selecting RR Lyrae (RRL) stars (or other types of variable stars) in the absence of a large number of multi-epoch data and light curve analyses. Our method uses color and variability selection cuts that are defined by applying a Gaussian Mixture Bayesian Generative Method (GMM) on 636 pre-identified RRL stars instead of applying the commonly used rectangular cuts. Specifically, our method selects 8115 RRL candidates (heliocentric distances < 70 kpc) using GMM color cuts from the Sloan Digital Sky Survey (SDSS) and GMM variability cuts from the Panoramic Survey Telescope and Rapid Response System 1 3π survey (PS1). Comparing our method with the Stripe 82 catalog of RRL stars shows that the efficiency and completeness levels of our method are ∼77% and ∼52%, respectively. Most contaminants are either non-variable main-sequence stars or stars in eclipsing systems. The method described here efficiently recovers known stellar halo substructures. It is expected that the current completeness and efficiency levels will further improve with the additional PS1 epochs (∼3 epochs per filter) that will be observed before the conclusion of the survey. A comparison between our efficiency and completeness levels using the GMM method to the efficiency and completeness levels using rectangular cuts that are commonly used yielded a significant increase in the efficiency level from ∼13% to ∼77% and an insignificant change in the completeness levels. Hence, we favor using the GMM technique in future studies. Although we develop it over the SDSS×PS1 footprint, the technique presented here would work well on any multi-band, multi-epoch survey for which the number of epochs is limited.
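
    In essence, the GMM selection replaces rectangular color/variability boxes with a density threshold under a Gaussian mixture fitted to the known RRL stars. A sketch using scikit-learn, with made-up two-dimensional features and an assumed retention threshold:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)

    # Made-up 2-D features (e.g., a color and a variability index) for
    # pre-identified RRL stars and for survey candidates.
    rrl_features = rng.multivariate_normal([0.2, 0.8], [[0.01, 0], [0, 0.04]], 636)
    candidates = rng.uniform([-0.5, 0.0], [1.0, 2.0], size=(10_000, 2))

    gmm = GaussianMixture(n_components=2, random_state=0).fit(rrl_features)

    # Keep candidates whose density under the RRL mixture exceeds the cut that
    # retains 90% of the training stars (the threshold choice is an assumption).
    cut = np.percentile(gmm.score_samples(rrl_features), 10)
    selected = candidates[gmm.score_samples(candidates) >= cut]
    print(f"{len(selected)} of {len(candidates)} candidates pass the GMM cut")
    ```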

  7. Pretreatment method for immunoassay of polychlorinated biphenyls in transformer oil using multilayer capillary column and microfluidic liquid-liquid partitioning.

    Science.gov (United States)

    Aota, Arata; Date, Yasumoto; Terakado, Shingo; Ohmura, Naoya

    2013-01-01

    Polychlorinated biphenyls (PCBs) are persistent organic pollutants that are present in the insulating oil inside a large number of transformers. To aid in eliminating PCB-contaminated transformers, PCBs in oil need to be measured using a rapid and cost-effective analytical method. We previously reported a pretreatment method for the immunoassay of PCBs in oil using a large-scale multilayer column and a microchip with multiple microrecesses, which permitted concentrated solvent extraction. In this paper, we report a more rapid and facile pretreatment method, without an evaporation process, achieved by improving the column and the microchip. In the miniaturized column, the decomposition and separation of oil were completed in 2 min. PCBs can be eluted from the capillary column at concentrations seven times higher than those from the previous column. The total volume of the microrecesses was increased by improving the microrecess structure, enabling extraction of four times the amount of PCBs achieved with the previous system. By interfacing the capillary column with the improved microchip, PCBs in the eluate from the column were extracted into dimethyl sulfoxide in the microrecesses with high enrichment and without the need for evaporation. Pretreatment was completed within 20 min. The pretreated oil was analyzed using a flow-based kinetic exclusion immunoassay. The limit of detection of PCBs in oil was 0.15 mg kg⁻¹, which satisfies the criterion set in Japan of 0.5 mg kg⁻¹.

  8. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  9. Bayesian target tracking based on particle filter

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Particle filters have been studied by many researchers because they can deal with nonlinear or non-Gaussian problems. Building on the particle filter, the extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking. Novel techniques such as the Markov chain Monte Carlo (MCMC) method and the resampling step are also introduced into Bayesian target tracking, and the simulation results confirm that the improved particle filter with these techniques outperforms the basic one.
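
    For reference, a minimal bootstrap particle filter with the resampling step mentioned above might look as follows. This sketch uses a plain random-walk proposal rather than the paper's EKF proposal, and all model parameters are illustrative assumptions.

        import numpy as np

        def particle_filter(y, n_particles=500, q=0.1, r=0.5, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
            estimates = []
            for obs in y:
                x = x + rng.normal(0.0, q, n_particles)  # propagate: random-walk prior
                w = np.exp(-0.5 * ((obs - x) / r) ** 2)  # Gaussian likelihood weights
                w /= w.sum()
                estimates.append(np.sum(w * x))          # posterior-mean estimate
                x = x[rng.choice(n_particles, n_particles, p=w)]  # resampling step
            return np.array(estimates)

        # usage: track a slowly drifting state observed in noise
        rng = np.random.default_rng(1)
        truth = np.cumsum(rng.normal(0.0, 0.1, 100))
        xhat = particle_filter(truth + rng.normal(0.0, 0.5, 100))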

  10. Fuzzy Functional Dependencies and Bayesian Networks

    Institute of Scientific and Technical Information of China (English)

    LIU WeiYi(刘惟一); SONG Ning(宋宁)

    2003-01-01

    Bayesian networks have become a popular technique for representing and reasoning with probabilistic information. The fuzzy functional dependency is an important kind of data dependency in relational databases with fuzzy values. The purpose of this paper is to set up a connection between these data dependencies and Bayesian networks. The connection is made through a set of methods that enable one to obtain the most information about independence conditions from fuzzy functional dependencies.

  11. The efficacy and safety of triple inhaled treatment in patients with chronic obstructive pulmonary disease: a systematic review and meta-analysis using Bayesian methods

    Directory of Open Access Journals (Sweden)

    Kwak MS

    2015-11-01

    Full Text Available Min-Sun Kwak,1 Eunyoung Kim,2 Eun Jin Jang,3 Hyun Jung Kim,4 Chang-Hoon Lee5 1Department of Internal Medicine, Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Republic of Korea; 2Department of Statistics, Kyungpook National University, Daegu, Republic of Korea; 3Department of Information Statistics, Andong National University, Andong, Republic of Korea; 4Department of Preventive Medicine, College of Medicine, Korea University, Seoul, Republic of Korea; 5Department of Internal Medicine, Division of Pulmonary and Critical Care Medicine, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea Purpose: Although tiotropium (TIO) and inhaled corticosteroid (ICS)/long-acting β-agonists are frequently prescribed together, the efficacy of “triple therapy” has not been scientifically demonstrated. We conducted a systematic review and meta-analysis using Bayesian methods to compare triple therapy and TIO monotherapy. Methods: We searched the MEDLINE, EMBASE, and Cochrane Library databases for randomized controlled trials comparing the efficacy and safety of triple therapy and TIO monotherapy in patients with chronic obstructive pulmonary disease (COPD). We conducted a meta-analysis to compare the effectiveness and safety of triple therapy and TIO monotherapy using Bayesian random effects models. Results: Seven trials were included, and the risk of bias in the majority of the studies was acceptable. There were no statistically significant differences in the incidence of death and acute exacerbation of disease in the triple therapy and TIO monotherapy groups. Triple therapy improved the prebronchodilator forced expiratory volume in 1 second (mean difference [MD], 63.68 mL; 95% credible interval [CrI], 45.29–82.73), and patients receiving triple therapy showed more improvement in St George Respiratory Questionnaire scores (MD, -3.11 points; 95% Cr
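
    The Bayesian random-effects model underlying such a meta-analysis can be sketched with a short Gibbs sampler: each trial contributes an observed effect with its standard error, and the pooled mean is reported with a credible interval. The study effects and standard errors below are invented for illustration and are not data from this review.

        import numpy as np

        # Per-trial effect estimates (e.g. FEV1 mean differences, mL) and their
        # standard errors -- invented numbers, not the trials in this review.
        y = np.array([60.0, 55.0, 70.0, 68.0, 62.0, 58.0, 66.0])
        s = np.array([12.0, 15.0, 10.0, 14.0, 11.0, 13.0, 12.0])

        rng = np.random.default_rng(0)
        k, n_iter = len(y), 20000
        mu, tau2 = y.mean(), y.var()
        draws = []
        for _ in range(n_iter):
            prec = 1.0 / s**2 + 1.0 / tau2          # update study effects theta_i
            theta = rng.normal((y / s**2 + mu / tau2) / prec, np.sqrt(1.0 / prec))
            mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))  # pooled mean (flat prior)
            b = 0.001 + 0.5 * np.sum((theta - mu) ** 2)       # tau^2: inverse-gamma draw
            tau2 = 1.0 / rng.gamma(0.001 + k / 2.0, 1.0 / b)
            draws.append(mu)

        post = np.array(draws[n_iter // 4:])        # drop burn-in
        print(post.mean(), np.percentile(post, [2.5, 97.5]))  # posterior mean, 95% CrI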

  12. Hyainailourine and teratodontine cranial material from the late Eocene of Egypt and the application of parsimony and Bayesian methods to the phylogeny and biogeography of Hyaenodonta (Placentalia, Mammalia)

    Directory of Open Access Journals (Sweden)

    Matthew R. Borths

    2016-11-01

    Full Text Available Hyaenodonta is a diverse, extinct group of carnivorous mammals that included weasel- to rhinoceros-sized species. The oldest-known hyaenodont fossils are from the middle Paleocene of North Africa, and the antiquity of the group in Afro-Arabia led to the hypothesis that it originated there and dispersed to Asia, Europe, and North America. Here we describe two new hyaenodont species based on the oldest hyaenodont cranial specimens known from Afro-Arabia. The material was collected from the latest Eocene Locality 41 (L-41, ∼34 Ma) in the Fayum Depression, Egypt. Akhnatenavus nefertiticyon sp. nov. has specialized, hypercarnivorous molars and an elongate cranial vault. In A. nefertiticyon the tallest, piercing cusp on M1–M2 is the paracone. Brychotherium ephalmos gen. et sp. nov. has more generalized molars that retain the metacone and complex talonids. In B. ephalmos the tallest, piercing cusp on M1–M2 is the metacone. We incorporate this new material into a series of phylogenetic analyses using a character-taxon matrix that includes novel dental, cranial, and postcranial characters, and samples extensively from the global record of the group. The phylogenetic analysis includes the first application of Bayesian methods to hyaenodont relationships. B. ephalmos is consistently placed within Teratodontinae, an Afro-Arabian clade with several generalist and hypercarnivorous forms, and Akhnatenavus is consistently recovered in Hyainailourinae as part of an Afro-Arabian radiation. The phylogenetic results suggest that hypercarnivory evolved independently three times within Hyaenodonta: in Teratodontinae, in Hyainailourinae, and in Hyaenodontinae. Teratodontines are consistently placed in a close relationship with Hyainailouridae (Hyainailourinae + Apterodontinae) to the exclusion of “proviverrines,” hyaenodontines, and several North American clades, and we propose that the superfamily Hyainailouroidea be used to describe this relationship. Using

  13. Bayesian approach to rough set

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes an approach to training rough set models using a Bayesian framework trained with the Markov chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.
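
    A generic random-walk Metropolis sampler of the kind referred to above is sketched below; the paper samples in the rough set granule space, whereas this stand-in samples a real-valued posterior under invented settings.

        import numpy as np

        def metropolis(log_post, x0, n_steps=10000, step=0.5, seed=0):
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            lp = log_post(x)
            chain = []
            for _ in range(n_steps):
                prop = x + rng.normal(0.0, step, x.shape)  # symmetric proposal
                lp_prop = log_post(prop)
                # Metropolis acceptance criterion
                if np.log(rng.uniform()) < lp_prop - lp:
                    x, lp = prop, lp_prop
                chain.append(x.copy())
            return np.array(chain)

        # usage: sample a 2-D standard Gaussian "posterior"
        samples = metropolis(lambda z: -0.5 * np.sum(z**2), x0=[3.0, -3.0])
        print(samples[2000:].mean(axis=0))   # roughly (0, 0) after burn-in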

  14. Bayesian Inference for Radio Observations

    CERN Document Server

    Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin

    2015-01-01

    (Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...

  15. Konstruksi Bayesian Network Dengan Algoritma Bayesian Association Rule Mining Network

    OpenAIRE

    Octavian

    2015-01-01

    In recent years, the Bayesian Network has become a popular concept used in many areas of life, such as making decisions and determining the probability that an event will occur. Unfortunately, constructing the structure of a Bayesian Network is not a simple matter. Therefore, this study introduces the Bayesian Association Rule Mining Network algorithm to facilitate the construction of a Bayesian Network from data ...

  16. Effect of corn preparation methods on dry-grind ethanol production by granular starch hydrolysis and partitioning of spent beer solids.

    Science.gov (United States)

    Lamsal, B P; Wang, H; Johnson, L A

    2011-06-01

    Two corn preparation methods, rollermill flaking and hammermill grinding, were compared for efficient processing of corn into ethanol by granular starch hydrolysis and simultaneous fermentation by the yeast Saccharomyces cerevisiae. Corn was either ground in a hammermill with different size screens or crushed in a smooth-surfaced rollermill at different roller gap settings. The partitioning of beer solids and the size distribution of solids in the thin stillage were compared. The mean particle diameter d(50) varied with set-up and ranged between 210 and 340 μm for ground corn and between 1180 and 1267 μm for flaked corn. The ethanol concentrations in beer were similar (18-19% v/v) for ground and flaked preparations; however, ethanol productivity increased with reduced particle size. Compared to hammermilling, rollermilling reduced solids in thin stillage by 28%, doubled the volume percent of fines (d(50) ∼ 7 μm) in thin stillage, and halved the coarse solids (d(50) ∼ 122 μm).

  17. PARTITION PROPERTY OF DOMAIN DECOMPOSITION WITHOUT ELLIPTICITY

    Institute of Scientific and Technical Information of China (English)

    Mo Mu; Yun-qing Huang

    2001-01-01

    Partition property plays a central role in domain decomposition methods. Existing theory essentially assumes certain ellipticity. We prove the partition property for problems without ellipticity which are of practical importance. Example applications include implicit schemes applied to degenerate parabolic partial differential equations arising from superconductors, superfluids and liquid crystals. With this partition property, Schwarz algorithms can be applied to general non-elliptic problems with an h-independent optimal convergence rate. Application to the time-dependent Ginzburg-Landau model of superconductivity is illustrated and numerical results are presented.

  18. Bosonic Partition Functions

    CERN Document Server

    Kellerstein, M; Verbaarschot, J J M

    2016-01-01

    The behavior of quenched Dirac spectra of two-dimensional lattice QCD is consistent with spontaneous chiral symmetry breaking, which is forbidden according to the Coleman-Mermin-Wagner theorem. One possible resolution of this paradox is that, because of the bosonic determinant in the partially quenched partition function, the conditions of this theorem are violated, allowing for spontaneous symmetry breaking in two dimensions or less. This goes back to work by Niedermaier and Seiler on nonamenable symmetries of the hyperbolic spin chain and earlier work by two of the authors on bosonic partition functions at nonzero chemical potential. In this talk we discuss chiral symmetry breaking for the bosonic partition function of QCD at nonzero isospin chemical potential and a bosonic random matrix theory at imaginary chemical potential, and compare the results with the fermionic counterpart. In both cases the chiral symmetry group of the bosonic partition function is noncompact.

  19. Bayesian methods to restore and rebuild images: application to gammagraphy and to photofission tomography; Methodes bayesiennes pour la restauration et la reconstruction d'images: application a la gammagraphie et a la tomographie par photofissions

    Energy Technology Data Exchange (ETDEWEB)

    Stawinski, G

    1998-10-26

    Bayesian algorithms are developed to solve inverse problems in gamma imaging and photofission tomography. The first part of this work is devoted to the modeling of our measurement systems. Two models have been found for both applications: the first one is a simple conventional model and the second one is a cascaded point-process model. EM and MCMC Bayesian algorithms for image restoration and image reconstruction have been developed for these models and compared. The cascaded point-process model does not significantly improve the results previously obtained with the classical model. Two original approaches have been proposed, which improve on the results previously obtained. The first approach uses an inhomogeneous Markov random field as a prior law and makes the regularization parameter vary spatially; however, the problem of the estimation of hyper-parameters has not been solved. In the case of the deconvolution of point sources, a second approach has been proposed, which introduces a high-level prior model. The picture is modeled as a list of objects whose parameters and number are unknown. The results obtained with this method are more accurate than those obtained with the conventional Markov random field prior model and require lower computational costs. (author)

  20. Water Environmental Capacity Analysis of Taihu Lake and Parameter Estimation Based on the Integration of the Inverse Method and Bayesian Modeling

    Directory of Open Access Journals (Sweden)

    Ranran Li

    2015-09-01

    Full Text Available An integrated approach using the inverse method and a Bayesian approach, combined with a lake eutrophication water quality model, was developed for parameter estimation and water environmental capacity (WEC) analysis. The model was used to support load reduction and effective water quality management in the Taihu Lake system in eastern China. Water quality was surveyed yearly from 1987 to 2010. Total nitrogen (TN) and total phosphorus (TP) were selected as water quality model variables. Decay rates of TN and TP were estimated using the proposed approach, and the WECs of TN and TP in 2011 were determined based on the estimated decay rates. Results showed that the historical loading was beyond the WEC; thus, reduction of nitrogen and phosphorus input is necessary to meet water quality goals. The WEC and allowable discharge capacity (ADC) in 2015 and 2020 were then predicted, and the reduction ratios of ADC during these years were also provided. All of these enable decision makers to assess the influence of each loading, visualize potential load reductions under different water quality goals, and formulate a reasonable water quality management strategy.

  1. Assessment of myocardial metabolic rate of glucose by means of Bayesian ICA and Markov Chain Monte Carlo methods in small animal PET imaging

    Science.gov (United States)

    Berradja, Khadidja; Boughanmi, Nabil

    2016-09-01

    In dynamic cardiac PET FDG studies the assessment of the myocardial metabolic rate of glucose (MMRG) requires knowledge of the blood input function (IF). The IF can be obtained by manual or automatic blood sampling and cross-calibrated with PET. These procedures are cumbersome, invasive and generate uncertainties. The IF is contaminated by spillover of radioactivity from the adjacent myocardium, and this can cause important errors in the estimated MMRG. In this study, we show that the IF can be extracted from the images in a rat heart study with 18F-fluorodeoxyglucose (18F-FDG) by means of Independent Component Analysis (ICA) based on Bayesian theory and the Markov chain Monte Carlo (MCMC) sampling method (BICA). Images of the heart from rats were acquired with the Sherbrooke small animal PET scanner. A region of interest (ROI) was drawn around the rat image and decomposed into blood and tissue using BICA. The statistical study showed that there is a significant difference (p < 0.05) between the MMRG obtained with the IF extracted by BICA and that obtained with an IF extracted from measured images corrupted with spillover.

  2. Cosmic bulk flows on $50\,h^{-1}$ Mpc scales: A Bayesian hyper-parameter method and multi-shells likelihood analysis

    CERN Document Server

    Ma, Yin-Zhe

    2012-01-01

    It has been argued recently that the galaxy peculiar velocity field provides evidence of excessive power on scales of $50\,h^{-1}$ Mpc, which seems to be inconsistent with the standard $\Lambda$CDM cosmological model. We discuss several assumptions and conventions used in studies of the large-scale bulk flow to check whether this claim is robust under a variety of conditions. Rather than using a composite catalogue we select samples from the SN, ENEAR, SFI++ and A1SN catalogues, and correct for Malmquist bias in each according to the IRAS PSCz density field. We also use slightly different assumptions about the small-scale velocity dispersion and the parameterisation of the matter power spectrum when calculating the variance of the bulk flow. By combining the likelihood of individual catalogues using a Bayesian hyper-parameter method, we find that the joint likelihood of the amplitude parameter gives $\sigma_8=0.65^{+0.47}_{-0.35}$ ($\pm 1\sigma$), which is entirely consistent with the $\Lambda$CDM model. In addition, ...

  3. Comparison of two Bayesian methods to detect mode effects between paper-based and computerized adaptive assessments: a preliminary Monte Carlo study

    Directory of Open Access Journals (Sweden)

    Riley Barth B

    2012-08-01

    Full Text Available Abstract Background Computerized adaptive testing (CAT) is being applied to health outcome measures developed as paper-and-pencil (P&P) instruments. Differences in how respondents answer items administered by CAT vs. P&P can increase error in CAT-estimated measures if not identified and corrected. Method Two methods for detecting item-level mode effects are proposed using Bayesian estimation of posterior distributions of item parameters: (1) a modified robust Z (RZ) test, and (2) 95% credible intervals (CrI) for the CAT-P&P difference in item difficulty. A simulation study was conducted under the following conditions: (1) data-generating model (one- vs. two-parameter IRT model); (2) moderate vs. large DIF sizes; (3) percentage of DIF items (10% vs. 30%); and (4) mean difference in θ estimates across modes of 0 vs. 1 logits. This resulted in a total of 16 conditions with 10 generated datasets per condition. Results Both methods evidenced good to excellent false positive control, with RZ providing better control of false positives and with slightly higher power for CrI, irrespective of measurement model. False positives increased when items were very easy to endorse and when there were mode differences in mean trait level. True positives were predicted by CAT item usage, absolute item difficulty and item discrimination. RZ outperformed CrI, due to better control of false positive DIF. Conclusions Whereas false positives were well controlled, particularly for RZ, power to detect DIF was suboptimal. Research is needed to examine the robustness of these methods under varying prior assumptions concerning the distribution of item and person parameters and when data fail to conform to prior assumptions. False identification of DIF when items were very easy to endorse is a problem warranting additional investigation.

  4. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    Science.gov (United States)

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
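
    The recursive two-way splitting described above can be sketched as follows. This stand-in uses a maximum-likelihood Gaussian mixture from scikit-learn rather than the paper's Bayesian estimation with Hamiltonian Monte Carlo, and the data are synthetic.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def split(X, depth=0, max_depth=3, min_size=20):
            """Recursively bisect samples with a two-component mixture model."""
            if depth == max_depth or len(X) < min_size:
                return {"samples": X}
            labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
            return {"left": split(X[labels == 0], depth + 1, max_depth, min_size),
                    "right": split(X[labels == 1], depth + 1, max_depth, min_size)}

        X = np.random.default_rng(0).normal(size=(500, 5))  # stand-in geochemical data
        tree = split(X)   # hierarchy of clusters, one level per recursion depth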

  5. DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING

    Directory of Open Access Journals (Sweden)

    Javed Ali

    2012-04-01

    Full Text Available Parallel computing systems employ task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and processing units as computing resources, which leads to high inter-process communication demands. The main part of the proposed algorithm is a resource management unit which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network. We focus on real-time and non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of the costs of programming. The results show the benefit of the task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling and communication. Some important models for task partitioning are also discussed in the paper. We target the algorithm at task partitioning which improves the inter-process communication between tasks and uses the resources of the system in an efficient manner. The proposed algorithm contributes to minimizing the inter-process communication cost amongst the executing processes.

  6. Bayesian astrostatistics: a backward look to the future

    CERN Document Server

    Loredo, Thomas J

    2012-01-01

    This perspective chapter briefly surveys: (1) past growth in the use of Bayesian methods in astrophysics; (2) current misconceptions about both frequentist and Bayesian statistical inference that hinder wider adoption of Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian modeling as a major future direction for research in Bayesian astrostatistics, exemplified in part by presentations at the first ISI invited session on astrostatistics, commemorated in this volume. It closes with an intentionally provocative recommendation for astronomical survey data reporting, motivated by the multilevel Bayesian perspective on modeling cosmic populations: that astronomers cease producing catalogs of estimated fluxes and other source properties from surveys. Instead, summaries of likelihood functions (or marginal likelihood functions) for source properties should be reported (not posterior probability density functions), including nontrivial summaries (not simply upper limits) for candidate objects ...

  7. Doing bayesian data analysis a tutorial with R and BUGS

    CERN Document Server

    Kruschke, John K

    2011-01-01

    There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally brought Bayesian analysis within reach of a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all

  8. Simultaneous estimation of glass-water distribution and PDMS-water partition coefficients of hydrophobic organic compounds using simple batch method.

    Science.gov (United States)

    Hsieh, Min-Kai; Fu, Chung-Te; Wu, Shian-chee

    2011-09-15

    A simple batch method using refilling and nonrefilling experimental procedures and headspace solid-phase microextraction was applied to simultaneously obtain the glass-water distribution coefficients (K(GW)) and polydimethylsiloxane (PDMS)-water partition coefficients (K(PW)) of hydrophobic organic compounds (HOCs). The simple batch method takes into consideration the glass-surface-bound HOCs and the corresponding equilibrium distribution of HOCs among the glass, water, headspace, and PDMS. The K(PW) and K(GW) values of 53 PCB congeners were determined. The glass-bound fraction predominated over other fractions for highly chlorinated PCBs. Ignoring glass adsorption and assuming a complete mass balance, as in traditional work, could thus substantially underestimate the K(PW) for HOCs. Good linear correlations of log α (the overall mass transfer rate constant) vs. log K(PW), log K(PW) vs. log K(OW), and log K(GW) vs. log K(OW) were observed, with log α = -0.91 log K(PW) + 1.13, R² = 0.93; log K(PW) = 1.032 log K(OW) - 0.493, R² = 0.947; and log K(GW) = 0.93 log K(OW) - 2.30, R² = 0.90. The K(PW) values from this study were compared with those in the literature. By accounting for glass adsorption, the accuracy of the K(PW) determination and the estimation of the dissolved concentration in water for highly hydrophobic compounds can be significantly improved.
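
    Applying the reported correlations is straightforward; the short sketch below evaluates them for a hypothetical compound (the coefficients are those quoted above, while the example log K(OW) value is invented).

        def log_kpw(log_kow):
            # PDMS-water partition coefficient from the octanol-water coefficient
            return 1.032 * log_kow - 0.493

        def log_kgw(log_kow):
            # glass-water distribution coefficient
            return 0.93 * log_kow - 2.30

        log_kow = 6.0                  # hypothetical highly hydrophobic PCB congener
        print(log_kpw(log_kow))        # ~5.70
        print(log_kgw(log_kow))        # ~3.28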

  9. Developing Bayesian adaptive methods for estimating sensitivity thresholds (d') in Yes-No and forced-choice tasks.

    Science.gov (United States)

    Lesmes, Luis A; Lu, Zhong-Lin; Baek, Jongsoo; Tran, Nina; Dosher, Barbara A; Albright, Thomas D

    2015-01-01

    Motivated by Signal Detection Theory (SDT), we developed a family of novel adaptive methods that estimate the sensitivity threshold, the signal intensity corresponding to a pre-defined sensitivity level (d' = 1), in Yes-No (YN) and Forced-Choice (FC) detection tasks. Rather than focusing stimulus sampling to estimate a single level of %Yes or %Correct, the current methods sample psychometric functions more broadly, to concurrently estimate sensitivity and decision factors, and thereby estimate thresholds that are independent of decision confounds. Developed for four tasks, (1) simple YN detection, (2) cued YN detection, which cues the observer's response state before each trial, (3) rated YN detection, which incorporates a Not Sure response, and (4) FC detection, the qYN and qFC methods yield sensitivity thresholds that are independent of the task's decision structure (YN or FC) and/or the observer's subjective response state. Results from simulation and psychophysics suggest that 25 trials (and sometimes fewer) are sufficient to estimate YN thresholds with reasonable precision (s.d. = 0.10-0.15 decimal log units), but more trials are needed for FC thresholds. When the same subjects were tested across tasks of simple, cued, rated, and FC detection, adaptive threshold estimates exhibited excellent agreement with the method of constant stimuli (MCS) and with each other. These YN adaptive methods deliver criterion-free thresholds that have previously been exclusive to FC methods.
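
    For background, the sensitivity index that these thresholds target is computed from Yes-No data under standard SDT as d' = z(hit rate) - z(false alarm rate); a minimal sketch with made-up response counts follows.

        from scipy.stats import norm

        hits, misses = 42, 8      # signal trials (made-up counts)
        fas, crs = 12, 38         # noise trials: false alarms, correct rejections

        hit_rate = hits / (hits + misses)
        fa_rate = fas / (fas + crs)
        dprime = norm.ppf(hit_rate) - norm.ppf(fa_rate)              # d' = z(H) - z(F)
        criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # decision criterion
        print(dprime, criterion)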

  10. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

    Full Text Available Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to answer what the probability is that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian setting using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
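
    The simulation idea can be sketched as follows: draw vote shares from a posterior, allocate seats under a proportional rule, and count how often each party wins at least one seat. The paper used R and the full Brazilian seat-distribution rule; the stand-in below uses Python, a Dirichlet posterior, and a simplified D'Hondt-style highest-averages allocation, all of which are assumptions for illustration.

        import numpy as np

        def dhondt(votes, seats):
            """Highest-averages (D'Hondt) seat allocation."""
            alloc = np.zeros(len(votes), dtype=int)
            for _ in range(seats):
                alloc[np.argmax(votes / (alloc + 1))] += 1
            return alloc

        rng = np.random.default_rng(0)
        poll_counts = np.array([480, 310, 120, 60, 30])      # hypothetical poll
        shares = rng.dirichlet(poll_counts + 1, size=10000)  # posterior vote shares
        p_rep = np.mean([dhondt(s, 10) > 0 for s in shares], axis=0)
        print(p_rep)   # probability each party wins at least one of 10 seats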

  11. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    Science.gov (United States)

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step to max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from nk^2 to nk log(k), and has potential application to the all-pairs shortest paths problem.
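
    The core numerical trick can be sketched as follows: for nonnegative vectors, the maximum over terms is approximated by a large-p norm, which turns the max-convolution into an ordinary convolution of elementwise powers, computable by FFT in O(k log(k)). This is a simplified single-p variant of the idea, not the paper's full method.

        import numpy as np

        def max_convolve_numeric(x, y, p=64):
            """Approximate max-convolution via the p-norm trick, O(k log k)."""
            n = len(x) + len(y) - 1
            s = np.fft.irfft(np.fft.rfft(x**p, n) * np.fft.rfft(y**p, n), n)
            return np.maximum(s, 0.0) ** (1.0 / p)   # clip roundoff negatives

        def max_convolve_exact(x, y):
            """O(k^2) reference implementation."""
            out = np.zeros(len(x) + len(y) - 1)
            for i, xi in enumerate(x):
                for j, yj in enumerate(y):
                    out[i + j] = max(out[i + j], xi * yj)
            return out

        rng = np.random.default_rng(0)
        x, y = rng.random(256), rng.random(256)
        err = np.max(np.abs(max_convolve_numeric(x, y) - max_convolve_exact(x, y)))
        print(err)   # small; accuracy improves with larger p (within float range)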

  12. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...

  13. Including the spatial variability of metal speciation in the effect factor in life cycle impact assessment: Limits of the equilibrium partitioning method.

    Science.gov (United States)

    Tromson, Clara; Bulle, Cécile; Deschênes, Louise

    2017-03-01

    In life cycle assessment (LCA), the potential terrestrial ecotoxicity effect of metals, calculated as the effect factor (EF), is usually extrapolated from aquatic ecotoxicological data using the equilibrium partitioning method (EqP), as such data are more readily available than terrestrial data. However, when following the AMI recommendations (i.e. with at least enough species to represent three different phyla), there are not enough terrestrial data for which soil properties or metal speciation during ecotoxicological testing are specified to account for the influence of soil property variations on metal speciation when using this approach. Alternatively, the TBLM (Terrestrial Biotic Ligand Model) has been used to determine an EF that accounts for speciation, but it is not available for all metals; hence it cannot be consistently applied to metals in an LCA context. This paper proposes an approach to include metal speciation by regionalizing the EqP method for Cu, Ni and Zn with a geochemical speciation model (the Windermere Humic Aqueous Model 7.0), for 5213 soils selected from the Harmonized World Soil Database. Results obtained by this approach (EF(EqP)regionalized) are compared to the EFs calculated with the conventional EqP method, to the EFs based on available terrestrial data, and to the EFs calculated with the TBLM (EF(TBLM)regionalized) when available. The contribution of the spatial variability of the EF to the overall spatial variability of the characterization factor (CF) has been analyzed. It was found that the EFs(EqP)regionalized show a significant spatial variability. The EFs calculated with the two non-regionalized methods (EqP and terrestrial data) fall within the range of the EFs(EqP)regionalized. The EFs(TBLM)regionalized cover a larger range of values than the EFs(EqP)regionalized but the two methods are not correlated. This paper highlights the importance of including speciation in the terrestrial EF and shows that using the regionalized EqP approach is not an

  14. Density-Driven Partitioning Method for Standard Cell 3D Placement

    Institute of Scientific and Technical Information of China (English)

    蒋艳德; 刘畅; 贺旭; 郭阳

    2016-01-01

    Placement of standard cells is a key stage in very-large-scale integration circuit design automation. This article presents a density-driven partitioning method for standard-cell 3D placement. In this method, the 3D placement region is divided into grids and the cell density of each grid is calculated; the standard cells are then allocated into 3D space based on the cell density in each grid. The partitioning method produces an optimized partitioning result and effectively reduces the overlap between standard cells and fixed cells. The partitioning method is integrated into a 3D placer, 3D-Crust, which conducts 3D placement on 3D-transformed benchmarks. Experimental results show that, compared with the original placer 3D-Crust, the partitioning method effectively reduces HPWL by 60.89% and runtime by 20.54%.

  15. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    Full Text Available Abstract This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem in the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation during inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which poses challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.

  16. Water Quality Evaluation Model Using Bayesian Method Based on Entropy Weight

    Institute of Scientific and Technical Information of China (English)

    赵晓慎; 张超; 王文川

    2011-01-01

    Based on Bayesian water quality evaluation theory, the entropy weight method is used to obtain the weight of each evaluation indicator, and an entropy-weighted Bayesian model of water quality evaluation is presented. The proposed model is applied to water quality evaluation at the Huishan station and the Hanzhuang sluice located downstream in the Nansi Lake system. The results show that the proposed model is feasible and effective for water quality evaluation; compared with the equal-weight Bayesian method, the entropy-weighted method has a higher resolution for categories of water quality. It thus provides a reliable basis for water environment protection and management.
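
    The entropy weight calculation itself is standard and can be sketched as below; the indicator matrix is invented, and the point is that indicators with more dispersion across samples receive larger weights.

        import numpy as np

        # Rows: samples (e.g. monitoring sections or years); columns: indicators
        # (e.g. TN, TP, COD) -- an invented matrix for illustration.
        X = np.array([[2.1, 0.08, 5.0],
                      [1.5, 0.05, 6.2],
                      [3.0, 0.12, 4.1],
                      [2.4, 0.09, 5.5]])

        P = X / X.sum(axis=0)                                # column-wise proportions
        E = -np.sum(P * np.log(P), axis=0) / np.log(len(X))  # entropy per indicator
        w = (1 - E) / np.sum(1 - E)                          # entropy weights, sum to 1
        print(w)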

  17. A comparison of Bayesian and non-linear regression methods for robust estimation of pharmacokinetics in DCE-MRI and how it affects cancer diagnosis.

    Science.gov (United States)

    Dikaios, Nikolaos; Atkinson, David; Tudisca, Chiara; Purpura, Pierpaolo; Forster, Martin; Ahmed, Hashim; Beale, Timothy; Emberton, Mark; Punwani, Shonit

    2017-03-01

    The aim of this work is to compare Bayesian inference for nonlinear models with commonly used traditional non-linear regression (NR) algorithms for estimating tracer kinetics in Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI). The algorithms are compared in terms of accuracy and reproducibility under different initialization settings. Further, it is investigated how a more robust estimation of tracer kinetics affects cancer diagnosis. The derived tracer kinetics from the Bayesian algorithm were validated against traditional NR algorithms (i.e. Levenberg-Marquardt, simplex) in terms of accuracy on a digital DCE phantom and in terms of goodness of fit (Kolmogorov-Smirnov test) on ROI-based concentration time courses from two different patient cohorts. The first cohort consisted of 76 men, 20 of whom had significant peripheral zone prostate cancer (any cancer-core-length (CCL) with Gleason >3+3 or any grade with CCL >=4 mm) following transperineal template prostate mapping biopsy. The second cohort consisted of 9 healthy volunteers and 24 patients with head and neck squamous cell carcinoma. The diagnostic ability of the derived tracer kinetics was assessed with receiver operating characteristic area under curve (ROC AUC) analysis. The Bayesian algorithm accurately recovered the ground-truth tracer kinetics for the digital DCE phantom, consistently improving the Structural Similarity Index (SSIM) across the 50 different initializations compared to NR. For optimized initialization, the Bayesian algorithm did not significantly improve the fitting accuracy on either patient cohort, and it only significantly improved the ve ROC AUC in the HN population, from ROC AUC = 0.56 for the simplex to ROC AUC = 0.76. For both cohorts, the values and the diagnostic ability of the tracer kinetic parameters estimated with the Bayesian algorithm were not affected by their initialization. To conclude, the Bayesian algorithm led to a more accurate and reproducible quantification of tracer kinetic

  18. Dynamic Batch Bayesian Optimization

    CERN Document Server

    Azimi, Javad; Fern, Xiaoli

    2011-01-01

    Bayesian optimization (BO) algorithms try to optimize an unknown function that is expensive to evaluate using a minimum number of evaluations/experiments. Most of the proposed algorithms in BO are sequential, where only one experiment is selected at each iteration. This can be time-inefficient when each experiment takes a long time and more than one experiment can be run concurrently. On the other hand, requesting a fixed-size batch of experiments at each iteration causes performance inefficiency in BO compared to sequential policies. In this paper, we present an algorithm that requests a batch of experiments at each time step t, where the batch size p_t is dynamically determined at each step. Our algorithm is based on the observation that the experiments selected by the sequential policy can sometimes be almost independent of each other. Our algorithm identifies such scenarios and requests those experiments at the same time without degrading the performance. We evaluate our proposed method us...

  19. PAC-Bayesian Policy Evaluation for Reinforcement Learning

    CERN Document Server

    Fard, Mahdi MIlani; Szepesvari, Csaba

    2012-01-01

    Bayesian priors offer a compact yet general means of incorporating domain knowledge into many learning tasks. The correctness of the Bayesian analysis and inference, however, largely depends on accuracy and correctness of these priors. PAC-Bayesian methods overcome this problem by providing bounds that hold regardless of the correctness of the prior distribution. This paper introduces the first PAC-Bayesian bound for the batch reinforcement learning problem with function approximation. We show how this bound can be used to perform model-selection in a transfer learning scenario. Our empirical results confirm that PAC-Bayesian policy evaluation is able to leverage prior distributions when they are informative and, unlike standard Bayesian RL approaches, ignore them when they are misleading.

  20. Development of an Origin Trace Method based on Bayesian Inference and Artificial Neural Network for Missing or Stolen Nuclear Materials

    Energy Technology Data Exchange (ETDEWEB)

    Bin, Yim Ho; Min, Lee Seung; Min, Kim Kyung; Jeong, Hong Yoon; Kim, Jae Kwang [Nuclear Security Div., Daejeon (Korea, Republic of)

    2014-05-15

    Thus, putting nuclear materials under control is an important issue for the prosperity of mankind. Unfortunately, the number of incidents of illicit trafficking in nuclear materials has increased over recent decades; consequently, the security of nuclear materials has recently been in the spotlight. After the 2nd Nuclear Security Summit in Seoul in 2012, the president of Korea showed his devotion to nuclear security. One of Korea's main responses was to develop a national nuclear forensic support system. The International Atomic Energy Agency (IAEA) published the Nuclear Security Series No. 2 document 'Nuclear Forensics Support' in 2006 to encourage international cooperation among all IAEA member states on nuclear attribution. The document poses two main questions for nuclear forensics to answer. The first question is 'what type of material is it?', and the second one is 'where did the material come from?' The Korea Nuclear Forensic Library (K-NFL) and mathematical methods to trace the origins of missing or stolen nuclear materials (MSNMs) are being developed by the Korea Institute of Nuclear Non-proliferation and Control (KINAC) to answer those questions. Although the K-NFL has been designed to perform many functions, it is being developed to effectively trace the origin of MSNMs and tested to validate the suitability of the trace methods. New fuels and spent fuels each need their own trace method because of the different nature of data acquisition. An inductive logic was found to be appropriate for new fuels, which can be measured and have a bistable property. On the other hand, machine learning was suitable for spent fuels, which cannot be measured directly and thus require simulation.

  1. Bayesian missing data problems EM, data augmentation and noniterative computation

    CERN Document Server

    Tan, Ming T; Ng, Kai Wang

    2009-01-01

    Bayesian Missing Data Problems: EM, Data Augmentation and Noniterative Computation presents solutions to missing data problems through explicit or noniterative sampling calculation of Bayesian posteriors. The methods are based on the inverse Bayes formulae discovered by one of the authors in 1995. Applying the Bayesian approach to important real-world problems, the authors focus on exact numerical solutions, a conditional sampling approach via data augmentation, and a noniterative sampling approach via EM-type algorithms. After introducing the missing data problems, Bayesian approach, and poste

  2. Polymers as Reference Partitioning Phase: Polymer Calibration for an Analytically Operational Approach To Quantify Multimedia Phase Partitioning.

    Science.gov (United States)

    Gilbert, Dorothea; Witt, Gesine; Smedes, Foppe; Mayer, Philipp

    2016-06-07

    Polymers are increasingly applied for the enrichment of hydrophobic organic chemicals (HOCs) from various types of samples and media in many analytical partitioning-based measuring techniques. We propose using polymers as a reference partitioning phase and introduce polymer-polymer partitioning as the basis for a deeper insight into partitioning differences of HOCs between polymers, calibrating analytical methods, and consistency checking of existing and calculation of new partition coefficients. Polymer-polymer partition coefficients were determined for polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and organochlorine pesticides (OCPs) by equilibrating 13 silicones, including polydimethylsiloxane (PDMS) and low-density polyethylene (LDPE) in methanol-water solutions. Methanol as cosolvent ensured that all polymers reached equilibrium while its effect on the polymers' properties did not significantly affect silicone-silicone partition coefficients. However, we noticed minor cosolvent effects on determined polymer-polymer partition coefficients. Polymer-polymer partition coefficients near unity confirmed identical absorption capacities of several PDMS materials, whereas larger deviations from unity were indicated within the group of silicones and between silicones and LDPE. Uncertainty in polymer volume due to imprecise coating thickness or the presence of fillers was identified as the source of error for partition coefficients. New polymer-based (LDPE-lipid, PDMS-air) and multimedia partition coefficients (lipid-water, air-water) were calculated by applying the new concept of a polymer as reference partitioning phase and by using polymer-polymer partition coefficients as conversion factors. The present study encourages the use of polymer-polymer partition coefficients, recognizing that polymers can serve as a linking third phase for a quantitative understanding of equilibrium partitioning of HOCs between any two phases.
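
    The arithmetic behind the reference-phase idea is simple log-additivity through a shared third phase: log K(A-water) = log K(A-polymer) + log K(polymer-water). A toy sketch with invented values:

        # Log partition coefficients combine additively via a shared reference phase:
        # K(LDPE-water) = K(LDPE-PDMS) * K(PDMS-water), so the logs add.
        log_k_pdms_water = 5.70   # PDMS-water (illustrative value)
        log_k_ldpe_pdms = 0.10    # polymer-polymer conversion factor (illustrative)
        log_k_ldpe_water = log_k_ldpe_pdms + log_k_pdms_water
        print(log_k_ldpe_water)   # 5.80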

  3. Bayesian Method for Outlier Detection of Correlated Observations Based on Gibbs Sampling

    Institute of Scientific and Technical Information of China (English)

    魏萌; 王靳辉; 衡广辉

    2012-01-01

    Outlier detection for correlated observations is one of the difficult and important problems in survey data processing. After systematically reviewing the research history of this problem, a Bayesian detection method for outliers in correlated observations is put forward using modern Bayesian theories and methods, and applied to a GPS network. First, on the basis of the posterior probabilities of classification variables, a Bayesian method for locating outliers in correlated observations is proposed, and a Gibbs sampling algorithm for computing the posterior probabilities of the classification variables is designed. Secondly, based on the maximum a posteriori estimation principle, Bayesian formulae for estimating the outlier parameters are derived. Finally, the new methods are applied to a GPS network adjustment and analyzed. The numerical examples demonstrate that, under correlated observations, the new methods can detect multiple outliers simultaneously and effectively eliminate their adverse influence.

  4. Matrix string partition function

    CERN Document Server

    Kostov, Ivan K.; Vanhove, Pierre

    1998-01-01

    We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition function of the completely reduced SYM theory, including the overall numerical factor. This is evidence that our quasiclassical calculation might be exact.

  5. Distributed Evolutionary Graph Partitioning

    CERN Document Server

    Sanders, Peter

    2011-01-01

    We present a novel distributed evolutionary algorithm, KaFFPaE, to solve the Graph Partitioning Problem, which makes use of KaFFPa (Karlsruhe Fast Flow Partitioner). The use of our multilevel graph partitioner KaFFPa provides new effective crossover and mutation operators. By combining these with a scalable communication protocol we obtain a system that is able to improve the best known partitioning results for many inputs in a very short amount of time. For example, in Walshaw's well known benchmark tables we are able to improve or recompute 76% of entries for the tables with 1%, 3% and 5% imbalance.

  6. BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.

    Science.gov (United States)

    Khakabimamaghani, Sahand; Ester, Martin

    2016-01-01

    The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods for patient stratification, which is the central task of SM, are provided in the literature; however, there are still significant open issues. First, it is still unclear whether integrating different datatypes will help in detecting disease subtypes more accurately and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data.

  7. Bayesian analysis of MEG visual evoked responses

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, D.M.; George, J.S.; Wood, C.C.

    1999-04-01

    The authors developed a method for analyzing neural electromagnetic data that allows probabilistic inferences to be drawn about regions of activation. The method involves the generation of a large number of possible solutions which fit both the data and prior expectations about the nature of probable solutions, made explicit by a Bayesian formalism. In addition, they have introduced a model for the current distributions that produce MEG (and EEG) data that allows extended regions of activity and can easily incorporate prior information such as anatomical constraints from MRI. To evaluate the feasibility and utility of the Bayesian approach with actual data, they analyzed MEG data from a visual evoked response experiment. They compared Bayesian analyses of MEG responses to visual stimuli in the left and right visual fields in order to examine the sensitivity of the method in detecting known features of human visual cortex organization. They also examined the changing pattern of cortical activation as a function of time.

  8. Bayesian selection of nucleotide substitution models and their site assignments.

    Science.gov (United States)

    Wu, Chieh-Hsi; Suchard, Marc A; Drummond, Alexei J

    2013-03-01

    Probabilistic inference of a phylogenetic tree from molecular sequence data is predicated on a substitution model describing the relative rates of change between character states along the tree for each site in the multiple sequence alignment. Commonly, one assumes that the substitution model is homogeneous across sites within large partitions of the alignment, assigns these partitions a priori, and then fixes their underlying substitution model to the best-fitting model from a hierarchy of named models. Here, we introduce an automatic model selection and model averaging approach within a Bayesian framework that simultaneously estimates the number of partitions, the assignment of sites to partitions, the substitution model for each partition, and the uncertainty in these selections. This new approach is implemented as an add-on to the BEAST 2 software platform. We find that this approach dramatically improves the fit of the nucleotide substitution model compared with existing approaches, and we show, using a number of example data sets, that as many as nine partitions are required to explain the heterogeneity in nucleotide substitution process across sites in a single gene analysis. In some instances, this improved modeling of the substitution process can have a measurable effect on downstream inference, including the estimated phylogeny, relative divergence times, and effective population size histories.

  9. Low-Complexity Bayesian Estimation of Cluster-Sparse Channels

    KAUST Repository

    Ballal, Tarig

    2015-09-18

    This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as an underwater acoustic channel impulse response, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol-error-rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.
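
    For reference, the dense linear-Gaussian MMSE estimate that the paper accelerates can be written in a few lines; the structure-exploiting, per-cluster computations that give the speed-up are omitted, and all dimensions and noise levels below are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, sigma2 = 64, 80, 0.01
        A = rng.normal(size=(m, n))                 # measurement (training) matrix
        h = np.zeros(n)                             # cluster-sparse channel: two clusters
        h[[3, 4, 5, 40, 41]] = rng.normal(size=5)
        y = A @ h + rng.normal(scale=np.sqrt(sigma2), size=m)

        R = np.eye(n)                               # assumed prior channel covariance
        # MMSE estimate: h_hat = R A^T (A R A^T + sigma^2 I)^{-1} y
        h_hat = R @ A.T @ np.linalg.solve(A @ R @ A.T + sigma2 * np.eye(m), y)
        print(np.linalg.norm(h_hat - h))            # estimation error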

  10. Bayesian microsaccade detection

    Science.gov (United States)

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
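
    The generative model described above is essentially a two-state hidden Markov model over eye velocity. The sketch below runs a forward (filtering) pass to get the posterior probability of the microsaccade state at each time step; the full BMD method samples entire state time series, and every parameter value here is illustrative.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        v = rng.normal(0.0, 1.0, 200)         # stand-in eye-velocity trace (drift)
        v[80:90] += 8.0                       # injected "microsaccade" episode

        sigma = np.array([1.0, 5.0])          # velocity s.d. in drift/saccade states
        T = np.array([[0.98, 0.02],           # state transition probabilities
                      [0.20, 0.80]])
        alpha = np.array([0.5, 0.5])
        posterior = []
        for vt in v:                          # forward (filtering) recursion
            alpha = norm.pdf(vt, 0.0, sigma) * (alpha @ T)
            alpha /= alpha.sum()
            posterior.append(alpha[1])        # P(saccade state | data so far)

        detected = [t for t, p in enumerate(posterior) if p > 0.5]
        print(detected)                       # roughly the injected 80-89 window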

  11. Threat Evaluation of an Anti-submarine Fleet Against a Submarine Based on a Hierarchical Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    王小龙; 宋裕农; 丁文强

    2014-01-01

    Applying Bayesian network models to threat evaluation is a current research focus and a difficult problem. This paper proposes a hierarchical Bayesian network model for constructing a threat evaluation network, used to estimate the threat level that an anti-submarine fleet poses to a submarine. First, the hierarchical Bayesian network model is briefly described and the feasibility of using it for threat evaluation is analyzed. Second, the causes and mechanism of the threat that an anti-submarine fleet poses to a submarine are analyzed, and on this basis the threat evaluation network is constructed. Finally, a simulation example verifies the effectiveness of the constructed threat evaluation network.
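
    As a toy illustration of the idea, and not the paper's network, the following sketch enumerates a two-layer discrete Bayesian network in which observed indicators feed a latent capability node that drives the threat level (Python; all conditional probabilities are invented):

        # Observed indicators feed a latent "capability" node, which drives
        # the threat level; the latent node is marginalized by enumeration.
        # Every number below is invented purely for illustration.

        def p_capability_hi(sonar, helo):
            # P(capability = high | indicators) grows with each indicator
            return 0.1 + 0.4 * sonar + 0.4 * helo

        def p_threat_hi(cap_hi):
            return 0.8 if cap_hi else 0.2

        def threat_posterior(sonar, helo):
            p_hi = p_capability_hi(sonar, helo)
            return p_hi * p_threat_hi(True) + (1 - p_hi) * p_threat_hi(False)

        print(threat_posterior(sonar=1, helo=0))   # 0.5
        print(threat_posterior(sonar=1, helo=1))   # 0.74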

  12. Partitions with Initial Repetitions

    Institute of Scientific and Technical Information of China (English)

    George E. ANDREWS

    2009-01-01

    A variety of interesting connections with modular forms, mock theta functions and Rogers-Ramanujan type identities arise in consideration of partitions in which the smaller integers are repeated as summands more often than the larger summands. In particular, this concept leads to new interpretations of the Rogers-Selberg identities and Bailey's modulus 9 identities.

  13. New Aperture Partitioning Element

    Science.gov (United States)

    Griffin, S.; Calef, B.; Williams, S.

    Postprocessing in an optical system can be aided by adding an optical element to partition the pupil into a number of segments. When imaging through the atmosphere, the recorded data are blurred by temperature-induced variations in the index of refraction along the line of sight. Using speckle imaging techniques developed in the astronomy community, this blurring can be corrected to some degree. The effectiveness of these techniques is diminished by redundant baselines in the pupil. Partitioning the pupil reduces the degree of baseline redundancy, and therefore improves the quality of images that can be obtained from the system. It is possible to implement the described approach on an optical system with a segmented primary mirror, but it is not very practical. This is because most optical systems do not have segmented primary mirrors, and those that do offer relatively low-bandwidth positioning of segments due to their large mass and inertia. It is much more practical to position an active aperture partitioning element at an aft optics pupil of the optical system. This paper describes the design, implementation, and testing of a new aperture partitioning element that is completely reflective and reconfigurable. The device uses four independent, annular segments that can be positioned with a high degree of accuracy without degrading the optical wavefront of each segment. This mirror has been produced and is currently deployed and working on the 3.6 m telescope.

  14. Bayesian Inference Networks and Spreading Activation in Hypertext Systems.

    Science.gov (United States)

    Savoy, Jacques

    1992-01-01

    Describes a method based on Bayesian networks for searching hypertext systems. Discussion covers the use of Bayesian networks for structuring index terms and representing user information needs; use of link semantics based on constrained spreading activation to find starting points for browsing; and evaluation of a prototype system. (64…

  15. Hopes and Cautions in Implementing Bayesian Structural Equation Modeling

    Science.gov (United States)

    MacCallum, Robert C.; Edwards, Michael C.; Cai, Li

    2012-01-01

    Muthen and Asparouhov (2012) have proposed and demonstrated an approach to model specification and estimation in structural equation modeling (SEM) using Bayesian methods. Their contribution builds on previous work in this area by (a) focusing on the translation of conventional SEM models into a Bayesian framework wherein parameters fixed at zero…

  16. Multi-Fraction Bayesian Sediment Transport Model

    Directory of Open Access Journals (Sweden)

    Mark L. Schmelter

    2015-09-01

    Full Text Available A Bayesian approach to sediment transport modeling can provide a strong basis for evaluating and propagating model uncertainty, which can be useful in transport applications. Previous work in developing and applying Bayesian sediment transport models used a single grain size fraction or characterized the transport of mixed-size sediment with a single characteristic grain size. Although this approach is common in sediment transport modeling, it precludes the possibility of capturing processes that cause mixed-size sediments to sort and, thereby, alter the grain size available for transport and the transport rates themselves. This paper extends development of a Bayesian transport model from one to k fractional dimensions. The model uses an existing transport function as its deterministic core and is applied to the dataset used to originally develop the function. The Bayesian multi-fraction model is able to infer the posterior distributions for essential model parameters and replicates predictive distributions of both bulk and fractional transport. Further, the inferred posterior distributions are used to evaluate parametric and other sources of variability in relations representing mixed-size interactions in the original model. Successful development of the model demonstrates that Bayesian methods can be used to provide a robust and rigorous basis for quantifying uncertainty in mixed-size sediment transport. Such a method has heretofore been unavailable and allows for the propagation of uncertainty in sediment transport applications.
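
    The flavor of such an inference can be sketched with a random-walk Metropolis sampler over the parameters of a generic power-law transport relation standing in for the paper's deterministic core (Python/NumPy; everything below is a hypothetical stand-in, not the authors' model):

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical deterministic core: a power-law transport relation
        # per grain-size fraction (NOT the transport function in the paper).
        def q_pred(tau, d, a, b):
            return np.exp(a) * tau ** b / d

        # Synthetic observations for 3 size fractions and 40 flows.
        d = np.array([0.5, 1.0, 2.0])                # grain sizes
        tau = rng.uniform(1.0, 5.0, size=(40, 1))    # shear stresses
        true_a, true_b, true_s = -1.0, 2.5, 0.3
        q_obs = q_pred(tau, d, true_a, true_b) \
            * np.exp(true_s * rng.standard_normal((40, 3)))

        def log_post(a, b, log_s):
            # lognormal likelihood plus weakly informative normal priors
            s = np.exp(log_s)
            resid = np.log(q_obs) - np.log(q_pred(tau, d, a, b))
            ll = -0.5 * np.sum((resid / s) ** 2) - resid.size * np.log(s)
            prior = -0.5 * (a ** 2 + b ** 2 + log_s ** 2) / 10.0
            return ll + prior

        # Random-walk Metropolis over (a, b, log sigma).
        theta = np.array([0.0, 1.0, 0.0])
        lp = log_post(*theta)
        samples = []
        for it in range(20000):
            prop = theta + 0.05 * rng.standard_normal(3)
            lp_prop = log_post(*prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            if it >= 5000:                 # keep post-burn-in draws
                samples.append(theta.copy())

        print("posterior means (a, b, log s):", np.mean(samples, axis=0))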

  17. A Bayesian Analysis of Spectral ARMA Model

    Directory of Open Access Journals (Sweden)

    Manoel I. Silvestre Bezerra

    2012-01-01

    Full Text Available Bezerra et al. (2008) proposed a new method, based on Yule-Walker equations, to estimate the ARMA spectral model. In this paper, a Bayesian approach is developed for this model using the noninformative prior proposed by Jeffreys (1967). For the Bayesian computations, simulation via Markov chain Monte Carlo (MCMC) is carried out, and characteristics of the marginal posterior distributions, such as the Bayes estimator and confidence intervals for the parameters of the ARMA model, are derived. Both methods are also compared with the traditional least squares and maximum likelihood approaches, and a numerical illustration with two examples of the ARMA model is presented to evaluate the performance of the procedures.
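
    For a flavor of the computations, the following sketch performs fully Bayesian inference of the spectrum of an AR(2) model under the Jeffreys prior, for which the conditional posteriors are conjugate and can be sampled directly; the paper's full ARMA treatment requires MCMC proper (Python/NumPy; illustrative only):

        import numpy as np

        rng = np.random.default_rng(5)

        # Simulate an AR(2) series (AR stands in for the full ARMA model).
        T, p = 500, 2
        phi_true = np.array([0.6, -0.3])
        x = np.zeros(T)
        for t in range(p, T):
            x[t] = phi_true @ x[t - p:t][::-1] + rng.standard_normal()

        # Regression form x_t = phi . (x_{t-1}, ..., x_{t-p}) + eps.
        Y = x[p:]
        X = np.column_stack([x[p - k:-k] for k in range(1, p + 1)])
        XtX_inv = np.linalg.inv(X.T @ X)
        phi_hat = XtX_inv @ X.T @ Y
        resid = Y - X @ phi_hat
        rss = resid @ resid
        n = len(Y)

        # Under the Jeffreys prior p(phi, s2) ~ 1/s2 the posterior is
        # conjugate: s2 | data is scaled inverse chi-square and
        # phi | s2, data is Gaussian, so spectra can be sampled directly.
        freqs = np.linspace(0.0, np.pi, 200)
        spectra = []
        for _ in range(1000):
            s2 = rss / rng.chisquare(n - p)
            phi = rng.multivariate_normal(phi_hat, s2 * XtX_inv)
            ar = 1 - sum(phi[k] * np.exp(-1j * freqs * (k + 1))
                         for k in range(p))
            spectra.append(s2 / (2 * np.pi * np.abs(ar) ** 2))

        lo, med, hi = np.percentile(spectra, [2.5, 50.0, 97.5], axis=0)
        print("spectrum at omega=0: median %.2f, 95%% band (%.2f, %.2f)"
              % (med[0], lo[0], hi[0]))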

  18. Bayesian estimation and tracking a practical guide

    CERN Document Server

    Haug, Anton J

    2012-01-01

    A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noises. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation…

  19. Bayesian long branch attraction bias and corrections.

    Science.gov (United States)

    Susko, Edward

    2015-03-01

    Previous work on the star-tree paradox has shown that Bayesian methods suffer from a long branch attraction bias. That work is extended here to settings involving more taxa and partially resolved trees. The long branch attraction bias is confirmed to arise more broadly, and an additional source of bias is found. A by-product of the analysis is a set of methods that correct for biases toward particular topologies. The corrections can be easily calculated using existing Bayesian software. Posterior support for a set of two or more trees can thus be supplemented with corrected versions to cross-check or replace results. Simulations show the corrections to be highly effective.

  20. Application of genetic algorithm-kernel partial least square as a novel non-linear feature selection method: partitioning of drug molecules.

    Science.gov (United States)

    Noorizadeh, H; Sobhan Ardakani, S; Ahmadi, T; Mortazavi, S S; Noorizadeh, M

    2013-02-01

    Genetic algorithm (GA), partial least squares (PLS), and kernel PLS (KPLS) techniques were used to investigate the correlation between immobilized liposome chromatography partitioning (log Ks) and descriptors for 65 drug compounds. The models were validated using leave-group-out cross-validation (LGO-CV). The results indicate that GA-KPLS can be used as an alternative modelling tool for quantitative structure-property relationship (QSPR) studies.
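
    The LGO-CV protocol itself is simple to sketch; here plain ridge regression stands in for the GA-KPLS model, with synthetic data in place of the 65 drug compounds (Python/NumPy; illustrative only):

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic stand-ins: 65 "compounds" with 12 descriptors and a
        # linear log Ks response; ridge regression replaces GA-KPLS.
        X = rng.standard_normal((65, 12))
        w = rng.standard_normal(12)
        y = X @ w + 0.1 * rng.standard_normal(65)

        def ridge_fit(X, y, lam=1.0):
            return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                                   X.T @ y)

        # Leave-group-out CV: hold out each random group once, fit on the
        # rest, and accumulate squared prediction errors.
        press = 0.0
        for group in np.array_split(rng.permutation(65), 5):
            mask = np.ones(65, dtype=bool)
            mask[group] = False
            beta = ridge_fit(X[mask], y[mask])
            press += np.sum((y[group] - X[group] @ beta) ** 2)

        q2 = 1.0 - press / np.sum((y - y.mean()) ** 2)
        print("LGO-CV Q^2: %.3f" % q2)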