WorldWideScience

Sample records for bayesian partition method

  1. A Bayesian partition method for detecting pleiotropic and epistatic eQTL modules.

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2010-01-01

    Studies of the relationship between DNA variation and gene expression variation, often referred to as "expression quantitative trait loci (eQTL) mapping", have been conducted in many species and resulted in many significant findings. Because of the large number of genes and genetic markers in such analyses, it is extremely challenging to discover how a small number of eQTLs interact with each other to affect mRNA expression levels for a set of co-regulated genes. We present a Bayesian method to facilitate the task, in which co-expressed genes mapped to a common set of markers are treated as a module characterized by latent indicator variables. A Markov chain Monte Carlo algorithm is designed to search simultaneously for the module genes and their linked markers. We show by simulations that this method is more powerful for detecting true eQTLs and their target genes than traditional QTL mapping methods. We applied the procedure to a data set consisting of gene expression and genotypes for 112 segregants of S. cerevisiae. Our method identified modules containing genes mapped to previously reported eQTL hot spots, and dissected these large eQTL hot spots into several modules corresponding to possibly different biological functions or primary and secondary responses to regulatory perturbations. In addition, we identified nine modules associated with pairs of eQTLs, of which two have been previously reported. We demonstrated that one of the novel modules containing many daughter-cell-expressed genes is regulated by AMN1 and BPH1. In conclusion, the Bayesian partition method, which simultaneously considers all traits and all markers, is more powerful for detecting both pleiotropic and epistatic effects based on both simulated and empirical data.
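
    A toy illustration of the latent-indicator idea (an assumed single-marker, conjugate-normal simplification, not the authors' joint MCMC over modules and markers): each gene's module membership is scored by a closed-form Bayes factor and turned into a posterior inclusion probability.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n_seg, n_genes = 112, 200                      # segregants, genes
x = rng.integers(0, 2, n_seg).astype(float)    # one biallelic marker

# Simulate: the first 20 genes are truly linked to the marker.
beta = np.zeros(n_genes); beta[:20] = 0.8
Y = beta * x[:, None] + rng.normal(0, 1.0, (n_seg, n_genes))

sigma2, tau2, prior_pi = 1.0, 1.0, 0.1   # assumed variances and prior P(z_g = 1)
cov0 = sigma2 * np.eye(n_seg)            # null: no marker effect
cov1 = cov0 + tau2 * np.outer(x, x)      # linked: effect integrated out

log_bf = (multivariate_normal.logpdf(Y.T, cov=cov1)
          - multivariate_normal.logpdf(Y.T, cov=cov0))
post = prior_pi * np.exp(log_bf) / (prior_pi * np.exp(log_bf) + 1 - prior_pi)
print("mean P(z=1) for linked genes:   %.2f" % post[:20].mean())
print("mean P(z=1) for unlinked genes: %.2f" % post[20:].mean())
```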

  2. Partitioning net ecosystem exchange of CO2: A comparison of a Bayesian/isotope approach to environmental regression methods

    Science.gov (United States)

    Zobitz, J. M.; Burns, S. P.; Ogée, J.; Reichstein, M.; Bowling, D. R.

    2007-09-01

    Separation of the net ecosystem exchange of CO2 (F) into its component fluxes of net photosynthesis (FA) and nonfoliar respiration (FR) is important in understanding the physical and environmental controls on these fluxes, and how these fluxes may respond to environmental change. In this paper, we evaluate a partitioning method based on a combination of stable isotopes of CO2 and Bayesian optimization in the context of partitioning methods based on regressions with environmental variables. We combined high-resolution measurements of stable carbon isotopes of CO2, ecosystem fluxes, and meteorological variables with a Bayesian parameter optimization approach to estimate FA and FR in a subalpine forest in Colorado, United States, over the course of 104 days during summer 2003. Results were generally in agreement with the independent environmental regression methods of Reichstein et al. (2005a) and Yi et al. (2004). Half-hourly posterior parameter estimates of FA and FR derived from the Bayesian/isotopic method showed a strong diurnal pattern in both, consistent with established gross photosynthesis (GEE) and total ecosystem respiration (TER) relationships. Isotope-derived FA was functionally dependent on light, but FR exhibited the expected temperature dependence only when the prior estimates for FR were temperature-based. Examination of the posterior correlation matrix revealed that the available data were insufficient to independently resolve all the Bayesian-estimated parameters in our model. This could be due to a small isotopic disequilibrium between FA and FR, or poor characterization of whole-canopy photosynthetic discrimination or the isotopic flux (isoflux, analogous to net ecosystem exchange of 13CO2). The positive sign of the disequilibrium indicates that FA was more enriched in 13C than FR. Possible reasons for this are discussed in the context of recent literature.
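
    The Bayesian-optimization step can be sketched without the isotopic machinery. Below, a random-walk Metropolis sampler recovers light-response and temperature-response parameters from a synthetic net flux; the model forms, priors and step sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
half_hours = 480
par = np.clip(1500 * np.sin(np.linspace(0, np.pi, 48)), 0, None)
par = np.tile(par, half_hours // 48)                 # diurnal light cycle
temp = 10 + 8 * np.sin(np.linspace(0, 10 * np.pi, half_hours))

def fluxes(theta, par, temp):
    amax, k, r10, q10 = theta
    fa = -amax * par / (par + k)              # light response (uptake < 0)
    fr = r10 * q10 ** ((temp - 10.0) / 10.0)  # temperature response
    return fa, fr

true = np.array([12.0, 300.0, 3.0, 2.0])
f_obs = sum(fluxes(true, par, temp)) + rng.normal(0, 0.5, half_hours)

def log_post(theta):
    if np.any(theta <= 0):
        return -np.inf                        # flat priors on positive values
    fa, fr = fluxes(theta, par, temp)
    return -0.5 * np.sum((f_obs - fa - fr) ** 2 / 0.5 ** 2)

theta = np.array([8.0, 200.0, 2.0, 1.5])
lp = log_post(theta)
step = np.array([0.3, 8.0, 0.1, 0.05])
samples = []
for it in range(20000):
    prop = theta + rng.normal(0, step)        # random-walk Metropolis move
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it > 5000:
        samples.append(theta)
print("posterior means:", np.mean(samples, axis=0).round(2), "truth:", true)
```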

  3. Development of partitioning method

    International Nuclear Information System (INIS)

    A partitioning method has been developed under the concepts of separating radioactive nuclides from a high-level waste according to their half-lives and radioactive toxicity, and of disposing of the waste safely. The partitioning test using about 18 liters (~220 Ci) of the fuel reprocessing waste prepared at PNC was started in October of 1982. In this test the behavior of radioactive nuclides was made clear. The present paper describes the chemical behavior of non-radioactive elements contained in the high-level liquid waste in the extraction with di-isodecyl phosphoric acid (DIDPA). Distribution ratios of most of the metal ions for DIDPA were less than 0.05, except that those of Mo, Zr and Fe were higher than 7. Ferric ion could not be back-extracted with 4 M HNO3, but could with 0.5 M (COOH)2. In the extraction with DIDPA, a third phase, which causes clogging of the settling banks or the flow paths in a mixer-settler, was formed when the ferric ion concentration was over 0.02 M. This unfavorable phenomenon, however, was found to be suppressed by diluting the ferric ion concentration to lower than 0.01 M or by reducing ferric ion to ferrous ion. (author)

  4. Predicting mTOR inhibitors with a classifier using recursive partitioning and Naive Bayesian approaches.

    Directory of Open Access Journals (Sweden)

    Ling Wang

    BACKGROUND: Mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. Thus, there is a great deal of interest in developing clinical drugs based on mTOR. In this paper, in silico models based on multi-scaffolds were developed to predict mTOR inhibitors or non-inhibitors. METHODS: First, 1,264 diverse compounds were collected and categorized as mTOR inhibitors and non-inhibitors. Two methods, recursive partitioning (RP) and naïve Bayesian (NB), were used to build combinatorial classification models of mTOR inhibitors versus non-inhibitors using physicochemical descriptors, fingerprints, and atom center fragments (ACFs). RESULTS: A total of 253 models were constructed and the overall predictive accuracies of the best models were more than 90% for both the training set of 964 and the external test set of 300 diverse compounds. The scaffold hopping abilities of the best models were successfully evaluated through predicting 37 newly published mTOR inhibitors. Compared with the best RP and Bayesian models, the classifier based on ACFs and Bayesian shows comparable or slightly better performance and scaffold hopping abilities. A web server was developed based on the ACFs and Bayesian method (http://rcdd.sysu.edu.cn/mtor/). This web server can be used to predict whether a compound is an mTOR inhibitor or non-inhibitor online. CONCLUSION: In silico models were constructed to predict mTOR inhibitors using recursive partitioning and naïve Bayesian methods, and a web server (mTOR Predictor) was also developed based on the best model results. Compound prediction or virtual screening can be carried out through our web server. Moreover, the favorable and unfavorable fragments for mTOR inhibitors obtained from Bayesian classifiers will be helpful for lead optimization or the design of new mTOR inhibitors.
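
    A minimal sketch of the naïve Bayesian classification step, with random bits standing in for real fingerprints or ACF descriptors (so the data, not the method, is the assumption here):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, n_bits = 1264, 256
X = rng.integers(0, 2, (n, n_bits))                  # stand-in fingerprints
w = rng.normal(0, 1, n_bits)                         # hidden labelling rule
y = (X @ w + rng.normal(0, 4, n) > 0).astype(int)    # 1 = "inhibitor"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=300, random_state=0)
clf = BernoulliNB(alpha=1.0).fit(X_tr, y_tr)         # Laplace smoothing
print("test accuracy: %.3f" % clf.score(X_te, y_te))
print("P(inhibitor) for first test compound: %.3f"
      % clf.predict_proba(X_te[:1])[0, 1])
```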

  5. Bayesian Methods and Universal Darwinism

    CERN Document Server

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...

  6. Bayesian methods for proteomic biomarker development

    Directory of Open Access Journals (Sweden)

    Belinda Hernández

    2015-12-01

    In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.
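
    In the spirit of the review's worked tutorial, here is a minimal snippet: a conjugate Beta-Binomial evaluation of a biomarker's sensitivity (the counts and the flat prior are invented for illustration).

```python
from scipy import stats

tp, fn = 42, 8                             # diseased cases: biomarker +/-
posterior = stats.beta(1 + tp, 1 + fn)     # Beta(1,1) prior + binomial likelihood

print("posterior mean sensitivity: %.3f" % posterior.mean())
lo, hi = posterior.ppf([0.025, 0.975])
print("95%% credible interval: (%.3f, %.3f)" % (lo, hi))
print("P(sensitivity > 0.75) = %.3f" % (1 - posterior.cdf(0.75)))
```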

  7. Bayesian Methods and Universal Darwinism

    Science.gov (United States)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints in the form of adaptations imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the operations of a succession of Darwinian processes.

  8. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
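
    One of the listed measures can be computed in a few lines: sampling the cell probabilities of a 2x2 rater table from a Dirichlet posterior yields the posterior of Cohen's kappa without WinBUGS (the table counts below are invented).

```python
import numpy as np

rng = np.random.default_rng(3)
counts = np.array([[40, 5],
                   [8, 47]])        # rater A (rows) vs rater B (columns)
draws = rng.dirichlet(counts.ravel() + 1, size=20000).reshape(-1, 2, 2)

po = draws[:, 0, 0] + draws[:, 1, 1]      # observed agreement
pa = draws.sum(axis=2)                    # rater A marginals
pb = draws.sum(axis=1)                    # rater B marginals
pe = (pa * pb).sum(axis=1)                # chance agreement
kappa = (po - pe) / (1 - pe)

print("posterior mean kappa: %.3f" % kappa.mean())
print("95%% credible interval: (%.3f, %.3f)"
      % tuple(np.percentile(kappa, [2.5, 97.5])))
```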

  9. Variational bayesian method of estimating variance components.

    Science.gov (United States)

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
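
    A compact illustration of the variational idea on a deliberately simple model (a Gaussian with unknown mean and precision, as in standard textbooks; not the paper's genetic variance-component model): closed-form coordinate-ascent updates replace sampling.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(2.0, 1.5, size=200)
n, xbar = x.size, x.mean()

# Priors: mu ~ N(mu0, (lam0*tau)^-1), tau ~ Gamma(a0, b0)
mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

# Factorized posterior q(mu) q(tau); iterate the standard CAVI updates.
e_tau = 1.0
for _ in range(100):
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
    lam_n = (lam0 + n) * e_tau
    a_n = a0 + (n + 1) / 2.0
    b_n = b0 + 0.5 * (np.sum((x - mu_n) ** 2) + n / lam_n
                      + lam0 * ((mu_n - mu0) ** 2 + 1.0 / lam_n))
    e_tau = a_n / b_n                       # E_q[tau]

print("q(mu): mean %.3f, sd %.3f" % (mu_n, lam_n ** -0.5))
print("q(tau) implies residual sd ~= %.3f" % (e_tau ** -0.5))
```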

  10. SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE

    Institute of Scientific and Technical Information of China (English)

    Ming HAN; Yuanyao DING

    2004-01-01

    This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters of the exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
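
    A hedged numerical sketch of one common E-Bayes construction for zero-failure data (the Beta(1, b) prior with b ~ Uniform(1, c) follows this literature generally, not necessarily this paper's exact setup): the posterior mean 1/(1 + b + n) is averaged over the hyperprior.

```python
import numpy as np
from scipy.integrate import quad

n, c = 50, 5.0                      # tests with zero failures; hyperprior bound

post_mean = lambda b: 1.0 / (1.0 + b + n)   # posterior mean given Beta(1, b)
eb, _ = quad(post_mean, 1.0, c)
eb /= (c - 1.0)                     # average over b ~ Uniform(1, c)

closed_form = np.log((1.0 + c + n) / (2.0 + n)) / (c - 1.0)
print("E-Bayes failure probability: %.5f (closed form %.5f)" % (eb, closed_form))
```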

  11. Genomic prediction and genomic variance partitioning of daily and residual feed intake in pigs using Bayesian Power Lasso models

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, L. L. G.; Strathe, Anders Bjerring;

    Improvement of feed efficiency is essential in pig breeding and selection for reduced residual feed intake (RFI) is an option. The study applied Bayesian Power LASSO (BPL) models with different power parameters to investigate genetic architecture, to predict genomic breeding values, and to partition genomic variance for RFI and daily feed intake (DFI). A total of 1272 Duroc pigs had both genotypic and phenotypic records for these traits. Significant SNPs were detected on chromosome 1 (SSC 1) and SSC 14 for RFI and on SSC 1 for DFI. BPL models had similar accuracy and bias as the GBLUP method but use of...

  12. Bayesian simultaneous equation models for the analysis of energy intake and partitioning in growing pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Jørgensen, Henry; Kebreab, E;

    2012-01-01

    The objective of the current study was to develop Bayesian simultaneous equation models for modelling energy intake and partitioning in growing pigs. A key feature of the Bayesian approach is that parameters are assigned prior distributions, which may reflect the current state of nature. In the models, rates of metabolizable energy (ME) intake, protein deposition (PD) and lipid deposition (LD) were treated as dependent variables accounting for residuals being correlated. Two complementary equation systems were used to model ME intake (MEI), PD and LD. Informative priors were ... genders (barrows, boars and gilts) selected on the basis of similar birth weight. The pigs were fed four diets based on barley, wheat and soybean meal supplemented with crystalline amino acids to meet or exceed Danish nutrient requirement standards. Nutrient balances and gas exchanges were measured at c...

  13. Spatially Partitioned Embedded Runge-Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2013-10-30

    We study spatially partitioned embedded Runge-Kutta (SPERK) schemes for partial differential equations (PDEs), in which each of the component schemes is applied over a different part of the spatial domain. Such methods may be convenient for problems in which the smoothness of the solution or the magnitudes of the PDE coefficients vary strongly in space. We focus on embedded partitioned methods as they offer greater efficiency and avoid the order reduction that may occur in nonembedded schemes. We demonstrate that the lack of conservation in partitioned schemes can lead to nonphysical effects and propose conservative additive schemes based on partitioning the fluxes rather than the ordinary differential equations. A variety of SPERK schemes are presented, including an embedded pair suitable for the time evolution of fifth-order weighted non-oscillatory spatial discretizations. Numerical experiments are provided to support the theory.

  14. Bayesian Methods for Radiation Detection and Dosimetry

    CERN Document Server

    Groer, Peter G

    2002-01-01

    We performed work in three areas: radiation detection, external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied stochastic processes for a method to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose-estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed comp...

  15. Hessian PDF reweighting meets the Bayesian methods

    CERN Document Server

    Paukkunen, Hannu

    2014-01-01

    We discuss the Hessian PDF reweighting - a technique intended to estimate the effects that new measurements have on a set of PDFs. The method stems straightforwardly from considering new data in a usual $\chi^2$-fit and it naturally incorporates also non-zero values for the tolerance, $\Delta\chi^2>1$. In comparison to the contemporary Bayesian reweighting techniques, there is no need to generate large ensembles of PDF Monte-Carlo replicas, and the observables need to be evaluated only with the central and the error sets of the original PDFs. In spite of the apparently rather different methodologies, we find that the Hessian and the Bayesian techniques are actually equivalent if the $\Delta\chi^2$ criterion is properly included in the Bayesian likelihood function, which is a simple exponential.
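
    The Bayesian side of the comparison is easy to sketch: each Monte Carlo replica is weighted by its likelihood against the new data. The exp(-chi^2/2) weight below is the simple Giele-Keller form; the replicas and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n_rep, n_dat = 1000, 20
pred = 1.0 + 0.2 * rng.normal(size=(n_rep, n_dat))   # replica predictions
data = 1.1 + 0.05 * rng.normal(size=n_dat)           # new measurements
sig = 0.05

chi2 = np.sum((pred - data) ** 2 / sig ** 2, axis=1)
logw = -0.5 * chi2                                   # Giele-Keller likelihood
w = np.exp(logw - logw.max())
w /= w.sum()

obs = pred.mean(axis=1)                              # some derived observable
print("before reweighting: %.4f +- %.4f" % (obs.mean(), obs.std()))
mean_rw = np.sum(w * obs)
std_rw = np.sqrt(np.sum(w * (obs - mean_rw) ** 2))
print("after  reweighting: %.4f +- %.4f" % (mean_rw, std_rw))
# Rough effective sample size: exp of the Shannon entropy of the weights.
print("effective replicas: %.1f" % np.exp(-np.sum(w * np.log(w + 1e-300))))
```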

  16. Self-complementary plane partitions by Proctor's minuscule method

    OpenAIRE

    Kuperberg, Greg

    1994-01-01

    A method of Proctor [European J. Combin. 5 (1984), no. 4, 331-350] realizes the set of arbitrary plane partitions in a box and the set of symmetric plane partitions as bases of linear representations of Lie groups. We extend this method by realizing transposition and complementation of plane partitions as natural linear transformations of the representations, thereby enumerating symmetric plane partitions, self-complementary plane partitions, and transpose-complement plane partitions in a new...
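
    As a numerical companion, the total count that these symmetry classes refine is given by MacMahon's classical box-product formula (the formula predates and is independent of Proctor's method):

```python
from fractions import Fraction

def plane_partitions_in_box(a, b, c):
    # MacMahon: product over the box of (i + j + k - 1) / (i + j + k - 2).
    total = Fraction(1)
    for i in range(1, a + 1):
        for j in range(1, b + 1):
            for k in range(1, c + 1):
                total *= Fraction(i + j + k - 1, i + j + k - 2)
    return int(total)

print(plane_partitions_in_box(2, 2, 2))   # 20
print(plane_partitions_in_box(3, 3, 3))   # 980
```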

  17. BONNSAI: correlated stellar observables in Bayesian methods

    CERN Document Server

    Schneider, F R N; Fossati, L; Langer, N; de Koter, A

    2016-01-01

    In an era of large spectroscopic surveys of stars and big data, sophisticated statistical methods become more and more important in order to infer fundamental stellar parameters such as mass and age. Bayesian techniques are powerful methods because they can match all available observables simultaneously to stellar models while taking prior knowledge properly into account. However, in most cases it is assumed that observables are uncorrelated, which is generally not the case. Here, we include correlations in the Bayesian code BONNSAI by incorporating the covariance matrix in the likelihood function. We derive a parametrisation of the covariance matrix that, in addition to classical uncertainties, only requires the specification of a correlation parameter that describes how observables co-vary. Our correlation parameter depends purely on the method with which observables have been determined and can be analytically derived in some cases. This approach therefore has the advantage that correlations can be accounte...
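
    The central point reduces to a small computation: a likelihood for correlated observables needs the covariance matrix. The one-parameter form below mirrors the idea, not the BONNSAI code itself.

```python
import numpy as np
from scipy.stats import multivariate_normal

obs = np.array([4.2, 3.9])          # e.g. two observables for one star
model = np.array([4.0, 4.0])        # prediction from a stellar model
sigma = np.array([0.2, 0.3])        # classical 1-sigma uncertainties

def log_likelihood(rho):
    # Covariance from the classical errors plus one correlation parameter.
    corr = np.array([[1.0, rho], [rho, 1.0]])
    cov = np.outer(sigma, sigma) * corr
    return multivariate_normal.logpdf(obs, mean=model, cov=cov)

for rho in (0.0, 0.5, 0.9):
    print("rho = %.1f -> log L = %.3f" % (rho, log_likelihood(rho)))
```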

  18. Bayesian Methods for Radiation Detection and Dosimetry

    International Nuclear Information System (INIS)

    We performed work in three areas: radiation detection, external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied stochastic processes for a method to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose-estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities for compartmental activities for a two compartment catenary model at different times. We also calculated the average activities and their standard deviation for a simple two compartment model

  19. The first complete mitochondrial genome from Bostrychus genus (Bostrychus sinensis) and partitioned Bayesian analysis of Eleotridae fish phylogeny

    Indian Academy of Sciences (India)

    Tao Wei; Xiao Xiao Jin; Tian Jun Xu

    2013-08-01

    To understand the phylogenetic position of Bostrychus sinensis in Eleotridae and the phylogenetic relationships of the family, we determined the nucleotide sequence of the mitochondrial (mt) genome of Bostrychus sinensis. It is the first complete mitochondrial genome sequence of the Bostrychus genus. The entire mtDNA sequence was 16508 bp in length with a standard set of 13 protein-coding genes, 22 transfer RNA genes (tRNAs), two ribosomal RNA genes (rRNAs) and a noncoding control region. The mitochondrial genome of B. sinensis had common features with those of other bony fishes with respect to gene arrangement, base composition, and tRNA structures. Phylogenetic hypotheses within Eleotridae fish have been controversial at the genus level. We used the mitochondrial cytochrome b (cytb) gene sequence to examine phylogenetic relationships of Eleotridae by using the partitioned Bayesian method. When specific models and parameter estimates were presumed for partitioning the total data, the harmonic mean -lnL was improved. The phylogenetic analysis supported the monophyly of Hypseleotris and Gobiomorphs. In addition, Bostrychus was most closely related to Ophiocara, and Philypnodon is the sister group to Microphlypnus, based on the current datasets. Further, extensive taxonomic sampling and more molecular information are needed to confirm the phylogenetic relationships in Eleotridae.

  20. Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods

    OpenAIRE

    Zhu, Weixuan

    2016-01-01

    The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of two-parameter Poisson-Dirichlet process. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...

  1. Nested partitions method, theory and applications

    CERN Document Server

    Shi, Leyuan

    2009-01-01

    There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...

  2. parallelMCMCcombine: an R package for bayesian methods for big data and analytics.

    Directory of Open Access Journals (Sweden)

    Alexey Miroshnikov

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.

  3. parallelMCMCcombine: an R package for bayesian methods for big data and analytics.

    Science.gov (United States)

    Miroshnikov, Alexey; Conlon, Erin M

    2014-01-01

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.
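
    A hedged Python transcription of one combination strategy of the kind the package implements (consensus-style weighting by inverse sample covariance; the R API and function names differ, and the subposteriors here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(6)
n_sub, n_draw = 4, 5000

# Pretend each of four machines returned 2-D Gaussian subposterior draws.
subs = [rng.multivariate_normal([1.0 + 0.1 * s, -0.5],
                                [[0.04, 0.01], [0.01, 0.09]], size=n_draw)
        for s in range(n_sub)]

# Consensus combination: weight each subposterior by the inverse of its
# sample covariance and average draw-by-draw.
W = [np.linalg.inv(np.cov(s.T)) for s in subs]
W_inv = np.linalg.inv(sum(W))
combined = np.array([W_inv @ sum(W[s] @ subs[s][i] for s in range(n_sub))
                     for i in range(n_draw)])
print("combined posterior mean:", combined.mean(axis=0).round(3))
print("combined posterior sd:  ", combined.std(axis=0).round(3))
```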

  4. New parallel SOR method by domain partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Dexuan [Courant Inst. of Mathematical Sciences New York Univ., NY (United States)

    1996-12-31

    In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be "consistently ordered". So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is "consistently ordered". Finally, we demonstrate the parallel performance of the PSOR method on four different message passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
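
    The strip-partitioned sweep is easy to prototype for the 5-point Poisson problem. In the sketch below the strips are processed in an ordinary Python loop, so it only shows the PSOR-style ordering; in a real implementation each strip would be one processor exchanging boundary rows.

```python
import numpy as np

n, strips, omega, sweeps = 32, 4, 1.7, 100
h2f = np.ones((n, n)) / (n + 1) ** 2          # h^2 * f with f = 1
u = np.zeros((n + 2, n + 2))                  # padded with zero boundary

bounds = np.linspace(0, n, strips + 1).astype(int)
for _ in range(sweeps):
    for s in range(strips):                   # each strip = one "processor"
        for i in range(bounds[s] + 1, bounds[s + 1] + 1):
            for j in range(1, n + 1):
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1] + h2f[i - 1, j - 1])
                u[i, j] += omega * (gs - u[i, j])   # SOR relaxation

res = (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
       - u[1:-1, :-2] - u[1:-1, 2:] - h2f)
print("max residual after %d sweeps: %.2e" % (sweeps, np.abs(res).max()))
```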

  5. Bayesian individualization via sampling-based methods.

    Science.gov (United States)

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
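
    The decision-theoretic recipe condenses to a few lines: draw pharmacokinetic parameters from the posterior (here an assumed lognormal clearance), estimate the expected loss of each candidate regimen by Monte Carlo, and pick the minimiser.

```python
import numpy as np

rng = np.random.default_rng(7)
# Posterior samples of the patient's clearance (assumed lognormal, L/h),
# standing in for samples obtained from sparse concentration data.
cl = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=4000)

target = 10.0                        # target steady-state concentration, mg/L
rates = np.linspace(10, 100, 91)     # candidate infusion rates, mg/h

# Quadratic loss on the steady-state concentration Css = rate / CL,
# averaged over the posterior samples (Monte Carlo expected loss).
exp_loss = [np.mean((r / cl - target) ** 2) for r in rates]
best = rates[int(np.argmin(exp_loss))]

print("loss-minimising infusion rate: %.1f mg/h" % best)
print("naive rate from posterior-mean CL: %.1f mg/h" % (target * cl.mean()))
```

    The two rates differ because the expected loss integrates over the whole posterior rather than plugging in a point estimate of clearance.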

  6. Bayesian non- and semi-parametric methods and applications

    CERN Document Server

    Rossi, Peter

    2014-01-01

    This book reviews and develops Bayesian non-parametric and semi-parametric methods for applications in microeconometrics and quantitative marketing. Most econometric models used in microeconomics and marketing applications involve arbitrary distributional assumptions. As more data becomes available, a natural desire to provide methods that relax these assumptions arises. Peter Rossi advocates a Bayesian approach in which specific distributional assumptions are replaced with more flexible distributions based on mixtures of normals. The Bayesian approach can use either a large but fixed number

  7. Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.

    Science.gov (United States)

    Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka

    2014-02-01

    In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping needs typically to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM-algorithm with a greedy search. This is demonstrated to be faster and yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering of other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain. PMID:24246289
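
    The analytical ingredient that makes partition comparison cheap can be shown directly: with Dirichlet priors on the transition rows, the marginal likelihood of a first-order Markov chain is a closed-form product, so merging versus splitting two sequence clusters can be scored exactly (the order, prior and sequences below are toy choices).

```python
import numpy as np
from scipy.special import gammaln

BASES = "ACGT"

def trans_counts(seq):
    n = np.zeros((4, 4))
    for a, b in zip(seq, seq[1:]):
        n[BASES.index(a), BASES.index(b)] += 1
    return n

def log_marginal(counts, alpha=1.0):
    # Product over rows of Dirichlet-multinomial normalising constants.
    a = np.full(4, alpha)
    return sum(gammaln(a.sum()) - gammaln(a.sum() + row.sum())
               + np.sum(gammaln(a + row) - gammaln(a))
               for row in counts)

rng = np.random.default_rng(8)
s1 = "".join(rng.choice(list("AACGT"), 400))      # AT-biased source
s2 = "".join(rng.choice(list("ACGGT"), 400))      # GC-biased source

together = log_marginal(trans_counts(s1) + trans_counts(s2))
separate = log_marginal(trans_counts(s1)) + log_marginal(trans_counts(s2))
print("log ML, one cluster: %.1f; two clusters: %.1f" % (together, separate))
print("Bayes factor favours", "two clusters" if separate > together else "one")
```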

  8. Recursive Partitioning Method on Competing Risk Outcomes.

    Science.gov (United States)

    Xu, Wei; Che, Jiahua; Kong, Qin

    2016-01-01

    In some cancer clinical studies, researchers are interested in exploring the risk factors associated with competing risk outcomes such as recurrence-free survival. We develop a novel recursive partitioning framework on competing risk data for both prognostic and predictive model constructions. We define specific splitting rules, a pruning algorithm, and a final tree selection algorithm for the competing risk tree models. This methodology is quite flexible in that it can incorporate both semiparametric methods using the Cox proportional hazards model and parametric competing risk models. Both prognostic and predictive tree models are developed to adjust for potential confounding factors. Extensive simulations show that our methods have well-controlled type I error and robust power performance. Finally, we apply both the Cox proportional hazards model and a flexible parametric model for prognostic tree development on a retrospective clinical study on oropharyngeal cancer patients. PMID:27486300

  9. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development ... complex prior representation achieve improved sparsity representations in low signal-to-noise ratio as opposed to state-of-the-art sparse estimators. This result is of particular importance for the applicability of the algorithms in the field of channel estimation. We then derive various iterative ... and computational complexity. We also analyze the impact of transceiver filters on the sparseness of the channel response, and propose a dictionary design that permits the deployment of sparse inference methods in conditions of low bandwidth...
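
    The classical evidence-maximisation core of SBL is compact enough to sketch (this is the standard relevance-vector-style update, not the thesis' channel-specific algorithms):

```python
import numpy as np

rng = np.random.default_rng(12)
n, m = 100, 50
Phi = rng.normal(size=(n, m))                     # known dictionary
w_true = np.zeros(m)
w_true[[3, 17, 31]] = [1.5, -2.0, 1.0]            # sparse ground truth
y = Phi @ w_true + 0.1 * rng.normal(size=n)

alpha = np.ones(m)                                # per-weight prior precisions
beta = 100.0                                      # noise precision
for _ in range(200):
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ y                 # posterior mean of weights
    gamma = 1.0 - alpha * np.diag(Sigma)          # well-determinedness factors
    alpha = np.minimum(gamma / (mu ** 2 + 1e-12), 1e12)
    beta = (n - gamma.sum()) / np.sum((y - Phi @ mu) ** 2)

active = alpha < 1e3                              # weights not pruned away
print("recovered support:", np.where(active)[0])
print("weights there:", mu[active].round(2))
```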

  10. Diet reconstruction and resource partitioning of a Caribbean marine mesopredator using stable isotope bayesian modelling.

    Directory of Open Access Journals (Sweden)

    Alexander Tilley

    The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of (15)N and (13)C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinus perezi on Glovers Reef Atoll, Belize. Bayesian stable isotope mixing models were used to investigate prey choice as well as viable Diet-Tissue Discrimination Factors for use with stingrays. Stingray δ(15)N values showed the greatest variation and a positive relationship with size, with an isotopic niche width approximately twice that of sympatric species. Shark species exhibited comparatively restricted δ(15)N values and greater δ(13)C variation, with very little overlap of stingray niche space. Mixing models suggest bivalves and annelids are proportionally more important prey in the stingray diet than crustaceans and teleosts at Glovers Reef, in contrast to all but one published diet study using stomach contents from other locations. Incorporating gut contents information from the literature, we suggest diet-tissue discrimination factor values of Δ(15)N ≈ 2.7‰ and Δ(13)C ≈ 0.9‰ for stingrays in the absence of validation experiments. The wide trophic niche and lower trophic level exhibited by stingrays compared to sympatric sharks supports their putative role as important base stabilisers in benthic systems, with the potential to absorb trophic perturbations through numerous opportunistic prey interactions.

  11. Diet reconstruction and resource partitioning of a Caribbean marine mesopredator using stable isotope bayesian modelling.

    Science.gov (United States)

    Tilley, Alexander; López-Angarita, Juliana; Turner, John R

    2013-01-01

    The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of (15)N and (13)C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinus perezi on Glovers Reef Atoll, Belize. Bayesian stable isotope mixing models were used to investigate prey choice as well as viable Diet-Tissue Discrimination Factors for use with stingrays. Stingray δ(15)N values showed the greatest variation and a positive relationship with size, with an isotopic niche width approximately twice that of sympatric species. Shark species exhibited comparatively restricted δ(15)N values and greater δ(13)C variation, with very little overlap of stingray niche space. Mixing models suggest bivalves and annelids are proportionally more important prey in the stingray diet than crustaceans and teleosts at Glovers Reef, in contrast to all but one published diet study using stomach contents from other locations. Incorporating gut contents information from the literature, we suggest diet-tissue discrimination factor values of Δ(15)N ≈ 2.7‰ and Δ(13)C ≈ 0.9‰ for stingrays in the absence of validation experiments. The wide trophic niche and lower trophic level exhibited by stingrays compared to sympatric sharks supports their putative role as important base stabilisers in benthic systems, with the potential to absorb trophic perturbations through numerous opportunistic prey interactions. PMID:24236144
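
    A minimal Metropolis sketch of the same family of mixing model (two tracers, three sources, uniform prior on the simplex of diet proportions; the source values, discrimination corrections and consumer data are invented):

```python
import numpy as np

rng = np.random.default_rng(9)
# Source signatures for (d15N, d13C), assumed already DTDF-corrected.
sources = np.array([[9.0, -17.0],    # bivalves
                    [7.5, -15.0],    # annelids
                    [11.0, -13.5]])  # crustaceans
consumer = np.array([[9.4, -16.0], [9.8, -15.8], [9.1, -16.3]])
sd = 0.6                             # assumed residual standard deviation

def log_post(p2):
    # p2 holds the first two proportions; uniform prior on the 3-simplex.
    if p2.min() < 0 or p2.sum() > 1:
        return -np.inf
    p = np.append(p2, 1.0 - p2.sum())
    mix = p @ sources                # predicted consumer signature
    return -0.5 * np.sum((consumer - mix) ** 2) / sd ** 2

p2 = np.array([1 / 3, 1 / 3])
lp = log_post(p2)
draws = []
for it in range(30000):
    prop = p2 + rng.normal(0, 0.05, 2)   # symmetric random walk on the simplex
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        p2, lp = prop, lp_prop
    if it >= 5000:
        draws.append(np.append(p2, 1.0 - p2.sum()))

print("posterior mean diet proportions (bivalve, annelid, crustacean):",
      np.mean(draws, axis=0).round(2))
```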

  12. Bayesian method for system reliability assessment of overlapping pass/fail data

    Institute of Scientific and Technical Information of China (English)

    Zhipeng Hao; Shengkui Zeng; Jianbin Guo

    2015-01-01

    For high reliability and long life systems, system pass/fail data are often rare. Integrating lower-level data, such as data drawn from subsystem or component pass/fail testing, Bayesian analysis can improve the precision of the system reliability assessment. If the multi-level pass/fail data are overlapping, one challenging problem for the Bayesian analysis is to develop a likelihood function. Since the computation burden of the existing methods makes them infeasible for multi-component systems, this paper proposes an improved Bayesian approach for the system reliability assessment in light of overlapping data. This approach includes three steps: firstly searching for feasible paths based on the binary decision diagram, then screening feasible points based on space partition and constraint decomposition, and finally simplifying the likelihood function. An example of a satellite rolling control system demonstrates the feasibility and the efficiency of the proposed approach.

  13. A Bayesian method for microseismic source inversion

    Science.gov (United States)

    Pugh, D. J.; White, R. S.; Christie, P. A. F.

    2016-08-01

    Earthquake source inversion is highly dependent on location determination and velocity models. Uncertainties in both the model parameters and the observations need to be rigorously incorporated into an inversion approach. Here, we show a probabilistic Bayesian method that allows formal inclusion of the uncertainties in the moment tensor inversion. This method allows the combination of different sets of far-field observations, such as P-wave and S-wave polarities and amplitude ratios, into one inversion. Additional observations can be included by deriving a suitable likelihood function from the uncertainties. This inversion produces samples from the source posterior probability distribution, including a best-fitting solution for the source mechanism and associated probability. The inversion can be constrained to the double-couple space or allowed to explore the gamut of moment tensor solutions, allowing volumetric and other non-double-couple components. The posterior probability of the double-couple and full moment tensor source models can be evaluated from the Bayesian evidence, using samples from the likelihood distributions for the two source models, producing an estimate of whether or not a source is double-couple. Such an approach is ideally suited to microseismic studies where there are many sources of uncertainty and it is often difficult to produce reliability estimates of the source mechanism, although this can be true of many other cases. Using full-waveform synthetic seismograms, we also show the effects of noise, location, network distribution and velocity model uncertainty on the source probability density function. The noise has the largest effect on the results, especially as it can affect other parts of the event processing. This uncertainty can lead to erroneous non-double-couple source probability distributions, even when no other uncertainties exist. Although including amplitude ratios can improve the constraint on the source probability

  14. Bayesian data analysis in population ecology: motivations, methods, and benefits

    Science.gov (United States)

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  15. Genomic prediction and genomic variance partitioning of daily and residual feed intake in pigs using Bayesian Power Lasso models

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, Luc L G; Strathe, Anders B;

    Improvement of feed efficiency is essential in pig breeding and selection for reduced residual feed intake (RFI) is an option. The study applied Bayesian Power LASSO (BPL) models with different power parameters to investigate genetic architecture, to predict genomic breeding values, and to partition genomic variance for RFI and daily feed intake (DFI). A total of 1272 Duroc pigs had both genotypic and phenotypic records for these traits. Significant SNPs were detected on chromosome 1 (SSC 1) and SSC 14 for RFI and on SSC 1 for DFI. BPL had similar accuracy and bias as GBLUP but power parameters had...

  16. Approximation methods for efficient learning of Bayesian networks

    CERN Document Server

    Riggelsen, C

    2008-01-01

    This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.

  17. HEURISTIC DISCRETIZATION METHOD FOR BAYESIAN NETWORKS

    Directory of Open Access Journals (Sweden)

    Mariana D.C. Lima

    2014-01-01

    Bayesian Network (BN) is a classification technique widely used in Artificial Intelligence. Its structure is a Directed Acyclic Graph (DAG) used to model the association of categorical variables. However, in cases where the variables are numerical, a previous discretization is necessary. Discretization methods are usually based on a statistical approach using the data distribution, such as division by quartiles. In this article we present a discretization using a heuristic that identifies events called peak and valley. A genetic algorithm was used to identify these events, with the objective function being the minimization of the error between the estimated average for the BN and the actual value of the numerical output variable. The BN has been modeled from a database of drill-bit rate of penetration in the Brazilian pre-salt layer, with five numerical variables and one categorical variable, using the proposed discretization and the division of the data by quartiles. The results show that the proposed heuristic discretization has higher accuracy than the quartile discretization.

  18. An Efficient Bayesian Iterative Method for Solving Linear Systems

    Institute of Scientific and Technical Information of China (English)

    Deng DING; Kin Sio FONG; Ka Hou CHAN

    2012-01-01

    This paper concerns statistical methods for solving general linear systems. After a brief review of the Bayesian perspective on inverse problems, a new and efficient iterative method for general linear systems from a Bayesian perspective is proposed. The convergence of this iterative method is proved, and the corresponding error analysis is studied. Finally, numerical experiments are given to support the efficiency of this iterative method, and some conclusions are obtained.

  19. Stochastic back analysis of permeability coefficient using generalized Bayesian method

    Institute of Scientific and Technical Information of China (English)

    Zheng Guilan; Wang Yuan; Wang Fei; Yang Jian

    2008-01-01

    Owing to the fact that the conventional deterministic back analysis of the permeability coefficient cannot reflect the uncertainties of parameters, including the hydraulic head at the boundary, the permeability coefficient and the measured hydraulic head, a stochastic back analysis taking into consideration the uncertainties of these parameters was performed using the generalized Bayesian method. Based on the stochastic finite element method (SFEM) for a seepage field, the variable metric algorithm and the generalized Bayesian method, formulas for stochastic back analysis of the permeability coefficient were derived. A case study of seepage analysis of a sluice foundation was performed to illustrate the proposed method. The results indicate that, with the generalized Bayesian method that considers the uncertainties of measured hydraulic head, the permeability coefficient and the hydraulic head at the boundary, both the mean and standard deviation of the permeability coefficient can be obtained and the standard deviation is less than that obtained by the conventional Bayesian method. Therefore, the present method is valid and applicable.

  20. Application of Bayesian Network Learning Methods to Land Resource Evaluation

    Institute of Scientific and Technical Information of China (English)

    HUANG Jiejun; HE Xiaorong; WAN Youchuan

    2006-01-01

    Bayesian networks have a powerful ability for reasoning and semantic representation; combining qualitative and quantitative analysis, and prior knowledge with observed data, they provide an effective way to deal with prediction, classification and clustering. Firstly, this paper presents an overview of Bayesian networks and their characteristics, and discusses how to learn a Bayesian network structure from given data; it then constructs a Bayesian network model for land resource evaluation with expert knowledge and the dataset. The experimental results on the test dataset show an evaluation accuracy of 87.5% and a Kappa index of 0.826. All these prove the method is feasible and efficient, and indicate that Bayesian networks are a promising approach for land resource evaluation.

  1. A novel multimode process monitoring method integrating LDRSKM with Bayesian inference

    Institute of Scientific and Technical Information of China (English)

    Shi-jin REN; Yin LIANG; Xiang-jun ZHAO; Mao-yun YANG

    2015-01-01

    A local discriminant regularized soft k-means (LDRSKM) method with Bayesian inference is proposed for multimode process monitoring. LDRSKM extends the regularized soft k-means algorithm by exploiting the local and non-local geometric information of the data and generalized linear discriminant analysis to provide a better and more meaningful data partition. LDRSKM can perform clustering and subspace selection simultaneously, enhancing the separability of data residing in different clusters. With the data partition obtained, kernel support vector data description (KSVDD) is used to establish the monitoring statistics and control limits. Two Bayesian inference based global fault detection indicators are then developed using the local monitoring results associated with principal and residual subspaces. Based on clustering analysis, Bayesian inference and manifold learning methods, the within and cross-mode correlations, and local geometric information can be exploited to enhance monitoring performances for nonlinear and non-Gaussian processes. The effectiveness and efficiency of the proposed method are evaluated using the Tennessee Eastman benchmark process.

  2. Advanced Bayesian Methods for Lunar Surface Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project is the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with an...

  3. Advanced Bayesian Methods for Lunar Surface Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project will be the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with...

  4. Proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology

    CERN Document Server

    Hortúa, Héctor J

    2014-01-01

    These are the proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology, held in Bogotá D.C., Colombia, June 9-13, 2014. The first astrostatistics school has been the first event in Colombia where statisticians and cosmologists from some universities in Bogotá met to discuss the statistical methods applied to cosmology, especially the use of Bayesian statistics in the study of the Cosmic Microwave Background (CMB), Baryonic Acoustic Oscillations (BAO), Large Scale Structure (LSS) and weak lensing.

  5. A new method for counting trees with vertex partition

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A direct and elementary method is provided in this paper for counting trees with vertex partition, instead of the recursion, generating function, functional equation, Lagrange inversion, and matrix methods used before.

  6. Algebraic methods for evaluating integrals In Bayesian statistics

    OpenAIRE

    Lin, Shaowei

    2011-01-01

    The accurate evaluation of marginal likelihood integrals is a difficult fundamental problem in Bayesian inference that has important applications in machine learning and computational biology. Following the recent success of algebraic statistics in frequentist inference and inspired by Watanabe's foundational approach to singular learning theory, the goal of this dissertation is to study algebraic, geometric and combinatorial methods for computing Bayesian integrals effectively, and to explor...

  7. Recursive Partitioning Method on Competing Risk Outcomes

    OpenAIRE

    Xu, Wei; Che, Jiahua; KONG, QIN

    2016-01-01

    In some cancer clinical studies, researchers are interested in exploring the risk factors associated with competing risk outcomes such as recurrence-free survival. We develop a novel recursive partitioning framework on competing risk data for both prognostic and predictive model constructions. We define specific splitting rules, a pruning algorithm, and a final tree selection algorithm for the competing risk tree models. This methodology is quite flexible in that it can incorporate both semiparametric m...

  8. Bayesian methods for the design and analysis of noninferiority trials.

    Science.gov (United States)

    Gamalo-Siebers, Margaret; Gao, Aijun; Lakshminarayanan, Mani; Liu, Guanghan; Natanegara, Fanni; Railkar, Radha; Schmidli, Heinz; Song, Guochen

    2016-01-01

    The gold standard for evaluating treatment efficacy of a medical product is a placebo-controlled trial. However, when the use of placebo is considered to be unethical or impractical, a viable alternative for evaluating treatment efficacy is through a noninferiority (NI) study where a test treatment is compared to an active control treatment. The minimal objective of such a study is to determine whether the test treatment is superior to placebo. An assumption is made that if the active control treatment remains efficacious, as was observed when it was compared against placebo, then a test treatment that has comparable efficacy with the active control, within a certain range, must also be superior to placebo. Because of this assumption, the design, implementation, and analysis of NI trials present challenges for sponsors and regulators. In designing and analyzing NI trials, substantial historical data are often required on the active control treatment and placebo. Bayesian approaches provide a natural framework for synthesizing the historical data in the form of prior distributions that can effectively be used in design and analysis of a NI clinical trial. Despite a flurry of recent research activities in the area of Bayesian approaches in medical product development, there are still substantial gaps in recognition and acceptance of Bayesian approaches in NI trial design and analysis. The Bayesian Scientific Working Group of the Drug Information Association provides a coordinated effort to target the education and implementation issues on Bayesian approaches for NI trials. In this article, we provide a review of both frequentist and Bayesian approaches in NI trials, and elaborate on the implementation for two common Bayesian methods including hierarchical prior method and meta-analytic-predictive approach. Simulations are conducted to investigate the properties of the Bayesian methods, and some real clinical trial examples are presented for illustration.
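
    The borrowing pattern behind both Bayesian methods can be sketched with conjugate updates. Below, historical control data are naively pooled into a Beta prior (the meta-analytic-predictive approach would instead model between-trial heterogeneity), and noninferiority is the posterior probability that the test arm lies within the margin; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
hist_events, hist_n = 120, 400            # pooled historical active-control data
prior_c = (1 + hist_events, 1 + hist_n - hist_events)   # Beta prior for control

x_t, n_t = 92, 300                        # current trial: test arm
x_c, n_c = 88, 290                        # current trial: active-control arm
margin = 0.10                             # NI margin on the risk difference

# Conjugate Beta posteriors; the control posterior borrows from history.
p_t = rng.beta(1 + x_t, 1 + n_t - x_t, 100000)
p_c = rng.beta(prior_c[0] + x_c, prior_c[1] + n_c - x_c, 100000)

prob_ni = np.mean(p_t - p_c > -margin)
print("P(noninferior | data) = %.3f" % prob_ni)
```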

  9. Gait Partitioning Methods: A Systematic Review

    Science.gov (United States)

    Taborri, Juri; Palermo, Eduardo; Rossi, Stefano; Cappa, Paolo

    2016-01-01

    In recent years, gait phase partitioning has become a challenging research topic due to its impact on several applications related to gait technologies. A variety of sensors can be used to feed algorithms for gait phase partitioning, mainly classifiable as wearable or non-wearable. Among wearable sensors, footswitches or foot pressure insoles are generally considered as the gold standard; however, to overcome some inherent limitations of the former, inertial measurement units have become popular in recent decades. Valuable results have also been achieved through electromyography, electroneurography, and ultrasonic sensors. Non-wearable sensors, such as opto-electronic systems along with force platforms, remain the most accurate systems for performing gait analysis in an indoor environment. In the present paper we identify, select, and categorize the available methodologies for gait phase detection, analyzing the advantages and disadvantages of each solution. Finally, we comparatively examine the obtainable gait phase granularities, the usable computational methodologies, and the optimal sensor placements on the targeted body segments. PMID:26751449

  10. Gait Partitioning Methods: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Juri Taborri

    2016-01-01

    Full Text Available In recent years, gait phase partitioning has become a challenging research topic due to its impact on several applications related to gait technologies. A variety of sensors can be used to feed algorithms for gait phase partitioning, mainly classifiable as wearable or non-wearable. Among wearable sensors, footswitches or foot pressure insoles are generally considered as the gold standard; however, to overcome some inherent limitations of the former, inertial measurement units have become popular in recent decades. Valuable results have also been achieved through electromyography, electroneurography, and ultrasonic sensors. Non-wearable sensors, such as opto-electronic systems along with force platforms, remain the most accurate systems for performing gait analysis in an indoor environment. In the present paper we identify, select, and categorize the available methodologies for gait phase detection, analyzing the advantages and disadvantages of each solution. Finally, we comparatively examine the obtainable gait phase granularities, the usable computational methodologies, and the optimal sensor placements on the targeted body segments.

  11. Diet Reconstruction and Resource Partitioning of a Caribbean Marine Mesopredator Using Stable Isotope Bayesian Modelling

    OpenAIRE

    Tilley, Alexander; López-Angarita, Juliana; Turner, John R.

    2013-01-01

    The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of (15)N and (13)C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carchar...

  12. Constructing Bayesian formulations of sparse kernel learning methods.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2005-01-01

    We present here a simple technique that simplifies the construction of Bayesian treatments of a variety of sparse kernel learning algorithms. An incomplete Cholesky factorisation is employed to modify the dual parameter space, such that the Gaussian prior over the dual model parameters is whitened. The regularisation term then corresponds to the usual weight-decay regulariser, allowing the Bayesian analysis to proceed via the evidence framework of MacKay. There is, in addition, a useful by-product associated with the incomplete Cholesky factorisation algorithm: it also identifies a subset of the training data forming an approximate basis for the entire dataset in the kernel-induced feature space, resulting in a sparse model. Bayesian treatments of the kernel ridge regression (KRR) algorithm, with both constant and heteroscedastic (input dependent) variance structures, and kernel logistic regression (KLR) are provided as illustrative examples of the proposed method, which we hope will be more widely applicable. PMID:16085387

  13. Approximation methods for the partition functions of anharmonic systems

    Energy Technology Data Exchange (ETDEWEB)

    Lew, P.; Ishida, T.

    1979-07-01

    The analytical approximations for the classical, quantum mechanical and reduced partition functions of the diatomic molecule oscillating internally under the influence of the Morse potential have been derived and their convergences have been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second-quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on the 2-D and 3-D model systems agree well with the exact numerical calculations.
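
    As a rough illustration of the quantities being approximated, the quantum partition function of a Morse oscillator can be summed directly over its finite ladder of bound states and compared with the harmonic closed form. A minimal sketch in reduced units; the well depth and temperature are invented, not taken from the record.

```python
# Direct summation of the quantum partition function of a Morse oscillator,
# compared with the harmonic approximation. Units are chosen so that
# hbar*omega = 1; De and kT below are illustrative values only.
import math

De = 20.0          # well depth in units of hbar*omega
kT = 2.0           # temperature in the same units

# Morse bound-state energies: E_n = (n + 1/2) - (n + 1/2)^2 / (4 De),
# with a finite ladder of bound states (energies increase while n + 1/2 < 2 De).
n_max = int(2.0 * De - 0.5)

def E_morse(n):
    x = n + 0.5
    return x - x * x / (4.0 * De)

Z_morse = sum(math.exp(-E_morse(n) / kT) for n in range(n_max + 1))

# Harmonic oscillator, closed form: Z = 1 / (2 sinh(1 / (2 kT))).
Z_harm = 1.0 / (2.0 * math.sinh(0.5 / kT))

print(f"bound states: {n_max + 1}")
print(f"Z_morse = {Z_morse:.4f}, Z_harmonic = {Z_harm:.4f}")
```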

  14. Approximation methods for the partition functions of anharmonic systems

    International Nuclear Information System (INIS)

    The analytical approximations for the classical, quantum mechanical and reduced partition functions of the diatomic molecule oscillating internally under the influence of the Morse potential have been derived and their convergences have been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second-quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on the 2-D and 3-D model systems agree well with the exact numerical calculations.

  15. Symplectic trigonometrically fitted partitioned Runge-Kutta methods

    International Nuclear Information System (INIS)

    The numerical integration of Hamiltonian systems is considered in this Letter. Trigonometrically fitted symplectic partitioned Runge-Kutta methods of second, third and fourth orders are constructed. The methods are tested on the numerical integration of the harmonic oscillator, the two body problem and an orbital problem studied by Stiefel and Bettis
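
    For orientation, the simplest symplectic partitioned Runge-Kutta scheme is the second-order Störmer-Verlet (leapfrog) method; the trigonometrically fitted methods of the record refine this idea. A minimal sketch on the harmonic oscillator, showing the bounded energy error typical of symplectic integrators:

```python
# Stormer-Verlet (leapfrog), the classic second-order symplectic partitioned
# Runge-Kutta method, applied to the harmonic oscillator H = p^2/2 + q^2/2.
# A sketch only; not the trigonometrically fitted methods of the record.

def verlet(q, p, h, steps, grad_V):
    for _ in range(steps):
        p -= 0.5 * h * grad_V(q)   # half kick: update p with -dV/dq
        q += h * p                 # drift: update q with dH/dp = p
        p -= 0.5 * h * grad_V(q)   # half kick
    return q, p

q0, p0, h = 1.0, 0.0, 0.1
q, p = verlet(q0, p0, h, steps=10000, grad_V=lambda q: q)

H0 = 0.5 * (p0**2 + q0**2)
H = 0.5 * (p**2 + q**2)
print(f"energy drift after 10000 steps: {abs(H - H0):.2e}")  # stays bounded
```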

  16. Application of an efficient Bayesian discretization method to biomedical data

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Vanathi

    2011-07-01

    Full Text Available Abstract Background Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD method for optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components, namely, a Bayesian score to evaluate discretizations and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI discretization method, which is commonly used for discretization. Results On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performances of the C4.5 classifier and the naïve Bayes classifier were statistically significantly better when the predictor variables were discretized using EBD over FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI and produced slightly more complex discretizations than FI. Conclusions On a range of biomedical datasets, a Bayesian discretization method (EBD yielded better classification performance and stability but was less robust than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.
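
    The record's pairing of a Bayesian interval score with a dynamic-programming search over cut points can be sketched as follows; the Dirichlet-multinomial score and the fixed per-interval penalty below are generic stand-ins, not the exact EBD score.

```python
# Sketch of Bayesian discretization by dynamic programming: choose cut points
# on a sorted variable so the sum of per-interval Bayesian scores is maximal.
# The interval score is a Dirichlet-multinomial log marginal likelihood plus a
# penalty standing in for a prior over discretizations (not the EBD score).
import math

def interval_score(counts, alpha=1.0, penalty=-2.0):
    """Log marginal likelihood of class counts under a symmetric Dirichlet."""
    n, k = sum(counts), len(counts)
    score = math.lgamma(k * alpha) - math.lgamma(n + k * alpha)
    score += sum(math.lgamma(c + alpha) - math.lgamma(alpha) for c in counts)
    return score + penalty

def discretize(labels, n_classes):
    """labels: class labels of instances sorted by the predictor value."""
    n = len(labels)
    best = [0.0] + [-math.inf] * n   # best[i]: optimal score of prefix length i
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):           # candidate interval covers positions j..i-1
            counts = [labels[j:i].count(c) for c in range(n_classes)]
            s = best[j] + interval_score(counts)
            if s > best[i]:
                best[i], back[i] = s, j
    cuts, i = [], n                  # recover cut positions by backtracking
    while i > 0:
        i = back[i]
        if i > 0:
            cuts.append(i)
    return sorted(cuts)

labels = [0] * 8 + [1] * 6 + [0, 1] * 5   # toy labels along the sorted values
print("cut positions:", discretize(labels, n_classes=2))
```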

  17. Methods for Bayesian power spectrum inference with galaxy surveys

    CERN Document Server

    Jasche, Jens

    2013-01-01

    We derive and implement a full Bayesian large scale structure inference method aiming at precision recovery of the cosmological power spectrum from galaxy redshift surveys. Our approach improves over previous Bayesian methods by performing a joint inference of the three dimensional density field, the cosmological power spectrum, luminosity dependent galaxy biases and corresponding normalizations. We account for all joint and correlated uncertainties between all inferred quantities. Classes of galaxies with different biases are treated as separate sub samples. The method therefore also allows the combined analysis of more than one galaxy survey. In particular, it solves the problem of inferring the power spectrum from galaxy surveys with non-trivial survey geometries by exploring the joint posterior distribution with efficient implementations of multiple block Markov chain and Hybrid Monte Carlo methods. Our Markov sampler achieves high statistical efficiency in low signal to noise regimes by using a determini...

  18. Gas/Aerosol partitioning: a simplified method for global modeling

    NARCIS (Netherlands)

    Metzger, S.M.

    2001-01-01

    The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures, partl

  19. Experimental Tests of Subjective Bayesian Methods

    Science.gov (United States)

    Li, Yuelin; Krantz, David H.

    2005-01-01

    We evaluated Samaniego and Reneau's 1994 novel weight method for eliciting subjective probability estimates. Experiment 1 replicated their experiment (subjects weighed their prior estimate against 10 new observations), with an additional weight judgment against 50 observations. In Experiment 2, subjects gave prior estimates to questions in a…

  20. Bayesian Biclustering on Discrete Data: Variable Selection Methods

    OpenAIRE

    Guo, Lei

    2013-01-01

    Biclustering is a technique for clustering rows and columns of a data matrix simultaneously. Over the past few years, we have seen its applications in biology-related fields, as well as in many data mining projects. As opposed to classical clustering methods, biclustering groups objects that are similar only on a subset of variables. Many biclustering algorithms on continuous data have emerged over the last decade. In this dissertation, we will focus on two Bayesian biclustering algorithms we...

  1. A nonparametric Bayesian method for estimating a response function

    OpenAIRE

    Brown, Scott; Meeden, Glen

    2012-01-01

    Consider the problem of estimating a response function which depends upon a non-stochastic independent variable under our control. The data are independent Bernoulli random variables where the probabilities of success are given by the response function at the chosen values of the independent variable. Here we present a nonparametric Bayesian method for estimating the response function. The only prior information assumed is that the response function can be well approximated by a mixture of st...

  2. Bayesian methods for outliers detection in GNSS time series

    Science.gov (United States)

    Qianqian, Zhang; Qingming, Gui

    2013-07-01

    This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed to simultaneously detect different types of outliers by introducing a classification variable for each type of outlier; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing the posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in detail the causes of masking and swamping when detecting patches of additive outliers; an unmasking Bayesian method for detecting additive outlier patches is proposed based on an adaptive Gibbs sampler. Thirdly, the correctness of the theories and methods proposed above is illustrated on simulated data and then on real GNSS observations, such as cycle slip detection in carrier phase data. The examples illustrate that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which solves the problem of small cycle slips.

  3. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    ...efficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown, and exact and optimisation-based heuristic solution methods for the model are described. All these methods are centered around the well-known column generation technique. Different practical applications of crew scheduling are presented, and some of these applications are considered in detail in four included...

  4. Approach to the Correlation Discovery of Chinese Linguistic Parameters Based on Bayesian Method

    Institute of Scientific and Technical Information of China (English)

    WANG Wei(王玮); CAI LianHong(蔡莲红)

    2003-01-01

    The Bayesian approach is an important method in statistics. The Bayesian belief network is a powerful knowledge representation and reasoning tool under conditions of uncertainty. It is a graphical model that encodes probabilistic relationships among variables of interest. In this paper, an approach to Bayesian network construction is given for discovering relationships among Chinese linguistic parameters in a corpus.

  5. Methods for Bayesian Power Spectrum Inference with Galaxy Surveys

    Science.gov (United States)

    Jasche, Jens; Wandelt, Benjamin D.

    2013-12-01

    We derive and implement a full Bayesian large scale structure inference method aiming at precision recovery of the cosmological power spectrum from galaxy redshift surveys. Our approach improves upon previous Bayesian methods by performing a joint inference of the three-dimensional density field, the cosmological power spectrum, luminosity dependent galaxy biases, and corresponding normalizations. We account for all joint and correlated uncertainties between all inferred quantities. Classes of galaxies with different biases are treated as separate subsamples. This method therefore also allows the combined analysis of more than one galaxy survey. In particular, it solves the problem of inferring the power spectrum from galaxy surveys with non-trivial survey geometries by exploring the joint posterior distribution with efficient implementations of multiple block Markov chain and Hybrid Monte Carlo methods. Our Markov sampler achieves high statistical efficiency in low signal-to-noise regimes by using a deterministic reversible jump algorithm. This approach reduces the correlation length of the sampler by several orders of magnitude, turning the otherwise numerically unfeasible problem of joint parameter exploration into a numerically manageable task. We test our method on an artificial mock galaxy survey, emulating characteristic features of the Sloan Digital Sky Survey data release 7, such as its survey geometry and luminosity-dependent biases. These tests demonstrate the numerical feasibility of our large scale Bayesian inference framework when the parameter space has millions of dimensions. This method reveals and correctly treats the anti-correlation between bias amplitudes and the power spectrum, which is not taken into account in current approaches to power spectrum estimation and amounts to a 20% effect across large ranges in k space. In addition, this method results in constrained realizations of density fields obtained without assuming the power spectrum or bias parameters

  6. An Adaptively Accelerated Bayesian Deblurring Method with Entropy Prior

    Directory of Open Access Journals (Sweden)

    Yong-Hoon Kim

    2008-05-01

    Full Text Available The development of an efficient adaptively accelerated iterative deblurring algorithm based on Bayesian statistical concepts is reported. The entropy of the image is used as a “prior” distribution and, instead of the additive form used in conventional acceleration methods, an exponential form of the relaxation constant is used for acceleration. The proposed method is therefore referred to as adaptively accelerated maximum a posteriori with entropy prior (AAMAPE). Based on empirical observations in different experiments, the exponent is computed adaptively using first-order derivatives of the deblurred image from the previous two iterations. This exponent improves the speed of the AAMAPE method in the early stages and ensures stability at later stages of iteration. The AAMAPE method also incorporates the constraints of nonnegativity and flux conservation. The paper discusses the fundamental idea of Bayesian image deblurring with the use of entropy as the prior, and analyzes the superresolution and noise-amplification characteristics of the proposed method. The experimental results show that the proposed AAMAPE method gives lower RMSE and higher SNR in 44% fewer iterations as compared to the nonaccelerated maximum a posteriori with entropy prior (MAPE) method. Moreover, AAMAPE followed by wavelet Wiener filtering gives better results than state-of-the-art methods.

  7. PARALLEL COMPOUND METHODS FOR SOLVING PARTITIONED STIFF SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li-rong Chen; De-gui Liu

    2001-01-01

    This paper deals with the solution of partitioned systems of nonlinear stiff differential equations. Given a differential system, the user may specify some equations to be stiff and others to be nonstiff. For the numerical solution of such a system, Parallel Compound Methods (PCMs) are studied. Nonstiff equations are integrated by a parallel explicit RK method, while a parallel Rosenbrock method is used for the stiff part of the system. Their order conditions, their convergence and their numerical stability are discussed, and numerical tests are conducted on a personal computer and a parallel computer.

  8. Distance and extinction determination for APOGEE stars with Bayesian method

    Science.gov (United States)

    Wang, Jianling; Shi, Jianrong; Pan, Kaike; Chen, Bingqiu; Zhao, Yongheng; Wicker, James

    2016-08-01

    Using a Bayesian technique, we derived distances and extinctions for over 100 000 red giant stars observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey by taking into account spectroscopic constraints from the APOGEE stellar parameters and photometric constraints from the Two Micron All-Sky Survey, as well as prior knowledge on the Milky Way. Derived distances are compared with those from four other independent methods: the Hipparcos parallaxes, star clusters, APOGEE red clump stars, and asteroseismic distances from the APOKASC and Strömgren survey for Asteroseismology and Galactic Archaeology catalogues. These comparisons cover four orders of magnitude in the distance scale, from 0.02 to 20 kpc. The results show that our distances agree very well with those from other methods: the mean relative difference between our Bayesian distances and those derived from other methods ranges from -4.2 per cent to +3.6 per cent, and the dispersion ranges from 15 per cent to 25 per cent. The extinctions towards all stars are also derived and compared with those from several other independent methods: the Rayleigh-Jeans Colour Excess (RJCE) method, Gonzalez's 2D extinction map, as well as 3D extinction maps and models. The comparisons reveal that, overall, estimated extinctions agree very well, but RJCE tends to overestimate extinctions for cool stars and objects with low log g.
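
    The core computation can be sketched on a grid: a Gaussian likelihood on the distance modulus (standing in for the combined spectroscopic and photometric constraints) multiplied by a simple Galactic prior. All numbers below are illustrative, not APOGEE values.

```python
# Grid-based sketch of Bayesian distance estimation: a Gaussian likelihood on
# the distance modulus times a simple Galactic prior p(d) ~ d^2 * exp(-d/L).
# The observed modulus, its error, and the scale length L are invented.
import numpy as np

mu_obs, sigma_mu = 10.0, 0.3         # observed distance modulus and error [mag]
L = 2.5                              # prior density scale length [kpc]

d = np.linspace(0.05, 30.0, 3000)    # distance grid [kpc]
mu = 5.0 * np.log10(d * 100.0)       # distance modulus for d in kpc

log_post = -0.5 * ((mu - mu_obs) / sigma_mu) ** 2    # likelihood
log_post += 2.0 * np.log(d) - d / L                  # volume x density prior

post = np.exp(log_post - log_post.max())
post /= np.trapz(post, d)

mean = np.trapz(d * post, d)
print(f"posterior mean distance: {mean:.2f} kpc")
```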

  9. Distance and extinction determination for APOGEE stars with Bayesian method

    CERN Document Server

    Wang, Jianling; Pan, Kaike; Chen, Bingqiu; Zhao, Yongheng; Wicker, James

    2016-01-01

    Using a Bayesian technique we derived distances and extinctions for over 100,000 red giant stars observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey by taking into account spectroscopic constraints from the APOGEE stellar parameters and photometric constraints from 2MASS, as well as prior knowledge on the Milky Way. Derived distances are compared with those from four other independent methods: the Hipparcos parallaxes, star clusters, APOGEE red clump stars, and asteroseismic distances from the APOKASC (Rodrigues et al. 2014) and SAGA (Casagrande et al. 2014) catalogues. These comparisons cover four orders of magnitude in the distance scale, from 0.02 kpc to 20 kpc. The results show that our distances agree very well with those from other methods: the mean relative difference between our Bayesian distances and those derived from other methods ranges from -4.2% to +3.6%, and the dispersion ranges from 15% to 25%. The extinctions toward all stars are also derived and compared wi...

  10. Computational Methods for Domain Partitioning of Protein Structures

    Science.gov (United States)

    Veretnik, Stella; Shindyalov, Ilya

    Analysis of protein structures typically begins with decomposition of structure into more basic units, called "structural domains". The underlying goal is to reduce a complex protein structure to a set of simpler yet structurally meaningful units, each of which can be analyzed independently. Structural semi-independence of domains is their hallmark: domains often have compact structure and can fold or function independently. Domains can undergo so-called "domain shuffling" when they reappear in different combinations in different proteins thus implementing different biological functions (Doolittle, 1995). Proteins can then be conceived as being built of such basic blocks: some, especially small proteins, consist usually of just one domain, while other proteins possess a more complex architecture containing multiple domains. Therefore, the methods for partitioning a structure into domains are of critical importance: their outcome defines the set of basic units upon which structural classifications are built and evolutionary analysis is performed. This is especially true nowadays in the era of structural genomics. Today there are many methods that decompose the structure into domains: some of them are manual (i.e., based on human judgment), others are semiautomatic, and still others are completely automatic (based on algorithms implemented as software). Overall there is a high level of consistency and robustness in the process of partitioning a structure into domains (for ˜80% of proteins); at least for structures where domain location is obvious. The picture is less bright when we consider proteins with more complex architectures: neither human experts nor computational methods can reach consistent partitioning in many such cases. This is a rather accurate reflection of biological phenomena in general since domains are formed by different mechanisms, hence it is nearly impossible to come up with a set of well-defined rules that captures all of the observed cases.

  11. ObStruct: a method to objectively analyse factors driving population structure using Bayesian ancestry profiles.

    Directory of Open Access Journals (Sweden)

    Velimir Gayevskiy

    Full Text Available Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods is the ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g., geographic origin) correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and, crucially, determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.

  12. Bayesian Monte Carlo method for nuclear data evaluation

    Science.gov (United States)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an EXFOR-based weight.
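
    The weighted Monte Carlo step can be sketched with a toy model standing in for TALYS: prior parameter samples are weighted by exp(-chi^2/2) against experimental points, yielding a weighted central value and covariance. The data and the linear toy model below are invented.

```python
# Sketch of weighted Monte Carlo evaluation: prior parameter samples are
# weighted by exp(-chi^2/2) against experimental points, giving a weighted
# central value and covariance matrix. The toy model stands in for a nuclear
# model code such as TALYS; the "experimental" data are invented.
import numpy as np

rng = np.random.default_rng(0)

E = np.array([1.0, 2.0, 3.0, 4.0])          # "experimental" energies
xs = np.array([2.1, 3.9, 6.2, 7.8])         # measured cross sections
err = np.array([0.3, 0.3, 0.4, 0.4])        # measurement uncertainties

def model(E, a, b):                          # toy stand-in for the model code
    return a * E + b

# Prior space of model solutions: sample parameters broadly.
a = rng.normal(2.0, 0.5, 5000)
b = rng.normal(0.0, 1.0, 5000)

chi2 = (((xs[:, None] - model(E[:, None], a, b)) / err[:, None]) ** 2).sum(axis=0)
w = np.exp(-0.5 * (chi2 - chi2.min()))       # EXFOR-style weights
w /= w.sum()

mean = np.array([np.sum(w * a), np.sum(w * b)])
dev = np.vstack([a, b]) - mean[:, None]
cov = (w * dev) @ dev.T                      # weighted covariance matrix
print("weighted parameters:", mean)
print("covariance:\n", cov)
```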

  13. ANALYSIS OF CLIQUE BY MATRIX FACTORIZATION AND PARTITION METHODS

    Directory of Open Access Journals (Sweden)

    Raghunath Kar

    2011-10-01

    Full Text Available In real life, clustering of high-dimensional data is a big problem, and finding dense regions as the number of dimensions grows is one aspect of it. We have already studied clustering techniques for low-dimensional data sets, such as k-means, k-medoid, BIRCH, CLARANS, CURE, DBScan, PAM, etc. If a region is dense, it consists of a number of data points with a minimum support of an input parameter ø; otherwise it cannot be taken into clustering. In this approach we have implemented CLIQUE to find the clusters in multidimensional data sets. In dimension-growth subspace clustering, the clustering process starts at single-dimensional subspaces and grows upward to higher-dimensional ones. It is a partition method where each dimension is divided like a grid structure. In this paper, the elimination of redundant objects from the regions by matrix factorization and partition methods is implemented. Comparisons between CLIQUE and these two methods are studied. Which region a redundant data point belongs to in forming a cluster is also studied.
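
    The grid idea behind CLIQUE can be sketched in miniature: partition each dimension into equal-width intervals, keep cells whose support meets the threshold ø, and connect adjacent dense cells into clusters. A toy sketch, not the paper's implementation:

```python
# Minimal grid-density step in the spirit of CLIQUE: partition each dimension
# into xi equal-width intervals, keep cells whose point count meets the
# support threshold, and connect adjacent dense cells into clusters.
from collections import Counter, deque

def dense_cells(points, xi=4, support=3):
    lo = [min(p[d] for p in points) for d in range(2)]
    hi = [max(p[d] for p in points) for d in range(2)]
    def cell(p):
        return tuple(min(int((p[d] - lo[d]) / (hi[d] - lo[d]) * xi), xi - 1)
                     for d in range(2))
    counts = Counter(cell(p) for p in points)
    return {c for c, n in counts.items() if n >= support}

def connect(cells):
    """Group dense cells that share a grid face into clusters (BFS)."""
    clusters, seen = [], set()
    for start in cells:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            c = queue.popleft()
            comp.append(c)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (c[0] + dx, c[1] + dy)
                if nb in cells and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(comp)
    return clusters

pts = [(0.1, 0.1), (0.15, 0.2), (0.2, 0.1), (0.9, 0.8), (0.85, 0.9),
       (0.95, 0.85), (0.5, 0.5)]
print(connect(dense_cells(pts, xi=4, support=3)))
```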

  14. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.

  15. Bayesian methods in the search for MH370

    CERN Document Server

    Davey, Sam; Holland, Ian; Rutten, Mark; Williams, Jason

    2016-01-01

    This book demonstrates how nonlinear/non-Gaussian Bayesian time series estimation methods were used to produce a probability distribution of potential MH370 flight paths. It provides details of how the probabilistic models of aircraft flight dynamics, satellite communication system measurements, environmental effects and radar data were constructed and calibrated. The probability distribution was used to define the search zone in the southern Indian Ocean. The book describes particle-filter based numerical calculation of the aircraft flight-path probability distribution and validates the method using data from several of the involved aircraft’s previous flights. Finally it is shown how the Reunion Island flaperon debris find affects the search probability distribution.

  16. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type is developed by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.

  17. Chain ladder method: Bayesian bootstrap versus classical bootstrap

    OpenAIRE

    Peters, Gareth W.; Wüthrich, Mario V.; Shevchenko, Pavel V.

    2010-01-01

    The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure utilising Markov chain Monte Carlo (MCMC), ABC and a Bayesian bootstrap procedure was developed in a truly distribution-free setting. T...
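
    The ABC idea is easy to sketch in its plain rejection form (the paper couples ABC with MCMC and a Bayesian bootstrap): draw parameters from the prior, simulate data, and keep draws whose summaries land within a tolerance of the observed summaries. The toy model and data below are invented.

```python
# Generic ABC rejection sketch: prior draws are kept when the summary
# statistics of simulated data land within a tolerance of the observed
# summaries. Plain rejection on a toy Gaussian model, shown for brevity.
import numpy as np

rng = np.random.default_rng(42)

obs = rng.normal(5.0, 2.0, size=200)               # pretend "observed" data
s_obs = np.array([obs.mean(), obs.std()])          # observed summaries

accepted = []
for _ in range(20000):
    mu = rng.uniform(0.0, 10.0)                    # prior draws
    sigma = rng.uniform(0.1, 5.0)
    sim = rng.normal(mu, sigma, size=200)          # simulate from the model
    s_sim = np.array([sim.mean(), sim.std()])
    if np.linalg.norm(s_sim - s_obs) < 0.2:        # tolerance epsilon
        accepted.append((mu, sigma))

post = np.array(accepted)
print(f"{len(post)} draws accepted; posterior means:", post.mean(axis=0))
```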

  18. Bayesian Analysis of Multiple Populations I: Statistical and Computational Methods

    CERN Document Server

    Stenning, D C; Robinson, E; van Dyk, D A; von Hippel, T; Sarajedini, A; Stein, N

    2016-01-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations (van Dyk et al. 2009, Stein et al. 2013). Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties (age, metallicity, helium abundance, distance, absorption, and initial mass) are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and al...

  19. On Bayesian methods of exploring qualitative interactions for targeted treatment.

    Science.gov (United States)

    Chen, Wei; Ghosh, Debashis; Raghunathan, Trivellore E; Norkin, Maxim; Sargent, Daniel J; Bepler, Gerold

    2012-12-10

    Providing personalized treatments designed to maximize benefits and minimize harms is of tremendous current medical interest. One problem in this area is the evaluation of the interaction between the treatment and other predictor variables. Treatment effects in subgroups having the same direction but different magnitudes are called quantitative interactions, whereas those having opposite directions in subgroups are called qualitative interactions (QIs). Identifying QIs is challenging because they are rare and usually unknown among many potential biomarkers. Meanwhile, subgroup analysis reduces the power of hypothesis testing, and multiple subgroup analyses inflate the type I error rate. We propose a new Bayesian approach to search for QIs in a multiple regression setting with adaptive decision rules. We consider various regression models for the outcome. We illustrate this method in two examples of phase III clinical trials. The algorithm is straightforward and easy to implement using existing software packages. We provide a sample code in Appendix A.

  20. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided. PMID:26019004

  1. A high-resolution direction-of-arrival estimation based on Bayesian method

    Institute of Scientific and Technical Information of China (English)

    HUANG Jianguo; SUN Yi; XU Pu; LU Ying; LIU Kewei

    2004-01-01

    A Bayesian high-resolution direction-of-arrival (DOA) estimator is proposed based on the maximum a posteriori principle. The statistical performance of the Bayesian high-resolution DOA estimator is also investigated. Comparison with MUSIC and the maximum likelihood estimator (MLE) shows that the Bayesian method has higher resolution and more accurate estimates for either incoherent or coherent sources. It is also more robust in the case of low SNR.

  2. Developments from Programming the Partition Method for a Power Series Expansion

    CERN Document Server

    Kowalenko, Victor

    2012-01-01

    Recently, a novel method based on coding partitions [1]-[4] has been used to derive power series expansions for previously intractable problems. In this method the coefficients at $k$ are determined by summing the contributions made by each partition whose elements sum to $k$. These contributions are found by assigning values to each element and multiplying by an appropriate multinomial factor. This work presents a theoretical framework for the partition method for a power series expansion. To overcome the complexity due to the contributions, a programming methodology is created, allowing more general problems to be studied than originally envisaged. The methodology uses the bi-variate recursive central partition (BRCP) algorithm, which is based on a tree-diagram approach to scanning partitions. Its main advantage is that partitions are generated in the multiplicity representation. During the development of the theoretical framework, scanning over partitions was seen as a discrete operation with an operator $L_...
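
    The multiplicity representation mentioned above can be illustrated with a plain recursive generator (not the BRCP algorithm of the record): each partition of k is produced as a mapping from part to multiplicity, together with the multinomial factor that would weight its contribution.

```python
# Scanning partitions in the multiplicity representation: each partition of k
# is produced as a dict {part: multiplicity}, the form in which a
# coefficient's contributions (assigned values times multinomial factors)
# would be accumulated. A plain recursive generator, not the BRCP algorithm.
from math import factorial

def partitions(k, max_part=None):
    if max_part is None:
        max_part = k
    if k == 0:
        yield {}
        return
    for part in range(min(k, max_part), 0, -1):
        for rest in partitions(k - part, part):
            p = dict(rest)
            p[part] = p.get(part, 0) + 1
            yield p

def multinomial_factor(p):
    """(sum of multiplicities)! / product(multiplicity!) for a partition."""
    n = sum(p.values())
    out = factorial(n)
    for m in p.values():
        out //= factorial(m)
    return out

for p in partitions(4):
    print(p, "factor:", multinomial_factor(p))
```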

  3. Bayesian signal processing classical, modern, and particle filtering methods

    CERN Document Server

    Candy, James V

    2016-01-01

    This book aims to give readers a unified Bayesian treatment starting from the basics (Bayes' rule) to the more advanced (Monte Carlo sampling), evolving to the next-generation model-based techniques (sequential Monte Carlo sampling). This next edition incorporates a new chapter on "Sequential Bayesian Detection," a new section on "Ensemble Kalman Filters" as well as an expansion of Case Studies that detail Bayesian solutions for a variety of applications. These studies illustrate Bayesian approaches to real-world problems incorporating detailed particle filter designs, adaptive particle filters and sequential Bayesian detectors. In addition to these major developments a variety of sections are expanded to "fill in the gaps" of the first edition. Here metrics for particle filter (PF) designs with emphasis on classical "sanity testing" lead to ensemble techniques as a basic requirement for performance analysis. The expansion of information theory metrics and their application to PF designs is fully developed an...

  4. Clustering method based on data division and partition

    Institute of Scientific and Technical Information of China (English)

    卢志茂; 刘晨; 张春祥; 王蕾

    2014-01-01

    Many classical clustering algorithms perform well under their prerequisites but do not scale well when applied to very large data sets (VLDS). In this work, a novel division and partition clustering method (DP) was proposed to solve the problem. DP cuts the source data set into data blocks and extracts the eigenvector of each data block to form the local feature set. The local feature set is used in a second round of characteristics polymerization on the source data to find the global eigenvector. Ultimately, according to the global eigenvector, the data set is assigned by the criterion of minimum distance. The experimental results show that it is more robust than conventional clusterings. Its insensitivity to data dimensionality, distribution, and the number of natural clusters gives it a wide range of applications in clustering VLDS.

  5. OCL-BASED TEST CASE GENERATION USING CATEGORY PARTITIONING METHOD

    Directory of Open Access Journals (Sweden)

    A. Jalila

    2015-10-01

    Full Text Available The adoption of fault detection techniques during the initial stages of the software development life cycle helps improve the reliability of a software product. Specification-based testing is one of the major criteria for detecting faults in the requirement specification or design of a software system. However, due to the non-availability of implementation details, test case generation from formal specifications becomes a challenging task. As a novel approach, the proposed work presents a methodology to generate test cases from OCL (Object Constraint Language) formal specifications using the Category Partitioning Method (CPM). The experimental results indicate that the proposed methodology is more effective in revealing specification-based faults. Furthermore, it has been observed that OCL and CPM form an excellent combination for performing functional testing at the earliest stage to improve software quality with reduced cost.
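
    The core of the category-partition method can be sketched directly: enumerate the cross product of the choices in each category of an operation's input domain and filter out frames that violate inter-choice constraints. The categories and the constraint below are invented for illustration.

```python
# Core of the category-partition method: enumerate the cross product of
# choices per category, then drop frames that violate constraints between
# choices. The categories and constraint are hypothetical examples.
from itertools import product

categories = {
    "account_state": ["active", "frozen", "closed"],
    "amount":        ["zero", "small", "over_limit"],
    "currency":      ["local", "foreign"],
}

def satisfiable(frame):
    # Constraint: a closed account cannot carry an over-limit amount.
    return not (frame["account_state"] == "closed"
                and frame["amount"] == "over_limit")

names = list(categories)
frames = [dict(zip(names, combo)) for combo in product(*categories.values())]
test_frames = [f for f in frames if satisfiable(f)]

print(f"{len(test_frames)} of {len(frames)} frames kept")
for f in test_frames[:3]:
    print(f)
```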

  6. Bayesian Method with Spatial Constraint for Retinal Vessel Segmentation

    Directory of Open Access Journals (Sweden)

    Zhiyong Xiao

    2013-01-01

    Full Text Available A Bayesian method with spatial constraint is proposed for vessel segmentation in retinal images. The proposed model makes the assumption that the posterior probability of each pixel is dependent on the posterior probabilities of its neighboring pixels. An energy function is defined for the proposed model. By applying the modified level set approach to minimize the proposed energy function, we can identify blood vessels in the retinal image. Evaluation of the developed method is done on real retinal images from the DRIVE and STARE databases. The performance is analyzed and compared to other published methods using a number of measures which include accuracy, sensitivity, and specificity. The proposed approach is proved to be effective on these two databases. The average accuracy, sensitivity, and specificity on the DRIVE database are 0.9529, 0.7513, and 0.9792, respectively, and for the STARE database 0.9476, 0.7147, and 0.9735, respectively. The performance is better than that of other vessel segmentation methods.

  7. Metainference: A Bayesian inference method for heterogeneous systems.

    Science.gov (United States)

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  8. CEO emotional bias and investment decision, Bayesian network method

    Directory of Open Access Journals (Sweden)

    Jarboui Anis

    2012-08-01

    Full Text Available This research examines the determinants of firms' investment by introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: investment decisions are influenced not only by fundamentals but also by other factors. One such factor is the bias of the CEO toward the investment; this bias depends on cognition and emotions, because some leaders use them as heuristics for investment decisions instead of fundamentals. This paper shows how CEO emotional biases (optimism, loss aversion, and overconfidence) affect investment decisions. The proposed model uses the Bayesian network method to examine this relationship. Emotional bias has been measured by means of a questionnaire comprising several items. The selected sample is composed of some 100 Tunisian executives. Our results reveal that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts investment choices based on the ability to assess alternatives (optimism and overconfidence) and risk perception (loss aversion) to create shareholder value and ensure his or her place at the head of the management team.

  9. The characterization of petroleum contamination in heterogenous media using partitioning tracer method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, E.; Rhee, S.; Park, J. [Seoul National Univ. (Korea, Republic of). Dept. of Civil and Environmental Engineering

    2009-07-01

    A partitioning tracer method for characterizing petroleum contamination in heterogenous media was discussed. The average saturation level of nonaqueous phase liquids (NAPLs) was calculated by comparing the transport of the partitioning tracers to a conservative tracer. The NAPL saturation level represented a continuous value throughout the contaminated site. Experiments were conducted in a 2-D sandbox divided into 4 parts using different-sized sands. Soils were contaminated with a mixture of kerosene and diesel. Partitioning tracer tests were conducted both before and after contamination. A partitioning batch test was conducted to determine the partition coefficient (K) of the tracer between the NAPL and water. Breakthrough curves were obtained, and a retardation factor (R) was calculated. Results of the study showed that the calculated NAPL saturation was in good agreement with determined values. It was concluded that the partitioning tracer test is an accurate method of locating and quantifying NAPLs.
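
    The quantitative step behind such a test is the standard relation for partitioning interwell tracer tests: the retardation factor R of the partitioning tracer relative to the conservative tracer, together with the batch partition coefficient K, gives the average NAPL saturation. The numbers below are hypothetical.

```python
# Saturation estimate behind a partitioning tracer test: the retardation
# factor R (ratio of mean arrival times of partitioning vs. conservative
# tracer) and the batch partition coefficient K give the average NAPL
# saturation via S_N = (R - 1) / (R - 1 + K). Numbers are hypothetical.

def napl_saturation(R, K):
    return (R - 1.0) / (R - 1.0 + K)

t_conservative = 10.0   # mean arrival time of the conservative tracer [h]
t_partitioning = 14.5   # mean arrival time of the partitioning tracer [h]
K = 35.0                # NAPL-water partition coefficient from the batch test

R = t_partitioning / t_conservative
print(f"R = {R:.2f}, average NAPL saturation S_N = {napl_saturation(R, K):.3%}")
```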

  10. ESTIMATE OF THE HYPSOMETRIC RELATIONSHIP WITH NONLINEAR MODELS FITTED BY EMPIRICAL BAYESIAN METHODS

    Directory of Open Access Journals (Sweden)

    Monica Fabiana Bento Moreira

    2015-09-01

    Full Text Available In this paper we propose a Bayesian approach to solve the inference problem with restrictions on parameters for nonlinear models used to represent the hypsometric relationship in clones of Eucalyptus sp. The Bayesian estimates are calculated using the Markov chain Monte Carlo (MCMC) method. The proposed method was applied to different groups of actual data, from which two were selected to show the results. These results were compared to those obtained by the least squares method, highlighting the superiority of the Bayesian approach, since this approach always generates biologically consistent results for the hypsometric relationship.

  11. Applications of Domain Decomposition and Partition of Unity Methods in Physics and Geometry

    CERN Document Server

    Holst, Michael

    2010-01-01

    We consider a class of adaptive multilevel domain decomposition-like algorithms, built from a combination of adaptive multilevel finite element, domain decomposition, and partition of unity methods. These algorithms have several interesting features such as very low communication requirements, and they inherit a simple and elegant approximation theory framework from partition of unity methods. They are also very easy to use with highly complex sequential adaptive finite element packages, requiring little or no modification of the underlying sequential finite element software. The parallel algorithm can be implemented as a simple loop which starts off a sequential local adaptive solve on a collection of processors simultaneously. We first review the Partition of Unity Method (PUM) of Babuška and Melenk, and outline the PUM approximation theory framework. We then describe a variant we refer to here as the Parallel Partition of Unity Method (PPUM), which is a combination of the Partition of Unity Method with th...

  12. Multifrequency Bayesian compressive sensing methods for microwave imaging.

    Science.gov (United States)

    Poli, Lorenzo; Oliveri, Giacomo; Ding, Ping Ping; Moriyama, Toshifumi; Massa, Andrea

    2014-11-01

    The Bayesian retrieval of sparse scatterers under multifrequency transverse magnetic illuminations is addressed. Two innovative imaging strategies are formulated to process the spectral content of microwave scattering data according to either a frequency-hopping multistep scheme or a multifrequency one-shot scheme. To solve the associated inverse problems, customized implementations of single-task and multitask Bayesian compressive sensing are introduced. A set of representative numerical results is discussed to assess the effectiveness and the robustness against the noise of the proposed techniques also in comparison with some state-of-the-art deterministic strategies.

  13. Errata: A survey of Bayesian predictive methods for model assessment, selection and comparison

    Directory of Open Access Journals (Sweden)

    Aki Vehtari

    2014-03-01

    Full Text Available Errata for “A survey of Bayesian predictive methods for model assessment, selection and comparison” by A. Vehtari and J. Ojanen, Statistics Surveys, 6 (2012), 142–228. doi:10.1214/12-SS102.

  14. A Bayesian Assignment Method for Ambiguous Bisulfite Short Reads.

    Directory of Open Access Journals (Sweden)

    Hong Tran

    Full Text Available DNA methylation is an epigenetic modification critical for normal development and diseases. The determination of genome-wide DNA methylation at single-nucleotide resolution is made possible by sequencing bisulfite treated DNA with next generation high-throughput sequencing. However, aligning bisulfite short reads to a reference genome remains challenging as only a limited proportion of them (around 50-70% can be aligned uniquely; a significant proportion, known as multireads, are mapped to multiple locations and thus discarded from downstream analyses, causing financial waste and biased methylation inference. To address this issue, we develop a Bayesian model that assigns multireads to their most likely locations based on the posterior probability derived from information hidden in uniquely aligned reads. Analyses of both simulated data and real hairpin bisulfite sequencing data show that our method can effectively assign approximately 70% of the multireads to their best locations with up to 90% accuracy, leading to a significant increase in the overall mapping efficiency. Moreover, the assignment model shows robust performance with low coverage depth, making it particularly attractive considering the prohibitive cost of bisulfite sequencing. Additionally, results show that longer reads help improve the performance of the assignment model. The assignment model is also robust to varying degrees of methylation and varying sequencing error rates. Finally, incorporating prior knowledge on mutation rate and context specific methylation level into the assignment model increases inference accuracy. The assignment model is implemented in the BAM-ABS package and freely available at https://github.com/zhanglabvt/BAM_ABS.

  15. An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers

    OpenAIRE

    Chen, S.; Hanzo, L.

    2000-01-01

    An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.
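
    The importance-sampling idea can be shown in miniature: estimate a small Gaussian tail probability (a stand-in for a bit error rate) by drawing from a mean-shifted density and reweighting with the likelihood ratio. Shifting the simulation density toward the error region mirrors the bias-vector design step described in the record.

```python
# Importance sampling in miniature: estimate a rare tail probability
# P(X > t) by drawing from a mean-shifted Gaussian and reweighting with the
# likelihood ratio N(0,1)/N(t,1), which equals exp(-t*x + t^2/2).
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(7)
t, n = 4.0, 100000

# Naive Monte Carlo: almost no samples land in the tail.
naive = (rng.standard_normal(n) > t).mean()

# IS: sample from N(t, 1) and weight by the density ratio.
x = rng.standard_normal(n) + t
w = np.exp(-t * x + 0.5 * t * t)
is_est = np.mean(w * (x > t))

exact = 0.5 * erfc(t / sqrt(2.0))
print(f"naive: {naive:.2e}  IS: {is_est:.2e}  exact: {exact:.2e}")
```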

  16. METHOD FOR MEASURING AIR-IMMISCIBLE LIQUID PARTITION COEFFICIENTS

    Science.gov (United States)

    The principal objective of this work was to measure nonaqueous phase liquid-air partition coefficients for various gas tracer compounds. Known amounts of trichloroethene (TCE) and tracer, as neat compounds, were introduced into glass vials and allowed to equilibrate. The TCE and ...

  17. Symplectic Partitioned Runge-Kutta Methods with Minimum Phase Lag - Case of 5 Stages

    International Nuclear Information System (INIS)

    In this work we consider explicit Symplectic Partitioned Runge-Kutta methods (SPRK) with five stages for problems with separable Hamiltonian. We construct a new method with constant coefficients, of third algebraic order and eighth phase-lag order.

  18. Updating reliability data using feedback analysis: feasibility of a Bayesian subjective method

    International Nuclear Information System (INIS)

    For years, EDF has used Probabilistic Safety Assessment (PSA) to evaluate a global indicator of the safety of its nuclear power plants and to optimize performance while ensuring a certain safety level. Therefore, the robustness and relevancy of PSA are very important. That is the reason why EDF wants to improve the relevancy of the reliability parameters used in these models. This article proposes a Bayesian approach for building PSA parameters when feedback data are not large enough to use the frequentist method. Our method is called subjective because its purpose is to give engineers pragmatic criteria for applying Bayesian methods in a controlled and consistent way. Using Bayesian methods is quite common, for example, in the United States, because the nuclear power plants there are less standardized. Bayesian methods are often used with generic data as the prior, so we have had to adapt the general methodology to the EDF context. (authors)

  19. Understanding data better with Bayesian and global statistical methods

    CERN Document Server

    Press, W H

    1996-01-01

    To understand their data better, astronomers need to use statistical tools that are more advanced than traditional "freshman lab" statistics. As an illustration, the problem of combining apparently incompatible measurements of a quantity is presented from both the traditional, and a more sophisticated Bayesian, perspective. Explicit formulas are given for both treatments. Results are shown for the value of the Hubble Constant, and a 95% confidence interval of 66 < H0 < 82 (km/s/Mpc) is obtained.
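
    The Bayesian treatment of mutually inconsistent measurements can be sketched on a grid: each datum is "good" with some prior probability (Gaussian about the true value) or "bad" (drawn from a broad distribution), and the good/bad indicators are marginalized out, giving a mixture likelihood. The data below are invented, not the H0 compilation analyzed in the record.

```python
# Grid sketch of combining mutually inconsistent measurements: each datum is
# "good" with prior probability p_good (Gaussian about the true value) or
# "bad" (flat over a broad range); marginalizing the indicators gives a
# mixture likelihood. The measurements are invented for illustration.
import numpy as np

h = np.array([55.0, 72.0, 68.0, 90.0, 74.0])   # measurements of H0 [km/s/Mpc]
s = np.array([4.0, 5.0, 6.0, 5.0, 7.0])        # quoted errors
p_good = 0.5                                    # prior prob. a datum is good

H = np.linspace(40.0, 110.0, 701)               # grid over the true value
broad = 1.0 / 70.0                              # "bad" data: flat over the grid

good = (np.exp(-0.5 * ((h - H[:, None]) / s) ** 2)
        / (np.sqrt(2 * np.pi) * s))
like = np.prod(p_good * good + (1 - p_good) * broad, axis=1)

post = like / np.trapz(like, H)
mean = np.trapz(H * post, H)
print(f"posterior mean H0: {mean:.1f} km/s/Mpc")
```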

  20. Construction of symplectic (partitioned) Runge-Kutta methods with continuous stage

    OpenAIRE

    Tang, Wensheng; Lang, Guangming; Luo, Xuqiong

    2015-01-01

    Hamiltonian systems are one of the most important classes of dynamical systems, with a geometric structure called symplecticity, and numerical algorithms that can preserve such geometric structure are of interest. In this article we study the construction of symplectic (partitioned) Runge-Kutta methods with continuous stage, which provides a new and simple way to construct symplectic (partitioned) Runge-Kutta methods in the classical sense. This line of construction of symplectic methods relies ...

  1. A probabilistic crack size quantification method using in-situ Lamb wave test and Bayesian updating

    Science.gov (United States)

    Yang, Jinsong; He, Jingjing; Guan, Xuefei; Wang, Dengjiang; Chen, Huipeng; Zhang, Weifang; Liu, Yongming

    2016-10-01

    This paper presents a new crack size quantification method based on in-situ Lamb wave testing and Bayesian updating. The proposed method uses coupon tests to develop a baseline quantification model between the crack size and damage-sensitive features. In-situ Lamb wave testing data on actual structures are then used to update the baseline model parameters via Bayesian updating to achieve more accurate crack size predictions. To demonstrate the proposed method, Lamb wave testing on simple plates with artificial cracks of different sizes is performed using surface-bonded piezoelectric wafers, and the data are used to obtain the baseline model. Two damage-sensitive features, namely the phase change and the normalized amplitude, are identified using signal processing techniques and used in the model. To validate the effectiveness of the method, the damage data from in-situ fatigue testing on a realistic lap-joint component are used to update the baseline model.
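
    As a minimal sketch of the updating step, assume the baseline model is linear in one damage-sensitive feature with known Gaussian noise (all numbers below are invented): the coupon-test prior on the slope is then refined by the in-situ data through a conjugate Gaussian update.

        import numpy as np

        # prior on the slope theta of a = theta * feature, from coupon tests
        mu0, var0 = 2.0, 0.5 ** 2
        sigma = 0.3                    # measurement noise std, assumed known

        # hypothetical in-situ (feature, crack size) pairs from the lap joint
        f = np.array([0.8, 1.1, 1.5, 1.9])
        a = np.array([1.7, 2.3, 3.1, 3.8])

        # conjugate Gaussian update for a linear model through the origin
        prec = 1.0 / var0 + (f ** 2).sum() / sigma ** 2
        mean = (mu0 / var0 + (f * a).sum() / sigma ** 2) / prec
        print("posterior slope: %.3f +/- %.3f" % (mean, prec ** -0.5))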

  2. Characterization of a Bayesian genetic clustering algorithm based on a Dirichlet process prior and comparison among Bayesian clustering methods

    Directory of Open Access Journals (Sweden)

    Morita Mitsuo

    2011-06-01

    Full Text Available Background: A Bayesian approach based on a Dirichlet process (DP) prior is useful for inferring genetic population structures because it can infer the number of populations and the assignment of individuals simultaneously. However, the properties of the DP prior method are not well understood, and therefore, the use of this method is relatively uncommon. We characterized the DP prior method to increase its practical use. Results: First, we evaluated the usefulness of the sequentially-allocated merge-split (SAMS) sampler, which is a technique for improving the mixing of Markov chain Monte Carlo algorithms. Although this sampler has been implemented in a preceding program, HWLER, its effectiveness has not been investigated. We showed that this sampler was effective for population structure analysis. Implementation of this sampler was useful with regard to the accuracy of inference and computational time. Second, we examined the effect of a hyperparameter for the prior distribution of allele frequencies and showed that the specification of this parameter was important and could be resolved by considering the parameter as a variable. Third, we compared the DP prior method with other Bayesian clustering methods and showed that the DP prior method was suitable for data sets with unbalanced sample sizes among populations. In contrast, although current popular algorithms for population structure analysis, such as those implemented in STRUCTURE, were suitable for data sets with uniform sample sizes, inferences with these algorithms for unbalanced sample sizes tended to be less accurate than those with the DP prior method. Conclusions: The clustering method based on the DP prior was found to be useful because it can infer the number of populations and simultaneously assign individuals into populations, and it is suitable for data sets with unbalanced sample sizes among populations. Here we presented a novel program, DPART, that implements the SAMS
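
    To see why a DP prior needs no preset number of populations, its sequential (Chinese restaurant process) view can be sampled in a few lines. This sketch illustrates the prior only, not the DPART program; the concentration values are arbitrary.

        import numpy as np

        def sample_crp(n, alpha, rng):
            """Draw one partition of n individuals from a Chinese restaurant
            process, the sequential view of a Dirichlet process prior."""
            counts = [1]                       # first individual opens a cluster
            for _ in range(1, n):
                probs = np.array(counts + [alpha], dtype=float)
                probs /= probs.sum()
                k = rng.choice(probs.size, p=probs)
                if k == len(counts):
                    counts.append(1)           # open a new cluster
                else:
                    counts[k] += 1
            return counts

        rng = np.random.default_rng(1)
        for alpha in (0.5, 2.0):
            sizes = [len(sample_crp(200, alpha, rng)) for _ in range(100)]
            print(alpha, np.mean(sizes))       # larger alpha -> more clusters a priori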

  3. Bayesian network modeling method based on case reasoning for emergency decision-making

    Directory of Open Access Journals (Sweden)

    XU Lei

    2013-06-01

    Full Text Available Bayesian networks offer probability expression, uncertainty management and multi-information fusion, so they can support emergency decision-making and improve its efficiency. Emergency decision-making is highly time-sensitive, which requires shortening the Bayesian network modeling time as far as possible. Traditional Bayesian network modeling methods are clearly unable to meet that requirement. Thus, a Bayesian network modeling method based on case reasoning for emergency decision-making is proposed. The method obtains candidate cases through case matching with similarity-degree and deviation-degree functions. A new Bayesian network is then built through case adjustment, by merging and pruning cases. An example is presented to illustrate and test the proposed method. The result shows that the method does not require a huge search space or sample data; the only requirement is the collection of expert knowledge and historical case models. Compared with traditional methods, the proposed method can reuse historical case models, which reduces the modeling time and improves efficiency.

  4. A survey of Bayesian predictive methods for model assessment, selection and comparison

    Directory of Open Access Journals (Sweden)

    Aki Vehtari

    2012-01-01

    Full Text Available To date, several methods exist in the statistical literature for model assessment, which purport themselves specifically as Bayesian predictive methods. The decision theoretic assumptions on which these methods are based are not always clearly stated in the original articles, however. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data.

  5. Analyzing bioassay data using Bayesian methods -- A primer

    Energy Technology Data Exchange (ETDEWEB)

    Miller, G.; Inkret, W.C.; Schillaci, M.E.

    1997-10-16

    The classical statistics approach used in health physics for the interpretation of measurements is deficient in that it does not allow for the consideration of needle-in-a-haystack effects, where events that are rare in a population are being detected. In fact, this is often the case in health physics measurements, and the false positive fraction is often very large using the prescriptions of classical statistics. Bayesian statistics provides an objective methodology to ensure acceptably small false positive fractions. The authors present the basic methodology and a heuristic discussion. Examples are given using numerically generated and real bioassay data (tritium). Various analytical models are used to fit the prior probability distribution, in order to test the sensitivity to the choice of model. Parametric studies show that the normalized Bayesian decision level k_α = L_c/σ_0, where σ_0 is the measurement uncertainty for zero true amount, is usually in the range from 3 to 5 depending on the true positive rate. Four times σ_0, rather than approximately two times σ_0 as in classical statistics, would often seem a better choice for the decision level.
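
    The quoted 3-to-5 range can be reproduced with a small numerical sketch. The prior below is my stand-in for the paper's analytical models: a point mass at zero true amount with probability 1-p and a flat positive component otherwise, with a 0.95 posterior cut-off; the flat prior's width is likewise an assumption.

        import numpy as np
        from scipy.stats import norm

        sigma = 1.0                  # measurement uncertainty for zero true amount
        upper = 10 * sigma           # width of the flat prior on positive amounts

        def posterior_positive(x, p):
            """P(true amount > 0 | measurement x) under the mixture prior."""
            like0 = norm.pdf(x, 0.0, sigma)
            like1 = (norm.cdf(x / sigma) - norm.cdf((x - upper) / sigma)) / upper
            return p * like1 / (p * like1 + (1 - p) * like0)

        xs = np.linspace(0, 8 * sigma, 4000)
        for p in (0.1, 0.01, 0.001):           # rarer positives push the level up
            k = xs[np.argmax(posterior_positive(xs, p) >= 0.95)] / sigma
            print(f"true positive rate {p}: decision level ~ {k:.1f} sigma")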

  6. PARTITION OF UNITY FINITE ELEMENT METHOD FOR SHORT WAVE PROPAGATION IN SOLIDS

    Institute of Scientific and Technical Information of China (English)

    LI Xi-kui; ZHOU Hao-yang

    2005-01-01

    A partition of unity finite element method for the numerical simulation of short wave propagation in solids is presented. The finite element spaces are constructed by multiplying the standard isoparametric finite element shape functions, which form a partition of unity, with local subspaces defined on the corresponding elements; these subspaces build a priori knowledge of the wave motion equation into the trial spaces and approximately reproduce the highly oscillatory behaviour within a single element. Numerical examples demonstrate the performance of the proposed partition of unity finite element in both computational accuracy and efficiency.

  7. Bayesian methods for the conformational classification of eight-membered rings

    DEFF Research Database (Denmark)

    Pérez, J.; Nolsøe, Kim; Kessler, M.;

    2005-01-01

    Two methods for the classification of eight-membered rings based on a Bayesian analysis are presented. The two methods share the same probabilistic model for the measurement of torsion angles, but while the first method uses the canonical forms of cyclooctane and, given an empirical sequence of e...

  8. Bayesian Belief Network Method for Predicting Asphaltene Precipitation in Light Oil Reservoirs

    Directory of Open Access Journals (Sweden)

    Jeffrey O. Oseh (M.Sc.

    2015-04-01

    Full Text Available Asphaltene precipitation is caused by a number of factors including changes in pressure, temperature, and composition. The two most prevalent causes of asphaltene precipitation in light oil reservoirs are decreasing pressure and mixing oil with injected solvent in improved oil recovery processes. This study focused on predicting the amount of asphaltene precipitation with increasing Gas-Oil Ratio in a light oil reservoir using the Bayesian Belief Network method. The Bayesian Belief Network models employed were validated and tested with unseen data to determine their accuracy and trend stability, and were also compared with the findings obtained from scaling equations. The results indicated that the Bayesian Belief Network method showed improved performance in predicting the amount of asphaltene precipitated in light oil reservoirs, thus reducing the number of experiments required.

  9. Algebraic method for exact solution of canonical partition function in nuclear multifragmentation

    CERN Document Server

    Parvan, A S

    2002-01-01

    An algebraic method is derived for the exact recursion formula for the calculation of the canonical partition function of non-interacting finite systems of particles obeying Bose-Einstein, Fermi-Dirac, Maxwell-Boltzmann statistics or parastatistics. A new exactly solvable multifragmentation model with baryon and electric charge conservation laws is developed. Recursion relations for this model are presented that allow exact calculation of the canonical partition function for any statistics.
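
    For the simplest case of ideal quantum gases, the recursion in question can be written down directly; the sketch below uses a made-up single-particle spectrum and is my illustration of the generic formula, not of the paper's multifragmentation model.

        import numpy as np

        def canonical_Z(energies, beta, N, fermi=False):
            """Exact canonical partition functions Z_0..Z_N of a non-interacting
            system via Z_n = (1/n) sum_{k=1..n} (+-1)^(k+1) z(k*beta) Z_{n-k},
            where z(beta) = sum_i exp(-beta * e_i)."""
            e = np.asarray(energies, dtype=float)
            z = [np.exp(-k * beta * e).sum() for k in range(N + 1)]
            sgn = (lambda k: (-1.0) ** (k + 1)) if fermi else (lambda k: 1.0)
            Z = [1.0]                                        # Z_0 = 1
            for n in range(1, N + 1):
                Z.append(sum(sgn(k) * z[k] * Z[n - k] for k in range(1, n + 1)) / n)
            return Z

        levels = np.arange(10, dtype=float)      # hypothetical spectrum e_i = 0..9
        print(canonical_Z(levels, beta=0.5, N=4)[-1])               # bosons
        print(canonical_Z(levels, beta=0.5, N=4, fermi=True)[-1])   # fermions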

  10. Comparison between standard unfolding and Bayesian methods in Bonner spheres neutron spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Medkour Ishak-Boushaki, G., E-mail: gmedkour@yahoo.com [Laboratoire SNIRM-Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumediene, BP 32 El-Alia BabEzzouar, Algiers (Algeria); Allab, M. [Laboratoire SNIRM-Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumediene, BP 32 El-Alia BabEzzouar, Algiers (Algeria)

    2012-10-11

    This paper compares the use of both standard unfolding and Bayesian methods to analyze data extracted from neutron spectrometric measurements with a view to deriving some integral quantities characterizing a neutron field. We consider, as an example, the determination of the total neutron fluence and dose in the vicinity of an Am-Be source from Bonner spheres measurements. It is shown that the Bayesian analysis provides a rigorous estimation of these quantities and their correlated uncertainties and overcomes difficulties encountered in the standard unfolding methods.

  11. Safety assessment of infrastructures using a new Bayesian Monte Carlo method

    NARCIS (Netherlands)

    Rajabalinejad, M.; Demirbilek, Z.

    2011-01-01

    A recently developed Bayesian Monte Carlo (BMC) method and its application to safety assessment of structures are described in this paper. We use a one-dimensional BMC method that was proposed in 2009 by Rajabalinejad in order to develop a weighted logical dependence between successive Monte Carlo s

  12. Localized operator partitioning method for electronic excitation energies in the time-dependent density functional formalism

    CERN Document Server

    Nagesh, Jayashree; Brumer, Paul; Izmaylov, Artur F

    2016-01-01

    We extend the localized operator partitioning method (LOPM) [J. Nagesh, A.F. Izmaylov, and P. Brumer, J. Chem. Phys. 142, 084114 (2015)] to the time-dependent density functional theory (TD-DFT) framework to partition molecular electronic energies of excited states in a rigorous manner. A molecular fragment is defined as a collection of atoms using Stratman-Scuseria-Frisch atomic partitioning. A numerically efficient scheme for evaluating the fragment excitation energy is derived employing a resolution of the identity to preserve standard one- and two-electron integrals in the final expressions. The utility of this partitioning approach is demonstrated by examining several excited states of two bichromophoric compounds: 9-((1-naphthyl)-methyl)-anthracene and 4-((2-naphthyl)-methyl)-benzaldehyde. The LOPM is found to provide nontrivial insights into the nature of electronic energy localization that are not accessible using simple density difference analysis.

  13. A method for partitioning cadmium bioaccumulated in small aquatic organisms

    Energy Technology Data Exchange (ETDEWEB)

    Siriwardena, S.N.; Rana, K.J.; Baird, D.J. [Univ. of Stirling (United Kingdom). Institute of Aquaculture

    1995-09-01

    A series of laboratory experiments was conducted to evaluate bioaccumulation and surface adsorption of aqueous cadmium (Cd) by sac-fry of the African tilapia Oreochromis niloticus. In the first experiment, the design consisted of two cadmium treatments: 15 µg Cd·L⁻¹ in dilution water and a Cd-ethylenediaminetetraacetic acid (Cd-EDTA) complex at 15 µg·L⁻¹, and a water-only control. There were five replicates per treatment and 40 fish per replicate. It was found that EDTA significantly reduced the bioaccumulation of cadmium by tilapia sac-fry by 34%. Based on the results, a second experiment was conducted to evaluate four procedures: a no-rinse control; rinsing in EDTA; rinsing in distilled water; and rinsing in 5% nitric acid, for removing surface-bound Cd from exposed sac-fry. In this experiment, 30 fish in each of five replicates were exposed to 15 µg Cd·L⁻¹ for 72 h, processed through the rinse procedures, and analyzed for total Cd. The EDTA rinse treatment significantly reduced (p<0.05) Cd concentrations of the exposed fish relative to those receiving no rinse. It was concluded that the EDTA rinse technique may be useful in studies evaluating the partitioning of surface-bound and accumulated cadmium in small aquatic organisms.

  14. Finding the Most Distant Quasars Using Bayesian Selection Methods

    CERN Document Server

    Mortlock, Daniel

    2014-01-01

    Quasars, the brightly glowing disks of material that can form around the super-massive black holes at the centres of large galaxies, are amongst the most luminous astronomical objects known and so can be seen at great distances. The most distant known quasars are seen as they were when the Universe was less than a billion years old (i.e., ~7% of its current age). Such distant quasars are, however, very rare, and so are difficult to distinguish from the billions of other comparably-bright sources in the night sky. In searching for the most distant quasars in a recent astronomical sky survey (the UKIRT Infrared Deep Sky Survey, UKIDSS), there were ~10^3 apparently plausible candidates for each expected quasar, far too many to reobserve with other telescopes. The solution to this problem was to apply Bayesian model comparison, making models of the quasar population and the dominant contaminating population (Galactic stars) to utilise the information content in the survey measurements. The result wa...

  15. A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data.

    Science.gov (United States)

    Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P; Engel, Lawrence S; Kwok, Richard K; Blair, Aaron; Stewart, Patricia A

    2016-01-01

    Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean (GM), GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method's performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the
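
    A sketch of the Bayesian side of such a comparison is below: the censored lognormal likelihood is evaluated on a grid with flat priors. The LOD/2 substitution is a cruder stand-in for the actual β-substitution formula, and the scenario numbers are invented.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        gm, gsd, lod, n = 1.0, 2.5, 0.8, 50         # invented exposure scenario
        y = np.exp(np.log(gm) + np.log(gsd) * rng.standard_normal(n))
        cens = y < lod                               # values below the detection limit

        am_sub = np.where(cens, lod / 2, y).mean()   # crude substitution estimate

        # grid posterior for (mu, sigma) of log-exposure: detected values enter
        # through the normal pdf, censored ones through the cdf at log(LOD)
        mus, sigmas = np.linspace(-2, 2, 200), np.linspace(0.2, 2.0, 150)
        logdet = np.log(y[~cens])
        loglik = np.empty((mus.size, sigmas.size))
        for i, mu in enumerate(mus):
            for j, s in enumerate(sigmas):
                loglik[i, j] = (norm.logpdf(logdet, mu, s).sum()
                                + cens.sum() * norm.logcdf((np.log(lod) - mu) / s))
        post = np.exp(loglik - loglik.max())
        post /= post.sum()
        mu_hat = (post.sum(axis=1) * mus).sum()      # posterior means
        s_hat = (post.sum(axis=0) * sigmas).sum()
        am_bayes = np.exp(mu_hat + s_hat ** 2 / 2)   # lognormal AM plug-in
        print(am_sub, am_bayes, np.exp(np.log(gm) + np.log(gsd) ** 2 / 2))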

  16. Bayesian and Frequentist Methods for Estimating Joint Uncertainty of Freundlich Adsorption Isotherm Fitting Parameters

    Science.gov (United States)

    In this paper, we present methods for estimating Freundlich isotherm fitting parameters (K and N) and their joint uncertainty, which have been implemented into the freeware software platforms R and WinBUGS. These estimates were determined by both Frequentist and Bayesian analyse...

  17. Landslide hazards mapping using uncertain Naïve Bayesian classification method

    Institute of Scientific and Technical Information of China (English)

    毛伊敏; 张茂省; 王根龙; 孙萍萍

    2015-01-01

    Landslide hazard mapping is a fundamental tool for disaster management activities in loess terrains. Landslide hazard assessment methods based on the Naïve Bayesian classification technique have difficulty quantifying uncertain triggering factors, so the main purpose of this work is to evaluate the predictive power of landslide spatial models based on an uncertain Naïve Bayesian classification method in the Baota district of Yan'an city, Shaanxi province, China. Firstly, thematic maps representing various factors related to landslide activity were generated. Secondly, using field data and GIS techniques, a landslide hazard map was produced. To improve the accuracy of the resulting landslide hazard map, strategies were designed that quantify the uncertain triggering factors in landslide spatial models based on the uncertain Naïve Bayesian classification method, named the NBU algorithm. The accuracies of the areas under the relative operating characteristic curves (AUC) for the NBU and Naïve Bayesian algorithms are 87.29% and 82.47%, respectively. Thus, the NBU algorithm can be used efficiently for landslide hazard analysis and might be widely used for the prediction of various spatial events based on uncertain classification techniques.

  18. OPTIMAL ERROR ESTIMATES OF THE PARTITION OF UNITY METHOD WITH LOCAL POLYNOMIAL APPROXIMATION SPACES

    Institute of Scientific and Technical Information of China (English)

    Yun-qing Huang; Wei Li; Fang Su

    2006-01-01

    In this paper, we provide a theoretical analysis of the partition of unity finite element method (PUFEM), which belongs to the family of meshfree methods. The usual error analysis only shows the order of the error estimate to be the same as that of the local approximations [12]. Using standard linear finite element basis functions as the partition of unity and polynomials as the local approximation space, in the 1-d case we derive optimal order error estimates for PUFEM interpolants. Our analysis shows that the error estimate is of one order higher than the local approximations. The interpolation error estimates yield optimal error estimates for PUFEM solutions of elliptic boundary value problems.

  19. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  20. Comparison of prediction and measurement methods for sound insulation of lightweight partitions

    Directory of Open Access Journals (Sweden)

    Praščević Momir

    2012-01-01

    Full Text Available It is important to know the sound insulation of partitions in order to be able to compare different constructions, calculate acoustic comfort in apartments or noise levels from outdoor sources such as road traffic, and find optimal engineering solutions to noise problems. The use of lightweight partitions as party walls between dwellings has become common because sound insulation requirements can be achieved with low overall surface weights. However, they need greater skill to design and construct, because the overall design is much more complex. It is also more difficult to predict and measure the sound transmission loss of lightweight partitions. There are various methods for predicting and measuring the sound insulation of partitions, and some of them are described in this paper. The paper also presents a comparison of experimental results for the sound insulation of lightweight partitions with results obtained using different theoretical models for single homogeneous panels and for double panels with and without acoustic absorption in the cavity between the panels. [Project of the Ministry of Science of the Republic of Serbia, no. TR-37020: Development of methodology and means for noise protection from urban areas, and no. III-43014: Improvement of the monitoring system and the assessment of a long-term population exposure to pollutant substances in the environment using neural networks
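
    The simplest of the single-panel models referred to is the classical field-incidence mass law; the sketch below shows the sound reduction index it predicts for two invented leaf masses. Real lightweight double partitions need the more elaborate models compared in the paper.

        import numpy as np

        def mass_law_R(m, f):
            """Field-incidence mass law for a single homogeneous panel:
            R ~ 20*log10(m*f) - 47 dB, with m in kg/m^2 and f in Hz."""
            return 20 * np.log10(m * f) - 47.0

        freqs = np.array([125, 250, 500, 1000, 2000, 4000])
        for m in (10.0, 25.0):                 # hypothetical surface masses
            print(m, np.round(mass_law_R(m, freqs), 1))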

  1. An Indoor Space Partition Method and its Fingerprint Positioning Optimization Considering Pedestrian Accessibility

    Science.gov (United States)

    Xu, Yue; Shi, Yong; Zheng, Xingyu; Long, Yi

    2016-06-01

    The fingerprint positioning method is generally the first choice in indoor navigation systems due to its high accuracy and low cost. The accuracy depends on the partition density of the indoor space: it is higher with a higher grid resolution, but high grid resolution significantly increases the work of fingerprint data collection, processing and maintenance, and can also decrease the performance, portability and robustness of the navigation system. Meanwhile, the traditional fingerprint positioning method uses an equal-interval grid to partition the indoor space; when used for pedestrian navigation, a person can sometimes be located in an area that he or she cannot access. This paper studied these two issues and proposed a new indoor space partition method considering pedestrian accessibility, which can increase the accuracy of pedestrian positioning and decrease the volume of the fingerprint data. Based on this proposed partition method, an optimized algorithm for fingerprint positioning was also designed, with a cross-linker structure used for fingerprint point indexing and matching. Experiments based on the proposed method and algorithm showed that the workload of fingerprint collection and maintenance was effectively decreased, and positioning efficiency and accuracy were effectively increased.
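
    The core of the idea can be sketched as a nearest-neighbour fingerprint match restricted by an accessibility mask; the grid, access points and wall layout below are all invented, and the method's cross-linker index is replaced here by a plain array scan.

        import numpy as np

        rng = np.random.default_rng(3)
        # hypothetical 4x4 grid of RSSI fingerprints from 3 access points
        fingerprints = rng.uniform(-90, -40, size=(4, 4, 3))
        accessible = np.ones((4, 4), dtype=bool)
        accessible[1:3, 1] = False             # cells inside a partition wall

        def locate(observed):
            """Match the observed RSSI vector to the closest accessible cell."""
            d = np.linalg.norm(fingerprints - observed, axis=2)
            d[~accessible] = np.inf            # never place the user inside a wall
            return np.unravel_index(np.argmin(d), d.shape)

        print(locate(fingerprints[2, 3] + rng.normal(0, 2, 3)))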

  2. Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

    Science.gov (United States)

    Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.

    2012-03-01

    In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and degrade the CT perfusion maps greatly if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are then estimated from a Bayesian formulation with prior smoothness constraints on the perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simple spatial prior constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.

  3. The application of 'strain range partitioning method' to torsional creep-fatigue interaction

    Science.gov (United States)

    Zamrik, S. Y.; Bilir, O. G.

    1975-01-01

    The method of strain range partitioning was applied to a series of torsional fatigue tests conducted on tubular 304 stainless steel specimens at 1200 F (649 C). Creep strain was superimposed on cycling strain, and the resulting strain range was partitioned into four components: completely reversed plastic shear strain, plastic shear strain followed by creep strain, creep strain followed by plastic strain, and completely reversed creep strain. Each strain component was related to the cyclic life of the material. The paper describes the experimental procedure used to achieve strain partitioning, and the torsional test results are compared to those obtained from axial tests. The damaging effects of the individual strain components were expressed by a linear life fraction rule. The plastic shear strain component showed the least detrimental effect when compared to creep strain reversed by plastic strain. In the latter case, a reduction of torsional fatigue life in the order of magnitude of 1.5 was observed.

  4. The Application of Strainrange Partitioning Method to Multiaxial Creep-Fatigue Interaction

    Science.gov (United States)

    Zamrik, S. Y.

    1978-01-01

    The method of strain range partitioning was applied to a series of torsional fatigue tests conducted on tubular 304 stainless steel specimens at 1200 F (649 C). Creep strain was superimposed on cycling strain, and the resulting strain range was partitioned into four components: (1) completely reversed plastic shear strain, (2) plastic shear strain followed by creep strain, (3) creep strain followed by plastic strain, and (4) completely reversed creep strain. Each strain component was related to the cyclic life of the material. The experimental procedure used to achieve strain partitioning is described, and the torsional test results are compared to those obtained from axial tests. The damaging effects of the individual strain components were expressed by a linear life fraction rule.
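
    The linear life fraction rule mentioned in both records has a one-line form, 1/N_pred = sum_ij F_ij/N_ij; the strain fractions and pure-component lives below are invented for illustration.

        # F_ij: fraction of the inelastic strain range in component ij
        # N_ij: cycles to failure if the whole range were of type ij
        fractions = {"pp": 0.5, "pc": 0.2, "cp": 0.2, "cc": 0.1}
        lives     = {"pp": 5000, "pc": 1200, "cp": 400, "cc": 800}

        n_pred = 1.0 / sum(f / lives[k] for k, f in fractions.items())
        print(f"predicted cycles to failure: {n_pred:.0f}")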

  5. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Using data from the train driver schedule of the Danish passenger railway operator DSB S-tog A/S, a solution method to the Train Driver Recovery Problem (TDRP) is developed. The TDRP is formulated as a set partitioning problem. The LP relaxation of the set partitioning formulation of the TDRP possesses strong integer properties. The proposed model is therefore solved via the LP relaxation and Branch & Price. Starting with a small set of drivers and train tasks assigned to the drivers within a certain time period, the LP relaxation of the set partitioning model is solved with column generation. If a feasible solution is not found, further drivers are gradually added to the problem or the optimization time period is increased. Fractions are resolved with a constraint branching strategy using...
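
    The production method relies on column generation and Branch & Price; just to make the set partitioning structure concrete (every train task covered by exactly one driver duty), here is a brute-force toy instance with invented duties and recovery costs.

        from itertools import combinations

        tasks = {1, 2, 3, 4, 5}
        duties = [({1, 2}, 3), ({3, 4}, 2), ({5}, 1), ({1, 3, 5}, 4),
                  ({2, 4}, 2), ({4, 5}, 3), ({2}, 1)]   # (task set, cost)

        best = None
        for r in range(1, len(duties) + 1):
            for combo in combinations(duties, r):
                cover = [t for d, _ in combo for t in d]
                if sorted(cover) == sorted(tasks):       # exact cover: a partition
                    cost = sum(c for _, c in combo)
                    if best is None or cost < best[0]:
                        best = (cost, [d for d, _ in combo])
        print(best)   # cheapest feasible reassignment of all tasks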

  6. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    International Nuclear Information System (INIS)

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  7. A Fully Bayesian Method for Jointly Fitting Instrumental Calibration and X-Ray Spectral Models

    Science.gov (United States)

    Xu, Jin; van Dyk, David A.; Kashyap, Vinay L.; Siemiginowska, Aneta; Connors, Alanna; Drake, Jeremy; Meng, Xiao-Li; Ratzlaff, Pete; Yu, Yaming

    2014-10-01

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is "pragmatic" in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  8. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jin; Yu, Yaming [Department of Statistics, University of California, Irvine, Irvine, CA 92697-1250 (United States); Van Dyk, David A. [Statistics Section, Imperial College London, Huxley Building, South Kensington Campus, London SW7 2AZ (United Kingdom); Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Connors, Alanna; Meng, Xiao-Li, E-mail: jinx@uci.edu, E-mail: yamingy@ics.uci.edu, E-mail: dvandyk@imperial.ac.uk, E-mail: vkashyap@cfa.harvard.edu, E-mail: asiemiginowska@cfa.harvard.edu, E-mail: jdrake@cfa.harvard.edu, E-mail: pratzlaff@cfa.harvard.edu, E-mail: meng@stat.harvard.edu [Department of Statistics, Harvard University, 1 Oxford Street, Cambridge, MA 02138 (United States)

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
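
    The principal component representation used in the three records above can be sketched quickly: given an ensemble of possible effective-area curves (simulated below; in practice supplied by the calibration team), an SVD of the centred ensemble yields a low-dimensional basis from which plausible curves are re-drawn during the Bayesian fit.

        import numpy as np

        rng = np.random.default_rng(4)
        energy = np.linspace(0.3, 8.0, 200)
        base = 100 * np.exp(-0.5 * ((energy - 2.0) / 1.5) ** 2)   # toy A(E)
        ensemble = base * (1 + 0.05 * rng.standard_normal((500, 1))
                             + 0.03 * rng.standard_normal((500, 1)) * energy / 8)

        mean = ensemble.mean(axis=0)
        U, s, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)

        m = 2                                   # keep the leading components
        coeff = rng.standard_normal(m)          # one draw of the PCA weights
        curve = mean + (s[:m] / np.sqrt(len(ensemble) - 1)) * coeff @ Vt[:m]
        print(curve.shape)                      # one plausible effective-area curve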

  9. Surveillance system and method having an operating mode partitioned fault classification model

    Science.gov (United States)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  10. A Family of Trigonometrically-fitted Partitioned Runge-Kutta Symplectic Methods

    International Nuclear Information System (INIS)

    We present a family of trigonometrically fitted partitioned Runge-Kutta symplectic methods of fourth order with six stages. The solution of the one-dimensional time-independent Schroedinger equation is considered using trigonometrically fitted symplectic integrators. The Schroedinger equation is first transformed into a Hamiltonian canonical equation. Numerical results are obtained for the one-dimensional harmonic oscillator and the exponential potential.
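
    Partitioned symplectic integrators advance positions and momenta by different, staggered rules. The first-order sketch below (symplectic Euler on the harmonic oscillator, not the paper's fourth-order six-stage scheme) shows the hallmark behaviour: bounded energy error, versus the steady drift of a non-symplectic method.

        import numpy as np

        def symplectic_euler(q, p, h, steps):
            E = []
            for _ in range(steps):
                p -= h * q          # kick with the old position
                q += h * p          # drift with the new momentum
                E.append(0.5 * (p * p + q * q))
            return np.array(E)

        def explicit_euler(q, p, h, steps):
            E = []
            for _ in range(steps):
                q, p = q + h * p, p - h * q     # simultaneous update
                E.append(0.5 * (p * p + q * q))
            return np.array(E)

        E_s = symplectic_euler(1.0, 0.0, 0.05, 4000)
        E_e = explicit_euler(1.0, 0.0, 0.05, 4000)
        print(E_s.max() - E_s.min(), E_e.max() - E_e.min())   # bounded vs drifting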

  11. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    Science.gov (United States)

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.

  12. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2016-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for the Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...

  13. A Bayesian hybrid method for context-sensitive spelling correction

    CERN Document Server

    Golding, A R

    1996-01-01

    Two classes of methods have been shown to be useful for resolving lexical ambiguity. The first relies on the presence of particular words within some distance of the ambiguous target word; the second uses the pattern of words and part-of-speech tags around the target word. These methods have complementary coverage: the former captures the lexical ``atmosphere'' (discourse topic, tense, etc.), while the latter captures local syntax. Yarowsky has exploited this complementarity by combining the two methods using decision lists. The idea is to pool the evidence provided by the component methods, and to then solve a target problem by applying the single strongest piece of evidence, whatever type it happens to be. This paper takes Yarowsky's work as a starting point, applying decision lists to the problem of context-sensitive spelling correction. Decision lists are found, by and large, to outperform either component method. However, it is found that further improvements can be obtained by taking into account not ju...

  14. The evaluation of the equilibrium partitioning method using sensitivity distributions of species in water and soil or sediment

    NARCIS (Netherlands)

    Beelen P van; Verbruggen EMJ; Peijnenburg WJGM; ECO

    2002-01-01

    The equilibrium partitioning method (EqP-method) can be used to derive environmental quality standards (like the Maximum Permissible Concentration or the intervention value) for soil or sediment, from aquatic toxicity data and a soil/water or sediment/water partitioning coefficient. The validity of

  15. Discovering Emergent Behaviors from Tracks Using Hierarchical Non-parametric Bayesian Methods

    OpenAIRE

    Chiron, Guillaume; Gomez-Krämer, Petra; Ménard, Michel

    2014-01-01

    In video-surveillance, non-parametric Bayesian approaches based on a Hierarchical Dirichlet Process (HDP) have recently shown their efficiency for modeling crowded scene activities. This paper follows this track by proposing a method for detecting and clustering emergent behaviors across different captures made of numerous unconstrained trajectories. Most HDP applications for crowded scenes (e.g. traffic, pedestrians) are based on flow motion features. In contrast, we ...

  16. Bayesian Belief Network Method for Predicting Asphaltene Precipitation in Light Oil Reservoirs

    OpenAIRE

    Jeffrey O. Oseh (M.Sc.); Olugbenga A. Falode (Ph.D)

    2015-01-01

    Asphaltene precipitation is caused by a number of factors including changes in pressure, temperature, and composition. The two most prevalent causes of asphaltene precipitation in light oil reservoirs are decreasing pressure and mixing oil with injected solvent in improved oil recovery processes. This study focused on predicting the amount of asphaltene precipitation with increasing Gas-Oil Ratio in a light oil reservoir using Bayesian Belief Network Method. These Artificial Intelligence-Baye...

  17. A New Method for E-Government Procurement Using Collaborative Filtering and Bayesian Approach

    OpenAIRE

    Shuai Zhang; Chengyu Xi; Yan Wang; Wenyu Zhang; Yanhong Chen

    2013-01-01

    Nowadays, as the Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and Bayesian approach are used to evaluate and select the candidate services to get the top-M re...

  18. Bayesian methods for multivariate modeling of pleiotropic SNP associations and genetic risk prediction

    Directory of Open Access Journals (Sweden)

    Stephen W Hartley

    2012-09-01

    Full Text Available Genome-wide association studies (GWAS) have identified numerous associations between genetic loci and individual phenotypes; however, relatively few GWAS have attempted to detect pleiotropic associations, in which loci are simultaneously associated with multiple distinct phenotypes. We show that pleiotropic associations can be directly modeled via the construction of simple Bayesian networks, and that these models can be applied to produce single or ensembles of Bayesian classifiers that leverage pleiotropy to improve genetic risk prediction. The proposed method includes two phases: (1) Bayesian model comparison, to identify SNPs associated with one or more traits; and (2) cross-validation feature selection, in which a final set of SNPs is selected to optimize prediction. To demonstrate the capabilities and limitations of the method, a total of 1600 case-control GWAS datasets with 2 dichotomous phenotypes were simulated under 16 scenarios, varying the association strengths of causal SNPs, the size of the discovery sets, the balance between cases and controls, and the number of pleiotropic causal SNPs. Across the 16 scenarios, prediction accuracy varied from 90% to 50%. In the 14 scenarios that included pleiotropically-associated SNPs, the pleiotropic model search and prediction methods consistently outperformed the naive model search and prediction. In the 2 scenarios in which there were no true pleiotropic SNPs, the differences between the pleiotropic and naive model searches were minimal.

  19. Liver segmentation in MRI: a fully automatic method based on stochastic partitions

    OpenAIRE

    López-Mir, Fernando; Naranjo Ornedo, Valeriana; Angulo, J.; Alcañiz Raya, Mariano Luis; Luna, L.

    2014-01-01

    There are few fully automated methods for liver segmentation in magnetic resonance images (MRI) despite the benefits of this type of acquisition in comparison to other radiology techniques such as computed tomography (CT). Motivated by medical requirements, liver segmentation in MRI has been carried out. For this purpose, we present a new method for liver segmentation based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced using a marke...

  20. Bayesian approach to color-difference models based on threshold and constant-stimuli methods.

    Science.gov (United States)

    Brusola, Fernando; Tortajada, Ignacio; Lengua, Ismael; Jordá, Begoña; Peris, Guillermo

    2015-06-15

    An alternative approach based on statistical Bayesian inference is presented to deal with the development of color-difference models and the precision of parameter estimation. The approach was applied to simulated data and real data, the latter published by selected authors involved with the development of color-difference formulae using traditional methods. Our results show very good agreement between the Bayesian and classical approaches. Among other benefits, our proposed methodology allows one to determine the marginal posterior distribution of each random individual parameter of the color-difference model. In this manner, it is possible to analyze the effect of individual parameters on the statistical significance calculation of a color-difference equation. PMID:26193510

  1. A generalized bayesian inference method for constraining the interiors of super Earths and sub-Neptunes

    CERN Document Server

    Dorn, C; Khan, A; Heng, K; Alibert, Y; Helled, R; Rivoldini, A; Benz, W

    2016-01-01

    We aim to present a generalized Bayesian inference method for constraining interiors of super Earths and sub-Neptunes. Our methodology succeeds in quantifying the degeneracy and correlation of structural parameters for high dimensional parameter spaces. Specifically, we identify what constraints can be placed on composition and thickness of core, mantle, ice, ocean, and atmospheric layers given observations of mass, radius, and bulk refractory abundance constraints (Fe, Mg, Si) from observations of the host star's photospheric composition. We employed a full probabilistic Bayesian inference analysis that formally accounts for observational and model uncertainties. Using a Markov chain Monte Carlo technique, we computed joint and marginal posterior probability distributions for all structural parameters of interest. We included state-of-the-art structural models based on self-consistent thermodynamics of core, mantle, high-pressure ice, and liquid water. Furthermore, we tested and compared two different atmosp...

  2. A New Method for E-Government Procurement Using Collaborative Filtering and Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2013-01-01

    Full Text Available Nowadays, as the Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain and static values but can easily be represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.
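
    The abstract does not spell out the similarity algorithm, so the sketch below uses one common choice as a stand-in: a Chen-style similarity based on the mean absolute difference of the four points defining each trapezoidal fuzzy number. The attribute values are invented.

        def trapezoid_similarity(a, b):
            """Similarity of two trapezoidal fuzzy numbers, each given as a
            4-tuple (a1, a2, a3, a4) on a [0, 1]-normalized scale."""
            return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / 4.0

        required = (0.5, 0.6, 0.7, 0.8)    # fuzzy rating the buyer asks for
        offered  = (0.4, 0.55, 0.7, 0.9)   # fuzzy rating a service offers
        print(trapezoid_similarity(required, offered))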

  3. A new method for E-government procurement using collaborative filtering and Bayesian approach.

    Science.gov (United States)

    Zhang, Shuai; Xi, Chengyu; Wang, Yan; Zhang, Wenyu; Chen, Yanhong

    2013-01-01

    Nowadays, as the Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain and static values but can easily be represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach. PMID:24385869

  4. Comparison of Automated Continuous Flow Method With Shake- Flask Method in Determining Partition Coefficients of Bidentate Hydroxypyridinone Ligands

    Directory of Open Access Journals (Sweden)

    Lotfollah Saghaie

    2003-08-01

    Full Text Available The partition coefficients (Kpart) in the octanol/water system of a range of bidentate ligands containing the 3-hydroxypyridin-4-one moiety were determined using the shake-flask and automated continuous flow (filter probe) methods. The shake-flask method was used for extremely hydrophilic or hydrophobic compounds with Kpart values greater than 100 or less than 0.01. For the other ligands, which possess moderate lipophilicity (Kpart values between 0.01 and 100), the filter probe method was used. The partition coefficients of four ligands with moderate lipophilicity were also determined by the shake-flask method in order to check the comparability of the two methods. While the shake-flask method was able to handle extremely hydrophilic or hydrophobic compounds efficiently, the filter probe method was unable to measure such Kpart values. Although determination of the Kpart values of all compounds is possible with the classical shake-flask method, the procedure is time consuming. In contrast, the filter probe method offers many advantages over the traditional shake-flask method in terms of speed, efficiency of separation and degree of automation. The shake-flask method is the method of choice for determination of partition coefficients of extremely hydrophilic and hydrophobic ligands.

  5. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and the 0.025 and 0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 minutes on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)

  6. Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning

    CERN Document Server

    Dimitrakakis, Christos

    2009-01-01

    There has been a lot of recent work on Bayesian methods for reinforcement learning exhibiting near-optimal online performance. The main obstacle facing such methods is that in most problems of interest, the optimal solution involves planning in an infinitely large tree. However, it is possible to obtain stochastic lower and upper bounds on the value of each tree node. This enables us to use stochastic branch and bound algorithms to search the tree efficiently. This paper proposes two such algorithms and examines their complexity in this setting.

  7. Bayesian inference for data assimilation using Least-Squares Finite Element methods

    International Nuclear Information System (INIS)

    It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow. The approach was shown to be effective without regularization terms, and can handle substantial noise in the experimental data without filtering. Of great practical importance is that - unlike other data assimilation techniques - it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse problem framework, so that for example the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code does not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. The Bayesian interpretation also better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.
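
    The MAP interpretation is easy to make concrete in a toy linear setting: stacking the (discretized) equation residual and the data misfit into one least-squares solve is exactly the Gaussian MAP estimate. The matrices below are invented placeholders for the PDE operator and the observation operator, with unit noise weights.

        import numpy as np

        A = np.diag([2.0, 2.0, 2.0]) + np.diag([-1.0, -1.0], 1)  # toy "PDE" operator
        f = np.array([1.0, 0.0, 1.0])                            # its right-hand side
        H = np.array([[1.0, 0.0, 0.0],                           # observe u_1 and u_3
                      [0.0, 0.0, 1.0]])
        d = np.array([0.9, 0.85])                                # noisy measurements

        # one least-squares functional: ||A u - f||^2 + ||H u - d||^2
        M = np.vstack([A, H])
        rhs = np.concatenate([f, d])
        u, *_ = np.linalg.lstsq(M, rhs, rcond=None)
        print(u)                       # the MAP estimate under Gaussian noise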

  8. The Application of Strain Range Partitioning Method to Torsional Creep-Fatigue Interaction

    Science.gov (United States)

    Zamrik, S. Y.

    1975-01-01

    The method of strain range partitioning was applied to a series of torsional fatigue tests conducted on tubular 304 stainless steel specimens at 1200 F. Creep strain was superimposed on cycling strain, and the resulting strain range was partitioned into four components; completely reversed plastic shear strain, plastic shear strain followed by creep strain, creep strain followed by plastic strain and completely reversed creep strain. Each strain component was related to the cyclic life of the material. The damaging effects of the individual strain components were expressed by a linear life fraction rule. The plastic shear strain component showed the least detrimental factor when compared to creep strain reversed by plastic strain. In the latter case, a reduction of torsional fatigue life in the order of magnitude of 1.5 was observed.

  9. Comparison of Two Partitioning Methods in a Fuzzy Time Series Model for Composite Index Forecasting

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2011-04-01

    Full Text Available The study of fuzzy time series has attracted increasing attention due to its salient ability to handle vague and incomplete data. A variety of forecasting models have been devoted to improving forecasting accuracy. Recently, a fuzzy time series model based on the Fibonacci sequence was proposed, incorporating the concept of the Fibonacci sequence, the framework of the basic fuzzy time series model, and the weighted method. However, the issue of interval lengths has not been investigated for this highly acclaimed model, despite it being well established that the length of intervals affects forecasting results. The purpose of this paper is therefore to propose two methods of defining interval lengths for the Fibonacci-sequence-based fuzzy time series model and to compare their performance. Frequency density-based partitioning and randomly chosen interval lengths were tested in the model using stock index data, with a two-year weekly series of the Kuala Lumpur Composite Index employed as the experimental data set. The results show that frequency density-based partitioning outperforms randomly chosen interval lengths. This result reaffirms the importance of defining appropriate interval lengths for fuzzy time series forecasting performance.
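
    A minimal Python sketch contrasting the two partitioning schemes compared above. The quantile-based construction is an assumption standing in for the paper's frequency-density procedure (dense regions of the universe of discourse receive shorter intervals), and the toy price series is made up.

      import numpy as np

      def frequency_density_intervals(series, n_intervals=7):
          """Place cut points at empirical quantiles, so dense regions
          of the universe of discourse receive shorter intervals."""
          qs = np.linspace(0.0, 1.0, n_intervals + 1)
          return np.quantile(series, qs)

      def random_length_intervals(series, n_intervals=7, seed=0):
          """Split the universe of discourse at randomly chosen cut points."""
          rng = np.random.default_rng(seed)
          lo, hi = series.min(), series.max()
          cuts = np.sort(rng.uniform(lo, hi, n_intervals - 1))
          return np.concatenate([[lo], cuts, [hi]])

      def fuzzify(series, edges):
          """Map each observation to the index of its interval (fuzzy set)."""
          return np.clip(np.searchsorted(edges, series, side="right") - 1,
                         0, len(edges) - 2)

      prices = np.cumsum(np.random.default_rng(1).normal(0, 5, 104)) + 900.0
      edges = frequency_density_intervals(prices)
      print(fuzzify(prices, edges)[:10])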

  10. Finding roots of arbitrary high order polynomials based on neural network recursive partitioning method

    Institute of Scientific and Technical Information of China (English)

    HUANG Deshuang; CHI Zheru

    2004-01-01

    This paper proposes a novel recursive partitioning method based on constrained-learning neural networks to find an arbitrary number (less than the order of the polynomial) of real or complex roots of arbitrary polynomials. It also gives a BP-network constrained learning algorithm (CLA) used in the root-finders, based on the constrained relations between the roots and the coefficients of polynomials, together with an adaptive selection method for the parameter δP in the CLA. The experimental results demonstrate that this method obtains the roots of arbitrarily high-order polynomials more rapidly, effectively, and precisely than traditional root-finding approaches.

  11. Bayesian and maximum entropy methods for fusion diagnostic measurements with compact neutron spectrometers

    Science.gov (United States)

    Reginatto, Marcel; Zimbal, Andreas

    2008-02-01

    In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements.

  12. Bayesian methods for uncertainty factor application for derivation of reference values.

    Science.gov (United States)

    Simon, Ted W; Zhu, Yiliang; Dourson, Michael L; Beck, Nancy B

    2016-10-01

    In 2014, the National Research Council (NRC) published Review of EPA's Integrated Risk Information System (IRIS) Process, which considers methods EPA uses for developing toxicity criteria for non-carcinogens. These criteria are the Reference Dose (RfD) for oral exposure and the Reference Concentration (RfC) for inhalation exposure. The NRC Review suggested using Bayesian methods for the application of uncertainty factors (UFs) to adjust the point of departure dose or concentration to a level considered to be without adverse effects for the human population. The NRC foresaw that Bayesian methods would be potentially useful for combining toxicity data from disparate sources: high-throughput assays, animal testing, and observational epidemiology. UFs represent five distinct areas for which both adjustment and consideration of uncertainty may be needed. NRC suggested that UFs could be represented as Bayesian prior distributions, illustrated the use of a log-normal distribution to represent the composite UF, and combined this distribution with a log-normal distribution representing uncertainty in the point of departure (POD) to reflect the overall uncertainty. Here, we explore these suggestions and present a refinement of the methodology suggested by NRC that treats each individual UF as a distribution. In an examination of 24 evaluations from EPA's IRIS program, when individual UFs were represented using this approach, the geometric mean fold change in the value of the RfD or RfC ranged from 3 to over 30, depending on the number of individual UFs used and the sophistication of the assessment. We present example calculations and recommendations for implementing the refined NRC methodology.
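
    A minimal Python sketch of the refined approach described above: each UF is represented by its own log-normal distribution and combined with a log-normal POD by Monte Carlo to yield a distribution for the reference value. All numerical values are illustrative assumptions, not values from the paper or from IRIS assessments.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # POD (mg/kg-day): log-normal with median 10 and geometric SD ~1.5.
      pod = rng.lognormal(mean=np.log(10.0), sigma=np.log(1.5), size=n)

      # Individual UFs as log-normal priors, each with an assumed median of 3.
      uf_names = ["interspecies", "intraspecies", "subchronic_to_chronic"]
      ufs = [rng.lognormal(mean=np.log(3.0), sigma=np.log(2.0), size=n)
             for _ in uf_names]

      # Reference value distribution: POD divided by the product of the UFs.
      rfd = pod / np.prod(ufs, axis=0)

      # A lower percentile (e.g. the 5th) could then serve as the point value.
      print(f"median RfD = {np.median(rfd):.4f}, "
            f"5th pct = {np.percentile(rfd, 5):.4f}")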

  13. Multi-Sensor Fusion Method using Dynamic Bayesian Network for Precise Vehicle Localization and Road Matching

    CERN Document Server

    Smaili, Cherif; Charpillet, François

    2007-01-01

    This paper presents a multi-sensor fusion strategy for a novel road-matching method designed to support real-time navigational features within advanced driving-assistance systems. Managing multiple hypotheses is a useful strategy for the road-matching problem. The multi-sensor fusion and multi-modal estimation are realized using a Dynamic Bayesian Network. Experimental results, using data from Antilock Braking System (ABS) sensors, a differential Global Positioning System (GPS) receiver, and an accurate digital roadmap, illustrate the performance of this approach, especially in ambiguous situations.

  14. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance Voxel Machine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed...... to utilize a small number of spatially clustered sets of voxels that are particularly suited for clinical interpretation. RVoxM automatically tunes all its free parameters during the training phase, and offers the additional advantage of producing probabilistic prediction outcomes. Experiments on age...

  15. Study On Method For Simulation Of Partitioning Tracers In Double Porosity Model Of Fractured Basement Formations

    International Nuclear Information System (INIS)

    The single well tracer test (SWTT) has been widely used and accepted as a standard method for residual oil saturation (SOR) measurement in the field. The test involves injecting partitioning tracers into the reservoir, producing them back, and matching their profiles using a suitable simulation program. Most simulation programs were first developed for sandstone reservoirs using a single-porosity model and cannot be applied to highly heterogeneous reservoirs such as fractured basement and carbonate reservoirs. A double-porosity simulation code is therefore needed to simulate tracer flow in our fractured basement reservoirs. In this project, a finite-difference simulation code was developed, following the Tang mathematical model, to simulate partitioning tracers in a double-porosity medium. The code was matched against several field tracer data sets and compared with results from the University of Texas chemical simulator, showing acceptable agreement between our program and the well-known UTChem simulator. In addition, several experiments were conducted to measure residual oil saturation in a 1D column and a 2D sandpack model. The results show that partitioning tracers can measure residual oil saturation in glass-bead models with relatively high accuracy when the tracer flow velocity is sufficiently low. (author)
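
    The core relation behind partitioning-tracer interpretation is the chromatographic retardation of the partitioning tracer relative to a conservative tracer (a standard SWTT relation; the notation is this sketch's assumption, with t_p and t_c the mean arrival times of the partitioning and conservative tracers and K_N the tracer's partition coefficient):

      \beta \;=\; \frac{t_p - t_c}{t_c} \;=\; \frac{K_N\, S_{or}}{1 - S_{or}}
      \quad\Longrightarrow\quad
      S_{or} \;=\; \frac{\beta}{\beta + K_N}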

  16. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    Science.gov (United States)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse-representation-based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method achieves a substantial performance improvement over existing algorithms, including the conventional SBL method.
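
    A minimal Python sketch of a conventional SBL/EM iteration for sparse recovery under the model y = Ax + noise, without the paper's two-dimensional pattern coupling. The standard Tipping-style hyperparameter update is used, and the toy problem is made up.

      import numpy as np

      def sbl(A, y, n_iter=100, noise_var=1e-2):
          m, n = A.shape
          alpha = np.ones(n)   # hyperparameters (inverse prior variances)
          for _ in range(n_iter):
              # E-step: posterior over x is Gaussian with these moments.
              Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(alpha))
              mu = Sigma @ A.T @ y / noise_var
              # M-step: re-estimate each alpha_i from the posterior moments.
              alpha = 1.0 / (mu**2 + np.diag(Sigma))
          return mu

      rng = np.random.default_rng(0)
      A = rng.normal(size=(60, 100))
      x_true = np.zeros(100)
      x_true[[5, 6, 7, 40]] = [2.0, 1.5, 1.0, -2.0]   # clustered support
      y = A @ x_true + 0.1 * rng.normal(size=60)
      x_hat = sbl(A, y)
      print(np.argsort(-np.abs(x_hat))[:6])   # largest recovered coefficients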

  17. A Study of New Method for Weapon System Effectiveness Evaluation Based on Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    YAN Dai-wei; GU Liang-xian; PAN Lei

    2008-01-01

    As weapon system effectiveness is affected by many factors, its evaluation is essentially a complex multi-criterion decision-making problem. The evaluation model of effectiveness is established on the basis of a metrics architecture of effectiveness, and the Bayesian network used to evaluate effectiveness is built from this metrics architecture and the evaluation models. To obtain the weights of the metrics from the Bayesian network, subjective initial values of the weights are given, a gradient ascent algorithm is applied, and reasonable values of the weights are achieved. The effectiveness of every candidate weapon system is then computed, and the system with the highest effectiveness is the optimal one. The research results show that this method resolves the problem of the AHP method, whose evaluation results can be inconsistent with practical results, and overcomes the shortcomings of neural networks in multilayer, multi-criterion decision problems. The method offers a new approach for evaluating effectiveness.

  18. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    Energy Technology Data Exchange (ETDEWEB)

    Nikhil V. Bhagwat

    2005-12-17

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. The results make it evident that the Nested Partitions method provided near-optimal solutions (optimal in some cases), and its execution time was quite short compared with the Branch and Bound algorithm. For larger data sets, however, the algorithm took significantly longer to execute. One reason could be the way in which the random samples are generated: in the present study, a random sample is a complete solution in itself, requiring assignment of tasks to stations, and the time taken for this assignment is directly proportional to the number of tasks. Thus, as the number of tasks increases, so does the time taken to generate random samples for the different regions. The performance index for the Nested Partitions method in the present study was the number of stations in the randomly generated solutions (samples); the total idle time of the samples could be used as another performance index. The ULINO method is known to have used a combination of bounds to arrive at good solutions, and this approach of combining different performance indices could be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic task times; in industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could also matter for industries in which the cost associated with creating a new station is not uniform. For such industries, the results obtained using the present approach will not be of much value, and labor costs, task incompletion costs, or a combination of those could be effectively used as alternative objective functions.
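
    A minimal Python sketch of the generic Nested Partitions loop: partition the most promising region, sample each subregion, and either move into the best subregion or backtrack. The region interface and the sampler are illustrative assumptions (in particular, the surrounding region is crudely approximated here by the whole feasible set), not the implementation used in this study.

      def nested_partitions(root_region, partition, sample, cost,
                            n_samples=30, n_iter=50):
          """root_region: whole feasible set; partition(R) -> list of subregions;
          sample(R) -> a random feasible solution in R; cost(x) -> objective
          value (here: number of stations in the line balance)."""
          current, best_x = root_region, None
          for _ in range(n_iter):
              subregions = partition(current)
              surrounding = root_region   # crude stand-in for the complement
              candidates = subregions + [surrounding]
              # Estimate each region's promise by its best sampled solution.
              scores = []
              for region in candidates:
                  xs = [sample(region) for _ in range(n_samples)]
                  x = min(xs, key=cost)
                  scores.append((cost(x), x, region))
              c, x, promising = min(scores, key=lambda t: t[0])
              if best_x is None or c < cost(best_x):
                  best_x = x
              # Move into the most promising subregion, or backtrack to the
              # root if the surrounding region wins.
              current = root_region if promising is surrounding else promising
          return best_x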

  19. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    Science.gov (United States)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI); here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity

  20. Overview of methods of reverse engineering of gene regulatory networks: Boolean and Bayesian networks

    Directory of Open Access Journals (Sweden)

    Frolova A. O.

    2012-06-01

    Full Text Available Reverse engineering of gene regulatory networks is an intensively studied topic in systems biology, as it reconstructs regulatory interactions between all genes in the genome in the most complete form. The extreme computational complexity of this problem and the lack of thorough reviews of reconstruction methods for gene regulatory networks are significant obstacles to further development of this area. In this article, the two most common methods for modeling gene regulatory networks are surveyed: Boolean and Bayesian networks. The mathematical description of each method is given, as well as several algorithmic approaches to modeling gene networks with these methods; the complexity of the algorithms and the problems that arise during their implementation are also noted.
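
    A minimal Python sketch of the Boolean-network formalism surveyed above: genes are binary variables updated synchronously by logical rules, and trajectories settle on attractors. The three-gene network is a made-up illustration, not a model from the article.

      def step(state):
          """state: dict gene -> 0/1; returns the next synchronous state."""
          return {
              "A": state["C"],                          # A is activated by C
              "B": int(state["A"] and not state["C"]),  # C represses B
              "C": int(state["A"] or state["B"]),
          }

      state = {"A": 1, "B": 0, "C": 0}
      for t in range(6):
          print(t, state)
          state = step(state)   # iterate until the trajectory hits an attractor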

  1. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
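
    A hedged sketch of the risk-minimizing likelihood-ratio test described above, in standard Bayes decision-theory notation (the costs c_ij of deciding H_i when H_j is true, and the priors P(H_j), are this sketch's assumptions, not the paper's symbols):

      \Lambda(D) \;=\; \frac{p(D \mid H_1)}{p(D \mid H_0)}
      \;\;\gtrless\;\;
      \tau \;=\; \frac{(c_{10} - c_{00})\, P(H_0)}{(c_{01} - c_{11})\, P(H_1)}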

  2. 40 CFR 799.6756 - TSCA partition coefficient (n-octanol/water), generator column method.

    Science.gov (United States)

    2010-07-01

    ... biphenyls. Journal of Chemical and Engineering Data 29:184-190 (1984). (14) Neely, W.B. et al. Partition... partition coefficients of organic compounds at 25 °C. Journal of Chemical and Engineering Data...

  3. A Bayesian method to incorporate hundreds of functional characteristics with association evidence to improve variant prioritization.

    Directory of Open Access Journals (Sweden)

    Sarah A Gagliano

    Full Text Available The increasing quantity and quality of functional genomic information motivate the assessment and integration of these data with association data, including data originating from genome-wide association studies (GWAS). We used previously described GWAS signals ("hits") to train a regularized logistic model in order to predict SNP causality on the basis of a large multivariate functional dataset. We show how this model can be used to derive Bayes factors for integrating functional and association data into a combined Bayesian analysis. Functional characteristics were obtained from the Encyclopedia of DNA Elements (ENCODE), from published expression quantitative trait loci (eQTL), and from other sources of genome-wide characteristics. We trained the model using all GWAS signals combined, and also using phenotype-specific signals for autoimmune, brain-related, cancer, and cardiovascular disorders. The non-phenotype-specific and the autoimmune GWAS signals gave the most reliable results. We found that SNPs with higher probabilities of causality derived from functional characteristics showed an enrichment of more significant p-values, compared with all GWAS SNPs, in three large GWAS studies of complex traits. We investigated the ability of our Bayesian method to improve the identification of true causal signals in a psoriasis GWAS dataset and found that combining functional data with association data improves the ability to prioritise novel hits. We used the predictions from the penalized logistic regression model to calculate Bayes factors relating to functional characteristics and supply these online, alongside resources to integrate these data with association data.
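
    A hedged sketch of the combination described above (the notation is this sketch's assumption): the functional-data model supplies a prior probability π_j that SNP j is causal, which is multiplied by the association Bayes factor to give posterior odds.

      \underbrace{\frac{P(\text{causal}_j \mid \text{assoc}, \text{func})}
                       {P(\text{not causal}_j \mid \text{assoc}, \text{func})}}_{\text{posterior odds}}
      \;=\;
      \underbrace{BF_j}_{\text{association}} \times
      \underbrace{\frac{\pi_j}{1 - \pi_j}}_{\text{functional prior odds}}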

  4. Evolutionary Analysis of Dengue Serotype 2 Viruses Using Phylogenetic and Bayesian Methods from New Delhi, India.

    Science.gov (United States)

    Afreen, Nazia; Naqvi, Irshad H; Broor, Shobha; Ahmed, Anwar; Kazim, Syed Naqui; Dohare, Ravins; Kumar, Manoj; Parveen, Shama

    2016-03-01

    Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported a predominance of dengue virus serotype 2 during 2011-2014 in Delhi. In the present study, we report the molecular characterization and evolutionary analysis of dengue serotype 2 viruses detected in Delhi during 2011-2014. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages: lineage I, II, and III. Lineage III replaced lineage I during the dengue fever outbreak of 2013. Furthermore, a novel mutation, Thr404Ile, was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. The nucleotide substitution rate and the time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods, and changes in the effective population size of Indian DENV-2 viruses were investigated through a Bayesian skyline plot. The study will be a vital road map for investigating the epidemiology and evolutionary patterns of dengue viruses in India.

  6. Partition method for impact dynamics of flexible multibody systems based on contact constraint

    Institute of Scientific and Technical Information of China (English)

    段玥晨; 章定国; 洪嘉振

    2013-01-01

    The impact dynamics of a flexible multibody system is investigated. Using a partition method, the system is divided into two parts: the local impact region and the region away from the impact. The two parts are connected by specific boundary conditions, so that the partitioned system is equivalent to the original one. According to the rigid-flexible coupling dynamic theory of multibody systems, the system's rigid-flexible coupling dynamic equations without impact are derived. A local impulse method for establishing the initial impact conditions is proposed; it satisfies the compatibility conditions for contact constraints and the actual physics of the impact process of flexible bodies. Based on the contact constraint method, the system's impact dynamic equations are derived in differential-algebraic form, and the contact/separation criterion and the algorithm are given. An impact dynamics simulation shows that the system's dynamic behavior, including the energy, the deformations, the displacements, and the impact force, changes dramatically during the impact process, and that the impact strongly affects the global dynamics of the system during and after impact.

  7. Comparison of prediction methods for octanol-air partition coefficients of diverse organic compounds.

    Science.gov (United States)

    Fu, Zhiqiang; Chen, Jingwen; Li, Xuehua; Wang, Ya'nan; Yu, Haiying

    2016-04-01

    The octanol-air partition coefficient (KOA) is needed for assessing the multimedia transport and bioaccumulation potential of organic chemicals in the environment. As experimental determination of KOA for the many chemicals of interest is costly and laborious, development of KOA estimation methods is necessary. We investigated three methods for KOA prediction: conventional quantitative structure-activity relationship (QSAR) models based on molecular structural descriptors, group contribution models based on atom-centered fragments, and a novel model that predicts KOA via the solvation free energy from the air to the octanol phase (ΔG_O^0), with a collection of 939 experimental KOA values for 379 compounds at different temperatures (263.15-323.15 K) as validation or training sets. The developed models were evaluated following the OECD guidelines on QSAR model validation and applicability domain (AD) description. Results showed that although the ΔG_O^0 model is theoretically sound and has a broad AD, its prediction accuracy is the poorest. The QSAR models perform better than the group contribution models, and have predictability and accuracy similar to the conventional method that estimates KOA from the octanol-water partition coefficient and Henry's law constant. One QSAR model, which can predict KOA at different temperatures, was recommended for applications such as assessing the long-range transport potential of chemicals. PMID:26802270

  8. Bayesian functional integral method for inferring continuous data from discrete measurements.

    Science.gov (United States)

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-01

    Inference of the insulin secretion rate (ISR) from C-peptide measurements, as a quantification of pancreatic β-cell function, is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection, where a proposed ISR time-course is considered to be a "model". Inferring the value of an inaccessible continuous variable from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points and, a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals, as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional-integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data; attempts to go to finer discrete time-steps lead to less likely models. PMID:22325261

  9. Spectral energy distribution modelling of Southern candidate massive protostars using the Bayesian inference method

    CERN Document Server

    Hill, T; Minier, V; Burton, M G; Cunningham, M R

    2008-01-01

    Concatenating data from the millimetre regime to the infrared, we have performed spectral energy distribution modelling for 227 of the 405 millimetre continuum sources of Hill et al. (2005) which are thought to contain young massive stars in the earliest stages of their formation. Three main parameters are extracted from the fits: temperature, mass and luminosity. The method employed was Bayesian inference, which allows a statistically probable range of suitable values for each parameter to be drawn for each individual protostellar candidate. This is the first application of this method to massive star formation. The cumulative distribution plots of the SED modelled parameters in this work indicate that collectively, the sources without methanol maser and/or radio continuum associations (MM-only cores) display similar characteristics to those of high mass star formation regions. Attributing significance to the marginal distinctions between the MM-only cores and the high-mass star formation sample we draw hypo...

  10. An urban flood risk assessment method using the Bayesian Network approach

    DEFF Research Database (Denmark)

    Åström, Helena Lisa Alexandra

    the Bayesian Network (BN) approach is developed, and the method is exemplified in an urban catchment. BNs have become an increasingly popular method for describing complex systems and aiding decision-making under uncertainty. In environmental management, BNs have mainly been utilized in ecological assessments...... decision and utility nodes, which are beneficial in decision-making problems. This thesis aims at addressing four specific challenges identified in FRA and showing how these challenges may be addressed using an ID. Firstly, this thesis presents how an ID can be utilized to describe the temporal dimension...... are connected with each other by connecting the adaptation nodes in the time slices. Secondly, this thesis recognizes the need for including a spatial dimension in FRA. An urban catchment is rarely homogenous, and there are areas that have a higher risk than others. From a decision-making point of view...

  11. Development of partitioning method. Back-extraction of uranium from DIDPA solvent

    International Nuclear Information System (INIS)

    A partitioning method has been developed based on the concepts of separating the elements in high-level liquid waste generated from nuclear fuel reprocessing according to their half-lives and radiological toxicity, and of disposing of them by suitable methods. In the partitioning process developed at JAERI, solvent extraction with DIDPA (di-isodecyl phosphoric acid) was adopted for actinide separation. The present paper describes the results of a study on the back-extraction of hexavalent uranium from DIDPA. Most experiments were carried out to select a suitable reagent for back-extraction of U(VI) extracted from 0.5 M nitric acid with DIDPA. The experimental results show that the distribution ratio of U(VI) is less than 0.1 in back-extractions with 1.5 M sodium carbonate-15 vol% alcohol or 20 wt% hydrazine carbonate-10 vol% alcohol. Uranium in the sodium carbonate solution was recovered by anion exchange with strong-base resins and eluted with NH4NO3 and other reagents. The results of the present study confirm the validity of the DIDPA extraction process: U, Pu, Np, Am and Cm in HLW are extracted simultaneously with DIDPA, and they are recovered from DIDPA with various reagents: nitric acid for Am and Cm, oxalic acid for Np and Pu, and sodium carbonate or hydrazine carbonate for U. (author)
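
    The back-extraction criterion above is expressed through the standard solvent-extraction distribution ratio (a textbook definition, not the paper's notation); the reported values below 0.1 indicate efficient stripping of U(VI) from the DIDPA phase:

      D_{\mathrm{U}} \;=\; \frac{[\mathrm{U}]_{\mathrm{org}}}{[\mathrm{U}]_{\mathrm{aq}}},
      \qquad D_{\mathrm{U}} < 0.1 \;\Rightarrow\; \text{effective back-extraction}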

  12. Differences between fully Bayesian and pragmatic methods to assess predictive uncertainty and optimal monitoring designs

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Gosses, Moritz; Nowak, Wolfgang

    2015-04-01

    Data acquisition for monitoring the state in different compartments of complex, coupled environmental systems is often time consuming and expensive. Therefore, experimental monitoring strategies are ideally designed such that most can be learned about the system at minimal costs. Bayesian methods for uncertainty quantification and optimal design (OD) of monitoring strategies are well suited to handle the non-linearity exhibited by most coupled environmental systems. However, their high computational demand restricts their applicability to models with comparatively low run-times. Therefore, pragmatic approaches have been used predominantly in the past where data worth and OD analyses have been restricted to linear or linearised problems and methods. Bayesian (nonlinear) and pragmatic (linear) OD approaches are founded on different assumptions and typically follow different steps in the modelling chain of 1) model calibration, 2) uncertainty quantification, and 3) optimal design analysis. The goal of this study is to follow through these steps for a Bayesian and a pragmatic approach and to discuss the impact of different assumptions (prior uncertainty), calibration strategies, and OD analysis methods on the proposed monitoring designs and their reliability to reduce predictive uncertainty. The OD framework PreDIA (Leube et al. 2012) is used for the nonlinear assessment with a conditional model ensemble obtained with Markov-chain Monte Carlo simulation representing the initial predictive uncertainty. PreDIA can consider any kind of uncertainties and non-linear (statistical) dependencies in data, models, parameters and system drivers during the OD process. In the pragmatic OD approach, the parameter calibration was performed with a non-linear global search and the initial predictive uncertainty was estimated using the PREDUNC utility (Moore and Doherty 2005) of PEST. PREDUNC was also used for the linear OD analysis. We applied PreDIA and PREDUNC for uncertainty

  13. From "weight of evidence" to quantitative data integration using multicriteria decision analysis and Bayesian methods.

    Science.gov (United States)

    Linkov, Igor; Massey, Olivia; Keisler, Jeff; Rusyn, Ivan; Hartung, Thomas

    2015-01-01

    "Weighing" available evidence in the process of decision-making is unavoidable, yet it is one step that routinely raises suspicions: what evidence should be used, how much does it weigh, and whose thumb may be tipping the scales? This commentary aims to evaluate the current state and future roles of various types of evidence for hazard assessment as it applies to environmental health. In its recent evaluation of the US Environmental Protection Agency's Integrated Risk Information System assessment process, the National Research Council committee singled out the term "weight of evidence" (WoE) for critique, deeming the process too vague and detractive to the practice of evaluating human health risks of chemicals. Moving the methodology away from qualitative, vague and controversial methods towards generalizable, quantitative and transparent methods for appropriately managing diverse lines of evidence is paramount for both regulatory and public acceptance of the hazard assessments. The choice of terminology notwithstanding, a number of recent Bayesian WoE-based methods, the emergence of multi criteria decision analysis for WoE applications, as well as the general principles behind the foundational concepts of WoE, show promise in how to move forward and regain trust in the data integration step of the assessments. We offer our thoughts on the current state of WoE as a whole and while we acknowledge that many WoE applications have been largely qualitative and subjective in nature, we see this as an opportunity to turn WoE towards a quantitative direction that includes Bayesian and multi criteria decision analysis.

  15. Prediction of imipramine serum levels in enuretic children by a Bayesian method: comparison with two other conventional dosing methods.

    Science.gov (United States)

    Fernández de Gatta, M M; Tamayo, M; García, M J; Amador, D; Rey, F; Gutiérrez, J R; Domínguez-Gil Hurlé, A

    1989-11-01

    The aim of the present study was to characterize the kinetic behavior of imipramine (IMI) and desipramine in enuretic children and to evaluate the performance of different methods for dosage prediction based on individual and/or population data. The study was carried out in 135 enuretic children (93 boys), aged between 5 and 13 years, undergoing treatment with IMI in variable single doses (25-75 mg/day) administered at night. Sampling time was one-half the dosage interval at steady state. The number of data points available for each patient varied (1-4) and was essentially limited by clinical criteria. Pharmacokinetic calculations were performed using a simple proportional relationship (method 1) and a multiple nonlinear regression program (MULTI 2 BAYES) with two different options: the ordinary least-squares method (method 2) and the least-squares method based on the Bayesian algorithm (method 3). The results obtained point to a coefficient of variation for the level/dose ratio of the drug (58%) that is significantly lower than that of the metabolite (101.4%). The forecasting capacity of method 1 is deficient in both accuracy [mean prediction error (MPE) = -5.48 +/- 69.15] and precision (root mean squared error = 46.42 +/- 51.39); the standard deviation of the MPE (69) makes the method unacceptable from the clinical point of view. The more information available concerning the serum levels, the greater the accuracy and precision of methods 2 and 3. With the Bayesian method, less information on drug serum levels is needed to achieve clinically acceptable predictions. PMID:2595743

  16. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context of multiple genetic markers measured in multiple studies, based on the analysis of individual participant data. First, for a single genetic marker in one study, we show that the usual ratio-of-coefficients approach can be reformulated as a regression with heterogeneous error in the explanatory variable. This can be implemented using a Bayesian approach, which is next extended to include multiple genetic markers. We then propose a hierarchical model for undertaking a meta-analysis of multiple studies, in which it is not necessary that the same genetic markers are measured in each study. This provides...

  17. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event-mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing, in which a radionuclide is represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS either is identified as the target radionuclide or is not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
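
    A minimal Python sketch of the sequential likelihood-ratio test step described above, applied to a stream of photon-event features. The Wald-style thresholds are standard, while the event densities and rates are illustrative assumptions, not the patent's physics-model-based processing channels.

      import math
      import random

      def sprt(events, pdf_target, pdf_background, alpha=0.01, beta=0.01):
          """Accumulate log-likelihood ratios event by event until a Wald
          threshold is crossed; returns 'target', 'background', or 'undecided'."""
          upper = math.log((1 - beta) / alpha)   # accept 'target'
          lower = math.log(beta / (1 - alpha))   # accept 'background'
          llr = 0.0
          for e in events:
              llr += math.log(pdf_target(e) / pdf_background(e))
              if llr >= upper:
                  return "target"
              if llr <= lower:
                  return "background"
          return "undecided"

      # Toy usage: exponential interarrival times with different rates.
      rate_t, rate_b = 5.0, 1.0
      events = [random.expovariate(rate_t) for _ in range(200)]
      print(sprt(events,
                 lambda dt: rate_t * math.exp(-rate_t * dt),
                 lambda dt: rate_b * math.exp(-rate_b * dt)))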

  18. Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods

    CERN Document Server

    Ferraioli, Luigi; Plagnol, Eric

    2012-01-01

    We present a parameter estimation procedure based on a Bayesian framework, applying a Markov chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space-based gravitational-wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment to ensure effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm in an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher information matrix. The algorithm proposing jumps in the eigen-space of the Fisher information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...

  19. Bayesian analysis of general failure data from an ageing distribution: advances in numerical methods

    Energy Technology Data Exchange (ETDEWEB)

    Procaccia, H.; Villain, B. [Electricite de France (EDF), 93 - Saint-Denis (France); Clarotti, C.A. [ENEA, Casaccia (Italy)

    1996-12-31

    EDF and ENEA carried out a joint research program to develop the numerical methods and computer codes needed for Bayesian analysis of component lives in the case of ageing. Early results of this study were presented at ESREL'94. Since then, the following further steps have been taken: input data have been generalized to the case in which observed lives are censored both on the right and on the left; allowable life distributions are Weibull and gamma, whose parameters are both unknown and can be statistically dependent; allowable priors are histograms relative to different parametrizations of the life distribution of concern; and first- and second-order moments of the posterior distributions can be computed. In particular, the covariance gives important information about the degree of statistical dependence between the parameters of interest. An application of the code to the appearance of stress corrosion cracking in a tube of a PWR steam generator is presented. (authors). 10 refs.

  20. Bayesian Methods for Analyzing Structural Equation Models with Covariates, Interaction, and Quadratic Latent Variables

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng

    2007-01-01

    The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same statistical optimal properties as a…

  1. Efficient Methods for Bayesian Uncertainty Analysis and Global Optimization of Computationally Expensive Environmental Models

    Science.gov (United States)

    Shoemaker, Christine; Espinet, Antoine; Pang, Min

    2015-04-01

    Models of complex environmental systems can be computationally expensive when describing the dynamic interactions of many components over a sizeable time period. Diagnostics of these systems can include forward simulations of calibrated models under uncertainty and analysis of management alternatives. This discussion focuses on applications of new surrogate optimization and uncertainty analysis methods to environmental models, which can enhance our ability to extract information and understanding. For complex models, optimization and especially uncertainty analysis can require a large number of model simulations, which is not feasible for computationally expensive models. Surrogate response surfaces can be used in global optimization and uncertainty methods to obtain accurate answers with far fewer model evaluations, making these methods practical for computationally expensive models for which conventional methods are not feasible. In this paper we discuss the application of the SOARS surrogate method for estimating Bayesian posterior density functions of model parameters for a TOUGH2 model of geologic carbon sequestration. We also briefly discuss a new parallel surrogate global optimization algorithm, implemented on a supercomputer with up to 64 processors, applied to two groundwater remediation sites. The applications illustrate the use of these methods to predict the impact of monitoring and management on subsurface contaminants.

  2. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Full Text Available Background: We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures which incorporate the relationship among the predictors; this allows SSVS to search the model space more efficiently and avoid the less likely models. Results: In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions: We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it performs a smart search over the entire model space.

  3. A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.

    Science.gov (United States)

    Zhang, Kai; Wang, Zengfei; Zhang, Liming; Yao, Jun; Yan, Xia

    2015-01-01

    In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid) method, for history matching in reservoir models. History matching is a process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated on a Bayesian basis for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application combining the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data through the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively 'important' elements of the gradient by comparing the magnitudes of the components of the stochastic gradient; these elements are then substituted by values from the finite difference method, forming a new gradient with which we iterate. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations showing that the method is accurate and computationally efficient.

  5. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods, such as bagging, boosting, and random forests, have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and this trend poses various challenges because such methods are not suited to direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is used to divide the original feature space. Each generated feature subset is then used to train a support vector machine, and the results of the individual classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the others.
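
    A minimal Python sketch of the ensemble scheme described above: features are partitioned into groups, one support vector machine is trained per group, and predictions are combined by majority voting. The variance-based round-robin grouping and the toy data are stand-in assumptions for the paper's redundancy-based partitioning.

      import numpy as np
      from sklearn.svm import SVC

      def partition_features(X, n_groups):
          """Rank features by variance and deal them out round-robin,
          so each group receives a spread of features."""
          order = np.argsort(X.var(axis=0))
          return [order[g::n_groups] for g in range(n_groups)]

      def fit_ensemble(X, y, n_groups=5):
          groups = partition_features(X, n_groups)
          return [(g, SVC(kernel="linear").fit(X[:, g], y)) for g in groups]

      def predict_majority(ensemble, X):
          votes = np.stack([clf.predict(X[:, g]) for g, clf in ensemble])
          # Majority vote across the per-group classifiers (binary labels).
          return (votes.mean(axis=0) > 0.5).astype(int)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 1000))   # high-dimensional toy data
      y = (X[:, :10].sum(axis=1) > 0).astype(int)
      ensemble = fit_ensemble(X, y)
      print((predict_majority(ensemble, X) == y).mean())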

  6. Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach

    Science.gov (United States)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.

    2015-04-01

    Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors, and do not evaluate the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach to the FFM designed for real-time applications to volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events; the probability distributions of the data deduced from the performance of this classification are used as input. As output, it provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time, and its stability with respect to the observation time, are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
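
    The empirical power law underlying the FFM is commonly written in the Voight form below (a standard relation; here Ω is a precursory observable such as the cumulative seismic event count, and the dot denotes its rate). For the widely used exponent α = 2, the inverse rate decays linearly, and the failure (eruption) time t_f is where it extrapolates to zero:

      \frac{d\dot{\Omega}}{dt} \;=\; A\,\dot{\Omega}^{\alpha}, \qquad \alpha > 1;
      \qquad \alpha = 2 \;\Rightarrow\; \dot{\Omega}^{-1}(t) \;=\; A\,(t_f - t)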

  7. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    2010-01-01

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Based on data from the Danish passenger railway operator DSB S-tog A/S, a solution method to the train driver recovery problem (TDRP) is developed. The TDRP is formulated as a set partitioning problem. We define a disruption neighbourhood by identifying a small set of drivers and train tasks directly affected by the disruption. Based on the disruption neighbourhood, the TDRP model is formed and solved. If the TDRP solution provides a feasible recovery for the drivers within the disruption neighbourhood, we consider that the problem is solved. However, if a feasible solution is not found, the disruption neighbourhood is expanded by adding further drivers or increasing the recovery time period. Fractional solutions to the LP relaxation of the TDRP are resolved with a constraint...

  8. Surveillance system and method having parameter estimation and operating mode partitioning

    Science.gov (United States)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.

  9. Analytical Classification of Multimedia Index Structures by Using a Partitioning Method-Based Framework

    CERN Document Server

    keyvanpour, Mohammadreza

    2011-01-01

    Due to advances in hardware technology and the increase in the production of multimedia data in many applications, multimedia databases have become increasingly important during the last decades. Content-based multimedia retrieval is an important research area in the field of multimedia databases. Extensive research in this field has led to the proposition of different kinds of index structures to support fast and efficient similarity search for retrieving multimedia data from these databases. Given the variety and abundance of proposed index structures, we suggest a systematic framework based on the partitioning method used in these structures to classify multimedia index structures, and we then evaluate these structures based on important functional measures. We hope this proposed framework will lead to empirical and technical comparison of multimedia index structures and the development of more efficient structures in the future.

  10. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    Science.gov (United States)

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of nuclear arrangement in tubules with a lumen or in solid islands without a lumen from digitized Hematoxylin-Eosin stained breast histology images, in order to automate the process of breast histology image analysis to assist the pathologists. We propose a new similarity based superpixel generation method and integrate it with texton representation to form the spatio-colour-texture map of the breast histology image. A new weighted distance based similarity measure is then used for generation of the graph, and the final segmentation is obtained using the normalized cuts method. The extensive experiments carried out show that the proposed algorithm can segment nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation has been performed. It shows that the proposed method outperforms other methods. PMID:26708167
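
    The normalized-cuts step of such a pipeline can be sketched on a toy graph. The code below assumes a plain Gaussian similarity on colour-like features, in place of the paper's spatio-colour-texture map and superpixels, and bipartitions the graph with the second eigenvector of the symmetric normalized Laplacian.

```python
# A minimal sketch of graph partitioning with normalized cuts on synthetic
# "pixels"; the paper's integrated similarity measure is not reproduced.
import numpy as np

rng = np.random.default_rng(2)

# Two clusters in a toy 3-dimensional colour space.
feats = np.vstack([rng.normal(0.2, 0.05, (30, 3)),
                   rng.normal(0.8, 0.05, (30, 3))])

# Weighted graph from pairwise feature distances.
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.1)
D = np.diag(W.sum(axis=1))

# Normalized-cut relaxation: second-smallest eigenvector of the
# symmetric normalized Laplacian D^(-1/2) (D - W) D^(-1/2).
d_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
L_sym = d_inv_sqrt @ (D - W) @ d_inv_sqrt
eigvals, eigvecs = np.linalg.eigh(L_sym)
fiedler = eigvecs[:, 1]                 # eigenvector of the 2nd smallest eigenvalue

segment = (fiedler > 0).astype(int)     # bipartition of the graph
print(segment)   # first 30 and last 30 nodes separate (up to label swap)
```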

  11. Self-Organizing Genetic Algorithm Based Method for Constructing Bayesian Networks from Databases

    Institute of Scientific and Technical Information of China (English)

    郑建军; 刘玉树; 陈立潮

    2003-01-01

    The typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of others, and the learning of BN structures by general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. This method makes use of a self-organizing mechanism to develop a genetic algorithm that extends the crossover operator from one to two, provides mutual competition between them, and even adjusts the numbers of parents in recombination (crossover/recomposition) schemes. With the K2 algorithm, this method also optimizes the genetic operators and makes adequate use of domain knowledge. As a result, this method is able to find a global optimum of the topology of BNs, avoiding premature convergence to a local extremum. Experimental results support these claims, and the convergence of the SGA is discussed.

  12. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    Directory of Open Access Journals (Sweden)

    Yi-Ming Kuo

    2011-06-01

    Full Text Available Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observations, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.

  13. Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems

    Science.gov (United States)

    Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark

    2016-03-01

    The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively explore the configuration space of the posterior measure locally. However, obtaining such geometric quantities usually requires extensive computational effort, which, despite their effectiveness, limits the applicability of these geometrically based Monte Carlo methods. In this paper we explore one way to address this issue by constructing an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper demonstrate the significant improvement possible in terms of computational loading, suggesting this is a promising avenue of further development.
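
    A stripped-down version of the emulation idea: fit a Gaussian process to an "expensive" log-posterior on a design set, then run Metropolis entirely on the emulator. The paper emulates higher-order geometric quantities for manifold samplers; here only the scalar log-posterior is emulated, the target is a cheap stand-in, and the use of sklearn's GP is an assumption of convenience.

```python
# A minimal sketch of emulating a log-posterior with a Gaussian process and
# sampling the emulator with random-walk Metropolis; not the paper's scheme.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def log_post(x):                       # pretend this costs O(1 s) per call
    return -0.5 * ((x - 1.5) / 0.7) ** 2

rng = np.random.default_rng(3)

# Design set of configuration points; its coverage controls emulator quality.
X_design = np.linspace(-3.0, 5.0, 15).reshape(-1, 1)
y_design = np.array([log_post(x[0]) for x in X_design])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8)
gp.fit(X_design, y_design)

# Random-walk Metropolis on the emulated posterior (no further model calls).
x, chain = 0.0, []
lp = gp.predict(np.array([[x]]))[0]
for _ in range(5000):
    prop = x + rng.normal(0.0, 0.8)
    lp_prop = gp.predict(np.array([[prop]]))[0]
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    chain.append(x)

print(f"emulated posterior mean ~ {np.mean(chain[1000:]):.2f} (true 1.50)")
```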

  14. Partition of unity finite element method for quantum mechanical materials calculations

    CERN Document Server

    Pask, John E

    2016-01-01

    The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite differences and finite elements have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: an excessive number of degrees of freedom needed to achieve the required accuracies. We present a real-space partition of unity finite element (PUFE) method to solve the Kohn-Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution proc...

  15. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given

  16. Using Bayesian methods to predict climate impacts on groundwater availability and agricultural production in Punjab, India

    Science.gov (United States)

    Russo, T. A.; Devineni, N.; Lall, U.

    2015-12-01

    Lasting success of the Green Revolution in Punjab, India relies on continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; therefore, we use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state, and the resolution of input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.

  17. Evaluating propagation method performance over time with Bayesian updating: an application to incubator testing

    Science.gov (United States)

    Converse, Sarah J.; Chandler, J. N.; Olsen, G.H.; Shafer, C. C.

    2010-01-01

    In captive-rearing programs, small sample sizes can limit the quality of information on the performance of propagation methods. Bayesian updating can be used to increase information on method performance over time. We demonstrate an application to incubator testing at the USGS Patuxent Wildlife Research Center. A new type of incubator was purchased for use in the whooping crane (Grus americana) propagation program, which produces birds for release. We tested the new incubator for reliability, using sandhill crane (Grus canadensis) eggs as surrogates. We determined that the new incubator should result in hatching rates no more than 5% lower than the available incubators, with 95% confidence, before it would be used to incubate whooping crane eggs. In 2007, 5 healthy chicks hatched from 12 eggs in the new incubator, and 2 hatched from 5 in an available incubator. Performance was assessed in terms of normal chicks, where a veterinarian determined whether eggs produced chicks that, at hatching, had no apparent health problems that would impede future release. We used the 2007 estimates as priors in the 2008 analysis. In 2008, 7 normal chicks hatched from 15 eggs in the new incubator, and 11 hatched from 15 in an available incubator, for a median posterior difference of 19%, with 95% credible interval (-8%, 44%). The increased sample size has increased our understanding of incubator performance. While additional data will be collected, at this time the new incubator does not appear adequate for use with whooping crane eggs.
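
    The updating scheme lends itself to a conjugate Beta-Binomial sketch. Using the counts quoted in the abstract and uniform priors (an assumption; the study's exact model is not stated here), the 2007 posteriors serve as 2008 priors, and the posterior difference in hatch rates is summarized by simulation; the resulting median comes out near the reported 19%.

```python
# A minimal sketch of Bayesian updating for incubator hatch rates via
# Beta-Binomial conjugacy; uniform Beta(1, 1) priors are assumed.
import numpy as np

rng = np.random.default_rng(4)

# 2007 season: priors updated with (hatched, failed) counts.
new_a, new_b = 1 + 5, 1 + (12 - 5)            # new incubator: 5 of 12
old_a, old_b = 1 + 2, 1 + (5 - 2)             # available incubator: 2 of 5

# 2008 season: the 2007 posteriors act as this season's priors.
new_a, new_b = new_a + 7, new_b + (15 - 7)    # 7 of 15
old_a, old_b = old_a + 11, old_b + (15 - 11)  # 11 of 15

# Posterior of the hatch-rate difference (available minus new incubator).
diff = rng.beta(old_a, old_b, 100_000) - rng.beta(new_a, new_b, 100_000)
lo, med, hi = np.percentile(diff, [2.5, 50, 97.5])
print(f"median difference {med:.0%}, 95% CrI ({lo:.0%}, {hi:.0%})")
print(f"P(new incubator worse by > 5%) = {(diff > 0.05).mean():.2f}")
```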

  18. Assessing Bayesian model averaging uncertainty of groundwater modeling based on information entropy method

    Science.gov (United States)

    Zeng, Xiankui; Wu, Jichun; Wang, Dong; Zhu, Xiaobin; Long, Yuqiao

    2016-07-01

    Because of groundwater conceptualization uncertainty, multi-model methods are usually used and the corresponding uncertainties are estimated by integrating Markov Chain Monte Carlo (MCMC) and Bayesian model averaging (BMA) methods. Generally, the variance method is used to measure the uncertainties of BMA prediction. The total variance of ensemble prediction is decomposed into within-model and between-model variances, which represent the uncertainties derived from parameters and conceptual models, respectively. However, the uncertainty of a probability distribution cannot be comprehensively quantified by variance alone. A new measuring method based on information entropy theory is proposed in this study. Because the actual BMA process can hardly meet the ideal mutually exclusive, collectively exhaustive condition, BMA predictive uncertainty is decomposed into parameter, conceptual model, and overlapped uncertainties. Overlapped uncertainty is induced by the combination of predictions from correlated model structures. In this paper, five simple analytical functions are first used to illustrate the feasibility of the variance and information entropy methods. A discrete distribution example shows that information entropy can be more appropriate than variance for describing between-model uncertainty. Two continuous distribution examples show that the two methods are consistent in measuring a normal distribution, and that information entropy is more appropriate than variance for describing a bimodal distribution. The two examples of BMA uncertainty decomposition demonstrate that the two methods are relatively consistent in assessing the uncertainty of unimodal BMA predictions. Information entropy is more informative in describing the uncertainty decomposition of bimodal BMA predictions. Then, based on a synthetic groundwater model, the variance and information entropy methods are used to assess the BMA uncertainty of groundwater modeling. The uncertainty assessments of...
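
    The paper's central point, that variance alone can mischaracterize the uncertainty of a bimodal BMA prediction, can be shown numerically: a well-separated two-component mixture and a normal density with the same variance have very different differential entropies. The distributions below are illustrative only.

```python
# A minimal numerical comparison of variance and differential entropy for a
# bimodal mixture versus a variance-matched normal density.
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Bimodal mixture standing in for a BMA prediction from two dissimilar models.
bimodal = 0.5 * norm_pdf(x, -3.0, 0.5) + 0.5 * norm_pdf(x, 3.0, 0.5)
var_bi = np.sum(x**2 * bimodal) * dx - (np.sum(x * bimodal) * dx) ** 2

# Unimodal normal density matched to the same variance.
unimodal = norm_pdf(x, 0.0, np.sqrt(var_bi))
var_uni = np.sum(x**2 * unimodal) * dx - (np.sum(x * unimodal) * dx) ** 2

def entropy(p):
    """Differential entropy by numerical quadrature."""
    p = np.clip(p, 1e-300, None)
    return -np.sum(p * np.log(p)) * dx

print(f"variance: bimodal {var_bi:.2f}  vs  normal {var_uni:.2f}")   # ~equal
print(f"entropy : bimodal {entropy(bimodal):.2f} vs normal {entropy(unimodal):.2f}")
```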

  19. Post hoc Analysis for Detecting Individual Rare Variant Risk Associations Using Probit Regression Bayesian Variable Selection Methods in Case-Control Sequencing Studies.

    Science.gov (United States)

    Larson, Nicholas B; McDonnell, Shannon; Albright, Lisa Cannon; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham; MacInnis, Robert; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catolona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2016-09-01

    Rare variants (RVs) have been shown to be significant contributors to complex disease risk. By definition, these variants have very low minor allele frequencies and traditional single-marker methods for statistical analysis are underpowered for typical sequencing study sample sizes. Multimarker burden-type approaches attempt to identify aggregation of RVs across case-control status by analyzing relatively small partitions of the genome, such as genes. However, it is generally the case that the aggregative measure would be a mixture of causal and neutral variants, and these omnibus tests do not directly provide any indication of which RVs may be driving a given association. Recently, Bayesian variable selection approaches have been proposed to identify RV associations from a large set of RVs under consideration. Although these approaches have been shown to be powerful at detecting associations at the RV level, there are often computational limitations on the total quantity of RVs under consideration and compromises are necessary for large-scale application. Here, we propose a computationally efficient alternative formulation of this method using a probit regression approach specifically capable of simultaneously analyzing hundreds to thousands of RVs. We evaluate our approach to detect causal variation on simulated data and examine sensitivity and specificity in instances of high RV dimensionality as well as apply it to pathway-level RV analysis results from a prostate cancer (PC) risk case-control sequencing study. Finally, we discuss potential extensions and future directions of this work. PMID:27312771

  20. Bayesian Word Sense Induction

    OpenAIRE

    Brody, Samuel; Lapata, Mirella

    2009-01-01

    Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...

  1. Bias in diet determination: incorporating traditional methods in Bayesian mixing models.

    Directory of Open Access Journals (Sweden)

    Valentina Franco-Trecu

    Full Text Available There are no "universal methods" to determine the diet composition of predators. Most traditional methods are biased because of their reliance on differential digestibility and the recovery of hard items. By relying on assimilated food, stable isotope and Bayesian mixing models (SIMMs) resolve many biases of traditional methods. SIMMs can incorporate prior information (i.e., proportional diet composition) that may improve the precision of the estimated dietary composition. However, few studies have assessed the performance of traditional methods and SIMMs with and without informative priors for studying predators' diets. Here we compare the diet compositions of the South American fur seal and sea lion obtained by scat analysis and by SIMMs with uninformative priors (SIMM-UP), and assess whether informative priors (SIMM-IP) from the scat analysis improved the estimated diet composition compared to SIMM-UP. According to the SIMM-UP, while pelagic species dominated the fur seal's diet, the sea lion's diet showed no clear dominance of any prey. In contrast, SIMM-IP diet compositions were dominated by the same prey as in the scat analyses. When prior information influenced SIMM estimates, incorporating informative priors improved the precision of the estimated diet composition at the risk of inducing biases in the estimates. If prey isotopic data allow discriminating prey contributions to diets, informative priors should lead to more precise but unbiased estimates of diet composition. Just as estimates of diet composition obtained from traditional methods are critically interpreted because of their biases, care must be exercised when interpreting diet composition obtained by SIMM-IP. The best approach to obtain a near-complete view of predators' diet composition should involve the simultaneous consideration of different sources of partial evidence (traditional methods, SIMM-UP and SIMM-IP) in the light of the natural history of the predator species so as to reliably...
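
    A toy version of the SIMM comparison: with two isotope tracers and three prey sources, a grid posterior over the diet-proportion simplex can be computed under an uninformative Dirichlet(1,1,1) prior and under an informative prior standing in for scat-derived information. Source signatures, the noise level, and the prior counts below are all invented.

```python
# A minimal sketch of a stable-isotope mixing model with uninformative versus
# informative Dirichlet priors; the mixing noise sigma = 0.3 is assumed known.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical source isotope signatures (d13C, d15N) and synthetic consumers.
src_mu = np.array([[-18.0, 12.0], [-16.0, 16.0], [-13.0, 14.0]])
p_true = np.array([0.6, 0.3, 0.1])
consumer = p_true @ src_mu + rng.normal(0.0, 0.3, size=(40, 2))

# Grid over the 2-simplex of diet proportions: p3 = 1 - p1 - p2.
step = 0.01
g1, g2 = np.meshgrid(np.arange(step, 1, step), np.arange(step, 1, step))
keep = g1 + g2 < 1.0 - step
P = np.stack([g1[keep], g2[keep], 1.0 - g1[keep] - g2[keep]], axis=1)

def posterior_mean(alpha):
    """Posterior mean diet under a Dirichlet(alpha) prior on proportions."""
    resid = consumer[None, :, :] - (P @ src_mu)[:, None, :]
    log_like = -0.5 * np.sum((resid / 0.3) ** 2, axis=(1, 2))
    log_prior = np.sum((np.asarray(alpha) - 1.0) * np.log(P), axis=1)
    w = np.exp(log_like + log_prior - (log_like + log_prior).max())
    return (w[:, None] * P).sum(axis=0) / w.sum()

print("uninformative prior:", posterior_mean([1, 1, 1]).round(2))
print("informative prior  :", posterior_mean([12, 6, 2]).round(2))  # scat stand-in
```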

  2. Assessment of Agricultural Water Management in Punjab, India using Bayesian Methods

    Science.gov (United States)

    Russo, T. A.; Devineni, N.; Lall, U.; Sidhu, R.

    2013-12-01

    The success of the Green Revolution in Punjab, India is threatened by the declining water table (approx. 1 m/yr). Punjab, a major agricultural supplier for the rest of India, supports irrigation with a canal system and groundwater, which is vastly over-exploited. Groundwater development in many districts is greater than 200% of the annual recharge rate. The hydrologic data required to complete a mass-balance model are not available for this region; therefore, we use Bayesian methods to estimate hydrologic properties and irrigation requirements. Using the known values of precipitation, total canal water delivery, crop yield, and water table elevation, we solve for each unknown parameter (often a coefficient) using a Markov chain Monte Carlo (MCMC) algorithm. Results provide regional estimates of irrigation requirements and groundwater recharge rates under observed climate conditions (1972 to 2002). Model results are used to estimate future water availability and demand to help inform agriculture management decisions under projected climate conditions. We find that changing cropping patterns for the region can maintain food production while balancing groundwater pumping with natural recharge. This computational method can be applied in data-scarce regions across the world, where agricultural water management is required to resolve competition between food security and changing resource availability.

  3. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters. I. Statistical and Computational Methods

    Science.gov (United States)

    Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.

    2016-07-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).

  4. The Method of Oilfield Development Risk Forecasting and Early Warning Using Revised Bayesian Network

    Directory of Open Access Journals (Sweden)

    Yihua Zhong

    2016-01-01

    Full Text Available Oilfield development aiming at crude oil production is an extremely complex process that involves many uncertain risk factors affecting oil output. Risk prediction and early warning for oilfield development can thus help ensure that oilfields are operated and managed efficiently so as to meet national oil production plans and the sustainable development of oilfields. However, scholars and practitioners worldwide have seldom addressed the risk problem of oilfield block development. An early-warning index system for block development, comprising monitoring indexes and planning indexes, was refined and formulated on the basis of research into risk forecasting and early-warning theory as well as oilfield development. Based on the warning indexes predicted by a neural network, a method for dividing the intervals of warning degrees using the "3σ" rule is presented, and a new method for risk forecasting and early warning is proposed by introducing the neural network into Bayesian networks. A case study shows that the results obtained in this paper are valid and helpful for the management of oilfield development risk.
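
    The "3σ" interval-division step can be sketched directly: boundaries at one, two, and three standard deviations above the mean of the predicted warning index define the warning degrees. The index values, boundaries, and degree labels below are illustrative, not those of the paper.

```python
# A minimal sketch of dividing warning-degree intervals with the "3 sigma"
# rule; the warning-index values are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(6)
index = rng.normal(0.55, 0.08, 200)            # historical warning-index values
mu, sigma = index.mean(), index.std(ddof=1)

# Interval boundaries at mu + sigma, mu + 2 sigma, mu + 3 sigma.
edges = [mu + sigma, mu + 2 * sigma, mu + 3 * sigma]
degrees = ["no warning", "light", "medium", "heavy"]

def warning_degree(x):
    """Map a predicted index value to its warning degree."""
    return degrees[int(np.digitize(x, edges))]

for x in (0.50, 0.66, 0.74, 0.85):
    print(f"index {x:.2f} -> {warning_degree(x)}")
```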

  5. Development of Bayesian-based transformation method of Landsat imagery into pseudo-hyperspectral imagery

    Science.gov (United States)

    Hoang, Nguyen Tien; Koike, Katsuaki

    2015-10-01

    It has been generally accepted that hyperspectral remote sensing is more effective and provides greater accuracy than multispectral remote sensing in many application fields. EO-1 Hyperion, a representative hyperspectral sensor, has many more spectral bands, while Landsat data cover a much wider image scene and provide a longer continuous space-based record of Earth's land. This study aims to develop a new method, the Pseudo-Hyperspectral Image Synthesis Algorithm (PHISA), to transform Landsat imagery into pseudo-hyperspectral imagery using the correlation between Landsat and EO-1 Hyperion data. First, the Hyperion scene was precisely pre-processed and co-registered to the Landsat scene, and both data sets were corrected for atmospheric effects. The Bayesian model averaging (BMA) method was applied to select the best model from a class of several possible models. Subsequently, this best model was used to calculate pseudo-hyperspectral data in R. Based on the selection results from BMA, we transformed Landsat imagery into 155 bands of pseudo-hyperspectral imagery. Most models have multiple R-squared values higher than 90%, which assures the high accuracy of the models. There are no significant visual differences between the pseudo- and original data. Most bands have high Pearson's coefficients, and only a few bands with coefficients < 0.93 behave like outliers in the data sets. In a similar manner, most Root Mean Square Error values are considerably low, smaller than 0.014. These observations strongly support that the proposed PHISA is valid for transforming Landsat data into pseudo-hyperspectral data from the standpoint of statistics.
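
    The BMA-based model selection at the core of PHISA can be imitated with BIC-approximated posterior model probabilities over subsets of predictor bands. The sketch below does this for one synthetic response band; it is not the authors' R pipeline, and all data are invented.

```python
# A minimal sketch of Bayesian model averaging for one pseudo-hyperspectral
# band, with posterior model probabilities approximated by exp(-BIC/2).
import itertools
import numpy as np

rng = np.random.default_rng(7)

n, bands = 200, 4
X = rng.normal(size=(n, bands))                             # Landsat reflectances
y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.1, n)   # a Hyperion band

def bic(Xs, y):
    """BIC of an ordinary least-squares fit with an intercept."""
    Xd = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + Xd.shape[1] * np.log(len(y))

models = [s for r in range(1, bands + 1)
          for s in itertools.combinations(range(bands), r)]
bics = np.array([bic(X[:, list(m)], y) for m in models])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                   # approximate model probabilities

best = models[int(np.argmax(w))]
print(f"best model uses bands {best} with posterior probability {w.max():.2f}")
```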

  6. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as well...

  7. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2016-01-01

    Full Text Available This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.

  8. Bayesian Inference for Functional Dynamics Exploring in fMRI Data.

    Science.gov (United States)

    Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, Bayesian Magnitude Change Point Model (BMCPM), Bayesian Connectivity Change Point Model (BCCPM), and Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.

  9. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data.

    Science.gov (United States)

    Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G

    2015-11-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm(-1) were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from
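
    A rough analogue of the comparison, with sklearn's BayesianRidge standing in for Bayes RR and a stock PLS regression as the reference; the paper's Bayes A/B models (implemented in BGLR) and the real milk spectra are not reproduced, so the numbers below only illustrate the workflow.

```python
# A minimal sketch comparing PLS with a shrinkage-based Bayesian regression
# on synthetic "spectra" with a few informative wavelengths.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)

n, p = 400, 300                                   # samples x spectral points
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[rng.choice(p, 10, replace=False)] = rng.normal(0, 1, 10)  # informative bands
y = X @ beta + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("PLS (10 comp.)", PLSRegression(n_components=10)),
                    ("Bayesian ridge", BayesianRidge())]:
    model.fit(X_tr, y_tr)
    pred = np.ravel(model.predict(X_te))
    r = np.corrcoef(pred, y_te)[0, 1]             # validation correlation
    print(f"{name}: validation correlation {r:.2f}")
```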

  10. Estimation of octanol/water partition coefficient and aqueous solubility of environmental chemicals using molecular fingerprints and machine learning methods

    Science.gov (United States)

    Octanol/water partition coefficient (logP) and aqueous solubility (logS) are two important parameters in pharmacology and toxicology studies, and experimental measurements are usually time-consuming and expensive. In the present research, novel methods are presented for the estim...

  11. Experimental Method Development for Estimating Solid-phase Diffusion Coefficients and Material/Air Partition Coefficients of SVOCs

    Science.gov (United States)

    The solid-phase diffusion coefficient (Dm) and material-air partition coefficient (Kma) are key parameters for characterizing the sources and transport of semivolatile organic compounds (SVOCs) in the indoor environment. In this work, a new experimental method was developed to es...

  12. An Approximate Bayesian Method Applied to Estimating the Trajectories of Four British Grey Seal (Halichoerus grypus) Populations from Pup Counts

    Directory of Open Access Journals (Sweden)

    Mike Lonergan

    2011-01-01

    Full Text Available For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. We present an approximate Bayesian method for fitting pup trajectories, estimating adult population size and investigating alternative biological models. The method is equivalent to fitting a density-dependent Leslie matrix model, within a Bayesian framework, but with the forms of the density-dependent effects as outputs rather than assumptions. It requires fewer assumptions than the state space models currently used and produces similar estimates. We discuss the potential and limitations of the method and suggest that this approach provides a useful tool for at least the preliminary analysis of similar datasets.
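
    An ABC-by-rejection sketch in the spirit of this record, with a toy Ricker-type density-dependent model replacing the paper's Leslie-matrix formulation and synthetic pup counts replacing the survey data; priors, tolerances, and noise levels are invented.

```python
# A minimal sketch of approximate Bayesian computation (ABC) by rejection for
# a density-dependent pup-count trajectory.
import numpy as np

rng = np.random.default_rng(9)

def simulate(r, K, n0=500.0, years=25):
    """Toy Ricker-type pup-count trajectory with density dependence."""
    n = [n0]
    for _ in range(years - 1):
        n.append(n[-1] * np.exp(r * (1.0 - n[-1] / K)))
    return np.array(n)

# Synthetic "survey" counts with multiplicative observation noise.
observed = simulate(0.25, 4000.0) * rng.lognormal(0.0, 0.05, 25)

# ABC rejection: sample (r, K) from the priors and keep the draws whose
# simulated trajectories are closest to the observed counts.
n_draws = 50_000
r_draws = rng.uniform(0.05, 0.6, n_draws)
K_draws = rng.uniform(1000.0, 10000.0, n_draws)
dist = np.array([np.sqrt(np.mean((simulate(r, K) - observed) ** 2))
                 for r, K in zip(r_draws, K_draws)])
keep = dist < np.quantile(dist, 0.002)          # accept the best 0.2%

print(f"r: {r_draws[keep].mean():.2f} +/- {r_draws[keep].std():.2f} (true 0.25)")
print(f"K: {K_draws[keep].mean():.0f} +/- {K_draws[keep].std():.0f} (true 4000)")
```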

  13. Uncertainty analysis of strain modal parameters by Bayesian method using frequency response function

    Institute of Scientific and Technical Information of China (English)

    Xu Li; Yi Weijian; Zhihua Yi

    2007-01-01

    Structural strain modes are able to detect changes in local structural performance, but errors are inevitably intermixed in the measured data. In this paper, strain modal parameters are considered as random variables, and their uncertainty is analyzed by a Bayesian method based on the structural frequency response function (FRF). The estimates of strain modal parameters with maximal posterior probability are determined. Several independent measurements of the FRF of a four-story reinforced concrete frame structural model were performed in the laboratory. The ability to identify the stiffness change in a concrete column using the strain mode was verified. It is shown that the uncertainty of the natural frequency is very small. Compared with the displacement mode shape, the variations of strain mode shapes at each point are quite different. The damping ratios are more affected by the types of test systems. Except for the case where a high order strain mode does not identify local damage, the first order strain mode can provide an exact indication of the damage location.

  14. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    Science.gov (United States)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of the origin of atmospheric events at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The proposed computational scheme simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.

  15. Alternative method of highway traffic safety analysis for developing countries using delphi technique and Bayesian network.

    Science.gov (United States)

    Mbakwe, Anthony C; Saka, Anthony A; Choi, Keechoo; Lee, Young-Jae

    2016-08-01

    Highway traffic accidents all over the world result in more than 1.3 million fatalities annually. An alarming number of these fatalities occurs in developing countries. There are many risk factors that are associated with frequent accidents, heavy loss of lives, and property damage in developing countries. Unfortunately, poor record-keeping practices are a very difficult obstacle to overcome in striving to obtain near-accurate casualty and safety data. In light of the fact that there are numerous accident causes, any attempt to curb the escalating death and injury rates in developing countries must include the identification of the primary accident causes. This paper, therefore, seeks to show that the Delphi Technique is a suitable alternative method that can be exploited in generating highway traffic accident data through which the major accident causes can be identified. In order to authenticate the technique used, Korea, a country that underwent similar problems in its early stages of development and that has excellent highway safety records in its database, is chosen and utilized for this purpose. Validation of the methodology confirms that the technique is suitable for application in developing countries. Furthermore, the Delphi Technique, in combination with the Bayesian Network Model, is utilized in modeling highway traffic accidents and forecasting accident rates in the countries of research. PMID:27183516

  16. A Laplace method for under-determined Bayesian optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-12-17

    In Long et al. (2013), a new method based on the Laplace approximation was developed to accelerate the estimation of the post-experimental expected information gains (Kullback–Leibler divergence) in model parameters and predictive quantities of interest in the Bayesian framework. A closed-form asymptotic approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general case where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the Jacobian matrix of the data model with respect to the parameters, so that the information gain can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the posterior covariance matrix projected over the aforementioned orthogonal directions. To deal with the issue of dimensionality in a complex problem, we use either Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under-determined test cases. They include the designs of the scalar parameter in a one dimensional cubic polynomial function with two unidentifiable parameters forming a linear manifold, and the boundary source locations for impedance tomography in a square domain, where the unknown parameter is the conductivity, which is represented as a random field.

  17. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic. While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data. Emphasizing probability as an alternative to Boolean...

  18. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    Science.gov (United States)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults of PV panels. The PV array is divided into 16×16 blocks and numbered. On the basis of modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used for calculating the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered in order to reduce the probability of misjudgments. A fault diagnosis system is designed and implemented with LabVIEW, providing functions including real-time data display, online checking, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and the system works well, which verifies the validity and feasibility of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
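
    The block-screening step can be sketched as follows, assuming block-mean currents on a 16×16 grid and a fault weight factor defined as the relative current deficit of each block; the thresholds separating shading from faults are illustrative, not the paper's.

```python
# A minimal sketch of block-wise fault screening for a PV array using a
# relative-current-deficit "fault weight factor"; all values are synthetic.
import numpy as np

rng = np.random.default_rng(10)

currents = rng.normal(8.0, 0.2, size=(16, 16))    # block-mean currents (A)
currents[3, 7] = 4.9                              # a faulty block
currents[12, 2] = 6.8                             # partial shading, not a fault

overall = currents.mean()
weight = (overall - currents) / overall           # fault weight factor per block

fault_threshold = 0.25                            # relative current deficit
shade_band = (0.10, 0.25)                         # deficits attributed to shade

faults = np.argwhere(weight > fault_threshold)
shaded = np.argwhere((weight > shade_band[0]) & (weight <= shade_band[1]))
print("fault blocks  :", faults.tolist())
print("shaded blocks :", shaded.tolist())
```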

  19. An Approximate Bayesian Method Applied to Estimating the Trajectories of Four British Grey Seal (Halichoerus grypus) Populations from Pup Counts

    OpenAIRE

    Mike Lonergan; Dave Thompson; Len Thomas; Callan Duck

    2011-01-01

    1. For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially, but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. 2. We present an approximate Bayesian method for fitting pup trajectories, estimat...

  20. A method of spherical harmonic analysis in the geosciences via hierarchical Bayesian inference

    Science.gov (United States)

    Muir, J. B.; Tkalčić, H.

    2015-11-01

    The problem of decomposing irregular data on the sphere into a set of spherical harmonics is common in many fields of geosciences where it is necessary to build a quantitative understanding of a globally varying field. For example, in global seismology, a compressional or shear wave speed that emerges from tomographic images is used to interpret current state and composition of the mantle, and in geomagnetism, secular variation of magnetic field intensity measured at the surface is studied to better understand the changes in the Earth's core. Optimization methods are widely used for spherical harmonic analysis of irregular data, but they typically do not treat the dependence of the uncertainty estimates on the imposed regularization. This can cause significant difficulties in interpretation, especially when the best-fit model requires more variables as a result of underestimating data noise. Here, with the above limitations in mind, the problem of spherical harmonic expansion of irregular data is treated within the hierarchical Bayesian framework. The hierarchical approach significantly simplifies the problem by removing the need for regularization terms and user-supplied noise estimates. The use of the corrected Akaike Information Criterion for picking the optimal maximum degree of spherical harmonic expansion and the resulting spherical harmonic analyses are first illustrated on a noisy synthetic data set. Subsequently, the method is applied to two global data sets sensitive to the Earth's inner core and lowermost mantle, consisting of PKPab-df and PcP-P differential traveltime residuals relative to a spherically symmetric Earth model. The posterior probability distributions for each spherical harmonic coefficient are calculated via Markov Chain Monte Carlo sampling; the uncertainty obtained for the coefficients thus reflects the noise present in the real data and the imperfections in the spherical harmonic expansion.
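
    A non-hierarchical miniature of the workflow: build a real spherical harmonic design matrix at irregular sites, fit by least squares for each candidate maximum degree, and pick the truncation with the corrected AIC. The MCMC treatment of noise and coefficients used in the paper is not reproduced here.

```python
# A minimal sketch of spherical harmonic analysis of irregular data with an
# AICc-based choice of maximum degree; all data are synthetic.
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(11)

n = 300
theta = rng.uniform(0.0, 2.0 * np.pi, n)     # azimuth of irregular sites
phi = np.arccos(rng.uniform(-1.0, 1.0, n))   # colatitude, uniform on the sphere

def design(lmax):
    """Real spherical harmonic design matrix up to degree lmax."""
    cols = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, theta, phi)
            cols.append(np.sqrt(2.0) * (y.imag if m < 0 else y.real) if m else y.real)
    return np.column_stack(cols)

truth = design(2) @ rng.normal(0.0, 1.0, 9)  # a degree-2 field (9 coefficients)
data = truth + rng.normal(0.0, 0.1, n)

for lmax in range(5):
    A = design(lmax)
    coef, *_ = np.linalg.lstsq(A, data, rcond=None)
    rss = np.sum((data - A @ coef) ** 2)
    k = A.shape[1] + 1                       # coefficients plus the noise variance
    aicc = n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)
    print(f"lmax = {lmax}: AICc = {aicc:.1f}")  # should bottom out at lmax = 2
```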

  1. Bayesian meta-analytical methods to incorporate multiple surrogate endpoints in drug development process.

    Science.gov (United States)

    Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R

    2016-03-30

    A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed, first, using an unstructured between-study covariance matrix by assuming the treatment effects on all outcomes are correlated and second, using a structured between-study covariance matrix by assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for the summary data on a study level, the individual-level association is taken into account by the use of Prentice's criteria (obtained from individual patient data) to inform the within-study correlations in the models. The modelling techniques are investigated using an example in relapsing-remitting multiple sclerosis where the disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates to the disability progression. PMID:26530518

  2. Influence of Incorporation Methods on Partitioning Behavior of Lipophilic Drugs into Various Phases of a Parenteral Lipid Emulsion

    OpenAIRE

    Sila-on, Warisada; Vardhanabhuti, Nontima; Ongpipattanakul, Boonsri; Kulvanich, Poj

    2008-01-01

    The purpose of this study was to investigate the effect of drug incorporation methods on the partitioning behavior of lipophilic drugs in parenteral lipid emulsions. Four lipophilic benzodiazepines, alprazolam, clonazepam, diazepam, and lorazepam, were used as model drugs. Two methods were used to incorporate drugs into an emulsion: dissolving the compound in the oil phase prior to emulsification (de novo emulsification), and directly adding a concentrated solution of drug in a solubilizer to...

  3. Estimates of European emissions of methyl chloroform using a Bayesian inversion method

    Directory of Open Access Journals (Sweden)

    M. Maione

    2014-03-01

    Full Text Available Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the Montreal Protocol on Substances that Deplete the Ozone Layer. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline in the background mixing ratios of MCF. However, we observe persistent non-negligible mixing ratio enhancements of MCF in pollution episodes, suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to give an estimate of the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified south-eastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an eleven-year data set, from January 2002 to December 2012. Overall emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr−1 in 2002 to 0.32 Gg yr−1 in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr−1 in 2002 and 0.20 Gg yr−1 in 2012. The European estimates are a significant fraction of the total semi-hemispheric (30–90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible for a minimum of 2.6% (in 2003) to a maximum of 10.3% (in 2009) of the global MCF emissions.
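
    The essence of such an inversion can be sketched in the linear-Gaussian case, where the posterior emissions follow the standard analytical update; the transport sensitivities, prior, and error covariances below are synthetic placeholders rather than quantities from the study.

```python
# A minimal sketch of a linear-Gaussian Bayesian inversion: observed mixing
# ratio enhancements y relate to regional emissions x via a sensitivity
# matrix H, and the Gaussian posterior is computed analytically.
import numpy as np

rng = np.random.default_rng(12)

n_obs, n_reg = 120, 6
H = rng.gamma(2.0, 0.05, size=(n_obs, n_reg))              # transport sensitivities
x_true = np.array([0.49, 0.05, 0.08, 0.03, 0.10, 0.02])    # emissions per region
y = H @ x_true + rng.normal(0.0, 0.02, n_obs)

x_prior = np.full(n_reg, 0.1)                      # a priori emissions
S_prior = np.diag(np.full(n_reg, 0.1 ** 2))        # prior covariance
S_obs = np.eye(n_obs) * 0.02 ** 2                  # observation error covariance

# Posterior mean and covariance of the Gaussian update.
K = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + S_obs)
x_post = x_prior + K @ (y - H @ x_prior)
S_post = (np.eye(n_reg) - K @ H) @ S_prior

for i, (xp, sd) in enumerate(zip(x_post, np.sqrt(np.diag(S_post)))):
    print(f"region {i}: {xp:.3f} +/- {sd:.3f}  (true {x_true[i]:.2f})")
```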

  4. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

    Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level 2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). Comparing the merged AOD to the Aerosol Robotic Network AOD records shows that the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those of the MODIS AOD (correlation coefficient 0.82, root-mean-square error 0.19, and mean bias 0.059). In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of the MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is close to the accuracy in the regions where both MODIS and SeaWiFS have valid observations.
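
    A heavily simplified stand-in for the BME merge: on a shared grid, combine the two products by inverse-variance weighting where both report and keep the single available value elsewhere. The product error variances reuse the RMSE values quoted above; the BME spatiotemporal covariance machinery is not reproduced.

```python
# A minimal sketch of uncertainty-weighted merging of two gappy AOD products
# on a shared grid; coverage fractions and values are synthetic.
import numpy as np

rng = np.random.default_rng(13)

shape = (5, 5)                                   # a toy shared grid
aod_a = np.where(rng.uniform(size=shape) < 0.3,
                 rng.uniform(0.1, 0.9, shape), np.nan)   # ~30% coverage, "MODIS-like"
aod_b = np.where(rng.uniform(size=shape) < 0.3,
                 rng.uniform(0.1, 0.9, shape), np.nan)   # ~30% coverage, "SeaWiFS-like"
var_a, var_b = 0.19**2, 0.29**2                  # error variances from quoted RMSEs

w_a, w_b = 1.0 / var_a, 1.0 / var_b
both = np.isfinite(aod_a) & np.isfinite(aod_b)

# Inverse-variance combination where both report; single product elsewhere.
merged = np.where(
    both,
    (np.nan_to_num(aod_a) * w_a + np.nan_to_num(aod_b) * w_b) / (w_a + w_b),
    np.nan,
)
merged = np.where(~both & np.isfinite(aod_a), aod_a, merged)
merged = np.where(~both & np.isfinite(aod_b), aod_b, merged)

print(f"coverage: A {np.isfinite(aod_a).mean():.0%}, "
      f"B {np.isfinite(aod_b).mean():.0%}, merged {np.isfinite(merged).mean():.0%}")
```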

  5. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming;

    2010-01-01

    Modular neural networks are a popular neural network model with many successful applications. In this paper, a sequential Bayesian learning (SBL) method is proposed for modular neural networks, aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight ...

  6. A Bayesian Method for Evaluating Passing Scores: The PPoP Curve

    Science.gov (United States)

    Wainer, Howard; Wang, X. A.; Skorupski, William P.; Bradlow, Eric T.

    2005-01-01

    In this note, we demonstrate an interesting use of the posterior distributions (and corresponding posterior samples of proficiency) that are yielded by fitting a fully Bayesian test scoring model to a complex assessment. Specifically, we examine the efficacy of the test in combination with the specific passing score that was chosen through expert…

  7. Optimum Inductive Methods. A study in Inductive Probability, Bayesian Statistics, and Verisimilitude.

    NARCIS (Netherlands)

    Festa, Roberto

    1992-01-01

    According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes'theorem. One of the most important

  8. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit...

  9. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    International Nuclear Information System (INIS)

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit reveals a flipped-sign solution in addition to a standard-model-like solution for the couplings Ci. The two solutions are related...

  10. Genetic Properties of Some Economic Traits in Isfahan Native Fowl Using Bayesian and REML Methods

    Directory of Open Access Journals (Sweden)

    Salehinasab M

    2015-12-01

    Full Text Available The objective of the present study was to estimate heritability values for some performance and egg quality traits of native fowl in the Isfahan breeding center using REML and Bayesian approaches. The records were about 51521 and 975 for performance and egg quality traits, respectively. At the first step, variance components were estimated for body weight at hatch (BW0), body weight at 8 weeks of age (BW8), weight at sexual maturity (WSM), egg yolk weight (YW), egg Haugh unit and eggshell thickness (ST), via the REML approach using ASREML software. At the second step, the same traits were analyzed via the Bayesian approach using Gibbs3f90 software. In both approaches six different animal models were applied, and the best model was determined using the likelihood ratio test (LRT) and the deviance information criterion (DIC) for the REML and Bayesian approaches, respectively. Heritability estimates for BW0, WSM and ST were the same in both approaches. For BW0, the LRT and DIC indexes confirmed that the model consisting of maternal genetic, permanent environmental and direct genetic effects was significantly better than the other models. For WSM, a model consisting of a maternal permanent environmental effect in addition to the direct genetic effect was the best. For shell thickness, the basic model consisting of the direct genetic effect was the best. The results for BW8, YW and Haugh unit were different between the two approaches. The reason behind these small differences was that convergence could not be achieved for some models in the REML approach, and thus for these traits the Bayesian approach estimated the variance components more accurately. The results indicated that ignoring maternal effects overestimates the direct genetic variance and heritability for most of the traits. Also, the Bayesian-based software could take more variance components into account.

  11. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  12. Determination of Aroma Compound Partition Coefficients in Aqueous, Polysaccharide, and Dairy Matrices Using the Phase Ratio Variation Method: A Review and Modeling Approach.

    Science.gov (United States)

    Heilig, Andrej; Sonne, Alina; Schieberle, Peter; Hinrichs, Jörg

    2016-06-01

    The partition of aroma compounds between a matrix and a gas phase describes an individual compound's specific affinity toward the matrix constituents, which affects orthonasal sensory perception. The static headspace phase ratio variation (PRV) method has been increasingly applied by various authors to determine the equilibrium partition coefficient K in aqueous, polysaccharide, and dairy matrices. However, the reported partition coefficients are difficult to relate and compare due to different experimental conditions, e.g., aroma compound selection, matrix composition, and equilibration temperature. Because the PRV method, owing to its specific advantages, is expected to find more frequent application in the future, this review aims to summarize, evaluate, compare, and relate the currently available data on PRV-determined partition coefficients. This process was designed to specify the potential and the limitations as well as the consistency of the PRV method, and to identify open fields of research in aroma compound partitioning in food-related, especially dairy, matrices. PMID:27182770
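
    A brief illustration of the PRV principle discussed above (the numbers below are invented, not taken from the review): for a dilute solute, the headspace peak area A measured at phase ratio β = V_gas/V_sample obeys 1/A = K/(f·C0) + β/(f·C0), so the matrix/gas partition coefficient K is the intercept-to-slope ratio of a straight-line fit of 1/A against β. A minimal sketch:

        import numpy as np

        # Hypothetical PRV data: phase ratios beta = V_gas / V_sample and measured
        # headspace peak areas A for one aroma compound (made-up numbers).
        beta = np.array([9.0, 19.0, 39.0, 99.0])
        area = np.array([1250.0, 980.0, 640.0, 310.0])

        # 1/A = K/(f*C0) + beta/(f*C0) is linear in beta, so K = intercept / slope.
        slope, intercept = np.polyfit(beta, 1.0 / area, 1)
        K = intercept / slope
        print(f"PRV estimate of the partition coefficient K = {K:.1f}")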

  13. Simple Method to Determine the Partition Coefficient of Naphthenic Acid in Oil/Water

    DEFF Research Database (Denmark)

    Bitsch-Larsen, Anders; Andersen, Simon Ivar

    2008-01-01

    The partition coefficient for technical grade naphthenic acid in water/n-decane at 295 K has been determined (K-wo = 2.1 center dot 10(-4)) using a simple experimental technique with large extraction volumes (0.09 m(3) of water). Furthermore, nonequilibrium values at different pH values are prese...

  14. Statistical analysis using the Bayesian nonparametric method for irradiation embrittlement of reactor pressure vessels

    Science.gov (United States)

    Takamizawa, Hisashi; Itoh, Hiroto; Nishiyama, Yutaka

    2016-10-01

    In order to understand neutron irradiation embrittlement in high fluence regions, statistical analysis using the Bayesian nonparametric (BNP) method was performed for the Japanese surveillance and material test reactor irradiation database. The BNP method is essentially expressed as an infinite summation of normal distributions, with input data being subdivided into clusters with identical statistical parameters, such as mean and standard deviation, for each cluster to estimate shifts in ductile-to-brittle transition temperature (DBTT). The clusters typically depend on chemical compositions, irradiation conditions, and the irradiation embrittlement. Specific variables contributing to the irradiation embrittlement include the content of Cu, Ni, P, Si, and Mn in the pressure vessel steels, neutron flux, neutron fluence, and irradiation temperatures. It was found that the measured shifts of DBTT correlated well with the calculated ones. Data associated with the same materials were subdivided into the same clusters even if neutron fluences were increased. Comparing cluster IDs 2 and 6, embrittlement of high-Cu-bearing materials (…) A flux effect with a higher flux range was demonstrated for cluster ID 3, comprising MTR irradiation in a high flux region (≤1 × 10¹³ n/cm²/s) [44]. For cluster ID 10, classification is rendered based upon the flux effect, where embrittlement is accelerated in high-Cu-bearing materials irradiated at lower flux levels (less than 5 × 10⁹ n/cm²·s). This is possibly due to increased thermal equilibrium vacancies [44,45]. Based on all the above considerations, it was ascertained that data belonging to identical cluster IDs share similar embrittlement mechanism attributes. To examine the clustering versus the neutron fluence, the relationship between the Cu content representative of the materials and the fluence for PWR and MTR irradiation was plotted in Fig. 13(a) and (b). To enhance plot clarity, data points were slightly
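
    The infinite mixture of normal distributions described above is, in practice, commonly approximated by a truncated Dirichlet-process Gaussian mixture. A minimal sketch of that clustering step with scikit-learn; the two features (Cu content and log fluence) and all numbers are synthetic stand-ins, not the surveillance database:

        import numpy as np
        from sklearn.mixture import BayesianGaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic stand-in for the surveillance data: columns are Cu content (wt%)
        # and log10 neutron fluence; three latent material/irradiation groups.
        X = np.vstack([
            rng.normal([0.05, 19.0], [0.01, 0.3], size=(50, 2)),
            rng.normal([0.15, 19.5], [0.02, 0.3], size=(50, 2)),
            rng.normal([0.25, 20.0], [0.02, 0.3], size=(50, 2)),
        ])

        # Truncated Dirichlet-process mixture: the fit switches off unneeded
        # components, approximating the "infinite summation of normal distributions".
        dpgmm = BayesianGaussianMixture(
            n_components=10,
            weight_concentration_prior_type="dirichlet_process",
            covariance_type="full",
            random_state=0,
        ).fit(X)

        labels = dpgmm.predict(X)
        print("clusters actually used:", np.unique(labels))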

  15. The Selection of DNA Aptamers for Two Different Epitopes of Thrombin Was Not Due to Different Partitioning Methods

    OpenAIRE

    Wilson, Robert; Cossins, Andrew; Nicolau, Dan V.; Missailidis, Sotiris

    2013-01-01

    Nearly all aptamers identified so far for any given target molecule have been specific for the same binding site (epitope). The most notable exception to the 1 aptamer per target molecule rule is the pair of DNA aptamers that bind to different epitopes of thrombin. This communication refutes the suggestion that these aptamers exist because different partitioning methods were used when they were selected. The possibility that selection of these aptamers was biased by conflicting secondary stru...

  16. A Robust Computational Method for Coupled Liquid-liquid Phase Separation and Gas-particle Partitioning Predictions of Multicomponent Aerosols

    Science.gov (United States)

    Zuend, A.; Di Stefano, A.

    2014-12-01

    Providing efficient and reliable model predictions for the partitioning of atmospheric aerosol components between different phases (gas, liquids, solids) is a challenging problem. The partitioning of water, various semivolatile organic components, inorganic acids, bases, and salts, depends simultaneously on the chemical properties and interaction effects among all constituents of a gas + aerosol system. The effects of hygroscopic particle growth on the water contents and physical states of potentially two or more liquid and/or solid aerosol phases in turn may significantly affect multiphase chemistry, the direct effect of aerosols on climate, and the ability of specific particles to act as cloud condensation or ice nuclei. Considering the presence of a liquid-liquid phase separation in aerosol particles, which typically leads to one phase being enriched in rather hydrophobic compounds and the other phase enriched in water and dissolved electrolytes, adds a high degree of complexity to the goal of predicting the gas-particle partitioning of all components. Coupled gas-particle partitioning and phase separation methods are required to correctly account for the phase behaviour of aerosols exposed to varying environmental conditions, such as changes to relative humidity. We present new theoretical insights and a substantially improved algorithm for the reliable prediction of gas-particle partitioning at thermodynamic equilibrium based on the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) model. We introduce a new approach for the accurate prediction of the phase distribution of multiple inorganic ions between two liquid phases, constrained by charge balance, and the coupling of the liquid-liquid equilibrium model to a robust gas-particle partitioning algorithm. Such coupled models are useful for exploring the range of environmental conditions leading to complete or incomplete miscibility of aerosol constituents which will affect

  17. The Power of Principled Bayesian Methods in the Study of Stellar Evolution

    CERN Document Server

    von Hippel, Ted; Stenning, David C; Robinson, Elliot; Jeffery, Elizabeth; Stein, Nathan; Jefferys, William H; O'Malley, Erin

    2016-01-01

    It takes years of effort employing the best telescopes and instruments to obtain high-quality stellar photometry, astrometry, and spectroscopy. Stellar evolution models contain the experience of lifetimes of theoretical calculations and testing. Yet most astronomers fit these valuable models to these precious datasets by eye. We show that a principled Bayesian approach to fitting models to stellar data yields substantially more information over a range of stellar astrophysics. We highlight advances in determining the ages of star clusters, mass ratios of binary stars, limitations in the accuracy of stellar models, post-main-sequence mass loss, and the ages of individual white dwarfs. We also outline a number of unsolved problems that would benefit from principled Bayesian analyses.

  18. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Full Text Available Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problems of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID, we evaluate the accuracy of the Bayesian-modified (posterior) probabilities. In most cases, however, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
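
    The Bayesian step described above amounts to reweighting the photo-match likelihoods by site-specific sighting histories. A minimal sketch with invented candidates, match confidences and sighting priors:

        import numpy as np

        # Candidate individuals a photo could match, with likelihoods from the
        # (imperfect) photo-ID match and priors from previous sightings at this site.
        candidates = ["A", "B", "C"]
        likelihood = np.array([0.5, 0.3, 0.2])   # P(photo | individual), non-unique ID
        prior = np.array([0.7, 0.2, 0.1])        # P(individual at site), from history

        posterior = likelihood * prior
        posterior /= posterior.sum()
        for name, p in zip(candidates, posterior):
            print(f"P({name} | photo, site) = {p:.2f}")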

  19. Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre

    2007-05-01

    Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.

  20. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  1. Practical Bayesian Tomography

    CERN Document Server

    Granade, Christopher; Cory, D G

    2015-01-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  2. The evolutionary relationships and age of Homo naledi: An assessment using dated Bayesian phylogenetic methods.

    Science.gov (United States)

    Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Chares M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark

    2016-08-01

    Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed. PMID:27457542

  3. The evolutionary relationships and age of Homo naledi: An assessment using dated Bayesian phylogenetic methods.

    Science.gov (United States)

    Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Chares M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark

    2016-08-01

    Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed.

  4. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    Science.gov (United States)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin² field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  5. Novel Method for Calculating a Nonsubjective Informative Prior for a Bayesian Model in Toxicology Screening: A Theoretical Framework.

    Science.gov (United States)

    Woldegebriel, Michael

    2015-11-17

    In toxicology screening (forensic, food-safety), due to several analytical errors (e.g., retention time shift, lack of repeatability in m/z scans, etc.), the ability to confidently identify/confirm a compound remains a challenge. Due to these uncertainties, a probabilistic approach is currently preferred. However, if a probabilistic approach is followed, the only statistical method that is capable of estimating the probability of whether the compound of interest (COI) is present/absent in a given sample is Bayesian statistics. Bayes' theorem can combine prior information (prior probability) with data (likelihood) to give an optimal probability (posterior probability) reflecting the presence/absence of the COI. In this work, a novel method for calculating an informative prior probability for a Bayesian model in targeted toxicology screening is introduced. In contrast to earlier proposals making use of literature citation rates and the prior knowledge of the analyst, this method presents a thorough and nonsubjective approach. The formulation approaches the probability calculation as a clustering and random-draw problem that incorporates a few analytical method parameters, meticulously estimated to reflect the sensitivity and specificity of the system. The practicality of the method has been demonstrated and validated using real data and simulated analytical techniques.

  6. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on

  7. A hybrid Bayesian-SVD based method to detect false alarms in PERSIANN precipitation estimation product using related physical parameters

    Science.gov (United States)

    Ghajarnia, Navid; Arasteh, Peyman D.; Araghinejad, Shahab; Liaghat, Majid A.

    2016-07-01

    Incorrect estimation of rainfall occurrence, the so-called false alarm (FA), is one of the major sources of bias error in satellite-based precipitation estimation products and can cause many problems during the bias-reduction and calibration processes. In this paper, a hybrid statistical method is introduced to detect FA events in the PERSIANN dataset over the Urmia Lake basin in northwest Iran. The main FA detection model is based on Bayes' theorem, in which four predictor parameters, including PERSIANN rainfall estimations, brightness temperature (Tb), precipitable water (PW) and near-surface air temperature (Tair), are considered as its input dataset. To reduce the dimensionality of the input dataset by summarizing its most important modes of variability and correlations with the reference dataset, a technique named singular value decomposition (SVD) is used. The application of the Bayesian-SVD method to FA detection over the Urmia Lake basin resulted in a trade-off between FA detection and the loss of hit events. The results show that the proposed method detects about 30% of FA events at the cost of losing about 12% of hit events, and that it performs better in cold seasons.
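
    A minimal sketch of the two-stage idea (SVD to compress the predictors, then a Bayesian classifier for the probability of a false alarm); a Gaussian naive-Bayes model stands in for the paper's Bayes-theorem formulation, and all data are synthetic:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(1)
        # Synthetic stand-in for the four predictors: PERSIANN rain rate, Tb, PW, Tair.
        n = 500
        X = rng.normal(size=(n, 4))
        # Invented ground truth: 1 = false alarm, driven by the first three predictors.
        y = (X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(size=n) > 0).astype(int)

        # SVD step: project the predictors onto their leading modes of variability.
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        X_reduced = Xc @ Vt[:2].T  # keep the two dominant modes

        # Bayes step: posterior P(FA | reduced predictors) via a simple Gaussian model.
        clf = GaussianNB().fit(X_reduced, y)
        print("P(false alarm), first 5 events:", clf.predict_proba(X_reduced[:5])[:, 1].round(2))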

  8. Three-dimensional fluid flow simulation into a rectangular channel with partitions using the lattice-Boltzmann method

    Science.gov (United States)

    Boutra, Abdelkader; Ragui, Karim; Bennacer, Rachid; Benkahla, Youb K.

    2016-05-01

    In this paper, we numerically investigate the three-dimensional laminar flow of an incompressible Newtonian fluid in a rectangular channel that includes several blocks mounted on the lower and upper walls. To do so, a numerical code based on the lattice Boltzmann method is utilized; it has been validated by comparing the present results with those of the literature. The adiabatic partitions are arranged in three different manners: in the first, two blocks are mounted one against the other; in the second, the bottom block is placed next to the flow entry; in the third, three parallel (alternating) blocks are placed one after another at equal distances. The effects of the Reynolds number and the partition spacing on the fluid flow inside the channel are analyzed in detail through streamlines and velocity profiles, with special attention to the partitions' arrangement and the global pressure drop. The three-dimensional D3Q19 model, based on a cubic lattice in which each node is characterized by nineteen discrete velocities, is adopted in this work. Contribution to the topical issue "Materials for Energy Harvesting, Conversion and Storage (ICOME 2015) - Elected submissions", edited by Jean-Michel Nunzi, Rachid Bennacer and Mohammed El Ganaoui
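
    For reference, the D3Q19 velocity set mentioned above consists of the rest vector plus the six face and twelve edge neighbors of a cubic lattice cell, with the standard weights 1/3, 1/18 and 1/36. A short setup sketch:

        import numpy as np
        from itertools import product

        # D3Q19 discrete velocities: the 19 lattice vectors with |c|^2 <= 2
        # (rest vector, 6 face neighbors, 12 edge neighbors of the cubic lattice).
        velocities = np.array([c for c in product((-1, 0, 1), repeat=3)
                               if sum(v * v for v in c) <= 2])
        assert len(velocities) == 19

        # Standard D3Q19 lattice weights, indexed by squared speed.
        weight_by_speed2 = {0: 1 / 3, 1: 1 / 18, 2: 1 / 36}
        weights = np.array([weight_by_speed2[int(c @ c)] for c in velocities])
        assert abs(weights.sum() - 1.0) < 1e-12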

  9. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guannan [ORNL; Webster, Clayton G [ORNL; Gunzburger, Max D [ORNL

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional
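
    The essential two-stage structure (costly model evaluations offline, cheap surrogate-based sampling online) can be sketched as follows; a regular grid and an off-the-shelf interpolator stand in for the adaptive sparse grid with hierarchical basis, and a bimodal toy density stands in for the groundwater-model posterior:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Expensive bimodal log-posterior (stand-in for the groundwater model).
        def log_post(x, y):
            return np.logaddexp(-((x - 1) ** 2 + y ** 2) / 0.1,
                                -((x + 1) ** 2 + y ** 2) / 0.1)

        # Offline stage: evaluate the model on a grid once (the costly part).
        xs = ys = np.linspace(-3, 3, 61)
        grid_vals = log_post(*np.meshgrid(xs, ys, indexing="ij"))
        surrogate = RegularGridInterpolator((xs, ys), grid_vals)

        # Online stage: Metropolis sampling using only the cheap surrogate.
        rng = np.random.default_rng(2)
        chain, cur = [], np.array([0.5, 0.0])
        cur_lp = surrogate(cur).item()
        for _ in range(5000):
            prop = cur + rng.normal(scale=0.3, size=2)
            if np.all(np.abs(prop) < 3):  # stay inside the interpolation domain
                lp = surrogate(prop).item()
                if np.log(rng.uniform()) < lp - cur_lp:
                    cur, cur_lp = prop, lp
            chain.append(cur.copy())
        print("posterior mean estimate:", np.mean(chain, axis=0).round(2))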

  10. Bayesian Lensing Shear Measurement

    CERN Document Server

    Bernstein, Gary M

    2013-01-01

    We derive an estimator of weak gravitational lensing shear from background galaxy images that avoids noise-induced biases through a rigorous Bayesian treatment of the measurement. The Bayesian formalism requires a prior describing the (noiseless) distribution of the target galaxy population over some parameter space; this prior can be constructed from low-noise images of a subsample of the target population, attainable from long integrations of a fraction of the survey field. We find two ways to combine this exact treatment of noise with rigorous treatment of the effects of the instrumental point-spread function and sampling. The Bayesian model fitting (BMF) method assigns a likelihood of the pixel data to galaxy models (e.g. Sersic ellipses), and requires the unlensed distribution of galaxies over the model parameters as a prior. The Bayesian Fourier domain (BFD) method compresses galaxies to a small set of weighted moments calculated after PSF correction in Fourier space. It requires the unlensed distributi...

  11. A FORTRAN program for the statistical analysis of incomplete time series data sets by a method of partition.

    Science.gov (United States)

    Patel, M K; Waterhouse, J P

    1993-03-01

    A program written in FORTRAN-77 which executes an analysis for periodicity of a time series data set is presented. Time series analysis now has applicability and use in a wide range of biomedical studies. The analytical method termed here a method of partition is derived from periodogram analysis, but uses the principle of analysis of variance (ANOVA). It is effective when used on incomplete data sets. An example in which a data set is made progressively more incomplete by the random removal of values demonstrates this, and a listing of the program and a sample output in both an abbreviated and a full version are given.
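
    The core of the partition method (fold the series on a trial period, partition the observations into phase classes, and apply ANOVA across the classes) handles incomplete data naturally, since a missing value simply leaves its class with one fewer point. A minimal Python sketch of that idea, not the FORTRAN-77 program itself:

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(3)
        # Irregular, incomplete sampling of a 24 h rhythm (arbitrary units).
        t = np.sort(rng.choice(np.arange(0.0, 240.0, 0.5), size=180, replace=False))
        x = np.sin(2 * np.pi * t / 24.0) + rng.normal(scale=0.5, size=t.size)

        def partition_test(t, x, period, n_classes=8):
            """Partition observations into phase classes; return the ANOVA F and p."""
            phase_class = ((t % period) / period * n_classes).astype(int)
            groups = [x[phase_class == k] for k in range(n_classes)]
            return f_oneway(*[g for g in groups if g.size > 1])

        for period in (12.0, 24.0, 30.0):
            F, p = partition_test(t, x, period)
            print(f"trial period {period:5.1f} h: F = {F:6.2f}, p = {p:.3g}")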

  12. Model Based Beamforming and Bayesian Inversion Signal Processing Methods for Seismic Localization of Underground Source

    DEFF Research Database (Denmark)

    Oh, Geok Lian

    the probability density function permits the incorporation of a priori information about the parameters, and also allows for the incorporation of theoretical errors. This opens up the possibility of applying the inverse paradigm to real-world geophysics inversion problems. In this PhD study, the Bayesian...... source. The examples show that, with the field data, inversion for localization is most advantageous when the forward model completely describes all the elastic wave components, as is the case for the FDTD 3D elastic model. The simulation results of the inversion of the soil density values show that both

  13. Fault Localization Method by Partitioning Memory Using Memory Map and the Stack for Automotive ECU Software Testing

    Directory of Open Access Journals (Sweden)

    Kwanhyo Kim

    2016-09-01

    Full Text Available Recently, the usage of the automotive Electronic Control Unit (ECU) and its software in cars is increasing. As the functional complexity of such software increases, so does the likelihood of software-related faults. It is therefore important to ensure the reliability of ECU software in order to ensure automobile safety. For this reason, systematic testing methods are required that can guarantee software quality. However, it is difficult to locate a fault during testing with the current ECU development system because a tester performs black-box testing using a Hardware-in-the-Loop (HiL) simulator. Consequently, developers spend a large amount of money and time on debugging because they debug without any information about the location of the fault. In this paper, we propose a method for localizing faults utilizing memory information during black-box testing. This is likely to be of use to developers who debug automotive software. In order to observe whether symbols stored in memory have been updated, the memory is partitioned by a memory map and the stack, and the fault candidate region is thus reduced. The memory map method has the advantage of being able to finely partition the memory, and the stack method can partition the memory without a memory map. We validated these methods by applying them to HiL testing of the ECU for a body control system. The preliminary results indicate that the memory map and the stack reduce the possible fault locations to 22% and 19% of the updated memory, respectively.
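
    The partition-and-compare idea can be sketched generically: take memory dumps around a failing test, mark which bytes changed, and report only the memory-map regions that were touched. Everything below (region names, addresses, dumps) is invented for illustration:

        # Hypothetical memory map: named regions with (start, end) address ranges.
        MEMORY_MAP = {"globals": (0x1000, 0x1800), "stack": (0x1800, 0x2000),
                      "buffers": (0x2000, 0x3000)}

        def updated_regions(before: bytes, after: bytes, base: int = 0x1000):
            """Return the map regions in which any byte changed between two dumps."""
            changed = {base + i for i, (a, b) in enumerate(zip(before, after)) if a != b}
            return [name for name, (lo, hi) in MEMORY_MAP.items()
                    if any(lo <= addr < hi for addr in changed)]

        # Two invented dumps of the 0x1000-0x3000 range taken around a failing test.
        before = bytes(0x2000)
        after = bytearray(before)
        after[0x1850 - 0x1000] = 0xFF  # a single stack write between the dumps
        print("fault candidates narrowed to:", updated_regions(before, bytes(after)))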

  14. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir

    2013-05-01

    A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimates of block-sparse signals even when the distribution of active blocks is non-Gaussian or unknown. It is agnostic to the distribution of active blocks in the signal and utilizes a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data, so that no user intervention is required. The method requires a priori knowledge of the block partition and utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports, from which it determines the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator. © 2013 IEEE.
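
    The greedy, block-wise core of such an estimator can be sketched as follows; this is plain block matching pursuit with a least-squares re-fit, not the authors' full Bayesian MMSE machinery, and all sizes and signals are invented:

        import numpy as np

        def block_mp(A, y, block_size, n_blocks_to_pick):
            """Greedy block matching pursuit: pick dominant blocks, least-squares re-fit."""
            n = A.shape[1]
            blocks = [np.arange(b, b + block_size) for b in range(0, n, block_size)]
            support, residual = [], y.copy()
            for _ in range(n_blocks_to_pick):
                # Pick the unselected block whose columns best explain the residual.
                scores = [np.linalg.norm(A[:, b].T @ residual) if i not in support else -1
                          for i, b in enumerate(blocks)]
                support.append(int(np.argmax(scores)))
                cols = np.concatenate([blocks[i] for i in support])
                coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
                residual = y - A[:, cols] @ coef
            x = np.zeros(n)
            x[cols] = coef
            return x

        rng = np.random.default_rng(4)
        A = rng.normal(size=(64, 128))
        x_true = np.zeros(128)
        x_true[16:20] = rng.normal(size=4)   # one active block of width 4
        x_true[96:100] = rng.normal(size=4)  # a second active block
        y = A @ x_true + 0.01 * rng.normal(size=64)
        x_hat = block_mp(A, y, block_size=4, n_blocks_to_pick=2)
        print("recovered active blocks:", np.unique(np.nonzero(x_hat)[0] // 4))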

  15. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updates to virus information using the normal pattern matching method are increasingly unable to keep up with the speed at which viruses occur, since it takes time to extract the characteristic patterns for each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code from obfuscated computer viruses. Therefore, this paper proposes a method that unpacks compressed computer viruses automatically, independently of the compression format. The proposed method unpacks the common compression formats accurately 80% of the time, and unknown compression formats can also be unpacked. The proposed method is also effective against unknown viruses when combined with an existing known-virus detection system, such as Paul Graham's Bayesian virus filter.

  16. A case study of an enhanced eutrophication model with stoichiometric zooplankton growth sub-model calibrated by Bayesian method.

    Science.gov (United States)

    Yang, Likun; Peng, Sen; Sun, Jingmei; Zhao, Xinhua; Li, Xia

    2016-05-01

    Urban lakes in China have suffered from severe eutrophication over the past several years, particularly those with relatively small areas and closed watersheds. Many efforts have been made to improve the understanding of eutrophication physiology with advanced mathematical models. However, several eutrophication models ignore zooplankton behavior and treat zooplankton as particles, which leads to systematic errors. In this study, a eutrophication model was enhanced with a stoichiometric zooplankton growth sub-model that simulated the zooplankton predation process and the interplay among the nitrogen, phosphorus, and oxygen cycles. A case study, in which the Bayesian method was used to calibrate the enhanced eutrophication model parameters and to calculate the model simulation results, was carried out in an urban lake in Tianjin, China. Finally, a water quality assessment was also conducted for eutrophication management. Our results suggest that (1) integration of the Bayesian method and the enhanced eutrophication model with a zooplankton feeding behavior sub-model can effectively depict the change in water quality and (2) the nutrients resulting from rainwater runoff laid the foundation for phytoplankton blooms. PMID:26780061

  17. Improving the reliability of POD curves in NDI methods using a Bayesian inversion approach for uncertainty quantification

    Science.gov (United States)

    Ben Abdessalem, A.; Jenson, F.; Calmon, P.

    2016-02-01

    This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterized from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability of detection curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood function formulation and some priors from expert knowledge. However, the presented inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model by a surrogate one in order to speed up the model evaluation and make the problem computationally feasible. Least-squares support vector regression is adopted as the metamodelling technique due to its robustness in dealing with non-linear problems. We illustrate the usefulness of this methodology through the inspection of a tube with an enclosed defect using an ultrasonic method.
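
    The surrogate step can be sketched as follows; scikit-learn's epsilon-SVR stands in for the least-squares SVR used in the paper, and the "forward model" is an invented analytic stand-in for the ultrasonic simulation:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)

        # Stand-in for the expensive ultrasonic forward model: signal amplitude as a
        # nonlinear function of defect size and tilt (both made up here).
        def forward_model(size, tilt):
            return size ** 1.5 * np.cos(tilt) + 0.05 * np.sin(5 * size)

        # Offline: run the real model on a modest design of experiments.
        size = rng.uniform(0.1, 2.0, 200)
        tilt = rng.uniform(0.0, 0.5, 200)
        X = np.column_stack([size, tilt])
        y = forward_model(size, tilt)

        # Epsilon-SVR surrogate (scikit-learn) in place of the paper's LS-SVR.
        surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, y)

        # Online: the Bayesian inversion can now query the surrogate cheaply.
        test = np.array([[1.0, 0.2], [1.5, 0.4]])
        print("surrogate:", surrogate.predict(test).round(3))
        print("true     :", forward_model(test[:, 0], test[:, 1]).round(3))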

  18. The Partition of Unity Method for High-Order Finite Volume Schemes Using Radial Basis Functions Reconstruction

    Institute of Scientific and Technical Information of China (English)

    Serena Morigi; Fiorella Sgallari

    2009-01-01

    This paper introduces the use of the partition of unity method for the development of a high order finite volume discretization scheme on unstructured grids for solving diffusion models based on partial differential equations. The unknown function and its gradient can be accurately reconstructed using high order optimal recovery based on radial basis functions. The methodology proposed is applied to the noise removal problem in functional surfaces and images. Numerical results demonstrate the effectiveness of the new numerical approach and provide experimental order of convergence.

  19. Bayesian deconvolution as a method for the spectroscopy of X-rays with highly pixelated photon counting detectors

    Science.gov (United States)

    Sievers, P.; Weber, T.; Michel, T.; Klammer, J.; Büermann, L.; Anton, G.

    2012-03-01

    The energy deposition spectrum of highly pixelated photon-counting pixel detectors with a semiconductor sensor layer (e.g. silicon) differs significantly from the impinging X-ray spectrum. This is mainly due to Compton scattering, charge sharing, an energy-dependent sensor efficiency, fluorescence photons and back-scattered photons from detector parts. Therefore, the determination of the impinging X-ray spectrum from the measured distribution of the energy deposition in the detector is a non-trivial task. For the deconvolution of the measured distribution into the impinging spectrum, a set of monoenergetic response functions is needed. Those have been calculated with the Monte Carlo simulation framework ROSI, utilizing EGS4 and including all relevant physical processes in the sensor layer. We have investigated the uncertainties that spectrum reconstruction algorithms, like spectrum stripping, impose on reconstruction results. We can show that applying the Bayesian deconvolution method significantly improves the stability of the deconvolved spectrum. This results in a reduced minimum radiation flux needed for reconstruction. In this paper, we present our investigations and measurements on spectrum reconstruction for polychromatic X-ray spectra at low flux with a focus on Bayesian deconvolution.
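
    In its simplest iterative form, Bayesian deconvolution of a counting spectrum is the Richardson-Lucy (iterative Bayesian unfolding) update s ← s · Rᵀ(d / Rs), which keeps the solution positive and tends to behave more stably than direct spectrum stripping. A toy sketch with an invented response matrix (the real response functions come from the ROSI/EGS4 simulations):

        import numpy as np

        rng = np.random.default_rng(6)
        n = 40
        E = np.arange(n)

        # Toy detector response matrix R: each monoenergetic line is smeared, with a
        # small low-energy tail (crude stand-in for Compton/charge-sharing losses).
        R = np.array([[np.exp(-0.5 * ((i - j) / 1.5) ** 2) + (0.02 if i < j else 0.0)
                       for j in range(n)] for i in range(n)])
        R /= R.sum(axis=0, keepdims=True)  # columns are response pdfs

        true = np.exp(-0.5 * ((E - 25.0) / 4.0) ** 2)          # impinging spectrum
        measured = rng.poisson(500 * (R @ true)).astype(float)  # deposited spectrum

        # Richardson-Lucy / iterative Bayesian deconvolution.
        s = np.full(n, measured.sum() / n)  # flat starting guess
        for _ in range(200):
            s *= R.T @ (measured / np.maximum(R @ s, 1e-12))

        print("peak of deconvolved spectrum at channel", int(np.argmax(s)))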

  20. Bayesian deconvolution as a method for the spectroscopy of X-rays with highly pixelated photon counting detectors

    International Nuclear Information System (INIS)

    The energy deposition spectrum of highly pixelated photon-counting pixel detectors with a semiconductor sensor layer (e.g. silicon) differs significantly from the impinging X-ray spectrum. This is mainly due to Compton scattering, charge sharing, an energy-dependent sensor efficiency, fluorescence photons and back-scattered photons from detector parts. Therefore, the determination of the impinging X-ray spectrum from the measured distribution of the energy deposition in the detector is a non-trivial task. For the deconvolution of the measured distribution into the impinging spectrum, a set of monoenergetic response functions is needed. Those have been calculated with the Monte Carlo simulation framework ROSI, utilizing EGS4 and including all relevant physical processes in the sensor layer. We have investigated the uncertainties that spectrum reconstruction algorithms, like spectrum stripping, impose on reconstruction results. We can show that applying the Bayesian deconvolution method significantly improves the stability of the deconvolved spectrum. This results in a reduced minimum radiation flux needed for reconstruction. In this paper, we present our investigations and measurements on spectrum reconstruction for polychromatic X-ray spectra at low flux with a focus on Bayesian deconvolution.

  1. Bayesian methods to restore and rebuild images: application to gammagraphy and to photofission tomography

    International Nuclear Information System (INIS)

    Bayesian algorithms are developed to solve inverse problems in gamma imaging and photofission tomography. The first part of this work is devoted to the modeling of our measurement systems. Two models have been found for both applications: the first one is a simple conventional model and the second one is a cascaded point-process model. EM and MCMC Bayesian algorithms for image restoration and image reconstruction have been developed for these models and compared. The cascaded point-process model does not significantly improve the results previously obtained with the classical model. Two original approaches have been proposed, which improve on the results previously obtained. The first approach uses an inhomogeneous Markov random field as a prior law and lets the regularization parameter vary spatially. However, the problem of the estimation of hyper-parameters has not been solved. In the case of the deconvolution of point sources, a second approach has been proposed, which introduces a high-level prior model. The picture is modeled as a list of objects, whose parameters and number are unknown. The results obtained with this method are more accurate than those obtained with the conventional Markov random field prior model and require lower computational cost. (author)

  2. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Directory of Open Access Journals (Sweden)

    Corsaro Enrico

    2015-01-01

    Full Text Available The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role in studying stellar oscillations of different flavors with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights into stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, and hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis are what is needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes of single oscillation modes, together with acoustic glitches, are presented.
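
    The core nested-sampling loop (replace the worst live point by a fresh prior draw with higher likelihood, shrinking the prior volume geometrically while accumulating the evidence) fits in a few lines. A toy one-dimensional version with a single Lorentzian "mode", not the authors' code:

        import numpy as np

        rng = np.random.default_rng(7)

        def log_like(x):
            # Toy "oscillation mode": a narrow Lorentzian peak at x = 0.3.
            return -np.log(1.0 + ((x - 0.3) / 0.01) ** 2)

        n_live = 100
        live = rng.uniform(size=n_live)          # uniform prior on [0, 1]
        live_ll = log_like(live)
        log_Z, log_X = -np.inf, 0.0              # evidence, log prior volume

        for _ in range(600):
            worst = np.argmin(live_ll)
            log_X_new = log_X - 1.0 / n_live     # expected shrinkage per iteration
            log_w = live_ll[worst] + np.log(np.exp(log_X) - np.exp(log_X_new))
            log_Z = np.logaddexp(log_Z, log_w)
            # Replace the worst point by a prior draw above its likelihood
            # (plain rejection here; real codes use constrained samplers).
            while True:
                cand = rng.uniform()
                if log_like(cand) > live_ll[worst]:
                    break
            live[worst], live_ll[worst] = cand, log_like(cand)
            log_X = log_X_new

        # Add the remaining live points; the analytic answer here is roughly -3.5.
        log_Z = np.logaddexp(log_Z, log_X + np.log(np.mean(np.exp(live_ll))))
        print(f"log-evidence estimate: {log_Z:.2f}")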

  3. 43 genes support the lungfish-coelacanth grouping related to the closest living relative of tetrapods with the Bayesian method under the coalescence model

    Directory of Open Access Journals (Sweden)

    Gras Robin

    2011-03-01

    Full Text Available Abstract Background Since the discovery of the "living fossil" in 1938, the coelacanth (Latimeria chalumnae) has generally been considered to be the closest living relative of the land vertebrates, and this is still the prevailing opinion in most general biology textbooks. However, the origin of tetrapods has not been resolved for decades. Three principal hypotheses (lungfish-tetrapod, coelacanth-tetrapod, or lungfish-coelacanth sister group) have been proposed. Findings We used the Bayesian method under the coalescence model with the latest published program (Bayesian Estimation of Species Trees, or BEST) to perform a phylogenetic analysis for seven relevant taxa and 43 nuclear protein-coding genes with the jackknife method for taxon sub-sampling. The lungfish-coelacanth sister group was consistently reconstructed with the Bayesian method under the coalescence model in 17 out of 21 taxon sets with a Bayesian posterior probability as high as 99%. Lungfish-tetrapod was only inferred from BCLS and BACLS. Neither coelacanth-tetrapod nor lungfish-coelacanth-tetrapod was recovered out of all 21 taxon sets. Conclusions Our results provide strong evidence in favor of accepting the hypothesis that lungfishes and coelacanths form a monophyletic sister group that is the closest living relative of tetrapods. This clade was supported by high Bayesian posterior probabilities of the branch (a lungfish-coelacanth clade) and high taxon jackknife support.

  4. Bayesian methods for the physical sciences learning from examples in astronomy and physics

    CERN Document Server

    Andreon, Stefano

    2015-01-01

    Statistical literacy is critical for the modern researcher in Physics and Astronomy. This book empowers researchers in these disciplines by providing the tools they will need to analyze their own data. Chapters in this book provide a statistical base from which to approach new problems, including numerical advice and a profusion of examples. The examples are engaging analyses of real-world problems taken from modern astronomical research. The examples are intended to be starting points for readers as they learn to approach their own data and research questions. Because scientific progress now hinges on the availability of data and the possibility of improving previous analyses, data and code are distributed throughout the book. The JAGS symbolic language used throughout the book makes it easy to perform Bayesian analysis and is particularly valuable as readers may use it in a myriad of scenarios through slight modifications.

  5. Evaluation of the antibacterial residue surveillance programme in Danish pigs using Bayesian methods

    DEFF Research Database (Denmark)

    Freitas de Matos Baptista, Filipa; Alban, L.; Olsen, A. M.;

    2012-01-01

    Residues of pharmacologically active substances or their metabolites might be found in food products from food-producing animals. Maximum Residue Limits for pharmacologically active substances in foodstuffs of animal origin are established to assure high food safety standards. Each year, more than 20,000 samples are analysed for the presence of antibacterial residues in Danish pigs. This corresponds to 0.1% of the size of the slaughter pig population and more than 1% of the sows slaughtered. In this study, a Bayesian model was used to evaluate the accuracy of the Danish surveillance system and to investigate the impact of a potential risk-based sampling approach to the residue surveillance programme in Danish slaughter pigs. Danish surveillance data from 2005 to 2009 and limited knowledge about true prevalence and test sensitivity and specificity were included in the model. According to the model, the true

  6. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods

    Energy Technology Data Exchange (ETDEWEB)

    Opel, Oliver, E-mail: opel@uni.leuphana.de [Leuphana University of Lueneburg, Institute for Ecology and Environmental Chemistry, Scharnhorststrasse 1/13, 21335 Lueneburg (Germany); Palm, Wolf-Ulrich [Leuphana University of Lueneburg, Institute for Ecology and Environmental Chemistry, Scharnhorststrasse 1/13, 21335 Lueneburg (Germany); Steffen, Dieter [Lower Saxony Water Management, Coastal Defence and Nature Conservation Agency, Division Hannover-Hildesheim, An der Scharlake 39, 31135 Hildesheim (Germany); Ruck, Wolfgang K.L. [Leuphana University of Lueneburg, Institute for Ecology and Environmental Chemistry, Scharnhorststrasse 1/13, 21335 Lueneburg (Germany)

    2011-04-15

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon-based normalization is sufficient to obtain comparable results. We show the interdependency of grain-size effects with inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van Veen grab were sieved and analyzed for 16 PAH, 6 PCB and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show the underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at <125 μm is best suited and can be further improved by additional organic-carbon normalization. Research highlights: a new method for the comparison of heterogeneous sets of sediment samples; assessment of organic pollutant partitioning mechanisms in sediments; a proposed method for more comparable sediment sampling. Inside-sediment partitioning mechanisms are shown using a new mathematical approach and discussed in terms of sediment sampling and normalization.
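
    The normalization described above (each fraction's concentration relative to the sum over all grain-size fractions of the same sample) reduces to one line of array arithmetic; the concentrations below are invented:

        import numpy as np

        # Made-up PAH concentrations (ng/g) in the sieved grain-size fractions
        # of three sediment samples (rows: samples, columns: fractions).
        fractions = ["<20 um", "20-63 um", "63-125 um", "125-630 um"]
        conc = np.array([[420.0, 310.0, 150.0, 40.0],
                         [880.0, 650.0, 300.0, 90.0],
                         [210.0, 160.0,  80.0, 20.0]])

        # Relative normalized concentration: each fraction's share of the sample's
        # summed fraction concentrations, which removes bulk grain-size effects.
        rel = conc / conc.sum(axis=1, keepdims=True)
        for name, col in zip(fractions, rel.T):
            print(f"{name:>10}: {col.round(2)}")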

  7. Bayesian statistics

    OpenAIRE

    Draper, D.

    2001-01-01

    © 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.

  8. Applications of PDMS partitioning methods in the study of biodegradation of pyrene in the

    DEFF Research Database (Denmark)

    Tejeda-Agredano, MC; Gouliarmou, Varvara; Ortega-Calvo, JJ

    fractions to contaminated soils often causes an enhanced biodegradation and desorption of these compounds from soils. Other mechanisms proposed as operating in HS-mediated enhancements of biodegradation include the promotion of compound solubility and direct access to HS-sorbed chemicals due to the physical association of bacteria and HS. Here, we propose the use of partitioning techniques using poly(dimethylsiloxane) (PDMS) to study the effect of binding of pyrene to a dissolved humic acid isolated from soil on the biodegradation of this PAH by a representative soil bacterium. The application of these techniques in biodegradation studies may answer many questions about enhancements in diffusive mass transfer, in capacity/speciation and in dissolution. Therefore, our study may provide new insights into the effects of HS on microbial degradation of polycyclic aromatic hydrocarbons (PAHs)....

  9. A model partitioning method based on dynamic decoupling for the efficient simulation of multibody systems

    Energy Technology Data Exchange (ETDEWEB)

    Papadopoulos, Alessandro Vittorio, E-mail: alessandro.papadopoulos@control.lth.se [Lund University, Department of Automatic Control (Sweden); Leva, Alberto, E-mail: alberto.leva@polimi.it [Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria (Italy)

    2015-06-15

    The presence of different time scales in a dynamic model significantly hampers the efficiency of its simulation. In multibody systems this fact is particularly relevant, as the mentioned time scales may be very different, due, for example, to the coexistence of mechanical components controlled by electronic drive units, and may also appear in conjunction with significant nonlinearities. This paper proposes a systematic technique, based on the principles of dynamic decoupling, to partition a model according to the time scales that are relevant for the particular simulation studies to be performed, as transparently as possible for the user. In accordance with said purpose, peculiar to the technique is its neat separation into two parts: a structural analysis of the model, which is general with respect to any possible simulation scenario, and a subsequent decoupled integration, which can conversely be (easily) tailored to the study at hand. Also, since the technique does not aim at reducing but rather at partitioning the model, the state space and the physical interpretation of the dynamic variables are inherently preserved. Moreover, the proposed analysis allows us to define some novel indices relative to the separability of the system, thereby extending the idea of "stiffness" in a way that is particularly suited to its use for the improvement of simulation efficiency, be the envisaged integration scheme monolithic, parallel, or even based on cosimulation. Finally, thanks to the way the analysis phase is conceived, the technique is naturally applicable to both linear and nonlinear models. The paper contains a methodological presentation of the proposed technique, which is related to alternatives available in the literature so as to highlight the peculiarities just sketched, and some application examples illustrating the achieved advantages and motivating the major design choice from an operational viewpoint.

  10. An estimation method for inference of gene regulatory network using Bayesian network with uniting of partial problems

    Directory of Open Access Journals (Sweden)

    Watanabe Yukito

    2012-01-01

    Full Text Available Abstract Background Bayesian networks (BNs) have been widely used to estimate gene regulatory networks. Many BN methods have been developed to estimate networks from microarray data. However, two serious problems reduce the effectiveness of current BN methods. The first problem is that BN-based methods require huge computational time to estimate large-scale networks. The second is that the estimated network cannot have cyclic structures, even if the actual network has such structures. Results In this paper, we present a novel BN-based deterministic method with reduced computational time that allows cyclic structures. Our approach generates all the combinational triplets of genes, estimates networks of the triplets by BN, and unites the networks into a single network containing all genes. This method decreases the search space of predicting gene regulatory networks without degrading the solution accuracy compared with the greedy hill climbing (GHC) method. The computational time is of the order of the cube of the number of genes. In addition, the network estimated by our method can include cyclic structures. Conclusions We verified the effectiveness of the proposed method for all known gene regulatory networks and their expression profiles. The results demonstrate that this approach can predict regulatory networks with reduced computational time without degrading the solution accuracy compared with the GHC method.
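
    To make the unite-the-triplets idea concrete, here is a minimal Python sketch, not the authors' implementation: it scores each gene pair inside every triplet (with a partial-correlation score standing in for a proper Bayesian network score such as BDe) and unites the per-triplet verdicts into one undirected association network by majority vote; the paper's method instead estimates directed networks in which cycles are allowed. All names, the threshold, and the planted link are illustrative.

```python
import itertools
import numpy as np

def partial_corr(data, i, j, k):
    """Correlation of genes i and j after regressing out gene k; a cheap
    stand-in for the Bayesian network score used in the paper."""
    resid = lambda a, b: a - np.polyval(np.polyfit(b, a, 1), b)
    return np.corrcoef(resid(data[:, i], data[:, k]),
                       resid(data[:, j], data[:, k]))[0, 1]

def unite_triplet_networks(data, threshold=0.4):
    """Score every gene pair inside every triplet, then unite the triplet
    verdicts into one network by majority vote."""
    n = data.shape[1]
    votes = np.zeros((n, n))
    trials = np.zeros((n, n))
    for i, j, k in itertools.combinations(range(n), 3):  # O(n^3) triplets
        for a, b, c in [(i, j, k), (i, k, j), (j, k, i)]:
            trials[a, b] += 1
            trials[b, a] += 1
            if abs(partial_corr(data, a, b, c)) > threshold:
                votes[a, b] += 1
                votes[b, a] += 1
    return votes / np.maximum(trials, 1) > 0.5   # united adjacency matrix

rng = np.random.default_rng(0)
expression = rng.normal(size=(60, 6))            # 60 arrays, 6 genes
expression[:, 1] += 0.9 * expression[:, 0]       # plant one regulatory link
print(unite_triplet_networks(expression)[0, 1])  # -> True
```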

  11. Applications of Bayesian Phylodynamic Methods in a Recent U.S. Porcine Reproductive and Respiratory Syndrome Virus Outbreak

    Science.gov (United States)

    Alkhamis, Mohammad A.; Perez, Andres M.; Murtaugh, Michael P.; Wang, Xiong; Morrison, Robert B.

    2016-01-01

    Classical phylogenetic methods such as neighbor-joining or maximum likelihood trees provide limited inferences about the evolution of important pathogens and ignore important evolutionary parameters and uncertainties, which in turn limits decision making related to surveillance, control, and prevention resources. Bayesian phylodynamic models have recently been used to test research hypotheses related to evolution of infectious agents. However, few studies have attempted to model the evolutionary dynamics of porcine reproductive and respiratory syndrome virus (PRRSV) and, to the authors' knowledge, no attempt has been made to use large volumes of routinely collected data, sometimes referred to as big data, in the context of animal disease surveillance. The objective of this study was to explore and discuss the applications of Bayesian phylodynamic methods for modeling the evolution and spread of a notable 1-7-4 RFLP-type PRRSV between 2014 and 2015. A convenience sample of 288 ORF5 sequences was collected from 5 swine production systems in the United States between September 2003 and March 2015. Using coalescence and discrete trait phylodynamic models, we were able to infer the population growth and demographic history of the virus, identify the most likely ancestral system (root state posterior probability = 0.95), and reveal significant dispersal routes (Bayes factor > 6) of viral exchange among systems. Results indicate that currently circulating viruses are evolving rapidly, and show a higher level of relative genetic diversity over time, when compared to earlier relatives. Biological soundness of model results is supported by the finding that sow farms were responsible for PRRSV spread within the systems. Such results cannot be obtained by traditional phylogenetic methods, and therefore, our results provide a methodological framework for molecular epidemiological modeling of new PRRSV outbreaks and demonstrate the prospects of phylodynamic models to inform

  12. Applications of Bayesian Phylodynamic Methods in a Recent U.S. Porcine Reproductive and Respiratory Syndrome Virus Outbreak

    Directory of Open Access Journals (Sweden)

    Mohammad A. Alkhamis

    2016-02-01

    Full Text Available Classical phylogenetic methods such as neighbor-joining or maximum likelihood trees provide limited inferences about the evolution of important pathogens and ignore important evolutionary parameters and uncertainties, which in turn limits decision making related to surveillance, control and prevention resources. Bayesian phylodynamic models have recently been used to test research hypotheses related to evolution of infectious agents. However, few studies have attempted to model the evolutionary dynamics of porcine reproductive and respiratory syndrome virus (PRRSV) and, to the authors’ knowledge, no attempt has been made to use large volumes of routinely collected data, sometimes referred to as big data, in the context of animal disease surveillance. The objective of this study was to explore and discuss the applications of Bayesian phylodynamic methods for modeling the evolution and spread of a notable 1-7-4 RFLP-type PRRSV between 2014 and 2015. A convenience sample of 288 ORF5 sequences was collected from 5 swine production systems in the United States between September 2003 and March 2015. Using coalescence and discrete trait phylodynamic models, we were able to infer the population growth and demographic history of the virus, identify the most likely ancestral system (root state posterior probability = 0.95), and reveal significant dispersal routes (Bayes factor > 6) of viral exchange among systems. Results indicate that currently circulating viruses are evolving rapidly, and show a higher level of relative genetic diversity over time, when compared to earlier relatives. Biological soundness of model results is supported by the finding that sow farms were responsible for PRRSV spread within the systems. Such results cannot be obtained by traditional phylogenetic methods, and therefore, our results provide a methodological framework for molecular epidemiological modeling of new PRRSV outbreaks and demonstrate the prospects of phylodynamic

  13. A Bayesian Calibration-Prediction Method for Reducing Model-Form Uncertainties with Application in RANS Simulations

    CERN Document Server

    Wu, J -L; Xiao, H

    2015-01-01

    Model-form uncertainties in complex mechanics systems are a major obstacle for predictive simulations. Reducing these uncertainties is critical for stakeholders to make risk-informed decisions based on numerical simulations. For example, Reynolds-Averaged Navier-Stokes (RANS) simulations are increasingly used in mission-critical systems involving turbulent flows. However, for many practical flows the RANS predictions have large model-form uncertainties originating from the uncertainty in the modeled Reynolds stresses. Recently, a physics-informed Bayesian framework has been proposed to quantify and reduce model-form uncertainties in RANS simulations by utilizing sparse observation data. However, in the design stage of engineering systems, measurement data are usually not available. In the present work we extend the original framework to scenarios where there are no available data on the flow to be predicted. In the proposed method, we first calibrate the model discrepancy on a related flow with available dat...

  14. Using Bayesian network and AHP method as marketing approach tools in defining tourists’ preferences

    Directory of Open Access Journals (Sweden)

    Nataša Papić-Blagojević

    2012-04-01

    Full Text Available The marketing approach is associated with market conditions and with achieving long-term profitability for a company by satisfying consumers’ needs. In tourism, this approach need not be related only to promoting one tourist destination; it also concerns the relationship between a travel agency and its clients, in the sense that travel agencies adjust their offers to their clients’ needs. It is therefore important to analyze the behavior of tourists in earlier periods with consideration of their preferences. A Bayesian network can graphically display the connections between tourists who have similar tastes and the relationships between them. On the other hand, the analytic hierarchy process (AHP) is used to rank tourist attractions, also relying on past experience. In this paper we examine possible applications of these two models to tourism in Serbia. The example is hypothetical, but it will serve as a base for future research. Three types of tourism are chosen as representative in Vojvodina: cultural, rural and business tourism, because they are the bright spots of tourism development in this area. Applied to these forms, the analytic hierarchy process has shown its strength in predicting tourists’ preferences.
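
    As a concrete illustration of the AHP side of this record, the sketch below computes priority weights for three hypothetical tourism forms from a pairwise-comparison matrix via the principal eigenvector, together with Saaty's consistency ratio. The matrix entries are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for (Cultural, Rural, Business)
# tourism on Saaty's 1-9 scale; values are illustrative only.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP priorities: principal right eigenvector of A, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio CR = CI / RI, with random index RI = 0.58 for n = 3.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print("priorities:", w.round(3), "CR:", round(ci / 0.58, 3))
```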

  15. Uncertainty in Ecohydrological Modeling in an Arid Region Determined with Bayesian Methods.

    Science.gov (United States)

    Yang, Junjun; He, Zhibin; Du, Jun; Chen, Longfei; Zhu, Xi

    2016-01-01

    In arid regions, water resources are a key forcing factor in ecosystem circulation, and soil moisture is the critical link that constrains plant and animal life on the soil surface and underground. Simulation of soil moisture in arid ecosystems is inherently difficult due to high variability. We assessed the applicability of the process-oriented CoupModel for forecasting of soil water relations in arid regions. We used vertical soil moisture profiling for model calibration. We determined that model-structural uncertainty constituted the largest error; the model did not capture the extremes of low soil moisture in the desert-oasis ecotone (DOE), particularly below 40 cm soil depth. Our results showed that total uncertainty in soil moisture prediction was improved when input and output data, parameter value array, and structure errors were characterized explicitly. Bayesian analysis was applied with prior information to reduce uncertainty. The need to provide independent descriptions of uncertainty analysis (UA) in the input and output data was demonstrated. Application of soil moisture simulation in arid regions will be useful for dune-stabilization and revegetation efforts in the DOE. PMID:26963523

  16. Uncertainty in Ecohydrological Modeling in an Arid Region Determined with Bayesian Methods.

    Directory of Open Access Journals (Sweden)

    Junjun Yang

    Full Text Available In arid regions, water resources are a key forcing factor in ecosystem circulation, and soil moisture is the critical link that constrains plant and animal life on the soil surface and underground. Simulation of soil moisture in arid ecosystems is inherently difficult due to high variability. We assessed the applicability of the process-oriented CoupModel for forecasting of soil water relations in arid regions. We used vertical soil moisture profiling for model calibration. We determined that model-structural uncertainty constituted the largest error; the model did not capture the extremes of low soil moisture in the desert-oasis ecotone (DOE), particularly below 40 cm soil depth. Our results showed that total uncertainty in soil moisture prediction was improved when input and output data, parameter value array, and structure errors were characterized explicitly. Bayesian analysis was applied with prior information to reduce uncertainty. The need to provide independent descriptions of uncertainty analysis (UA) in the input and output data was demonstrated. Application of soil moisture simulation in arid regions will be useful for dune-stabilization and revegetation efforts in the DOE.

  17. Bayesian Network Assessment Method for Civil Aviation Safety Based on Flight Delays

    Directory of Open Access Journals (Sweden)

    Huawei Wang

    2013-01-01

    Full Text Available Flight delays and safety are the principal contradictions in the sound development of civil aviation. Flight delays often occur and simultaneously induce civil aviation safety risk. Based on flight delays, the random characteristics of civil aviation safety risk are analyzed, and flight delay is treated as a potential safety hazard. The change rules and characteristics of civil aviation safety risk based on flight delays are analyzed. Bayesian networks (BN) are used to build an aviation operation safety assessment model based on flight delay, and the structure and parameter learning of the model are investigated. Using the BN model, an airline in China is selected and its civil aviation safety risk is assessed with the GeNIe software. The research results show that flight delay, which increases the safety risk of civil aviation, can be seen as an incremental safety risk. The effectiveness and correctness of the model have been tested and verified.

  18. An Efficient Method for Assessing Water Quality Based on Bayesian Belief Networks

    Directory of Open Access Journals (Sweden)

    Khalil Shihab

    2014-08-01

    Full Text Available A new methodology is developed to analyse existing water quality monitoring networks. This methodology incorporates different aspects of monitoring, including vulnerability/probability assessment, environmental health risk, the value of information, and redundancy reduction. The work starts with the formulation of a conceptual framework for groundwater quality monitoring to represent the methodology’s context. This work presents the development of Bayesian techniques for the assessment of groundwater quality. The primary aim is to develop a predictive model and a computer system to assess and predict the impact of pollutants on the water column. The analysis begins by postulating a model in light of all available knowledge taken from the relevant phenomena. The previous knowledge, as represented by the prior distribution of the model parameters, is then combined with the new data through Bayes’ theorem to yield the current knowledge, represented by the posterior distribution of the model parameters. This process of updating information about the unknown model parameters is then repeated in a sequential manner as more and more new information becomes available.
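
    The prior-to-posterior-to-new-prior cycle described above can be illustrated with the simplest conjugate case. The sketch below, which is not the authors' model, sequentially updates a Beta prior on the probability that a groundwater sample exceeds a quality limit as new monitoring rounds arrive; all counts are invented.

```python
# Sequential Bayesian updating with a conjugate Beta-Binomial model:
# theta = probability that a sample exceeds a water-quality limit.
alpha, beta = 1.0, 1.0          # flat Beta(1, 1) prior on theta

for exceed, total in [(2, 20), (0, 15), (4, 25)]:   # three monitoring rounds
    alpha += exceed              # each posterior becomes the next prior
    beta += total - exceed
    mean = alpha / (alpha + beta)
    print(f"after {total} samples: posterior mean = {mean:.3f}")
```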

  1. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study

    Science.gov (United States)

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang

    2016-01-01

    Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multiple-pollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
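
    The core of BMA is weighting candidate models by their posterior model probabilities. A common shortcut, shown in the hedged sketch below with invented numbers (the study itself averaged GAMMs fitted in a far richer setting), approximates those probabilities from BIC values and then averages the effect estimates.

```python
import numpy as np

def bic_weights(bics):
    """Posterior model probabilities from BIC values (equal model priors):
    p(M_k | data) is proportional to exp(-BIC_k / 2)."""
    b = np.asarray(bics) - np.min(bics)   # stabilise the exponentials
    w = np.exp(-b / 2.0)
    return w / w.sum()

# Invented BICs and per-IQR effect estimates from three candidate models.
bics = [1012.4, 1009.8, 1015.1]
effects = [1.2, 1.5, 0.9]

w = bic_weights(bics)
print("weights:", w.round(3))
print("model-averaged effect:", np.dot(w, effects).round(3))
```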

  2. A Bayesian method for calculating real-time quantitative PCR calibration curves using absolute plasmid DNA standards

    Directory of Open Access Journals (Sweden)

    Haugland Richard A

    2008-02-01

    Full Text Available Abstract Background In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignored in calibration calculations and in some cases impossible to characterize. A flexible statistical method that can account for uncertainty between plasmid and genomic DNA targets, replicate testing, and experiment-to-experiment variability is needed to estimate calibration curve parameters such as intercept and slope. Here we report the use of a Bayesian approach to generate calibration curves for the enumeration of target DNA from genomic DNA samples using absolute plasmid DNA standards. Results Instead of the two traditional methods (classical and inverse), a Markov chain Monte Carlo (MCMC) estimation was used to generate single, master, and modified calibration curves. The mean and the percentiles of the posterior distribution were used as point and interval estimates of unknown parameters such as intercepts, slopes and DNA concentrations. The software WinBUGS was used to perform all simulations and to generate the posterior distributions of all the unknown parameters of interest. Conclusion The Bayesian approach defined in this study allowed for the estimation of DNA concentrations from environmental samples using absolute standard curves generated by real-time qPCR. The approach accounted for uncertainty from multiple sources such as experiment-to-experiment variation, variability between replicate measurements, as well as uncertainty introduced when employing calibration curves generated from absolute plasmid DNA standards.
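
    The study ran its MCMC in WinBUGS; as a stand-in, the following sketch fits a straight-line calibration curve (Ct versus log10 copies) with a random-walk Metropolis sampler in plain NumPy and then inverts it to obtain a posterior for an unknown concentration. The data, priors, and tuning constants are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative standard-curve data: log10 plasmid copies vs measured Ct.
logq = np.array([2., 3., 4., 5., 6.])
ct   = np.array([33.1, 29.8, 26.4, 23.0, 19.7])

def log_post(theta):
    """Gaussian likelihood; flat priors on intercept a and slope b,
    flat on sigma > 0."""
    a, b, sigma = theta
    if sigma <= 0:
        return -np.inf
    resid = ct - (a + b * logq)
    return -len(ct) * np.log(sigma) - 0.5 * np.sum(resid**2) / sigma**2

theta = np.array([36.0, -3.3, 0.3])          # start near plausible values
samples = []
lp = log_post(theta)
for _ in range(20000):                       # random-walk Metropolis
    prop = theta + rng.normal(scale=[0.2, 0.05, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])           # drop burn-in

# Inverse prediction: posterior for log10 copies of a sample with Ct = 25.0.
a, b = samples[:, 0], samples[:, 1]
logq_unknown = (25.0 - a) / b
print("posterior median log10 copies:", np.median(logq_unknown).round(2))
print("95% interval:", np.percentile(logq_unknown, [2.5, 97.5]).round(2))
```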

  3. Nonparametric Bayesian Classification

    CERN Document Server

    Coram, M A

    2002-01-01

    A Bayesian approach to the classification problem is proposed in which random partitions play a central role. It is argued that the partitioning approach has the capacity to take advantage of a variety of large-scale spatial structures, if they are present in the unknown regression function $f_0$. An idealized one-dimensional problem is considered in detail. The proposed nonparametric prior uses random split points to partition the unit interval into a random number of pieces. This prior is found to provide a consistent estimate of the regression function in the $L^p$ topology, for any $1 \leq p < \infty$, and for arbitrary measurable $f_0:[0,1] \rightarrow [0,1]$. A Markov chain Monte Carlo (MCMC) implementation is outlined and analyzed. Simulation experiments are conducted to show that the proposed estimate compares favorably with a variety of conventional estimators. A striking resemblance between the posterior mean estimate and the bagged CART estimate is noted and discussed. For higher dimensions, a ...

  4. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    Full Text Available The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration by a couple of simplified examples.

  5. Development of partitioning method: engineering-scale test on partitioning process. 1. Demonstration of TRU separation by solvent extraction process experimental apparatus

    International Nuclear Information System (INIS)

    A solvent extraction process experimental apparatus was designed as an engineering-scale test facility for the TRU extraction process of the four-group partitioning process. A feature of this apparatus is that a variable bank-stage mixer-settler, assembled from banks of 2 or 4 stages, makes it possible to examine the solvent extraction characteristics with various sets of organic and aqueous solutions. A demonstration test using a simulated solution with La and Nd showed that the variable bank-stage mixer-settler had the expected extractability of TRU from the solution, irrespective of the number of bank stages. In addition, no leakage or discontinuity of the flow patterns of the organic and aqueous solutions was found at the joints of the banks in the mixer-settler. We may conclude that this apparatus is useful for examining the extraction characteristics of TRU with various organic solvents on an engineering scale. (author)

  6. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...

  7. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  8. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
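
    Because both LSD records above describe the same linear-Gaussian model with a GP prior, the posterior profile has a closed form. The sketch below builds a toy spectrum from a few identical lines, places a squared-exponential GP prior on the common profile, and recovers the posterior mean and per-bin uncertainty. The line pattern, kernel amplitude, and length scale are assumptions of this sketch, not values from the papers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy LSD setup: the spectrum is a line pattern (matrix M) applied to a
# common profile Z, plus noise. Line positions and weights are invented.
n_pix, n_v = 400, 41
v = np.linspace(-20, 20, n_v)
M = np.zeros((n_pix, n_v))
for center, weight in [(60, 0.8), (150, 0.5), (260, 1.0), (330, 0.6)]:
    M[center:center + n_v] += weight * np.eye(n_v)

z_true = -0.02 * np.exp(-0.5 * (v / 5.0) ** 2)   # hidden mean profile
sigma = 0.004
spec = M @ z_true + rng.normal(scale=sigma, size=n_pix)

# Squared-exponential GP prior on the profile (assumed hyperparameters).
K = 0.02**2 * np.exp(-0.5 * (v[:, None] - v[None, :])**2 / 3.0**2)
A = M.T @ M / sigma**2 + np.linalg.inv(K + 1e-10 * np.eye(n_v))
z_mean = np.linalg.solve(A, M.T @ spec / sigma**2)   # posterior mean
z_sd = np.sqrt(np.diag(np.linalg.inv(A)))            # per-bin uncertainty
print("max |z_mean - z_true|:", np.max(np.abs(z_mean - z_true)).round(4))
print("typical posterior sd:", z_sd.mean().round(4))
```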

  9. Carbon partitioning in photosynthesis.

    Science.gov (United States)

    Melis, Anastasios

    2013-06-01

    The work seeks to raise awareness of a fundamental problem that impacts the renewable generation of fuels and chemicals via (photo)synthetic biology. At issue is regulation of the endogenous cellular carbon partitioning between different biosynthetic pathways, over which the living cell exerts stringent control. The regulation of carbon partitioning in photosynthesis is not understood. In plants, microalgae and cyanobacteria, methods need to be devised to alter photosynthetic carbon partitioning between the sugar, terpenoid, and fatty acid biosynthetic pathways, to lower the prevalence of sugar biosynthesis and correspondingly upregulate terpenoid and fatty acid hydrocarbon production in the cell. Insight from unusual but naturally occurring carbon-partitioning processes can help in the design of blueprints for improved photosynthetic fuels and chemicals production.

  10. Application of a Bayesian method to data-poor stock assessment by using Indian Ocean albacore (Thunnus alalunga) stock assessment as an example

    Institute of Scientific and Technical Information of China (English)

    GUAN Wenjiang; TANG Lin; ZHU Jiangfeng; TIAN Siquan; XU Liuxiong

    2016-01-01

    It is widely recognized that assessments of the status of data-poor fish stocks are challenging and that Bayesian analysis is one of the methods that can improve the reliability of stock assessments in data-poor situations by borrowing strength from prior information deduced from species with good-quality data or from other known information. Because considerable uncertainty remains in the stock assessment of albacore tuna (Thunnus alalunga) in the Indian Ocean due to the limited and low-quality data, we investigate the advantages of a Bayesian method in data-poor stock assessment by using the Indian Ocean albacore stock assessment as an example. Eight Bayesian biomass dynamics models with different prior assumptions and catch data series were developed to assess the stock. The results show that (1) the rationality of the choice of catch data series and of the parameter assumptions could be enhanced by analyzing the posterior distributions of the parameters, and (2) the reliability of the stock assessment could be improved by using demographic methods to construct a prior for the intrinsic rate of increase (r). Compared with traditional statistical methods, the Bayesian framework can make use of more information to improve the rationality of parameter estimation and the reliability of the stock assessment, by incorporating available knowledge into informative priors and analyzing the resulting posterior distributions. We therefore suggest the Bayesian method as an alternative for stock assessment of data-poor species such as Indian Ocean albacore.
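
    A minimal sketch of the kind of Bayesian biomass dynamics model discussed here: a Schaefer surplus-production model evaluated on a coarse (r, K) grid, with a lognormal prior on r standing in for the demographically derived informative prior. The catch series, abundance index, and all constants are invented for illustration and are unrelated to the actual Indian Ocean data.

```python
import numpy as np

# Illustrative catch series (kt) and relative abundance index; invented.
catch = np.array([10, 12, 15, 18, 20, 22, 21, 19, 18, 17], float)
index = np.array([1.00, .96, .91, .84, .76, .70, .66, .64, .63, .62])

def biomass(r, k):
    """Schaefer dynamics: B[t+1] = B[t] + r*B[t]*(1 - B[t]/k) - C[t]."""
    b = np.empty(len(catch) + 1)
    b[0] = k
    for t in range(len(catch)):
        b[t + 1] = max(b[t] + r * b[t] * (1 - b[t] / k) - catch[t], 1e-6)
    return b[:-1]

# Coarse grid posterior; the lognormal prior on r plays the role of the
# demographically derived informative prior discussed in the record.
rs = np.linspace(0.05, 0.8, 80)
ks = np.linspace(100, 600, 80)
logpost = np.empty((80, 80))
for i, r in enumerate(rs):
    for j, k in enumerate(ks):
        b = biomass(r, k)
        q = np.exp(np.mean(np.log(index) - np.log(b)))  # catchability MLE
        resid = np.log(index) - np.log(q * b)
        loglik = -0.5 * np.sum(resid**2) / 0.05**2      # lognormal errors
        logprior = -np.log(r) - 0.5 * ((np.log(r) - np.log(0.3)) / 0.25)**2
        logpost[i, j] = loglik + logprior

post = np.exp(logpost - logpost.max())
post /= post.sum()
print("posterior mean r:", (post.sum(axis=1) * rs).sum().round(3))
print("posterior mean K:", (post.sum(axis=0) * ks).sum().round(1))
```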

  11. Bayesian Adaptive Exploration

    CERN Document Server

    Loredo, T J

    2004-01-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation--Inference--Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data--measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object--show the approach can significantly improve observational eff...

  12. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  13. "K"-Balance Partitioning: An Exact Method with Applications to Generalized Structural Balance and Other Psychological Contexts

    Science.gov (United States)

    Brusco, Michael; Steinley, Douglas

    2010-01-01

    Structural balance theory (SBT) has maintained a venerable status in the psychological literature for more than 5 decades. One important problem pertaining to SBT is the approximation of structural or generalized balance via the partitioning of the vertices of a signed graph into "K" clusters. This "K"-balance partitioning problem also has more…
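
    The objective behind K-balance partitioning can be stated in a few lines: count the violations of generalized balance, that is, positive ties across clusters plus negative ties within clusters, and minimize over partitions. The brute-force sketch below illustrates the objective on a toy signed graph; the paper's exact method uses branch-and-bound rather than enumeration, and all data here are invented.

```python
import itertools

def imbalance(edges, clusters):
    """Violations of generalized structural balance for a given partition:
    positive edges between clusters plus negative edges within clusters."""
    return sum((sign > 0) != (clusters[i] == clusters[j])
               for i, j, sign in edges)

def best_k_partition(n, edges, k):
    """Brute-force K-balance partitioning (fine only for tiny graphs)."""
    best = None
    for clusters in itertools.product(range(k), repeat=n):
        cost = imbalance(edges, clusters)
        if best is None or cost < best[0]:
            best = (cost, clusters)
    return best

# Signed graph on 5 vertices: +1 = friendly tie, -1 = hostile tie.
edges = [(0, 1, +1), (1, 2, +1), (0, 2, +1),
         (3, 4, +1), (0, 3, -1), (2, 4, -1), (1, 4, -1)]
print(best_k_partition(5, edges, k=2))   # -> (0, (0, 0, 0, 1, 1))
```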

  14. The Partition of Unity Finite Element Method for the simulation of waves in air and poroelastic media.

    Science.gov (United States)

    Chazot, Jean-Daniel; Perrey-Debain, Emmanuel; Nennig, Benoit

    2014-02-01

    Recently Chazot et al. [J. Sound Vib. 332, 1918-1929 (2013)] applied the Partition of Unity Finite Element Method for the analysis of interior sound fields with absorbing materials. The method was shown to allow a substantial reduction of the number of degrees of freedom compared to the standard Finite Element Method. The work is however restricted to a certain class of absorbing materials that react like an equivalent fluid. This paper presents an extension of the method to the numerical simulation of Biot's waves in poroelastic materials. The technique relies mainly on expanding the elastic displacement as well as the fluid phase pressure using sets of plane waves which are solutions to the governing partial differential equations. To show the interest of the method for tackling problems of practical interests, poroelastic-acoustic coupling conditions as well as fixed or sliding edge conditions are presented and numerically tested. It is shown that the technique is a good candidate for solving noise control problems at medium and high frequency.

  15. A novel method for measuring the diffusion, partition and convective mass transfer coefficients of formaldehyde and VOC in building materials.

    Directory of Open Access Journals (Sweden)

    Jianyin Xiong

    Full Text Available The diffusion coefficient (Dm) and material/air partition coefficient (K) are two key parameters characterizing the sorption behavior of formaldehyde and volatile organic compounds (VOC) in building materials. By virtue of the sorption process in an airtight chamber, this paper proposes a novel method to measure the two key parameters, as well as the convective mass transfer coefficient (hm). Compared to traditional methods, it has the following merits: (1) K, Dm and hm can be obtained simultaneously, so the method is convenient to use; (2) it is time-saving, since just one sorption process in an airtight chamber is required; (3) the determination of hm is based on the formaldehyde and VOC concentration data in the test chamber rather than the generally used empirical correlations obtained from the heat and mass transfer analogy, and is thus more accurate, which can be regarded as a significant improvement. The present method is applied to measure the three parameters by treating experimental data in the literature, and good results are obtained, which validates the effectiveness of the method. Our new method also provides a potential pathway for measuring hm of semi-volatile organic compounds (SVOC) by using that of VOC.

  16. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental...

  17. A pseudo-statistical approach to treat choice uncertainty : the example of partitioning allocation methods

    NARCIS (Netherlands)

    Mendoza, Beltran M.A.; Heijungs, R.; Guinée, J.B.; Tukker, A.

    2016-01-01

    Purpose Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA) such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagates this source of uncertainty for all relevant processes s

  18. Sentiment analysis. An example of application and evaluation of RID dictionary and Bayesian classification methods in qualitative data analysis approach

    Directory of Open Access Journals (Sweden)

    Krzysztof Tomanek

    2014-05-01

    Full Text Available The purpose of this article is to present basic methods for classifying text data. These methods draw on achievements in areas such as natural language processing and the analysis of unstructured data. I introduce and compare two analytical techniques applied to text data. The first analysis makes use of a thematic vocabulary tool (sentiment analysis). The second technique uses the idea of Bayesian classification and applies the so-called naive Bayes algorithm. The comparison grades the efficiency of these two analytical techniques. I emphasize solutions that can be used to build a dictionary accurate for the task of text classification, and then compare the effectiveness of supervised classification with automated unsupervised analysis. The results reinforce the conclusion that a dictionary which has received a good evaluation as a classification tool should be subjected to review and modification procedures before being applied to new empirical material. In the proposed approach, the procedures used to adapt the analytical dictionary become the basic step in the methodology of textual data analysis.
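
    For readers unfamiliar with the second technique, here is a self-contained sketch of multinomial naive Bayes with add-one (Laplace) smoothing on a tiny invented corpus; a real replication would use the RID dictionary categories and a much larger labelled sample.

```python
from collections import Counter
import math

# Tiny illustrative training corpus; labels and words are invented.
train = [("great wonderful hotel staff", "pos"),
         ("awful dirty room terrible", "neg"),
         ("lovely view great service", "pos"),
         ("terrible noise awful smell", "neg")]

counts = {"pos": Counter(), "neg": Counter()}
doc_counts = Counter()
for text, label in train:
    counts[label].update(text.split())
    doc_counts[label] += 1

vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Multinomial naive Bayes with add-one smoothing."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        score = math.log(doc_counts[label] / len(train))   # class prior
        for w in text.split():
            score += math.log((c[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("great room lovely staff"))   # -> pos
```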

  19. STUDY ON AUDIO INFORMATION HIDING METHOD BASED ON MODIFIED PHASE PARTITION

    Institute of Scientific and Technical Information of China (English)

    Tong Ming; Hao Chongyang; Liu Xiaojun; Chen Yanpu

    2005-01-01

    The hiding efficiency of traditional audio information hiding methods is low because perceptual similarity cannot be guaranteed. A new audio information hiding method is proposed in this letter which exploits the insensitivity of human hearing to audio phase and hides information by modifying the local phase within the limits of auditory perception. The algorithm introduces "set 1" and "set 0" operations for each phase vector, so that every modified phase lies on the boundary of a phase partition: a phase on a "1" boundary results from a set 1 operation, and a phase on a "0" boundary from a set 0 operation. The results show that, compared with the legacy method, the proposed method has better auditory similarity, larger information embedding capacity and a lower bit error rate. As a blind detection method, it suits application scenarios without channel interference.

  20. Symplectic partitioned Runge-Kutta method based on the eighth-order nearly analytic discrete operator and its wavefield simulations

    Institute of Scientific and Technical Information of China (English)

    Zhang Chao-Yuan; Ma Xiao; Yang Lei; Song Guo-Jie

    2014-01-01

    We propose a symplectic partitioned Runge-Kutta (SPRK) method with eighth-order spatial accuracy based on the extended Hamiltonian system of the acoustic wave equation. Known as the eighth-order NSPRK method, this technique uses an eighth-order accurate nearly analytic discrete (NAD) operator to discretize high-order spatial differential operators and employs a second-order SPRK method to discretize temporal derivatives. The stability criteria and numerical dispersion relations of the eighth-order NSPRK method are given by a semi-analytical method and are tested by numerical experiments. We also show the differences of the numerical dispersions between the eighth-order NSPRK method and conventional numerical methods such as the fourth-order NSPRK method, the eighth-order Lax-Wendroff correction (LWC) method and the eighth-order staggered-grid (SG) method. The result shows that the ability of the eighth-order NSPRK method to suppress the numerical dispersion is obviously superior to that of the conventional numerical methods. In the same computational environment, to eliminate visible numerical dispersions, the eighth-order NSPRK is approximately 2.5 times faster than the fourth-order NSPRK and 3.4 times faster than the fourth-order SPRK, and the memory requirement is only approximately 47.17% of the fourth-order NSPRK method and 49.41% of the fourth-order SPRK method, which indicates the highest computational efficiency. Modeling examples for the two-layer models such as the heterogeneous and Marmousi models show that the wavefields generated by the eighth-order NSPRK method are very clear with no visible numerical dispersion. These numerical experiments illustrate that the eighth-order NSPRK method can effectively suppress numerical dispersion when coarse grids are adopted. Therefore, this method can greatly decrease computer memory requirement and accelerate the forward modeling productivity. In general, the eighth-order NSPRK method has tremendous potential value
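
    The temporal half of such a scheme, a second-order symplectic partitioned Runge-Kutta step, is essentially the classic Störmer-Verlet (leapfrog) update. The sketch below applies it to a harmonic oscillator as a one-degree-of-freedom stand-in for the extended Hamiltonian of the acoustic wave equation, and checks that the energy drift stays bounded; it is an illustration of the integrator family, not the authors' NAD-based solver.

```python
import numpy as np

def leapfrog(q, p, grad_V, dt, n_steps):
    """Stormer-Verlet: the classic second-order symplectic partitioned
    Runge-Kutta scheme, as kick-drift-kick updates of (q, p)."""
    for _ in range(n_steps):
        p = p - 0.5 * dt * grad_V(q)   # half kick
        q = q + dt * p                 # full drift (unit mass)
        p = p - 0.5 * dt * grad_V(q)   # half kick
    return q, p

# Harmonic oscillator H = p^2/2 + q^2/2 as a one-degree-of-freedom
# stand-in for the wave equation's extended Hamiltonian.
grad_V = lambda q: q
q, p = leapfrog(1.0, 0.0, grad_V, dt=0.1, n_steps=10000)
print("energy drift:", abs(0.5 * (p**2 + q**2) - 0.5))   # stays tiny
```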

  1. Phycas: software for Bayesian phylogenetic analysis.

    Science.gov (United States)

    Lewis, Paul O; Holder, Mark T; Swofford, David L

    2015-05-01

    Phycas is open source, freely available Bayesian phylogenetics software written primarily in C++ but with a Python interface. Phycas specializes in Bayesian model selection for nucleotide sequence data, particularly the estimation of marginal likelihoods, central to computing Bayes Factors. Marginal likelihoods can be estimated using newer methods (Thermodynamic Integration and Generalized Steppingstone) that are more accurate than the widely used Harmonic Mean estimator. In addition, Phycas supports two posterior predictive approaches to model selection: Gelfand-Ghosh and Conditional Predictive Ordinates. The General Time Reversible family of substitution models, as well as a codon model, are available, and data can be partitioned with all parameters unlinked except tree topology and edge lengths. Phycas provides for analyses in which the prior on tree topologies allows polytomous trees as well as fully resolved trees, and provides for several choices for edge length priors, including a hierarchical model as well as the recently described compound Dirichlet prior, which helps avoid overly informative induced priors on tree length. PMID:25577605

  2. How to combine correlated data sets -- A Bayesian hyperparameter matrix method

    CERN Document Server

    Ma, Yin-Zhe

    2013-01-01

    We construct a statistical method for performing the joint analyses of multiple correlated astronomical data sets, in which the weights of data sets are determined by their own statistical properties. This method is a generalization of the hyperparameter method constructed by Lahav et al. (2000) and Hobson et al. (2002), which was designed to combine independent data sets. The hyperparameter matrix method we present here includes the relevant weights of multiple data sets and mutual correlations, and when the hyperparameters are marginalized over, the parameters of interest are recovered. We define a new "element-wise" product, which greatly simplifies the likelihood function with hyperparameter matrix. We rigorously prove the simplified formula of the joint likelihood and show that it recovers the original hyperparameter method in the limit of no covariance between data sets. We then illustrate the method by applying a classic model of fitting a straight line to two sets of data. We show that the hyperparameter matrix ...

  3. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. ... In this paper we explain the successful bias field correction properties of N3 by showing that it implicitly uses the same generative models and computational strategies as expectation maximization (EM) based bias field correction methods. We demonstrate experimentally that purely EM-based methods are capable ... of producing bias field correction results comparable to those of N3 in less computation time. ...

  4. A Bayesian nonrigid registration method to enhance intraoperative target definition in image-guided prostate procedures through uncertainty characterization

    Energy Technology Data Exchange (ETDEWEB)

    Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy; Tuncali, Kemal; Fennessy, Fiona M.; Wells, William M. III; Tempany, Clare M.; Cormack, Robert A. [Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts 02115 (United States); Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02115 (United States); Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2012-11-15

    Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. Results: The authors

  5. A Bayesian nonrigid registration method to enhance intraoperative target definition in image-guided prostate procedures through uncertainty characterization

    International Nuclear Information System (INIS)

    Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. Results: The authors observed

  6. Extraction of Active Regions and Coronal Holes from EUV Images Using the Unsupervised Segmentation Method in the Bayesian Framework

    CERN Document Server

    Arish, Saeid; Safari, Hossein; Amiri, Ali

    2016-01-01

    The solar corona is the origin of very dynamic events that are mostly produced in active regions (AR) and coronal holes (CH). The exact location of these large-scale features can be determined by applying image-processing approaches to extreme-ultraviolet (EUV) data. We here investigate the problem of segmentation of solar EUV images into ARs, CHs, and quiet-Sun (QS) images in a firm Bayesian way. On the basis of Bayes' rule, we need to obtain both prior and likelihood models. To find the prior model of an image, we used a Potts model in non-local mode. To construct the likelihood model, we combined a mixture of a Markov-Gauss model and non-local means. After estimating labels and hyperparameters with the Gibbs estimator, cellular learning automata were employed to determine the label of each pixel. We applied the proposed method to a Solar Dynamics Observatory/Atmospheric Imaging Assembly (SDO/AIA) dataset recorded during 2011 and found that the mean value of the filling factor of ARs is 0.032 and 0.057 for...

  7. Xantusiid "night" lizards: a puzzling phylogenetic problem revisited using likelihood-based Bayesian methods on mtDNA sequences.

    Science.gov (United States)

    Vicario, Saverio; Caccone, Adalgisa; Gauthier, Jacques

    2003-02-01

    Contentious issues in Night Lizard (Xantusiidae) evolution are revisited using Maximum Likelihood-based Bayesian methods and compared with results from Neighbor-Joining and Maximum Parsimony analyses. Fragments of three mitochondrial genes, the 12S and 16S ribosomal genes, and the cytochrome b gene, are sampled across an ingroup composed of seven xantusiid species and a 12-species outgroup chosen to bracket ancestral states for six additional clades of scleroglossan lizards. Our phylogenetic analyses afford robust support for the following conclusions: Xantusiidae is part of Scincomorpha, rather than being allied with Gekkota; Lepidophyma is sister to Xantusia, rather than to Cricosaura; Xantusia riversiana is imbedded within, rather than being sister to, other Xantusia species; and rock-morph Xantusia are not closely related to one another. Convergence related to retarded rates of growth and development, or to physical constraints imposed by living in rock crevices, may be responsible for much of the character discordance underlying conflicts in xantusiid phylogeny. Fossil-calibrated Maximum Likelihood-based divergence time estimates suggest that although the xantusiid stem may have originated in the Mesozoic, the crown clade is exclusively Tertiary in age. Thus, the clade including extant Cricosaura does not appear to have been extant during the K-T boundary bolide impact, as has been suggested. Moreover, our divergence-time estimates indicate that the xantusiid island endemics, Cricosaura typica on Cuba and Xantusia riversiana on the California Channel Islands, arrived via dispersal rather than vicariance, as previously proposed.

  8. How to combine correlated data sets-A Bayesian hyperparameter matrix method

    Science.gov (United States)

    Ma, Y.-Z.; Berndsen, A.

    2014-07-01

    We construct a “hyperparameter matrix” statistical method for performing the joint analyses of multiple correlated astronomical data sets, in which the weights of data sets are determined by their own statistical properties. This method is a generalization of the hyperparameter method constructed by Lahav et al. (2000) and Hobson et al. (2002), which was designed to combine independent data sets. The advantage of our method is that it treats correlations between multiple data sets and gives appropriate weights to multiple data sets with mutual correlations. We define a new “element-wise” product, which greatly simplifies the likelihood function with hyperparameter matrix. We rigorously prove the simplified formula of the joint likelihood and show that it recovers the original hyperparameter method in the limit of no covariance between data sets. We then illustrate the method by applying it to a demonstrative toy model of fitting a straight line to two sets of data. We show that the hyperparameter matrix method can detect unaccounted systematic errors or underestimated errors in the data sets. Additionally, the ratio of Bayes' factors provides a distinct indicator of the necessity of including hyperparameters. Our example shows that the likelihood we construct for joint analyses of correlated data sets can be widely applied to many astrophysical systems.
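
    The straight-line example in the abstract can be reproduced in miniature. Under Gaussian errors, marginalizing the hyperparameter weights with a Jeffreys prior replaces the usual sum of chi-squares by a sum of N_k ln(chi2_k) terms (as in Hobson et al. 2002); the sketch below fits a line to two invented, uncorrelated data sets, one of which has underestimated error bars that the hyperparameters should downweight. It illustrates the scalar hyperparameter idea only, not the matrix generalization of this paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Two invented data sets measuring the same line y = 2x + 1; the second
# has underestimated error bars, which the hyperparameters downweight.
x1, x2 = np.linspace(0, 1, 20), np.linspace(0, 1, 15)
y1 = 2 * x1 + 1 + rng.normal(scale=0.1, size=20)
y2 = 2 * x2 + 1 + rng.normal(scale=0.5, size=15)   # true scatter 0.5 ...
s1, s2 = 0.1, 0.1                                  # ... but quoted as 0.1

def neglog(theta):
    """Hyperparameter likelihood for Gaussian data: marginalising the
    weights alpha_k with a Jeffreys prior gives, up to constants,
    -2 ln L = sum_k N_k ln(chi2_k) instead of sum_k chi2_k."""
    a, b = theta
    chi2_1 = np.sum((y1 - (a * x1 + b))**2 / s1**2)
    chi2_2 = np.sum((y2 - (a * x2 + b))**2 / s2**2)
    return len(x1) * np.log(chi2_1) + len(x2) * np.log(chi2_2)

fit = minimize(neglog, x0=[1.0, 0.0])
print("slope, intercept:", fit.x.round(3))
# Effective weights alpha_k = N_k / chi2_k expose the bad error bars.
```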

  9. Towards an SDP-based Approach to Spectral Methods: A Nearly-Linear-Time Algorithm for Graph Partitioning and Decomposition

    CERN Document Server

    Orecchia, Lorenzo

    2010-01-01

    In this paper, we consider the following graph partitioning problem: The input is an undirected graph $G=(V,E),$ a balance parameter $b \\in (0,1/2]$ and a target conductance value $\\gamma \\in (0,1).$ The output is a cut which, if non-empty, is of conductance at most $O(f),$ for some function $f(G, \\gamma),$ and which is either balanced or well correlated with all cuts of conductance at most $\\gamma.$ Spielman and Teng gave an $\\tilde{O}(|E|/\\gamma^{2})$-time algorithm for $f= \\sqrt{\\gamma \\log^{3}|V|}$ and used it to decompose graphs into a collection of near-expanders. We present a new spectral algorithm for this problem which runs in time $\\tilde{O}(|E|/\\gamma)$ for $f=\\sqrt{\\gamma}.$ Our result yields the first nearly-linear time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite-programming SDP approach. We first conside...
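
    For orientation, the basic spectral heuristic that this line of work refines is the sweep cut along the Fiedler vector: order vertices by the second Laplacian eigenvector and keep the prefix of smallest conductance. A dense NumPy sketch on a toy graph follows; the paper's algorithm is substantially more sophisticated and faster, and this sketch makes no balance or correlation guarantees.

```python
import numpy as np

def spectral_cut(adj):
    """Sweep cut along the Fiedler vector: a basic spectral heuristic
    for finding a low-conductance partition of an undirected graph."""
    d = adj.sum(axis=1)
    lap = np.diag(d) - adj
    w, v = np.linalg.eigh(lap)        # eigenvalues in ascending order
    order = np.argsort(v[:, 1])       # sort vertices by Fiedler vector
    vol_total = d.sum()
    best = (np.inf, None)
    for i in range(1, len(order)):
        mask = np.zeros(len(adj), bool)
        mask[order[:i]] = True
        cut = adj[mask][:, ~mask].sum()
        vol = min(d[mask].sum(), vol_total - d[mask].sum())
        cond = cut / vol if vol > 0 else np.inf
        if cond < best[0]:
            best = (cond, order[:i])
    return best

# Two 4-cliques joined by a single edge: the sweep finds the bridge cut.
A = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            A[i, j] = 1 - (i == j)
A[3, 4] = A[4, 3] = 1
cond, side = spectral_cut(A)
print("conductance:", round(cond, 3), "side:", sorted(side))  # 1/13
```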

  10. On using enriched cover function in the Partition-of-unity method for singular boundary-value problems

    Science.gov (United States)

    Liu, X.; Lee, C. K.; Fan, S. C.

    Amongst the various `meshless' approaches, the Partition-of-unity concept married with the traditional finite-element method, namely PUFEM, has emerged as a competitive technique for solving boundary-value problems. It inherits most of the advantages of both techniques, except that the beauty of being `meshless' vanishes. This paper presents an alternative approach to solve singular boundary-value problems. It follows the basic PUFEM procedures. The salient feature is to enhance the quality of the influence functions, either over one single nodal cover or over multi-nodal covers. In the vicinity of the singularity, the available asymptotic analytical solution is employed to enrich the influence function. The beauty of the present approach is that it facilitates easy replacement of the influence functions. In other words, it favors an `influence-function refinement' procedure in a bid to search for more accurate solutions, analogous to the `p-version refinement' in traditional finite-element procedures. The present approach can yield very accurate solutions without adopting refined meshes. As a result, the quantities around the singularity can be evaluated directly once the nodal values are solved, and no additional post-processing is needed. Firstly, the formulation of the present PUFEM approach is described. Subsequently, illustrative examples show the application to three classical singular benchmark problems having various orders of singularity. Results obtained through mesh refinements, single-nodal-cover refinements and multi-nodal-cover refinements are compared.

  11. Parallel implementation of electronic structure eigensolver using a partitioned folded spectrum method

    CERN Document Server

    Briggs, E L; Bernholc, J

    2015-01-01

    A parallel implementation of an eigensolver designed for electronic structure calculations is presented. The method is applicable to computational tasks that solve a sequence of eigenvalue problems where the solution for a particular iteration is similar but not identical to the solution from the previous iteration. Such problems occur frequently when performing electronic structure calculations in which the eigenvectors are solutions to the Kohn-Sham equations. The eigenvectors are represented in some type of basis but the problem sizes are normally too large for direct diagonalization in that basis. Instead a subspace diagonalization procedure is employed in which matrix elements of the Hamiltonian operator are generated and the eigenvalues and eigenvectors of the resulting reduced matrix are obtained using a standard eigensolver from a package such as LAPACK or SCALAPACK. While this method works well and is widely used, the standard eigensolvers scale poorly on massively parallel computer systems for the m...

  12. baySeq: Empirical Bayesian methods for identifying differential expression in sequence count data

    Directory of Open Access Journals (Sweden)

    Hardcastle Thomas J

    2010-08-01

    Full Text Available Abstract Background High throughput sequencing has become an important technology for studying expression levels in many types of genomic, and particularly transcriptomic, data. One key way of analysing such data is to look for elements of the data which display particular patterns of differential expression in order to take these forward for further analysis and validation. Results We propose a framework for defining patterns of differential expression and develop a novel algorithm, baySeq, which uses an empirical Bayes approach to detect these patterns of differential expression within a set of sequencing samples. The method assumes a negative binomial distribution for the data and derives an empirically determined prior distribution from the entire dataset. We examine the performance of the method on real and simulated data. Conclusions Our method performs at least as well, and often better, than existing methods for analyses of pairwise differential expression in both real and simulated data. When we compare methods for the analysis of data from experimental designs involving multiple sample groups, our method again shows substantial gains in performance. We believe that this approach thus represents an important step forward for the analysis of count data from sequencing experiments.
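
    To make the flavor of the underlying model comparison concrete, here is a heavily simplified scipy sketch, not the baySeq implementation (which estimates priors empirically from the full data set and reports posterior probabilities): counts are negative binomial with an assumed known dispersion, and a shared-mean model is compared against a per-group-mean model for a single gene.

```python
import numpy as np
from scipy.stats import nbinom

counts_a = np.array([12, 15, 9, 14])    # gene counts, condition A
counts_b = np.array([45, 38, 52, 41])   # gene counts, condition B
size = 10.0                             # NB dispersion, assumed known here

def nb_loglik(counts, mean, size):
    p = size / (size + mean)            # scipy's (n, p) parameterization
    return nbinom.logpmf(counts, size, p).sum()

pooled = np.r_[counts_a, counts_b]
same = nb_loglik(pooled, pooled.mean(), size)                  # no DE
diff = (nb_loglik(counts_a, counts_a.mean(), size) +
        nb_loglik(counts_b, counts_b.mean(), size))            # DE
print(f"log-likelihood ratio (DE vs same mean): {diff - same:.1f}")
```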

  13. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  14. Estimation of Land Surface Temperature through Blending MODIS and AMSR-E Data with the Bayesian Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Xiaokang Kou

    2016-01-01

    Full Text Available Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs inverted from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of merged LSTs reached 100%. Because the depths of LST and soil temperature measurements are different, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0-5 cm soil temperatures. The results showed that the accuracy of merged LSTs increased with the increasing quantity of utilized data, and as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further studies.

  15. The overall parameter estimation of academic performance based on Bayesian method

    Institute of Scientific and Technical Information of China (English)

    祝翠; 刘焕彬

    2014-01-01

    The Bayesian method makes full use of prior information and integrates it with sample information for statistical inference. This paper applies the method to point estimation and interval estimation in order to infer academic performance, and thereby to evaluate learning outcomes and teaching quality. Comparing the Bayesian method with the classical method leads to the conclusion that, for inferring the overall average grade, the Bayesian method is simple, practical, and more persuasive.

  16. Optimal and scalable methods to approximate the solutions of large-scale Bayesian problems: Theory and application to atmospheric inversions and data assimilation

    CERN Document Server

    Bousserez, Nicolas

    2016-01-01

    This paper provides a detailed theoretical analysis of methods to approximate the solutions of high-dimensional (>10^6) linear Bayesian problems. An optimal low-rank projection that maximizes the information content of the Bayesian inversion is proposed and efficiently constructed using a scalable randomized SVD algorithm. Useful optimality results are established for the associated posterior error covariance matrix and posterior mean approximations, which are further investigated in a numerical experiment consisting of a large-scale atmospheric tracer transport source-inversion problem. This method proves to be a robust and efficient approach to dimension reduction, as well as a natural framework to analyze the information content of the inversion. Possible extensions of this approach to the non-linear framework in the context of operational numerical weather forecast data assimilation systems based on the incremental 4D-Var technique are also discussed, and a detailed implementation of a new Randomized Incr...
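
    The scalable randomized SVD at the center of the construction is short enough to sketch. The numpy toy below follows the Halko-Martinsson-Tropp recipe; a dense test matrix stands in for the prior-preconditioned operator of the paper, which in practice is only accessible through matrix-vector products.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2):
    """Approximate truncated SVD by randomly sampling the range of A."""
    m, n = A.shape
    omega = np.random.default_rng(0).normal(size=(n, rank + n_oversample))
    Y = A @ omega                        # sketch of the range of A
    for _ in range(n_iter):              # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)               # orthonormal range basis
    B = Q.T @ A                          # small (rank+p) x n problem
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :rank], s[:rank], Vt[:rank]

A = np.random.default_rng(1).normal(size=(500, 300)) / np.arange(1, 301)
U, s, Vt = randomized_svd(A, rank=20)
print("relative error:",
      np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))
```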

  17. A Bayesian multilocus association method: allowing for higher-order interaction in association studies

    DEFF Research Database (Denmark)

    Albrechtsen, Anders; Castella, Sofie; Andersen, Gitte;

    2007-01-01

    conditions. We present a new powerful statistical model for analyzing and interpreting genomic data that influence multifactorial phenotypic traits with a complex and likely polygenic inheritance. The new method is based on Markov chain Monte Carlo (MCMC) and allows for identification of sets of SNPs...

  18. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    OpenAIRE

    Owens Chantelle J; Owusu-Edusei Kwame

    2009-01-01

    Abstract Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, g...

  19. Development of a Bayesian method for the analysis of inertial confinement fusion experiments on the NIF

    CERN Document Server

    Gaffney, Jim A; Sonnad, Vijay; Libby, Stephen B

    2013-01-01

    The complex nature of inertial confinement fusion (ICF) experiments results in a very large number of experimental parameters that are only known with limited reliability. These parameters, combined with the myriad physical models that govern target evolution, make the reliable extraction of physics from experimental campaigns very difficult. We develop an inference method that allows all important experimental parameters, and previous knowledge, to be taken into account when investigating underlying microphysics models. The result is framed as a modified $\\chi^{2}$ analysis which is easy to implement in existing analyses, and quite portable. We present a first application to a recent convergent ablator experiment performed at the NIF, and investigate the effect of variations in all physical dimensions of the target (very difficult to do using other methods). We show that for well characterised targets in which dimensions vary at the 0.5% level there is little effect, but 3% variations change the results of i...

  20. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Andrews, G;

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context...... an overall estimate of the causal relationship between the phenotype and the outcome, and an assessment of its heterogeneity across studies. As an example, we estimate the causal relationship of blood concentrations of C-reactive protein on fibrinogen levels using data from 11 studies. These methods provide...... a flexible framework for efficient estimation of causal relationships derived from multiple studies. Issues discussed include weak instrument bias, analysis of binary outcome data such as disease risk, missing genetic data, and the use of haplotypes....

  1. Nonlinear tracking in a diffusion process with a Bayesian filter and the finite element method

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Thygesen, Uffe Høgsbro; Madsen, Henrik

    2011-01-01

    A new approach to nonlinear state estimation and object tracking from indirect observations of a continuous time process is examined. Stochastic differential equations (SDEs) are employed to model the dynamics of the unobservable state. Tracking problems in the plane subject to boundaries...... on the state-space do not in general provide analytical solutions. A widely used numerical approach is the sequential Monte Carlo (SMC) method which relies on stochastic simulations to approximate state densities. For offline analysis, however, accurate smoothed state density and parameter estimation can......-mass filtering methods, but is computationally more advanced and generally applicable. The performance of the FE estimators in relation to SMC and to the resolution of the spatial discretization is examined empirically through simulation. A real-data case study involving fish tracking is also analysed....

  2. Bayesian Adaptive Exploration

    Science.gov (United States)

    Loredo, Thomas J.

    2004-04-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data (measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object) show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
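
    In the Gaussian-noise case singled out above, maximum entropy sampling reduces to observing wherever the current predictive variance is largest. Here is a minimal numpy sketch for a Bayesian straight-line model (illustrative prior and noise level, not the paper's orbit model):

```python
import numpy as np

noise, prior_var = 0.3, 10.0                  # observation noise sd, prior var

def features(x):
    return np.vstack([np.ones_like(x), x]).T  # rows [1, x] for y = b + a*x

X_obs = np.empty((0, 2))
candidates = np.linspace(0, 1, 101)

for step in range(5):
    # Posterior covariance of the weights given the designs chosen so far;
    # with signal-independent Gaussian noise it does not depend on the y's.
    Sigma = np.linalg.inv(np.eye(2) / prior_var + X_obs.T @ X_obs / noise**2)
    Phi = features(candidates)
    pred_var = np.einsum("ij,jk,ik->i", Phi, Sigma, Phi)
    x_next = candidates[np.argmax(pred_var)]  # most informative next design
    print(f"step {step}: observe at x = {x_next:.2f}")
    X_obs = np.vstack([X_obs, features(np.array([x_next]))])
```

    Because the noise statistics are independent of the signal, the design sequence depends only on where observations were made, not on their values, which is what makes the greedy maximum entropy strategy computationally cheap.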

  3. A Clustering Method of Highly Dimensional Patent Data Using Bayesian Approach

    OpenAIRE

    Sunghae Jun

    2012-01-01

    Patent data contain diverse technological information about any technology field, so many companies manage patent data to shape their R&D policy. Patent analysis is an approach to patent management, and it is also an important tool for technology forecasting. Patent clustering is one of the tasks in patent analysis. In this paper, we propose an efficient clustering method for patent documents. Generally, patent data consist of text documents. The patent documents have...

  4. Diagnostic analysis of turbulent boundary layer data by a trivariate Lagrangian partitioning method

    Energy Technology Data Exchange (ETDEWEB)

    Welsh, P.T. [Florida State Univ., Tallahassee, FL (United States)

    1994-12-31

    The rapid scientific and technological advances in meteorological theory and modeling have predominantly concerned the large (or synoptic) scale flow characterized by the extratropical cyclone. Turbulent boundary layer flows, in contrast, have been slower to develop both theoretically and in accuracy, for several reasons. Many problems remain in boundary layer models, among them limits to available computational power, the inability to handle countergradient fluxes, poor growth matching to real boundary layers, and inaccuracy in calculating the diffusion of scalar concentrations. Such transport errors exist within the boundary layer as well as into the free atmosphere above. This research uses a new method which can provide insight into these problems and ultimately improve boundary layer models. There are several potential applications of the insights provided by this approach, among them estimation of cloud contamination of satellite remotely sensed surface parameters, improved flux and vertical transport calculations, and better understanding of the diurnal boundary layer growth process and its hysteresis cycle.

  5. On free fermions and plane partitions

    OpenAIRE

    Foda, O.; Wheeler, M.; Zuparic, M.

    2008-01-01

    We use free fermion methods to re-derive a result of Okounkov and Reshetikhin relating charged fermions to random plane partitions, and to extend it to relate neutral fermions to strict plane partitions.

  6. Efficient Bayesian Phase Estimation

    Science.gov (United States)

    Wiebe, Nathan; Granade, Chris

    2016-07-01

    We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.
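
    The rejection filtering loop is compact enough to sketch in full. The numpy toy below uses the standard iterative phase-estimation likelihood P(0 | phi; M, theta) = (1 + cos(M(phi - theta)))/2 and illustrative experiment settings; the published algorithm adds refinements (batching, failure recovery) that are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = 1.7
mu, sigma = 0.0, np.pi                    # Gaussian prior on the phase

for step in range(30):
    M = max(1, int(1.0 / sigma))          # longer evolution as we sharpen
    theta = mu
    p0 = 0.5 * (1 + np.cos(M * (true_phase - theta)))
    outcome1 = rng.random() > p0          # simulate one measurement
    samples = rng.normal(mu, sigma, 2000) # draw from the current prior
    like = 0.5 * (1 + np.cos(M * (samples - theta)))
    if outcome1:
        like = 1.0 - like
    keep = samples[rng.random(2000) < like]   # rejection step
    if keep.size > 10:                        # refit the Gaussian model
        mu, sigma = keep.mean(), keep.std() + 1e-6
print(f"estimate {mu:.3f} vs true {true_phase}")
```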

  7. Reliability of environmental fate modeling results for POPs based on various methods of determining the air/water partition coefficient (log KAW)

    Science.gov (United States)

    Odziomek, K.; Gajewicz, A.; Haranczyk, M.; Puzyn, T.

    2013-07-01

    The air-water partition coefficient (KAW) is one of the key parameters determining the environmental behavior of Persistent Organic Pollutants (POPs). Experimentally measured values of KAW are still unavailable for the majority of POPs, so alternative methods of supplying data, including Quantitative Structure-Property Relationship (QSPR) modeling, are often used. In this paper, the applicability of two QSPR methods for predicting KAW was compared in the context of further application of the predicted data in environmental transport and fate studies. In the first (indirect) method, KAW is calculated from previously predicted values of the octanol-water (KOW) and octanol-air (KOA) partition coefficients. In the second (direct) approach, KAW is calculated from the estimated value of Henry's law constant (KH) and then adjusted to ensure its consistency with the other two partition coefficients (KOW and KOA). Although the indirect method theoretically carries twice as much error as the direct method, when the predicted values of KAW are utilized as input to the environmental fate model The OECD POV and LRTP Screening Tool, ver. 2.2, the indirect method yields much higher and therefore much more restrictive values of overall persistence (POV) and transfer efficiency (TE) than the direct method. The high uncertainties related to the application of the direct method result mainly from the necessary adjustment procedure.
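
    The indirect route rests on the thermodynamic triangle relating the three coefficients, log KAW = log KOW - log KOA, which is why its error budget combines the errors of two separate predictions. A one-line illustration with invented values:

```python
# Thermodynamic triangle: log K_AW = log K_OW - log K_OA.
log_kow, log_koa = 6.5, 9.8          # hypothetical QSPR predictions for a POP
log_kaw = log_kow - log_koa          # indirect estimate, carries both errors
print(f"log K_AW (indirect) = {log_kaw:.1f}")
```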

  8. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...

  9. Partition density functional theory

    Science.gov (United States)

    Nafziger, Jonathan

    Partition density functional theory (PDFT) is a method for dividing a molecular electronic structure calculation into fragment calculations. The molecular density and energy corresponding to Kohn-Sham density-functional theory (KS-DFT) may be exactly recovered from these fragments. Each fragment acts as an isolated system except for the influence of a global one-body 'partition' potential which deforms the fragment densities. In this work, the developments of PDFT are put into the context of other fragment-based density functional methods. We developed three numerical implementations of PDFT: One within the NWChem computational chemistry package using basis sets, and the other two developed from scratch using real-space grids. It is shown that all three of these programs can exactly reproduce a KS-DFT calculation via fragment calculations. The first of our in-house codes handles non-interacting electrons in arbitrary one-dimensional potentials with any number of fragments. This code is used to explore how the exact partition potential changes for different partitionings of the same system and also to study features which determine which systems yield non-integer PDFT occupations and which systems are locked into integer PDFT occupations. The second in-house code, CADMium, performs real-space calculations of diatomic molecules. Features of the exact partition potential are studied for a variety of cases and an analytical formula determining singularities in the partition potential is derived. We introduce an approximation for the non-additive kinetic energy and show how this quantity can be computed exactly. Finally, a PDFT functional is developed to address the issues of static correlation and delocalization errors in approximations within DFT. The functional is applied to the dissociation of H2+ and H2.

  10. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have...... been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network and inference is performed...... by constructing a junction tree from this network. In this paper we propose a method for translating directly from object oriented Bayesian networks to junction trees, avoiding the intermediate translation. We pursue two main purposes: firstly, to maintain the original structure organized in an instance tree

  11. Generating Primes Using Partitions

    OpenAIRE

    Pittu, Ganesh Reddy

    2015-01-01

    This paper presents a new technique of generating large prime numbers using a smaller one by employing Goldbach partitions. Experiments are presented showing how this method produces candidate prime numbers that are subsequently tested using either Miller Rabin or AKS primality tests.
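
    One plausible reading of the construction (a sketch, not necessarily the paper's exact algorithm): fix a large even number E built from the known smaller prime, enumerate small primes p, and screen the Goldbach co-partner q = E - p with a fast probabilistic primality test, here sympy's isprime.

```python
from sympy import isprime, nextprime

def goldbach_prime_candidates(E, n_wanted=5):
    """Yield partitions E = p + q with both parts prime, E even."""
    assert E % 2 == 0
    p, found = 2, []
    while len(found) < n_wanted and p <= E // 2:
        q = E - p
        if isprime(q):            # probabilistic screen on the large part
            found.append((p, q))
        p = nextprime(p)
    return found

print(goldbach_prime_candidates(10**6 + 2))
```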

  12. A Bayesian method for identifying missing enzymes in predicted metabolic pathway databases

    Directory of Open Access Journals (Sweden)

    Karp Peter D

    2004-06-01

    Full Text Available Abstract Background The PathoLogic program constructs Pathway/Genome databases by using a genome's annotation to predict the set of metabolic pathways present in an organism. PathoLogic determines the set of reactions composing those pathways from the enzymes annotated in the organism's genome. Most annotation efforts fail to assign function to 40–60% of sequences. In addition, large numbers of sequences may have non-specific annotations (e.g., thiolase family protein). Pathway holes occur when a genome appears to lack the enzymes needed to catalyze reactions in a pathway. If a protein has not been assigned a specific function during the annotation process, any reaction catalyzed by that protein will appear as a missing enzyme or pathway hole in a Pathway/Genome database. Results We have developed a method that efficiently combines homology and pathway-based evidence to identify candidates for filling pathway holes in Pathway/Genome databases. Our program not only identifies potential candidate sequences for pathway holes, but combines data from multiple, heterogeneous sources to assess the likelihood that a candidate has the required function. Our algorithm emulates the manual sequence annotation process, considering not only evidence from homology searches, but also considering evidence from genomic context (i.e., is the gene part of an operon?) and functional context (e.g., are there functionally-related genes nearby in the genome?) to determine the posterior belief that a candidate has the required function. The method can be applied across an entire metabolic pathway network and is generally applicable to any pathway database. The program uses a set of sequences encoding the required activity in other genomes to identify candidate proteins in the genome of interest, and then evaluates each candidate by using a simple Bayes classifier to determine the probability that the candidate has the desired function. We achieved 71% precision at a
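
    The evidence combination described amounts to a naive Bayes update on the log-odds scale. A toy sketch with invented probabilities (the actual program derives its evidence models from annotated genomes):

```python
import numpy as np

prior = 0.05                                   # P(candidate has the function)
# (P(evidence | has function), P(evidence | does not)) -- invented numbers
evidence = {"strong_homology":  (0.80, 0.05),
            "in_operon":        (0.60, 0.20),
            "neighbor_related": (0.50, 0.10)}

observed = ["strong_homology", "neighbor_related"]
log_odds = np.log(prior / (1 - prior))
for e in observed:                             # independent-evidence update
    p_yes, p_no = evidence[e]
    log_odds += np.log(p_yes / p_no)
posterior = 1.0 / (1.0 + np.exp(-log_odds))
print(f"P(candidate fills the pathway hole) = {posterior:.2f}")
```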

  13. CONTROL BASED ON NUMERICAL METHODS AND RECURSIVE BAYESIAN ESTIMATION IN A CONTINUOUS ALCOHOLIC FERMENTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Olga L. Quintero

    Full Text Available Biotechnological processes represent a challenge in the control field due to their high nonlinearity. In particular, continuous alcoholic fermentation from Zymomonas mobilis (Z.m) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits oscillatory behavior in process variables, due to the influence of inhibition dynamics (the effect of ethanol concentration on biomass, substrate, and product concentrations). In this work a new solution for the control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator in Z.m and without the use of complex calculations.

  14. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA) methods

    Science.gov (United States)

    Owusu-Edusei, Kwame; Owens, Chantelle J

    2009-01-01

    Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, gender and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly (p < 0.05) higher smoothed rates (> 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%; from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was significantly (p < 0.05) higher than its contiguous neighbors. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time. PMID:19245686


  16. Bayesian demography 250 years after Bayes.

    Science.gov (United States)

    Bijak, Jakub; Bryant, John

    2016-01-01

    Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms. PMID:26902889


  18. Empirical Bayesian Method for the Estimation of Literacy Rate at Sub-district Level Case Study: Sumenep District of East Java Province

    Directory of Open Access Journals (Sweden)

    A.Tuti Rumiati

    2012-02-01

    Full Text Available This paper discusses a Bayesian method of Small Area Estimation (SAE) based on a Binomial response variable. SAE methods are being developed to estimate parameters in small areas where the sample is insufficient. The case study is literacy rate estimation at the sub-district level in Sumenep district, East Java Province. Literacy rate is measured by the proportion of people aged 10 years or more who are able to read and write. In the case study we used Social Economic Survey (Susenas) data collected by BPS. The SAE approach was applied since the Susenas data are not representative enough to estimate the parameters at the sub-district level, because the survey is designed to estimate parameters at the regional level (a district/city at minimum). In this research, the response variable used was the logit transformation of pi (the parameter of the Binomial distribution). We applied direct and indirect approaches for parameter estimation, both using Empirical Bayes methods. For direct estimation we used a Beta prior distribution and a Normal prior distribution for the logit function of pi, and estimated parameters by a numerical method, i.e., Monte Carlo integration. For the indirect approach, we used auxiliary variables that are combinations of sex and age (divided into five categories). Penalized Quasi-Likelihood (PQL) was used to obtain parameter estimates of the SAE model and Restricted Maximum Likelihood (REML) for MSE estimation. Alongside the Bayesian approach, we also conducted direct estimation using the classical approach in order to evaluate the quality of the estimators. This research gives some findings: the Bayesian approach for the SAE model gives the best estimation because it has the lowest MSE value compared to the other methods; for direct estimation, the Bayesian approach using Beta and logit-Normal prior distributions gives results very similar to direct estimation with the classical approach since the weight of is too

  19. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    Energy Technology Data Exchange (ETDEWEB)

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-differences (FD) and finite-elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is

  20. Sticking to (first) principles: quantum molecular dynamics and Bayesian probabilistic methods to simulate aquatic pollutant absorption spectra.

    Science.gov (United States)

    Trerayapiwat, Kasidet; Ricke, Nathan; Cohen, Peter; Poblete, Alex; Rudel, Holly; Eustis, Soren N

    2016-08-10

    This work explores the relationship between theoretically predicted excitation energies and experimental molar absorption spectra as they pertain to environmental aquatic photochemistry. An overview of pertinent quantum chemical descriptions of sunlight-driven electronic transitions in organic pollutants is presented. Second, a combined molecular dynamics (MD), time-dependent density functional theory (TD-DFT) analysis of the ultraviolet to visible (UV-Vis) absorption spectra of six model organic compounds is presented alongside accurate experimental data. The functional relationship between the experimentally observed molar absorption spectrum and the discrete quantum transitions is examined. A rigorous comparison of the accuracy of the theoretical transition energies (ΔES0→Sn) and oscillator strengths (fS0→Sn) is afforded by the probabilistic convolution and deconvolution procedure described. Experimental spectra are deconvolved using a Gaussian Mixture Model combined with the Bayesian Information Criterion (BIC) to determine the mean (μ) and standard deviation (σ), as well as the number, of the observed singlet-to-singlet transition energy distributions. This procedure allows a direct comparison of the one-electron (quantum) transitions that result from quantum chemical calculations and the ensemble of non-adiabatic quantum states that produces the macroscopic effect of a molar absorption spectrum. Poor agreement between the vertical excitation energies produced by TD-DFT calculations with five different functionals (CAM-B3LYP, PBE0, M06-2X, BP86, and LC-BLYP) suggests a failure of the theory to capture the low-energy, environmentally important electronic transitions in our model organic pollutants. However, the method of explicit solvation of the organic solute using the quantum Effective Fragment Potential (EFP) in a density functional molecular dynamics trajectory simulation shows promise as a robust model of the hydrated organic
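
    The deconvolution step maps directly onto standard tooling. The sklearn sketch below fits Gaussian mixtures of increasing size to samples drawn from a synthetic band shape (standing in for a digitized absorption spectrum) and lets BIC pick the number of underlying transitions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "spectrum": two overlapping transitions at 4.2 and 4.8 eV.
energies = np.r_[rng.normal(4.2, 0.15, 4000),
                 rng.normal(4.8, 0.25, 2000)].reshape(-1, 1)

fits = [GaussianMixture(n_components=k, random_state=0).fit(energies)
        for k in range(1, 5)]
best = min(fits, key=lambda m: m.bic(energies))   # BIC model selection
print("components:", best.n_components)
print("mu (eV):   ", best.means_.ravel().round(2))
print("sigma (eV):", np.sqrt(best.covariances_).ravel().round(2))
```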

  1. Single-cycle method for partitioning of trivalent actinides using completely incinerable reagents from nitric acid medium

    Energy Technology Data Exchange (ETDEWEB)

    Ravi, Jammu; Venkatesan, K.A.; Antony, M.P.; Srinivasan, T.G.; Rao, P.R. Vasudeva [Indira Gandhi Centre for Atomic Research, Kalpakkam (India). Fuel Chemistry Div.

    2014-10-01

    A new approach, namely the 'Single-cycle method for partitioning of Minor Actinides using completely incinerable ReagenTs' (SMART), has been explored for the separation of Am(III) from Eu(III) present in nitric acid medium. The extraction behavior of Am(III) and Eu(III) in a solution of an unsymmetrical diglycolamide, N,N-didodecyl-N',N'-dioctyl-3-oxapentane-1,5-diamide (D³DODGA), and an acidic extractant, N,N-di-2-ethylhexyl diglycolamic acid (HDEHDGA), in n-dodecane was studied. The distribution ratio of both metal ions in D³DODGA-HDEHDGA/n-dodecane initially decreased with increasing nitric acid concentration, reached a minimum at 0.1 M nitric acid, and then increased. Synergic extraction of Am(III) and Eu(III) was observed at nitric acid concentrations above 0.1 M and antagonism at lower acidities. The contrasting behavior observed at different acidities was probed by slope analysis of the extraction data. The study revealed the involvement of both D³DODGA and HDEHDGA during synergism and increased participation of HDEHDGA during antagonism. The stripping behavior of Am(III) and Eu(III) from the loaded organic phase was studied as a function of nitric acid, DTPA, and citric acid concentrations. The conditions needed for the mutual separation of Am(III) and Eu(III) from the loaded organic phase were optimized. Our studies revealed the possibility of separating trivalent actinides from HLLW using these completely incinerable reagents. (orig.)

  2. Creep-fatigue life prediction for different heats of Type 304 stainless steel by linear-damage rule, strain-range partitioning method, and damage-rate approach

    Energy Technology Data Exchange (ETDEWEB)

    Maiya, P.S.

    1978-07-01

    The creep-fatigue life results for five different heats of Type 304 stainless steel at 593°C (1100°F), generated under push-pull conditions in the axial strain-control mode, are presented. The life predictions for the various heats based on the linear-damage rule, strain-range partitioning method, and damage-rate approach are discussed. The appropriate material properties required for computation of fatigue life are also included.

  3. Rigorous Approach in Investigation of Seismic Structure and Source Characteristics in Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    Science.gov (United States)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.

  4. Calculation Method for Horizontal Partition Coefficient of Simply Supported T-shaped Beam

    Institute of Scientific and Technical Information of China (English)

    孙立刚

    2012-01-01

    For simply supported T-shaped beam bridges, the horizontal partition coefficient is calculated with the G-M method, the rigid cross-beam method, and the rigid connected beam method; suitable methods are summarized, providing a useful reference for design work.

  5. Bayesian Attractor Learning

    Science.gov (United States)

    Wiegerinck, Wim; Schoenaker, Christiaan; Duane, Gregory

    2016-04-01

    Recently, methods for model fusion by dynamically combining model components in an interactive ensemble have been proposed. In these proposals, fusion parameters have to be learned from data. One can view these systems as parametrized dynamical systems. We address the question of learnability of dynamical systems with respect to both short term (vector field) and long term (attractor) behavior. In particular we are interested in learning in the imperfect model class setting, in which the ground truth has a higher complexity than the models, e.g. due to unresolved scales. We take a Bayesian point of view and we define a joint log-likelihood that consists of two terms, one is the vector field error and the other is the attractor error, for which we take the L1 distance between the stationary distributions of the model and the assumed ground truth. In the context of linear models (like so-called weighted supermodels), and assuming a Gaussian error model in the vector fields, vector field learning leads to a tractable Gaussian solution. This solution can then be used as a prior for the next step, Bayesian attractor learning, in which the attractor error is used as a log-likelihood term. Bayesian attractor learning is implemented by elliptical slice sampling, a sampling method for systems with a Gaussian prior and a non-Gaussian likelihood. Simulations with a partially observed driven Lorenz 63 system illustrate the approach.
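
    Elliptical slice sampling, the sampler named above, is compact enough to reproduce in full. In this numpy sketch (Murray, Adams and MacKay's algorithm), a toy Gaussian likelihood stands in for the attractor-error term; the update has no tuning parameters and always terminates:

```python
import numpy as np

def elliptical_slice(f, chol_prior, log_lik, rng):
    """One ESS transition for a zero-mean Gaussian prior (Cholesky given)."""
    nu = chol_prior @ rng.normal(size=f.size)   # auxiliary prior draw
    log_y = log_lik(f) + np.log(rng.random())   # slice level
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        if theta < 0.0:                          # shrink the bracket
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

rng = np.random.default_rng(0)
log_lik = lambda f: -0.5 * np.sum((f - 1.0) ** 2) / 0.25   # toy likelihood
f, draws = np.zeros(2), []
for _ in range(5000):
    f = elliptical_slice(f, np.eye(2), log_lik, rng)
    draws.append(f)
print(np.mean(draws, axis=0))   # ~0.8: between prior mean 0 and data at 1
```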

  6. Polymers as reference partitioning phase: polymer calibration for an analytically operational approach to quantify multimedia phase partitioning

    DEFF Research Database (Denmark)

    Gilbert, Dorothea; Witt, Gesine; Smedes, Foppe;

    2016-01-01

    Polymers are increasingly applied for the enrichment of hydrophobic organic chemicals (HOCs) from various types of samples and media in many analytical partitioning-based measuring techniques. We propose using polymers as a reference partitioning phase and introduce polymer-polymer partitioning...... as the basis for a deeper insight into partitioning differences of HOCs between polymers, calibrating analytical methods, and consistency checking of existing and calculation of new partition coefficients. Polymer-polymer partition coefficients were determined for polychlorinated biphenyls (PCBs), polycyclic...

  7. Comparison of an acetonitrile extraction/partitioning and “dispersive solid-phase extraction” method with classical multi-residue methods for the extraction of herbicide residues in barley samples

    NARCIS (Netherlands)

    Diez, C.; Traag, W.A.; Zommer, P.; Marinero, P.; Atienza, J.

    2006-01-01

    An acetonitrile/partitioning extraction and "dispersive solid-phase extraction (SPE)" method that provides high quality results with a minimum number of steps and a low solvent and glassware consumption was published in 2003. This method, suitable for the analysis of multiple classes of pesticide re

  8. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational-rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane.

    Science.gov (United States)

    Mielke, Steven L; Truhlar, Donald G

    2015-01-28

    We present an improved version of our "path-by-path" enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P^{-6}) to O(P^{-12}), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational-rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan-Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300-3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  9. A new high-throughput method utilizing porous silica-based nano-composites for the determination of partition coefficients of drug candidates.

    Science.gov (United States)

    Yu, Chih H; Tam, Kin; Tsang, Shik C

    2011-09-01

    We show that highly porous silica-based nanoparticles prepared via micro-emulsion and sol-gel techniques are stable colloids in aqueous solution. By incorporating a magnetic core into the porous silica nano-composite, the material can be rapidly separated (precipitated) upon exposure to an external magnetic field. Alternatively, porous silica nanoparticles without magnetic cores can equally be separated from solution by high-speed centrifugation. Using these silica-based nanostructures, a new high-throughput method for the determination of the water/n-octanol partition coefficient is described. First, a tiny quantity of the n-octanol phase is pre-absorbed in the porous silica nano-composite colloids, which establishes a nano-scale interface between the adsorbed n-octanol and the bulk aqueous phase. Organic compounds added to the mixture can therefore undergo rapid partitioning between the two phases. The concentration of drug compound in the supernatant in a small vial can be determined by UV-visible absorption spectroscopy. With the adaptation of a robotic liquid handler, this high-throughput technology for the determination of partition coefficients of drug candidates can be employed for drug screening in industry. The experimental results clearly suggest that this new method can provide partition coefficient values of potential drug candidates comparable to the conventional shake-flask method, but requires a much shorter analysis time and a smaller quantity of chemicals. PMID:21780284

  10. Application of TLSER method in predicting the aqueous solubility and n-octanol/water partition coefficient of PCBs, PCDDs and PCDFs

    Institute of Scientific and Technical Information of China (English)

    HUANG Jun; YU Gang; ZHANG Zu-lin; WANG Yi-lei; ZHU Wei-hua; WU Guo-shi

    2004-01-01

    The theoretical linear solvation energy relationship (TLSER) approach was adopted to predict the aqueous solubility and n-octanol/water partition coefficient of three groups of environmentally important chemicals: polychlorinated biphenyls (PCBs) and polychlorinated dibenzodioxins and dibenzofurans (PCDDs and PCDFs). For each compound, five quantum parameters were calculated using AM1 semiempirical molecular orbital methods and used as structure descriptors: average molecular polarizability (α), energy of the lowest unoccupied molecular orbital (ELUMO), energy of the highest occupied molecular orbital (EHOMO), the most positive charge on a hydrogen atom (q+), and the most negative atomic partial charge (q−) in the solute molecule. The standard independent variables in the TLSER equation were then extracted, and two series of quantitative equations relating these quantum parameters to aqueous solubility and the n-octanol/water partition coefficient were obtained by the stepwise multiple linear regression (MLR) method. The resulting equations have both high accuracy and explicit physical meaning, and cross-validation tests illustrated the good predictive power and stability of the established models. The results showed that TLSER is a promising approach for estimating the partition and solubility properties of macromolecular chemicals such as persistent organic pollutants.
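
    The regression step is ordinary multiple linear regression of the property on the five descriptors. A generic numpy sketch with synthetic data (in the study the descriptor matrix would hold the AM1-computed α, ELUMO, EHOMO, q+ and q− for each compound):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                        # number of compounds
X = rng.normal(size=(n, 5))                   # alpha, E_LUMO, E_HOMO, q+, q-
true_coef = np.array([0.9, -1.4, 0.3, 2.0, -0.7])
y = X @ true_coef + 1.2 + rng.normal(0, 0.2, n)   # e.g. log K_OW

A = np.c_[X, np.ones(n)]                      # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1.0 - np.var(y - A @ coef) / np.var(y)
print("coefficients:", coef.round(2), " R^2 =", round(r2, 3))
```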

  11. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  12. The effect of different evapotranspiration methods on portraying soil water dynamics and ET partitioning in a semi-arid environment in Northwest China

    Science.gov (United States)

    Yu, Lianyu; Zeng, Yijian; Su, Zhongbo; Cai, Huanjie; Zheng, Zhen

    2016-03-01

    Different methods for assessing evapotranspiration (ET) can significantly affect the performance of land surface models in portraying soil water dynamics and ET partitioning. An accurate understanding of the impact a method has is crucial to determining the effectiveness of an irrigation scheme. Two ET methods are discussed: one is based on reference crop evapotranspiration (ET0) theory and uses leaf area index (LAI) for partitioning into soil evaporation and transpiration, denoted as the ETind method; the other is a one-step calculation of actual soil evaporation and potential transpiration that incorporates canopy minimum resistance and actual soil resistance into the Penman-Monteith model, denoted as the ETdir method. In this study, a soil water model, considering the coupled transfer of water, vapor, and heat in the soil, was used to investigate how different ET methods affect the calculation of soil water dynamics and ET partitioning in a crop field. Results indicate that the simulations of soil water content and crop evapotranspiration components differed between the two ET methods, while the simulated soil temperature agreed well with lysimeter observations in both cases. Considering aerodynamic and surface resistance terms improved the ETdir method in simulating soil evaporation, especially after irrigation. Furthermore, the results of different crop growth scenarios indicate that the uncertainty in LAI played an important role in estimating the relative transpiration and evaporation fractions. The impact of maximum rooting depth and root growth rate on the calculated ET components might increase in drying soil. The influence of maximum rooting depth was larger late in the growing season, while the influence of root growth rate dominated early in the growing season.

  13. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  14. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  15. Kullback-Leibler Divergence Approach to Partitioned Update Kalman Filter

    OpenAIRE

    Raitoharju, Matti; García-Fernández, Ángel F.; Piché, Robert

    2016-01-01

    Kalman filtering is a widely used framework for Bayesian estimation. The partitioned update Kalman filter applies a Kalman filter update in parts, so that the most linear parts of the measurements are applied first. In this paper, we generalize the partitioned update Kalman filter, which originally required the use of the second-order extended Kalman filter, so that it can be used with any Kalman filter extension. To do so, we use a Kullback-Leibler divergence approach to measure the nonlinearity of the measure...
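
    For intuition about the nonlinearity measure, a minimal sketch using the closed-form KL divergence between univariate Gaussians (a toy stand-in, not the paper's algorithm): a Gaussian moment-matched to a nonlinearly transformed prior is compared against the linearized prediction, and a larger divergence marks a more nonlinear measurement.

    import numpy as np

    def kl_gaussian(mu0, var0, mu1, var1):
        """KL( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians."""
        return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

    rng = np.random.default_rng(0)
    prior = rng.normal(0.0, 1.0, 100_000)     # unit-Gaussian prior samples

    h = lambda x: x + 0.3 * x**2              # hypothetical mildly nonlinear measurement
    y = h(prior)
    # Moment-matched Gaussian of h(prior) vs. the linearization around the
    # prior mean, which is N(h(0), 1) here since h'(0) = 1.
    kl = kl_gaussian(np.mean(y), np.var(y), h(0.0), 1.0)
    print(f"nonlinearity score (KL): {kl:.4f}")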

  16. Present status of partitioning developments

    International Nuclear Information System (INIS)

    Evolution and development of the concept of partitioning of high-level liquid wastes (HLLW) in nuclear fuel reprocessing are reviewed historically, from the early phase of separating useful radioisotopes from HLLW to the recent phase of eliminating hazardous nuclides such as transuranium elements for safe waste disposal. Since the criteria for determining the nuclides to be eliminated and the respective decontamination factors are important in the strategy of partitioning, current views on these criteria are summarized. As elimination of the transuranium elements is the most significant part of partitioning, the various available methods for separating them from fission products are evaluated. (auth.)

  17. Strength Reduction Method for Stability Analysis of Local Discontinuous Rock Mass with Iterative Method of Partitioned Finite Element and Interface Boundary Element

    Directory of Open Access Journals (Sweden)

    Tongchun Li

    2015-01-01

    An iterative method combining partitioned finite elements (PFE) and interface boundary elements is proposed to solve for the safety factor of a locally discontinuous rock mass. The slope system is divided into several continuous bodies and local discontinuous interface boundaries. Each block is treated as a partition of the system and contacts the others through discontinuous joints. The displacements of the blocks are chosen as basic variables, and the rigid displacements at the centroids of the blocks are chosen as motion variables. The contact forces on the interface boundaries and the rigid displacements at the centroid of each body are chosen as mixed variables and are solved iteratively using the interface boundary equations. The flexibility matrix is formed through the PFE according to the contact states of the nodal pairs, and spring flexibility is used to reflect the influence of weak structural planes, so that nonlinear iteration is limited to the possible contact region. As the cohesion and the friction coefficient are gradually reduced, the moment at which all nodal pairs first reach an open or slip state is regarded as the failure criterion, which reduces the effect of subjectivity in determining the safety factor. Examples are used to verify the validity of the proposed method.
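
    A minimal sketch of the strength-reduction outer loop (a generic bisection on the reduction factor with a stubbed stability check; in the paper the check is the expensive PFE/interface-boundary-element iteration, and failure is declared when all nodal pairs first reach an open or slip state):

    def is_stable(reduction_factor):
        """Stub for the expensive PFE/interface analysis run with reduced
        strengths c/F and tan(phi)/F; returns False once the interface nodal
        pairs all reach an open or slip state (hypothetical placeholder)."""
        return reduction_factor < 1.47   # pretend the true safety factor is 1.47

    def safety_factor(lo=0.5, hi=3.0, tol=1e-3):
        """Bisection on the strength reduction factor F: the largest F that
        still yields a stable system is reported as the safety factor."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if is_stable(mid):
                lo = mid                 # still stable: reduce strengths further
            else:
                hi = mid                 # failed: back off
        return 0.5 * (lo + hi)

    print(f"estimated safety factor: {safety_factor():.3f}")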

  18. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2003-01-01

    As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.

  19. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.New to the Second EditionNew chapter on Bayesian network classifiersNew section on object-oriente

  20. The Impact of Variable Degrees of Freedom and Scale Parameters in Bayesian Methods for Genomic Prediction in Chinese Simmental Beef Cattle.

    Science.gov (United States)

    Zhu, Bo; Zhu, Miao; Jiang, Jicai; Niu, Hong; Wang, Yanhui; Wu, Yang; Xu, Lingyang; Chen, Yan; Zhang, Lupei; Gao, Xue; Gao, Huijiang; Liu, Jianfeng; Li, Junya

    2016-01-01

    Three conventional Bayesian approaches (BayesA, BayesB and BayesCπ) have been demonstrated to be powerful in predicting genomic merit for complex traits in livestock. A priori, these Bayesian models assume that the non-zero SNP effects (marginally) follow a t-distribution depending on two fixed hyperparameters, the degrees of freedom and the scale parameter. In this study, we performed genomic prediction in Chinese Simmental beef cattle and treated the degrees of freedom and scale parameters as unknown with inappropriate priors. Furthermore, we compared the modified methods (BayesFA, BayesFB and BayesFCπ) with their corresponding counterparts using simulated datasets. We found that the modified methods, which assign distributions to the two hyperparameters, were beneficial for improving the predictive accuracy. Our results showed that the predictive accuracies of the modified methods were slightly higher than those of their counterparts, especially for traits with low heritability and a small number of QTLs. Moreover, cross-validation analysis for three traits, namely carcass weight, live weight and tenderloin weight, in 1136 Simmental beef cattle suggested that the predictive accuracy of BayesFCπ noticeably outperformed BayesCπ, with the highest increase (3.8%) for live weight using the cohort masking cross-validation. PMID:27139889
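
    For background on the role of the two hyperparameters, a minimal sketch of the BayesA-type hierarchy (illustrative values, not the paper's settings): each SNP-effect variance is drawn from a scaled inverse chi-square distribution, so the effect is marginally t-distributed, and the degrees of freedom nu control how heavy the tails are.

    import numpy as np

    rng = np.random.default_rng(1)

    def snp_effects(nu, s2, n=200_000):
        """Draw SNP effects under the BayesA-type hierarchy:
        sigma2 ~ scaled-inv-chi2(nu, s2), beta | sigma2 ~ N(0, sigma2),
        so beta is marginally t-distributed with nu degrees of freedom."""
        sigma2 = nu * s2 / rng.chisquare(nu, size=n)   # scaled inverse chi-square
        return rng.normal(0.0, np.sqrt(sigma2))

    for nu, s2 in [(4.0, 0.01), (10.0, 0.01)]:         # illustrative hyperparameters
        beta = snp_effects(nu, s2)
        # heavier tails (small nu) allow occasional large-effect QTL
        print(f"nu={nu:4.1f}  sd={beta.std():.4f}  "
              f"P(|beta|>0.3)={np.mean(np.abs(beta) > 0.3):.5f}")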

  1. The Impact of Variable Degrees of Freedom and Scale Parameters in Bayesian Methods for Genomic Prediction in Chinese Simmental Beef Cattle.

    Directory of Open Access Journals (Sweden)

    Bo Zhu

    Full Text Available Three conventional Bayesian approaches (BayesA, BayesB and BayesCπ) have been demonstrated to be powerful in predicting genomic merit for complex traits in livestock. A priori, these Bayesian models assume that the non-zero SNP effects (marginally) follow a t-distribution depending on two fixed hyperparameters, the degrees of freedom and the scale parameter. In this study, we performed genomic prediction in Chinese Simmental beef cattle and treated the degrees of freedom and scale parameters as unknown with inappropriate priors. Furthermore, we compared the modified methods (BayesFA, BayesFB and BayesFCπ) with their corresponding counterparts using simulated datasets. We found that the modified methods, which assign distributions to the two hyperparameters, were beneficial for improving the predictive accuracy. Our results showed that the predictive accuracies of the modified methods were slightly higher than those of their counterparts, especially for traits with low heritability and a small number of QTLs. Moreover, cross-validation analysis for three traits, namely carcass weight, live weight and tenderloin weight, in 1136 Simmental beef cattle suggested that the predictive accuracy of BayesFCπ noticeably outperformed BayesCπ, with the highest increase (3.8%) for live weight using the cohort masking cross-validation.

  2. Unique Path Partitions

    DEFF Research Database (Denmark)

    Bessenrodt, Christine; Olsson, Jørn Børling; Sellers, James A.

    2013-01-01

    We give a complete classification of the unique path partitions and study congruence properties of the function which enumerates such partitions.

  3. Metal-silicate Partitioning at High Pressure and Temperature: Experimental Methods and a Protocol to Suppress Highly Siderophile Element Inclusions.

    Science.gov (United States)

    Bennett, Neil R; Brenan, James M; Fei, Yingwei

    2015-01-01

    Estimates of the primitive upper mantle (PUM) composition reveal a depletion in many of the siderophile (iron-loving) elements, thought to result from their extraction to the core during terrestrial accretion. Experiments to investigate the partitioning of these elements between metal and silicate melts suggest that the PUM composition is best matched if metal-silicate equilibrium occurred at high pressures and temperatures, in a deep magma ocean environment. The behavior of the most highly siderophile elements (HSEs) during this process, however, has remained enigmatic. Silicate run-products from HSE solubility experiments are commonly contaminated by dispersed metal inclusions that hinder the measurement of element concentrations in the melt. The resulting uncertainty over the true solubility and metal-silicate partitioning of these elements has made it difficult to predict their expected depletion in PUM. Recently, several studies have employed changes to the experimental design used for high pressure and temperature solubility experiments in order to suppress the formation of metal inclusions. The addition of Au (Re, Os, Ir, Ru experiments) or elemental Si (Pt experiments) to the sample acts to alter either the geometry or the rate of sample reduction, respectively, in order to avoid transient metal oversaturation of the silicate melt. This contribution outlines procedures for using the piston-cylinder and multi-anvil apparatus to conduct solubility and metal-silicate partitioning experiments, respectively. A protocol is also described for the synthesis of uncontaminated run-products from HSE solubility experiments in which the oxygen fugacity is similar to that during terrestrial core-formation. Time-resolved LA-ICP-MS spectra are presented as evidence for the absence of metal inclusions in run-products from earlier studies, and also confirm that the technique may be extended to investigate Ru. Examples are also given of how these data may be applied. PMID:26132380

  4. New video smoke detection method using Bayesian decision

    Institute of Scientific and Technical Information of China (English)

    谢振平; 王涛; 刘渊

    2014-01-01

    The Bayesian decision method is studied to further improve the performance of video smoke detection using an Adaptive Neuro-Fuzzy Inference System (ANFIS). Smoke features are extracted from video sequences, and subtractive clustering and hybrid learning rules are used to train the ANFIS. Detection outputs are determined by applying the proposed Bayesian decision rules to the outputs of the ANFIS. Experimental results show that the detection performance of the ANFIS is better than that of other smoke detection algorithms, and that the introduction of minimum-risk Bayesian decision rules further increases the detection rate and decreases the false alarm rate, making the method more valuable for practical applications.
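
    A minimal sketch of a minimum-risk Bayesian decision on a classifier score (the loss matrix and posterior values are placeholders, not those of the paper): the rule picks the action with the smallest expected loss, which is what lets an asymmetric miss/false-alarm cost shift the alarm threshold.

    import numpy as np

    # Hypothetical loss matrix: rows = decision (alarm, no alarm),
    # cols = true state (smoke, no smoke). A missed fire is costed
    # far higher than a false alarm.
    LOSS = np.array([[0.0, 1.0],     # decide "alarm":    0 if smoke, 1 if false alarm
                     [10.0, 0.0]])   # decide "no alarm": 10 if we miss smoke

    def min_risk_decision(p_smoke):
        """Pick the decision minimizing expected loss under P(smoke | features)."""
        posterior = np.array([p_smoke, 1.0 - p_smoke])
        risks = LOSS @ posterior           # expected loss of each decision
        return ("alarm", "no alarm")[int(np.argmin(risks))], risks

    # p_smoke would come from the trained classifier (e.g. a normalized ANFIS output).
    for p in (0.05, 0.10, 0.50):
        decision, risks = min_risk_decision(p)
        print(f"P(smoke)={p:.2f} -> {decision}  (risks: {risks.round(2)})")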

  5. A cluster partitioning method determination of density matrices of solids and comparison with X-ray experiments

    CERN Document Server

    Ragot, S; Becker, P J; Ragot, Sebastien; Gillet, Jean-Michel; Becker, Pierre J

    2001-01-01

    In this paper we show that one-electron properties such as Compton profiles and structure factors of crystals can be asymptotically retrieved through cluster-based calculations, followed by an appropriate partition of the one-electron reduced density matrix (1RDM). This approach, conceptually simple, is checked simultaneously in both position and momentum space for insulators and a covalent crystal. Restricting the calculations to small clusters further enables a fair description of local correlation effects in ionic compounds, which improves both Compton profiles and structure factors relative to their experimentally determined counterparts.

  6. Wavelet Space Partitioning for Symbolic Time Series Analysis

    Institute of Scientific and Technical Information of China (English)

    Venkatesh Rajagopalan; Asok Ray

    2006-01-01

    A crucial step in symbolic time series analysis (STSA) of observed data is symbol sequence generation, which relies on partitioning the phase space of the underlying dynamical system. We present a novel partitioning method, called wavelet-space (WS) partitioning, as an alternative to symbolic false nearest neighbour (SFNN) partitioning. While the WS and SFNN partitioning methods have been demonstrated to yield comparable performance for anomaly detection on laboratory apparatuses, computation of WS partitioning is several orders of magnitude faster than that of SFNN partitioning.
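
    A minimal sketch of the general idea (a one-level Haar transform followed by equal-probability quantile cells; the paper's WS partitioning is more involved): wavelet coefficients of the series are partitioned into cells, and each coefficient is mapped to its cell's symbol.

    import numpy as np

    def haar_coeffs(x):
        """One-level Haar wavelet transform: approximation and detail coefficients."""
        x = x[: len(x) // 2 * 2]
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return np.concatenate([a, d])

    def symbolize(coeffs, n_symbols=4):
        """Partition the wavelet space into equal-probability cells (quantiles)
        and map each coefficient to a symbol 0..n_symbols-1."""
        edges = np.quantile(coeffs, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(coeffs, edges)

    rng = np.random.default_rng(2)
    t = np.linspace(0, 8 * np.pi, 512)
    signal = np.sin(t) + 0.1 * rng.normal(size=t.size)   # toy observed series
    symbols = symbolize(haar_coeffs(signal))
    print(symbols[:32])   # symbol sequence for downstream STSA statistics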

  7. Acoustic wave propagation simulation in a poroelastic medium saturated by two immiscible fluids using a staggered finite-difference with a time partition method

    Institute of Scientific and Technical Information of China (English)

    ZHAO HaiBo; WANG XiuMing

    2008-01-01

    Based on the three-phase theory proposed by Santos, acoustic wave propagation in a poroelastic medium saturated by two immiscible fluids was simulated using a staggered high-order finite-difference algorithm with a time partition method, which is here first applied to such a three-phase medium. The partition method was used to solve the stiffness problem of the differential equations in the three-phase theory. Considering the effects of capillary pressure, reference pressure and the coupling drag of the two fluids in the pores, the three compressional waves and one shear wave predicted by Santos have been correctly simulated. The influences of the parameters porosity, permeability and gas saturation on the velocities and amplitudes of the three compressional waves are discussed in detail. Also, a perfectly matched layer (PML) absorbing boundary condition was implemented, for the first time, in the three-phase equations with a staggered-grid high-order finite-difference scheme. Comparisons between the proposed PML method and a commonly used damping method were made to validate the efficiency of the proposed boundary absorption scheme. It was shown that the PML works more efficiently than the damping method in this complex medium. Additionally, the three-phase theory reduces to Biot's theory when there is only one fluid left in the pores, as shown in the Appendix. This reduction makes clear that the three-phase equation systems are identical to the typical Biot's equations if the fluid saturation for either of the two fluids in the pores approaches zero.

  9. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  10. Bayesian data analysis

    CERN Document Server

    Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B

    2013-01-01

    FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear...

  11. Bayesian Mediation Analysis

    OpenAIRE

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...

  12. Computationally efficient Bayesian tracking

    Science.gov (United States)

    Aughenbaugh, Jason; La Cour, Brian

    2012-06-01

    In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
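
    A minimal sketch of the grid-based recursion on a 1-D state grid (Gaussian motion and measurement models; the paper's adaptive polynomial-on-cells representation and hybrid particle/grid time evolution are considerably more sophisticated):

    import numpy as np

    cells = np.linspace(-10.0, 10.0, 201)          # 1-D state grid
    p = np.full(cells.size, 1.0 / cells.size)      # prior: uniform over the grid

    def predict(p, drift=0.5, q=1.0):
        """Time update: convolve the grid posterior with a Gaussian motion kernel."""
        kernel = np.exp(-0.5 * ((cells - drift) / q) ** 2)
        out = np.convolve(p, kernel / kernel.sum(), mode="same")
        return out / out.sum()

    def update(p, z, r=1.5):
        """Measurement update: multiply by the likelihood and renormalize."""
        lik = np.exp(-0.5 * ((z - cells) / r) ** 2)
        out = p * lik
        return out / out.sum()

    for z in (1.0, 1.8, 2.4):                      # toy measurement sequence
        p = update(predict(p), z)
    print("posterior mean:", float(np.sum(cells * p)))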

  13. Bayesian Games with Intentions

    OpenAIRE

    Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael

    2016-01-01

    We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

  14. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  15. A default Bayesian hypothesis test for ANOVA designs

    NARCIS (Netherlands)

    R. Wetzels; R.P.P.P. Grasman; E.J. Wagenmakers

    2012-01-01

    This article presents a Bayesian hypothesis test for analysis of variance (ANOVA) designs. The test is an application of standard Bayesian methods for variable selection in regression models. We illustrate the effect of various g-priors on the ANOVA hypothesis test. The Bayesian test for ANOVA desig

  16. A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research

    NARCIS (Netherlands)

    Van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, t

  17. Partitioning Uncertain Workflows

    CERN Document Server

    Huberman, Bernardo A

    2015-01-01

    It is common practice to partition complex workflows into separate channels in order to speed up their completion times. When this is done within a distributed environment, unavoidable fluctuations make individual realizations depart from the expected average gains. We present a method for breaking any complex workflow into several workloads in such a way that once their outputs are joined, their full completion takes less time and exhibits smaller variance than when running in only one channel. We demonstrate the effectiveness of this method in two different scenarios: the optimization of a convex function and the transmission of a large computer file over the Internet.
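
    A minimal simulation of the underlying effect (illustrative lognormal runtimes, not the paper's model): a job split across k parallel channels finishes when the slowest channel does, and when each part's workload scales down with k, both the mean and the spread of the completion time can shrink.

    import numpy as np

    rng = np.random.default_rng(3)

    def completion_times(k, n_trials=100_000):
        """Split a unit workload into k equal parts run in parallel; each part's
        runtime fluctuates (lognormal). Completion = max over the k channels."""
        parts = rng.lognormal(mean=np.log(1.0 / k), sigma=0.5, size=(n_trials, k))
        return parts.max(axis=1)

    for k in (1, 2, 4, 8):
        t = completion_times(k)
        print(f"k={k}:  mean={t.mean():.3f}  std={t.std():.3f}")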

  18. Relict snakes of North America and their relationships within Caenophidia, using likelihood-based Bayesian methods on mitochondrial sequences.

    Science.gov (United States)

    Pinou, Theodora; Vicario, Saverio; Marschner, Monique; Caccone, Adalgisa

    2004-08-01

    This paper focuses on the phylogenetic relationships of eight North American caenophidian snake species (Carphophis amoena, Contia tenuis, Diadophis punctatus, Farancia abacura, Farancia erytrogramma, Heterodon nasicus, Heterodon platyrhinos, and Heterodon simus) whose phylogenetic relationships remain controversial. Past studies have referred to these "relict" North American snakes either as colubrid, or as Neotropical dipsadids and/or xenodontids. Based on mitochondrial DNA ribosomal gene sequences and a likelihood-based Bayesian analysis, our study suggests that these North American snakes are not monophyletic and are nested within a group (Dipsadoidea) that contains the Dipsadidae, Xenodontidae, and Natricidae. In addition, we use the relationships proposed here to highlight putative examples of parallel evolution of hemipenial morphology among snake clades. PMID:15223038

  19. An Efficient Partitioning Method in Quadratic Placement

    Institute of Scientific and Technical Information of China (English)

    吕勇强; 洪先龙; 侯文婷; 吴为民; 蔡懿慈

    2004-01-01

    A quadratic placement algorithm combining MFFC clustering and hMETIS partitioning is proposed. Experimental results show that it obtains good placement results but consumes a long running time. In order to cut down the running time, a quadratic placement algorithm based on an improved MFFC clustering method (IMFFC) is proposed. Compared with the combined clustering-and-partitioning method, it is much faster, at the cost of a small increase in total wire length.

  20. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre C. R. Martins

    2006-11-01

    Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.

  1. Strong ion-exchange centrifugal partition chromatography as an efficient method for the large-scale purification of glucosinolates.

    Science.gov (United States)

    Toribio, Alix; Nuzillard, Jean-Marc; Renault, Jean-Hugues

    2007-11-01

    The glucosinolates sinalbin and glucoraphanin were purified by strong ion-exchange displacement centrifugal partition chromatography (SIXCPC). The optimized conditions involved the biphasic solvent system ethyl acetate/n-butanol/water (3:2:5, v/v), the lipophilic anion-exchanger Aliquat 336 (trioctylmethylammonium chloride, 160 and 408 mM) and a sodium iodide solution (80 and 272 mM) as displacer. Amounts as high as 2.4 g of sinalbin and 2.6 g of glucoraphanin were obtained in one step, in 2.5 and 3.5 h respectively, starting from 12 and 25 g of mustard and broccoli seed aqueous extracts, using a laboratory-scale CPC column (200 mL inner volume). PMID:17904564

  2. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical model is the Bayesian network. The structural part of a Bayesian graphical model is a graph consisting of nodes and edges. The nodes represent variables, which may be either discrete or continuous. An edge between two nodes A and B indicates a direct influence between the state of A and the state of B, which in some domains can also be interpreted as a causal relation. The wide-spread use of Bayesian networks is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  3. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  4. Phase Analysis and Identification Method for Multiphase Batch Processes with Partitioning Multi-way Principal Component Analysis (MPCA) Model

    Institute of Scientific and Technical Information of China (English)

    董伟威; 姚远; 高福荣

    2012-01-01

    Multi-way principal component analysis (MPCA) is the most widely utilized multivariate statistical process control method for batch processes. Previous research on MPCA has commonly agreed that it is not a suitable method for multiphase batch process analysis. In this paper, abundant phase information is revealed by way of partitioning MPCA model, and a new phase identification method based on global dynamic information is proposed. The application to injection molding shows that it is a feasible and effective method for multiphase batch process knowledge understanding, phase division and process monitoring.

  5. Human error probability quantification method based on Bayesian information fusion

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 谢红卫; 宫二玲

    2011-01-01

    The quantification of human error probability is researched. First, the data resources that can be used in the quantification of human error probability are introduced, including generic data, expert data, simulation data, and field data, and their characteristics are analyzed. Second, the basic idea of Bayesian information fusion is analyzed, with emphasis on its two key problems: the formation of prior distributions and the determination of fusion weights. Finally, a new method is presented that quantifies the human error probability based on Bayesian information fusion. The first three kinds of data are regarded as prior information and fused to form the prior distribution. The Bayesian method is then used to synthesize this with the field data and obtain the posterior distribution, from which the human error probability is quantified. An example analysis demonstrates the use of the method and proves its validity.
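
    A minimal sketch of the fusion-then-update idea under an assumed Beta-binomial model (the weights, pseudo-counts, and field data are illustrative, and linear pooling is only one possible fusion rule):

    import numpy as np

    # Hypothetical Beta(a, b) priors for the error probability from three sources.
    priors = {"generic":    (2.0, 200.0),
              "expert":     (1.0, 50.0),
              "simulation": (3.0, 400.0)}
    weights = {"generic": 0.5, "expert": 0.2, "simulation": 0.3}  # fusion weights

    # Linear-pool fusion of the pseudo-counts into a single Beta prior.
    a0 = sum(weights[s] * a for s, (a, b) in priors.items())
    b0 = sum(weights[s] * b for s, (a, b) in priors.items())

    # Field data: k observed errors in n demands (illustrative).
    k, n = 2, 500
    a_post, b_post = a0 + k, b0 + (n - k)   # Beta-binomial conjugate update

    print(f"fused prior mean:   {a0 / (a0 + b0):.5f}")
    print(f"posterior mean HEP: {a_post / (a_post + b_post):.5f}")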

  6. Carbon partitioning as validation methods for crop yields and CO2 sequestration monitoring in Asia using a photosynthetic-sterility model

    Science.gov (United States)

    Kaneko, Daijiro; Yang, Peng; Kumakura, Toshiro

    2010-10-01

    Sustainability of world crop production and food security has become uncertain. The authors have developed an environmental research system called the Remote Sensing Environmental Monitor (RSEM) for treating carbon sequestration by vegetation, grain production, desertification of Eurasian grassland, and CDM afforestation/reforestation against a background of climate change and economic growth in rising Asian nations. The RSEM system involves vegetation photosynthesis and crop yield models for grains, including land-use classification, stomatal evaluation by surface energy fluxes, and daily monitoring for early warning. This paper presents a validation method for RSEM based on carbon partitioning in plants, focusing in particular on the effects of the area sizes used in crop production statistics on carbon fixation, and on sterility-based corrections to accumulated carbon sequestration values simulated using the RSEM photosynthesis model. The carbohydrate in grains has the same chemical formula as the cellulose in grain plants. The proposed method, which partitions the fixed carbon into harvested grains, was used to investigate estimates of the amounts of carbon fixed using the satellite-based RSEM model.

  7. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
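
    A minimal sketch of the rejection-sampling view of Bayesian updating on which BUS builds (toy Gaussian prior and likelihood with a known likelihood bound; the framework itself couples this reinterpretation with FORM, importance sampling, or Subset Simulation): prior samples accepted with probability proportional to their likelihood are posterior samples.

    import numpy as np

    rng = np.random.default_rng(4)

    data = np.array([1.2, 0.8, 1.5])        # toy observations, y_i ~ N(theta, 1)
    theta = rng.normal(0.0, 2.0, 200_000)   # samples from the N(0, 2^2) prior

    # Log-likelihood of every prior sample (Gaussian measurement model).
    logL = -0.5 * np.sum((data[None, :] - theta[:, None]) ** 2, axis=1)

    # BUS-style accept/reject: accept when u < L(theta)/c with c >= max L.
    c = logL.max()
    accept = np.log(rng.uniform(size=theta.size)) < logL - c
    posterior = theta[accept]

    analytic_mean = data.sum() / (data.size + 1 / 2.0**2)   # conjugate result
    print(f"acceptance rate: {accept.mean():.4f}")
    print(f"posterior mean:  {posterior.mean():.3f}  (analytic {analytic_mean:.3f})")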

  8. Bayesian NL interpretation and learning

    NARCIS (Netherlands)

    H. Zeevat

    2011-01-01

    Everyday natural language communication is normally successful, even though contemporary computational linguistics has shown that NL is characterised by a very high degree of ambiguity and the results of stochastic methods are not good enough to explain the high success rate. Bayesian natural language...

  9. A Review of Bayesian Methods and Their Application in Uncertainty Analysis of Water Environmental System

    Institute of Scientific and Technical Information of China (English)

    黄凯; 张晓玲

    2012-01-01

    Bayesian methods provide new ideas for solving uncertainty problems in water environmental systems. Several Bayesian methods, namely the Bayesian formula, Bayesian statistical inference and Bayesian networks, are reviewed with respect to their application to water quality evaluation, parameter identification of water environment models, water environment management and risk decision making. The Bayesian formula can handle the uncertain information contained in monitoring data, water quality grades and water quality standards in water quality evaluation. Bayesian statistical inference coupled with a water environmental model provides a new approach for model parameter identification, in which the computation of the posterior distribution is the main difficulty; discrete Bayesian algorithms for the posterior distribution as well as traditional and improved MCMC algorithms are introduced. The application of Bayesian networks to water quality assessment, model prediction, water environment management and risk decision making can take the combined effect of multiple variables into account simultaneously and yields uncertainty information on the factors influencing management decisions, providing a scientific basis for water environmental management.

  10. A Bayesian method for characterizing distributed micro-releases: II. inference under model uncertainty with short time-series data.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef; Fast P. (Lawrence Livermore National Laboratory, Livermore, CA); Kraus, M. (Peterson AFB, CO); Ray, J. P.

    2006-01-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error, i.e., situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.

  11. Bayesian networks: a new method for the modeling of bibliographic knowledge: application to fall risk assessment in geriatric patients.

    Science.gov (United States)

    Lalande, Laure; Bourguignon, Laurent; Carlier, Chloé; Ducher, Michel

    2013-06-01

    Falls in geriatrics are associated with substantial morbidity, mortality and high healthcare costs. Because of the large number of variables related to the risk of falling, determining which patients are at risk is a difficult challenge. The aim of this work was to validate a tool that detects patients with a high risk of falling using only bibliographic knowledge. Thirty articles corresponding to 160 studies were used to model fall risk. A retrospective case-control cohort including 288 patients (88 ± 7 years) and a prospective cohort including 106 patients (89 ± 6 years) from two geriatric hospitals were used to validate the performance of our model. We identified 26 variables associated with an increased risk of falling, split into illnesses, medications, and environment. The combination of the three associated scores gives a global fall score. The sensitivity and specificity were 31.4% and 81.6% for the retrospective cohort, and 38.5% and 90% for the prospective cohort, respectively. The performance of the model is similar to results observed with existing prediction tools whose models are adjusted to data from numerous cohort studies. This work demonstrates that knowledge from the literature can be synthesized with Bayesian networks.

  12. An optimized Method to Identify RR Lyrae stars in the SDSS X Pan-STARRS1 Overlapping Area Using a Bayesian Generative Technique

    CERN Document Server

    Abbas, M A; Martin, N F; Kaiser, N; Burgett, W S; Huber, M E; Waters, C

    2014-01-01

    We present a method for selecting RR Lyrae (RRL) stars (or other type of variable stars) in the absence of a large number of multi-epoch data and light curve analyses. Our method uses color and variability selection cuts that are defined by applying a Gaussian Mixture Bayesian Generative Method (GMM) on 636 pre-identified RRL stars instead of applying the commonly used rectangular cuts. Specifically, our method selects 8,115 RRL candidates (heliocentric distances < 70 kpc) using GMM color cuts from the Sloan Digital Sky Survey (SDSS) and GMM variability cuts from the Panoramic Survey Telescope and Rapid Response System 1 3pi survey (PS1). Comparing our method with the Stripe 82 catalog of RRL stars shows that the efficiency and completeness levels of our method are ~77% and ~52%, respectively. Most contaminants are either non-variable main-sequence stars or stars in eclipsing systems. The method described here efficiently recovers known stellar halo substructures. It is expected that the current completene...

  13. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of the margin-condition parameter and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function, which governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  14. Dynamic Bayesian diffusion estimation

    CERN Document Server

    Dedecius, K

    2012-01-01

    The rapidly increasing complexity of (mainly wireless) ad-hoc networks stresses the need of reliable distributed estimation of several variables of interest. The widely used centralized approach, in which the network nodes communicate their data with a single specialized point, suffers from high communication overheads and represents a potentially dangerous concept with a single point of failure needing special treatment. This paper's aim is to contribute to another quite recent method called diffusion estimation. By decentralizing the operating environment, the network nodes communicate just within a close neighbourhood. We adopt the Bayesian framework to modelling and estimation, which, unlike the traditional approaches, abstracts from a particular model case. This leads to a very scalable and universal method, applicable to a wide class of different models. A particularly interesting case - the Gaussian regressive model - is derived as an example.
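
    A minimal sketch of the decentralized flavor for a shared Gaussian mean (a toy ring network with a simple precision-weighted neighbourhood merge; the merge rule here is a crude placeholder, whereas the paper develops the general Bayesian treatment):

    import numpy as np

    rng = np.random.default_rng(5)
    true_theta, obs_var = 2.0, 1.0
    n_nodes = 6
    # Ring topology: each node communicates with itself and two neighbours.
    neighbours = {i: [(i - 1) % n_nodes, i, (i + 1) % n_nodes] for i in range(n_nodes)}

    mean = np.zeros(n_nodes)          # each node's Gaussian posterior mean
    prec = np.full(n_nodes, 1e-2)     # ... and precision (vague local priors)

    for step in range(50):
        # Local Bayesian update with one private observation per node.
        y = true_theta + rng.normal(0, np.sqrt(obs_var), n_nodes)
        post_prec = prec + 1.0 / obs_var
        mean = (prec * mean + y / obs_var) / post_prec
        prec = post_prec
        # Diffusion step: precision-weighted average over the neighbourhood.
        new_mean, new_prec = mean.copy(), prec.copy()
        for i, nbrs in neighbours.items():
            w = prec[nbrs] / prec[nbrs].sum()
            new_mean[i] = np.dot(w, mean[nbrs])
            new_prec[i] = prec[nbrs].mean()   # crude; avoids double counting
        mean, prec = new_mean, new_prec

    print("node estimates:", mean.round(3))   # all close to true_theta = 2.0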

  15. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  16. Bayesian Analysis of Multivariate Probit Models

    OpenAIRE

    Siddhartha Chib; Edward Greenberg

    1996-01-01

    This paper provides a unified simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods, and maximum likelihood estimates are obtained by a Markov chain Monte Carlo version of the E-M algorithm. Computation of Bayes factors from the simulation output is also considered. The methods are applied to a bivariate data set, to a 534-subject, four-year longitudinal dat...

  17. Stability criteria for T-S fuzzy systems with interval time-varying delays and nonlinear perturbations based on geometric progression delay partitioning method.

    Science.gov (United States)

    Chen, Hao; Zhong, Shouming; Li, Min; Liu, Xingwen; Adu-Gyamfi, Fehrs

    2016-07-01

    In this paper, a novel delay partitioning method is proposed by introducing the theory of geometric progression for the stability analysis of T-S fuzzy systems with interval time-varying delays and nonlinear perturbations. Based on the common ratio α, the delay interval is unequally separated into multiple subintervals. A newly modified Lyapunov-Krasovskii functional (LKF) is established which includes triple-integral terms and augmented factors with respect to the length of every related proportional subintervals. In addition, a recently developed free-matrix-based integral inequality is employed to avoid the overabundance of the enlargement when dealing with the derivative of the LKF. This innovative development can dramatically enhance the efficiency of obtaining the maximum upper bound of the time delay. Finally, much less conservative stability criteria are presented. Numerical examples are conducted to demonstrate the significant improvements of this proposed approach. PMID:27138648
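
    For the flavor of the partitioning itself, a minimal sketch (interval bounds and common ratio are illustrative): the delay interval [h1, h2] is split into N subintervals whose lengths form a geometric progression with common ratio α.

    import numpy as np

    def geometric_partition(h1, h2, n, alpha):
        """Split [h1, h2] into n subintervals whose lengths form a geometric
        progression l0, l0*alpha, ..., l0*alpha**(n-1)."""
        if alpha == 1.0:
            lengths = np.full(n, (h2 - h1) / n)
        else:
            l0 = (h2 - h1) * (1 - alpha) / (1 - alpha**n)  # geometric series sum
            lengths = l0 * alpha ** np.arange(n)
        return np.concatenate(([h1], h1 + np.cumsum(lengths)))

    # Illustrative delay bounds and common ratio, not values from the paper.
    print(np.round(geometric_partition(h1=0.0, h2=1.0, n=4, alpha=2.0), 4))
    # boundaries 0, 0.0667, 0.2, 0.4667, 1.0 -- lengths double between cells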

  18. An alternative method to isolate protease and phospholipase A2 toxins from snake venoms based on partitioning of aqueous two-phase systems

    Directory of Open Access Journals (Sweden)

    GN Gómez

    2012-01-01

    Full Text Available Snake venoms are rich sources of active proteins that have been employed in the diagnosis and treatment of health disorders and antivenom therapy. Developing countries demand fast economical downstream processes for the purification of this biomolecule type without requiring sophisticated equipment. We developed an alternative, simple and easy to scale-up method, able to purify simultaneously protease and phospholipase A2 toxins from Bothrops alternatus venom. It comprises a multiple-step partition procedure with polyethylene-glycol/phosphate aqueous two-phase systems followed by a gel filtration chromatographic step. Two single bands in SDS-polyacrylamide gel electrophoresis and increased proteolytic and phospholipase A2 specific activities evidence the homogeneity of the isolated proteins.

  19. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  20. Benchmark of the non-parametric Bayesian deconvolution method implemented in the SINBAD code for X/γ rays spectra processing

    Science.gov (United States)

    Rohée, E.; Coulon, R.; Carrel, F.; Dautremer, T.; Barat, E.; Montagu, T.; Normand, S.; Jammes, C.

    2016-11-01

    Radionuclide identification and quantification are a serious concern for many applications, such as in situ monitoring at nuclear facilities, laboratory analysis, special nuclear materials detection, environmental monitoring, and waste measurements. High resolution gamma-ray spectrometry based on high purity germanium diode detectors is the best solution available for isotopic identification. Over the last decades, methods have been developed to improve gamma spectra analysis. However, some difficulties remain when full energy peaks are folded together with a high ratio between their amplitudes, and when the Compton background is much larger than the signal of a single peak. In this context, this study deals with the comparison between a conventional analysis based on the "iterative peak fitting deconvolution" method and a "nonparametric Bayesian deconvolution" approach developed by the CEA LIST and implemented in the SINBAD code. The iterative peak fitting deconvolution is used in this study as a reference method, largely validated by industrial standards, to unfold complex spectra from HPGe detectors. Complex spectra from IAEA benchmark protocol tests as well as measured spectra are studied. The SINBAD code shows promising deconvolution capabilities compared to the conventional method without any expert parameter fine tuning.
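
    A minimal sketch of the reference approach, iterative Gaussian peak fitting over a linear background on a synthetic two-peak spectrum (all shapes and values are illustrative; SINBAD's nonparametric Bayesian unfolding is an entirely different machinery):

    import numpy as np
    from scipy.optimize import curve_fit

    def two_peaks(x, a1, mu1, s1, a2, mu2, s2, b0, b1):
        """Two Gaussian full-energy peaks on a linear background."""
        g = lambda a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
        return g(a1, mu1, s1) + g(a2, mu2, s2) + b0 + b1 * x

    # Synthetic spectrum: two partially overlapping peaks plus Poisson noise.
    rng = np.random.default_rng(6)
    x = np.arange(0, 200.0)
    truth = two_peaks(x, 500, 90, 4, 80, 102, 4, 20, 0.05)
    counts = rng.poisson(truth).astype(float)

    p0 = [400, 88, 5, 60, 104, 5, 10, 0.0]          # rough initial guesses
    popt, pcov = curve_fit(two_peaks, x, counts, p0=p0)
    areas = (popt[0] * popt[2] * np.sqrt(2 * np.pi),
             popt[3] * popt[5] * np.sqrt(2 * np.pi))
    print("fitted peak areas:", np.round(areas, 1))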

  1. Gentile statistics and restricted partitions

    Indian Academy of Sciences (India)

    C S Srivatsan; M V N Murthy; R K Bhaduri

    2006-03-01

    In a recent paper (Tran et al., Ann. Phys. 311, 204 (2004)), some asymptotic number-theoretical results on the partitioning of an integer were derived by exploiting its connection to the quantum density of states of a many-particle system. We generalise these results to obtain an asymptotic formula for the restricted or coloured partitions $p_{k}^{s}(n)$, the number of partitions of an integer $n$ into summands that are $s$-th powers of integers, such that each power of a given integer may occur at most $k$ times. While the method is not rigorous, it reproduces the well-known asymptotic results for $s = 1$, apart from yielding more general results for arbitrary values of $s$.
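
    A minimal dynamic-programming sketch that counts $p_{k}^{s}(n)$ exactly for small n (the paper's contribution is the asymptotic formula, not this enumeration):

    def restricted_partitions(n, s, k):
        """Count p_k^s(n): partitions of n into s-th powers of integers,
        each power of a given integer used at most k times."""
        counts = [1] + [0] * n          # counts[m] = ways to write m so far
        j = 1
        while j**s <= n:
            part = j**s
            new = [0] * (n + 1)
            for m in range(n + 1):
                # use the part j**s between 0 and k times
                new[m] = sum(counts[m - t * part]
                             for t in range(k + 1) if t * part <= m)
            counts = new
            j += 1
        return counts[n]

    # Sanity checks: k=1, s=1 gives partitions into distinct parts, q(n);
    # a large k with s=1 recovers the unrestricted partition function p(n).
    print(restricted_partitions(10, s=1, k=1))    # q(10) = 10
    print(restricted_partitions(10, s=1, k=10))   # p(10) = 42
    print(restricted_partitions(100, s=2, k=10))  # partitions of 100 into squares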

  2. A Study on the Quantitative Assessment Method of Software Requirement Documents Using Software Engineering Measures and Bayesian Belief Networks

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kang, Hyun Gook; Park, Ki Hong; Kwon, Kee Choon; Chang, Seung Cheol [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    One of the major challenges in using digital systems in an NPP is the reliability estimation of the safety-critical software embedded in the digital safety systems. A precise quantitative assessment of the reliability of safety-critical software is nearly impossible, since many of the aspects to be considered are of a qualitative nature and not directly measurable, yet they have to be estimated for practical use. Therefore, experts' judgment plays an important role in estimating the reliability of the software embedded in safety-critical systems in practice, because experts can deal with all the diverse evidence relevant to the reliability and can perform an inference based on that evidence. But, in general, the experts' way of combining the diverse evidence and performing an inference is usually informal and qualitative, which is hard to discuss and will eventually lead to a debate about the conclusion. We have been carrying out research on a quantitative assessment of the reliability of safety-critical software using Bayesian Belief Networks (BBN). BBN has proven to be a useful modeling formalism because a user can represent a complex set of events and relationships in a fashion that can easily be interpreted by others. In previous work we assessed a software requirement specification of a reactor protection system using our BBN-based assessment model. The BBN model mainly employed an expert's subjective probabilities as inputs. In the process of assessing the software requirement documents we found that the BBN model was, in large part, excessively dependent on experts' subjective judgments. Therefore, to overcome this weakness of our methodology, we employed conventional software engineering measures in the BBN model, as shown in this paper. The quantitative relationship between conventional software measures and the reliability of software was not well identified in the past. Recently, however, there appeared a few

  3. On Fuzzy Bayesian Inference

    OpenAIRE

    Frühwirth-Schnatter, Sylvia

    1990-01-01

    In the paper at hand we apply fuzzy set theory to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)

  4. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  5. Computation of the eigenvalues of the Schroedinger equation by symplectic and trigonometrically fitted symplectic partitioned Runge-Kutta methods

    International Nuclear Information System (INIS)

    In this Letter we present an explicit symplectic method for the numerical solution of the Schroedinger equation. We also develop a modified symplectic integrator with the trigonometrically fitted property based on this method. Our new methods are tested on the computation of the eigenvalues of the one-dimensional harmonic oscillator, the doubly anharmonic oscillator and the Morse potential.

  6. An optimized method to identify RR Lyrae stars in the SDSS×Pan-STARRS1 overlapping area using a bayesian generative technique

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Mohamad; Grebel, Eva K. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, Mönchhofstr. 12-14, D-69120 Heidelberg (Germany); Martin, N. F. [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Kaiser, N.; Burgett, W. S.; Huber, M. E.; Waters, C., E-mail: mabbas@ari.uni-heidelberg.de [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States)

    2014-07-01

    We present a method for selecting RR Lyrae (RRL) stars (or other types of variable stars) in the absence of a large number of multi-epoch data and light curve analyses. Our method uses color and variability selection cuts that are defined by applying a Gaussian Mixture Bayesian Generative Method (GMM) on 636 pre-identified RRL stars instead of applying the commonly used rectangular cuts. Specifically, our method selects 8115 RRL candidates (heliocentric distances < 70 kpc) using GMM color cuts from the Sloan Digital Sky Survey (SDSS) and GMM variability cuts from the Panoramic Survey Telescope and Rapid Response System 1 3π survey (PS1). Comparing our method with the Stripe 82 catalog of RRL stars shows that the efficiency and completeness levels of our method are ∼77% and ∼52%, respectively. Most contaminants are either non-variable main-sequence stars or stars in eclipsing systems. The method described here efficiently recovers known stellar halo substructures. It is expected that the current completeness and efficiency levels will further improve with the additional PS1 epochs (∼3 epochs per filter) that will be observed before the conclusion of the survey. Comparing the efficiency and completeness levels of the GMM method with those of the commonly used rectangular cuts yielded a significant increase in the efficiency level, from ∼13% to ∼77%, with an insignificant change in the completeness levels. Hence, we favor using the GMM technique in future studies. Although we develop it over the SDSS×PS1 footprint, the technique presented here would work well on any multi-band, multi-epoch survey for which the number of epochs is limited.
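
    A minimal sketch of such mixture-based selection cuts, assuming scikit-learn is available; the training colors, candidate sample and the 95%-retention threshold below are invented stand-ins, not the record's data:

      # Sketch of GMM-based selection cuts; data and thresholds are hypothetical.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      known_rrl = rng.normal([0.2, 0.1], 0.05, size=(636, 2))     # stand-in colors (e.g. u-g, g-r)
      candidates = rng.normal([0.25, 0.12], 0.2, size=(10000, 2))

      # Fit a mixture to the colors of the pre-identified RRL stars.
      gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(known_rrl)

      # Keep candidates whose color likelihood under the RRL model exceeds a cut
      # chosen to retain 95% of the training stars.
      cut = np.percentile(gmm.score_samples(known_rrl), 5)
      selected = candidates[gmm.score_samples(candidates) >= cut]
      print(len(selected), "candidates pass the GMM color cut")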

  7. Mobile Application Partition Method Based on Tag in Cloud Environment

    Institute of Scientific and Technical Information of China (English)

    樊新; 高曙

    2015-01-01

    In order to address the resource constraints of mobile devices and save energy, a tag-based mobile application partition method is proposed. The mobile application is partitioned in advance according to its functional structure, and the transferable application modules are tagged. The abundant resources and strong information processing capability of cloud computing are then exploited: combined with a transfer energy consumption model, it is decided whether a tagged module should be transferred to the cloud for remote execution, and the cloud execution results are returned over the wireless network, thereby extending the device's resources and saving energy.

  8. Regenerative partition structures

    OpenAIRE

    Gnedin, Alexander; Pitman, Jim

    2004-01-01

    We consider Kingman's partition structures which are regenerative with respect to a general operation of random deletion of some part. Prototypes of this class are the Ewens partition structures which Kingman characterised by regeneration after deletion of a part chosen by size-biased sampling. We associate each regenerative partition structure with a corresponding regenerative composition structure, which (as we showed in a previous paper) can be associated in turn with a regenerative random...

  9. Nomograms for Visualization of Naive Bayesian Classifier

    OpenAIRE

    Možina, Martin; Demšar, Janez; Michael W Kattan; Zupan, Blaz

    2004-01-01

    Besides good predictive performance, the naive Bayesian classifier can also offer a valuable insight into the structure of the training data and effects of the attributes on the class probabilities. This structure may be effectively revealed through visualization of the classifier. We propose a new way to visualize the naive Bayesian model in the form of a nomogram. The advantages of the proposed method are simplicity of presentation, clear display of the effects of individual attribute value...

  10. Subjective Bayesian Analysis: Principles and Practice

    OpenAIRE

    Goldstein, Michael

    2006-01-01

    We address the position of subjectivism within Bayesian statistics. We argue, first, that the subjectivist Bayes approach is the only feasible method for tackling many important practical problems. Second, we describe the essential role of the subjectivist approach in scientific analysis. Third, we consider possible modifications to the Bayesian approach from a subjectivist viewpoint. Finally, we address the issue of pragmatism in implementing the subjectivist approach.

  11. Bayesian Classification in Medicine: The Transferability Question *

    OpenAIRE

    Zagoria, Ronald J.; Reggia, James A.; Price, Thomas R.; Banko, Maryann

    1981-01-01

    Using probabilities derived from a geographically distant patient population, we applied Bayesian classification to categorize stroke patients by etiology. Performance was assessed both by error rate and with a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by two neurologists and to classification by the Bayesian method using “low cost” local and subjective probabilities. We conclude that for some...

  12. Bayesian target tracking based on particle filter

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    For being able to deal with nonlinear or non-Gaussian problems, particle filters have been studied by many researchers. Based on the particle filter, the extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking. Novel techniques such as the Markov chain Monte Carlo (MCMC) method and the resampling step are also introduced into Bayesian target tracking. The simulation results confirm that the improved particle filter with these techniques outperforms the basic one.
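
    A minimal bootstrap particle filter in this spirit; this is a sketch, not the paper's implementation: the EKF proposal is replaced by the simpler prior proposal, and the toy nonlinear model is invented for illustration:

      # Bootstrap particle filter on a standard toy nonlinear model.
      import numpy as np

      rng = np.random.default_rng(1)
      N, T = 500, 50
      x, ys = 0.0, []
      for t in range(T):                               # simulate the system
          x = 0.5 * x + 25 * x / (1 + x * x) + rng.normal()
          ys.append(x * x / 20 + rng.normal())

      p = rng.normal(0.0, 2.0, N)                      # initial particles
      for y in ys:
          p = 0.5 * p + 25 * p / (1 + p * p) + rng.normal(0, 1, N)  # propagate (prior proposal)
          w = np.exp(-0.5 * (y - p * p / 20) ** 2)                  # observation likelihood
          w /= w.sum()
          p = p[rng.choice(N, size=N, p=w)]                         # multinomial resampling
      print("posterior mean estimate:", p.mean())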

  13. Bayesian Variable Selection in Spatial Autoregressive Models

    OpenAIRE

    Jesus Crespo Cuaresma; Philipp Piribauer

    2015-01-01

    This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. We present two alternative approaches which can be implemented using Gibbs sampling methods in a straightforward way and allow us to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. In a simulation study we show that the variable selection approaches tend to outperform existing Bayesian model averaging tech...

  14. Fuzzy Functional Dependencies and Bayesian Networks

    Institute of Scientific and Technical Information of China (English)

    LIU WeiYi(刘惟一); SONG Ning(宋宁)

    2003-01-01

    Bayesian networks have become a popular technique for representing and reasoning with probabilistic information. The fuzzy functional dependency is an important kind of data dependency in relational databases with fuzzy values. The purpose of this paper is to set up a connection between these data dependencies and Bayesian networks. The connection is made through a set of methods that enable people to obtain the most information on independence conditions from fuzzy functional dependencies.

  15. Bayesian Models of Brain and Behaviour

    OpenAIRE

    Penny, William

    2012-01-01

    This paper presents a review of Bayesian models of brain and behaviour. We first review the basic principles of Bayesian inference. This is followed by descriptions of sampling and variational methods for approximate inference, and forward and backward recursions in time for inference in dynamical models. The review of behavioural models covers work in visual processing, sensory integration, sensorimotor integration, and collective decision making. The review of brain models covers a range of...

  16. A new method to quantify and compare the multiple components of fitness--a study case with kelp niche partition by divergent microstage adaptations to temperature.

    Directory of Open Access Journals (Sweden)

    Vasco M N C S Vieira

    Management of crops, commercialized or protected species, plagues or life-cycle evolution are subjects requiring comparisons among different demographic strategies. The simpler methods fail in relating changes in vital rates with changes in population viability, whereas more complex methods lack accuracy by neglecting interactions among vital rates. The difference between the fitness (evaluated by the population growth rate λ) of two alternative demographies is decomposed into the contributions of the differences between the pairwise vital rates and their interactions. This is achieved through a full Taylor expansion (i.e. remainder = 0) of the demographic model. The significance of each term is determined by permutation tests under the null hypothesis that all demographies come from the same pool. An example is given with periodic demographic matrices of the microscopic haploid phase of two kelp cryptic species observed to partition their niche occupation along the Chilean coast. The method provided clear and synthetic results showing that conditional differentiation of reproduction is an important driver for their differences in fitness along the latitudinal temperature gradient. But it also demonstrated that interactions among vital rates cannot be neglected, as they compose a significant part of the differences between demographies. This method allows researchers to assess the effects of multiple effective changes in a life-cycle from only two experiments. Evolutionists can determine with confidence the effective causes for changes in fitness, whereas population managers can determine best strategies from simpler experimental designs.
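
    A schematic of the decomposition described above, in our notation (the record does not display it): writing $\lambda$ as a function of the vital rates $a_1, \dots, a_m$ and $\Delta a_i$ for the difference in rate $i$ between the two demographies,

    $$\Delta\lambda = \sum_i \frac{\partial \lambda}{\partial a_i}\,\Delta a_i + \frac{1}{2}\sum_{i,j} \frac{\partial^2 \lambda}{\partial a_i\,\partial a_j}\,\Delta a_i\,\Delta a_j + \cdots,$$

    carried to all orders so that the remainder is zero. The first-order terms are the single-rate contributions; the mixed higher-order terms are the interactions whose significance is assessed by the permutation tests.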

  17. Bayesian approach to rough set

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes an approach to training rough set models using a Bayesian framework trained with the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.
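
    A generic Metropolis acceptance loop of the kind the record describes; the target below is a simple one-dimensional stand-in, not the rough set granule space:

      # Random-walk Metropolis sampler with a stand-in log-posterior.
      import numpy as np

      def log_post(theta):
          return -0.5 * (theta - 1.0) ** 2 / 0.25      # N(1, 0.5^2) stand-in target

      rng = np.random.default_rng(2)
      theta, samples = 0.0, []
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.5)            # symmetric proposal
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop                             # Metropolis acceptance rule
          samples.append(theta)
      print("posterior mean ~", np.mean(samples[1000:]))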

  18. Bayesian Inference for Radio Observations

    CERN Document Server

    Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin

    2015-01-01

    (Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...

  19. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    Science.gov (United States)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference data base of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, it provides the spectral index uncertainty without further computations, is computationally stable and detects multimodality.

  20. Bayesian segmentation of hyperspectral images

    CERN Document Server

    Mohammadpour, Adel; Mohammad-Djafari, Ali

    2007-01-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  1. Bayesian segmentation of hyperspectral images

    Science.gov (United States)

    Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali

    2004-11-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  2. Privacy Preserving Multiview Point Based BAT Clustering Algorithm and Graph Kernel Method for Data Disambiguation on Horizontally Partitioned Data

    Directory of Open Access Journals (Sweden)

    J. Anitha

    2015-06-01

    Data mining has been a popular research area for more than a decade due to its vast spectrum of applications. However, the popularity and wide availability of data mining tools have also raised concerns about the privacy of individuals. The burden of data privacy protection thus falls on the shoulders of the data holder, a data disambiguation problem occurs in the data matrix, and anonymized data becomes less secure. All of the existing privacy-preserving clustering methods perform clustering based on a single viewpoint, the origin, while a multiview approach utilizes many different viewpoints: objects assumed not to be in the same cluster as the two objects being measured. To solve the above problems, this study presents a multiview-point-based clustering method for anonymized data. First, the data disambiguation problem is solved using the Ramon-Gartner Subtree Graph Kernel (RGSGK), where weight values are assigned and the kernel value is determined for the disambiguated data. Privacy is obtained by anonymization, where the data is encrypted with a secure key obtained by Ring-Based Fully Homomorphic Encryption (RBFHE). In order to group the anonymized data, a BAT clustering method based on multiview-point similarity measurement, called MVBAT, is proposed: a distance matrix is first calculated, from which similarity and dissimilarity matrices are formed. The experimental results of the proposed MVBAT clustering algorithm are compared with conventional methods in terms of F-measure, running time, privacy loss and utility loss. RBFHE encryption results are also compared with existing methods in terms of communication cost on UCI machine learning datasets such as the adult and house datasets.

  3. Calibration of P/S amplitude ratios for seismic events in Xinjiang and its adjacent areas based on a Bayesian Kriging method

    Institute of Scientific and Technical Information of China (English)

    PAN Chang-zhou; JIN Ping; XIAO Wei-guo

    2007-01-01

    Correction maps of P/S amplitude ratios for seismic events distributed in Xinjiang, China and its adjacent areas were established using a Bayesian Kriging method for the two seismic stations WMQ and MAK. The relationship between the correction maps and variations of along-path features was analyzed, and the validity of applying the correction maps to improve the performance of P/S discriminants for seismic discrimination was investigated. Results show that the obtained correction maps generally reflect event-station path effects on the corresponding P/S discriminants, and that correcting these effects can further reduce the scatter of distance-corrected P/S measurements within earthquake and explosion populations, as well as improve their discrimination performance, if path effects are a significant factor in such scatter. For example, when the corresponding Kriging correction map was applied, the misidentification rate of earthquakes by Pn(2~4 Hz)/Lg(2~4 Hz) at MAK was reduced from 16.3% to 5.2%.

  4. Assessment of myocardial metabolic rate of glucose by means of Bayesian ICA and Markov Chain Monte Carlo methods in small animal PET imaging

    Science.gov (United States)

    Berradja, Khadidja; Boughanmi, Nabil

    2016-09-01

    In dynamic cardiac PET FDG studies the assessment of the myocardial metabolic rate of glucose (MMRG) requires knowledge of the blood input function (IF). The IF can be obtained by manual or automatic blood sampling and cross-calibrated with PET. These procedures are cumbersome, invasive and generate uncertainties. The IF is contaminated by spillover of radioactivity from the adjacent myocardium, and this can cause important errors in the estimated MMRG. In this study, we show that the IF can be extracted from the images in a rat heart study with 18F-fluorodeoxyglucose (18F-FDG) by means of Independent Component Analysis (ICA) based on Bayesian theory and a Markov Chain Monte Carlo (MCMC) sampling method (BICA). Images of rat hearts were acquired with the Sherbrooke small animal PET scanner. A region of interest (ROI) was drawn around the rat image and decomposed into blood and tissue using BICA. A statistical study showed a significant difference (p < 0.05) between the MMRG obtained with the IF extracted by BICA and that obtained with the IF extracted from measured images corrupted with spillover.

  5. Reliability Evaluation of the DC Electrical Method Based on Bayesian Estimation

    Institute of Scientific and Technical Information of China (English)

    李丰军; 翁克瑞; 龚承柱

    2012-01-01

    By studying the results of DC electrical detection and drawing on probability theory, this paper proposes a Bayesian method for evaluating the reliability of DC electrical detection results. In order to properly account for the various kinds of information in the process, a mixed Beta prior distribution is used; the prior distribution parameters and an inheritance factor are identified from historical samples, and the posterior distribution is then used to determine the reliability of the DC method and support decision-making. Finally, taking the No. 5 Coal Mine of PingMei as a case, the reliability of different mining areas is calculated. The results show that the method can serve coal mining enterprises in the prevention and control of water disasters and improve the return on investment.

  6. Study of Massive Data Delaunay Triangulation Based on a Grid Partition Method

    Institute of Scientific and Technical Information of China (English)

    李小丽; 陈花竹

    2011-01-01

    To speed up the construction of Delaunay triangulations over massive data, this paper adopts a grid-partition triangulation method. The data are first divided into grid tiles in a linear-quadtree fashion, and a sub-triangulation is constructed within each tile; the tiles are then merged bottom-up to form the global Delaunay triangulation. On this basis, to avoid producing overly acute angles, a constraint (threshold) angle is introduced to optimize the triangular mesh.
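
    A sketch of the grid-partition idea using scipy; points are bucketed into tiles and each tile is triangulated independently. The bottom-up merging of sub-triangulations, which is the substance of the record's method, is deliberately omitted here:

      # Partition points into a k x k grid and triangulate each tile.
      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(3)
      pts = rng.random((20000, 2))
      k = 4
      tile = (np.floor(pts[:, 0] * k).clip(0, k - 1) * k
              + np.floor(pts[:, 1] * k).clip(0, k - 1)).astype(int)

      subs = [Delaunay(pts[tile == t]) for t in range(k * k)]  # per-tile sub-triangulations
      print(sum(len(s.simplices) for s in subs), "triangles before merging")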

  7. Bayesian inference on proportional elections.

    Science.gov (United States)

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the chamber of deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the 2010 Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
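
    A Monte Carlo sketch of the kind of question the record answers, namely the probability that a party gains at least one seat. The poll counts are hypothetical and the seat allocation uses a simplified D'Hondt divisor rule rather than the exact Brazilian quotient system:

      # Dirichlet posterior over vote shares + divisor-rule seat allocation.
      import numpy as np

      def dhondt(shares, seats):
          won = np.zeros(len(shares), dtype=int)
          for _ in range(seats):
              won[np.argmax(shares / (won + 1))] += 1
          return won

      rng = np.random.default_rng(4)
      poll = np.array([420, 310, 150, 80, 40])         # hypothetical poll counts
      hits = 0
      for _ in range(10000):
          shares = rng.dirichlet(poll + 1)             # posterior draw (uniform prior)
          hits += dhondt(shares, 10)[3] >= 1           # does party 3 win a seat?
      print("P(party 3 gets a seat) ~", hits / 10000)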

  8. Doing bayesian data analysis a tutorial with R and BUGS

    CERN Document Server

    Kruschke, John K

    2011-01-01

    There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis accessible to a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all

  9. A parametric study on the buckling of functionally graded material plates with internal discontinuities using the partition of unity method

    OpenAIRE

    S Natarajan; Chakraborty, S.; M. Ganapathi; Subramaniam, M

    2013-01-01

    In this paper, the effect of local defects, viz., cracks and cutouts on the buckling behaviour of functionally graded material plates subjected to mechanical and thermal load is numerically studied. The internal discontinuities, viz., cracks and cutouts are represented independent of the mesh within the framework of the extended finite element method and an enriched shear flexible 4-noded quadrilateral element is used for the spatial discretization. The properties are assumed to vary only in ...

  10. Bayesian kinematic earthquake source models

    Science.gov (United States)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.

    2009-12-01

    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high

  11. Intrusion Intention Recognition Method Based on Dynamic Bayesian Networks

    Institute of Scientific and Technical Information of China (English)

    吴庆涛; 王琦璟; 郑瑞娟

    2011-01-01

    When constructing high-level attack scenarios and handling sophisticated attacks, intrusion detection techniques have difficulty effectively discerning an intruder's intention, identifying the semantics of attacks and predicting further attacks. Aiming at the uncertainty in complex network attack processes, this paper presents an intrusion intention recognition method based on Dynamic Bayesian Networks (DBN). A dynamic Bayesian directed acyclic graph (DAG) is used to express, in real time, the associations among attack behaviours, intentions and attack targets, and a probabilistic reasoning method is applied to predict the intruder's next attack. Experimental results reflect how the intruder's intention changes during the intrusion process and verify the feasibility and validity of the method.
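
    A stand-in for the kind of inference the record performs: forward filtering of a two-state model (intrusion intention absent/present) on a stream of alerts. The transition and observation probabilities are invented for illustration; a real DBN would track many more variables:

      # Forward filtering over a two-state dynamic model of attacker intention.
      import numpy as np

      T = np.array([[0.9, 0.1],        # P(next state | current state)
                    [0.2, 0.8]])
      E = np.array([[0.7, 0.3],        # P(observation | state), obs 1 = suspicious
                    [0.1, 0.9]])
      belief = np.array([0.95, 0.05])  # prior: intention probably absent
      for obs in [1, 1, 0, 1, 1]:      # observed alert sequence
          belief = belief @ T          # predict step
          belief = belief * E[:, obs]  # update on the new observation
          belief /= belief.sum()
          print("P(intrusion intention) =", round(belief[1], 3))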

  12. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777

  13. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    Science.gov (United States)

    Ellefsen, Karl J.; Smith, David

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
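
    A sketch of the hierarchical two-cluster splitting described above, using a two-component Gaussian mixture at each level (scikit-learn's EM fit stands in for the record's Hamiltonian Monte Carlo estimation, and the geologic consistency checks are omitted):

      # Recursive two-component mixture splitting of multivariate samples.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      def split(X, depth, max_depth=2):
          if depth == max_depth or len(X) < 20:
              return [X]
          labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
          return (split(X[labels == 0], depth + 1, max_depth)
                  + split(X[labels == 1], depth + 1, max_depth))

      rng = np.random.default_rng(5)
      X = np.vstack([rng.normal(m, 0.3, (100, 3)) for m in (0, 1, 2, 3)])  # fake geochemistry
      print([len(leaf) for leaf in split(X, 0)])       # sizes of the leaf clusters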

  14. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...

  15. Dynamic Batch Bayesian Optimization

    CERN Document Server

    Azimi, Javad; Fern, Xiaoli

    2011-01-01

    Bayesian optimization (BO) algorithms try to optimize an unknown function that is expensive to evaluate using a minimum number of evaluations/experiments. Most of the proposed algorithms in BO are sequential, where only one experiment is selected at each iteration. This can be time inefficient when each experiment takes a long time and more than one experiment can be run concurrently. On the other hand, requesting a fixed-size batch of experiments at each iteration causes performance inefficiency in BO compared to sequential policies. In this paper, we present an algorithm that asks for a batch of experiments at each time step t, where the batch size p_t is dynamically determined at each step. Our algorithm is based on the observation that the sequence of experiments selected by the sequential policy can sometimes be almost independent from each other. Our algorithm identifies such scenarios and requests those experiments at the same time without degrading the performance. We evaluate our proposed method us...

  16. Malicious Bayesian Congestion Games

    CERN Document Server

    Gairing, Martin

    2008-01-01

    In this paper, we introduce malicious Bayesian congestion games as an extension to congestion games where players might act in a malicious way. In such a game each player has two types. Either the player is a rational player seeking to minimize her own delay, or - with a certain probability - the player is malicious, in which case her only goal is to disturb the other players as much as possible. We show that such games do not, in general, possess a Bayesian Nash equilibrium in pure strategies (i.e. a pure Bayesian Nash equilibrium). Moreover, given a game, we show that it is NP-complete to decide whether it admits a pure Bayesian Nash equilibrium. This result even holds when resource latency functions are linear, each player is malicious with the same probability, and all strategy sets consist of singleton sets. For a slightly more restricted class of malicious Bayesian congestion games, we provide easily checkable properties that are necessary and sufficient for the existence of a pure Bayesian Nash equilibrium....

  17. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation of inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of the distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which has challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.

  18. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Kuang-You Cheng

    2010-01-01

    This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation of inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of the distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which has challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.

  19. Statistical assignment of DNA sequences using Bayesian phylogenetics

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Boomsma, Wouter Krogh; Huelsenbeck, John P;

    2008-01-01

    We provide a new automated statistical method for DNA barcoding based on a Bayesian phylogenetic analysis. The method is based on automated database sequence retrieval, alignment, and phylogenetic analysis using a custom-built program for Bayesian phylogenetic analysis. We show on real data that ...

  20. Water Quality Evaluation Model Using a Bayesian Method Based on Entropy Weighting

    Institute of Scientific and Technical Information of China (English)

    赵晓慎; 张超; 王文川

    2011-01-01

    Based on Bayesian water quality evaluation theory, the entropy weight method is used to determine the weight of each evaluation indicator, and an entropy-weighted Bayesian water quality evaluation model is proposed. The model is applied to water quality evaluation at the Huishan station and the Hanzhuang sluice on the lower lake of Nansi Lake. The results show that the model is feasible and effective; compared with the equal-weight Bayesian method, the entropy-weighted method resolves the water quality categories with higher resolution, providing a basis for water environment protection and management.
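
    The entropy-weight computation that the record builds on is standard; a minimal sketch with a made-up indicator matrix (rows are monitoring samples, columns are water quality indicators):

      # Entropy weights: low-entropy (more informative) indicators get more weight.
      import numpy as np

      X = np.array([[0.8, 0.3, 0.5],
                    [0.6, 0.9, 0.4],
                    [0.7, 0.5, 0.6],
                    [0.9, 0.4, 0.8]])                    # hypothetical indicator matrix

      P = X / X.sum(axis=0)                              # normalize each indicator column
      E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy of each indicator
      w = (1 - E) / (1 - E).sum()                        # entropy weights
      print("entropy weights:", np.round(w, 3))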

  1. Thinning Invariant Partition Structures

    CERN Document Server

    Starr, Shannon

    2011-01-01

    A partition structure is a random point process on $[0,1]$ whose points sum to 1, almost surely. In the case that there are infinitely many points to begin with, we consider a thinning action by: first, removing points independently, such that each point survives with probability $p>0$; and, secondly, rescaling the remaining points by an overall factor to normalize the sum again to 1. We prove that the partition structures which are "thinning divisible" for a sequence of $p$'s converging to 0 are mixtures of the Poisson-Kingman partition structures. We also consider the property of being "thinning invariant" for all $p \\in (0,1)$.

  2. Estimating the spatial distribution of soil moisture based on Bayesian maximum entropy method with auxiliary data from remote sensing

    Science.gov (United States)

    Gao, Shengguo; Zhu, Zhongli; Liu, Shaomin; Jin, Rui; Yang, Guangchao; Tan, Lei

    2014-10-01

    Soil moisture (SM) plays a fundamental role in the land-atmosphere exchange process. Spatial estimation based on multiple in situ (network) data is a critical way to understand the spatial structure and variation of land surface soil moisture. Theoretically, integrating densely sampled auxiliary data spatially correlated with soil moisture into the procedure of spatial estimation can improve its accuracy. In this study, we present a novel approach to estimate the spatial pattern of soil moisture by using the BME method based on wireless sensor network data and auxiliary information from ASTER (Terra) land surface temperature measurements. For comparison, three traditional geostatistical methods were also applied: ordinary kriging (OK), which used the wireless sensor network data only, and regression kriging (RK) and ordinary co-kriging (Co-OK), which both integrated the ASTER land surface temperature as a covariate. In Co-OK, LST was linearly contained in the estimator; in RK, the estimator is expressed as the sum of the regression estimate and the kriged estimate of the spatially correlated residual; in BME, the ASTER land surface temperature was first retrieved as soil moisture based on the linear regression, and then the t-distributed prediction interval (PI) of soil moisture was estimated and used as soft data in probability form. The results indicate that all three methods provide reasonable estimations. Compared to OK, Co-OK, RK and BME can provide more accurate spatial estimations by integrating the auxiliary information. RK and BME show more obvious improvement compared to Co-OK, and BME can even perform slightly better than RK. The inherent issue of spatial estimation (overestimation in the range of low values and underestimation in the range of high values) can also be further mitigated in both RK and BME. We can conclude that integrating auxiliary data into spatial estimation can indeed improve the accuracy, BME and RK take better advantage of the auxiliary

  3. Uncertainty of mass discharge estimates from contaminated sites using a fully Bayesian framework

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John;

    2011-01-01

    plane. The method accounts for: (1) conceptual model uncertainty through Bayesian model averaging, (2) heterogeneity through Bayesian geostatistics with an uncertain geostatistical model, and (3) measurement uncertainty. An ensemble of unconditional steady-state plume realizations is generated through...

  4. PARTITION PROPERTY OF DOMAIN DECOMPOSITION WITHOUT ELLIPTICITY

    Institute of Scientific and Technical Information of China (English)

    Mo Mu; Yun-qing Huang

    2001-01-01

    Partition property plays a central role in domain decomposition methods. Existing theory essentially assumes certain ellipticity. We prove the partition property for problems without ellipticity which are of practical importance. Example applications include implicit schemes applied to degenerate parabolic partial differential equations arising from superconductors, superfluids and liquid crystals. With this partition property, Schwarz algorithms can be applied to general non-elliptic problems with an h-independent optimal convergence rate. Application to the time-dependent Ginzburg-Landau model of superconductivity is illustrated and numerical results are presented.

  5. Convex Regression with Interpretable Sharp Partitions

    Science.gov (United States)

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set.

  6. Bayesian missing data problems EM, data augmentation and noniterative computation

    CERN Document Server

    Tan, Ming T; Ng, Kai Wang

    2009-01-01

    Bayesian Missing Data Problems: EM, Data Augmentation and Noniterative Computation presents solutions to missing data problems through explicit or noniterative sampling calculation of Bayesian posteriors. The methods are based on the inverse Bayes formulae discovered by one of the authors in 1995. Applying the Bayesian approach to important real-world problems, the authors focus on exact numerical solutions, a conditional sampling approach via data augmentation, and a noniterative sampling approach via EM-type algorithms. After introducing the missing data problems, Bayesian approach, and poste

  7. Bosonic Partition Functions

    CERN Document Server

    Kellerstein, M; Verbaarschot, J J M

    2016-01-01

    The behavior of quenched Dirac spectra of two-dimensional lattice QCD is consistent with spontaneous chiral symmetry breaking which is forbidden according to the Coleman-Mermin-Wagner theorem. One possible resolution of this paradox is that, because of the bosonic determinant in the partially quenched partition function, the conditions of this theorem are violated allowing for spontaneous symmetry breaking in two dimensions or less. This goes back to work by Niedermaier and Seiler on nonamenable symmetries of the hyperbolic spin chain and earlier work by two of the authors on bosonic partition functions at nonzero chemical potential. In this talk we discuss chiral symmetry breaking for the bosonic partition function of QCD at nonzero isospin chemical potential and a bosonic random matrix theory at imaginary chemical potential and compare the results with the fermionic counterpart. In both cases the chiral symmetry group of the bosonic partition function is noncompact.

  8. BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.

    Science.gov (United States)

    Khakabimamaghani, Sahand; Ester, Martin

    2016-01-01

    The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods are provided in the literature for patient stratification, which is the central task of SM, however, there are still significant open issues. First, it is still unclear if integrating different datatypes will help in detecting disease subtypes more accurately, and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data. PMID:26776199

  10. Poisson-Kingman partitions

    OpenAIRE

    Pitman, Jim

    2002-01-01

    This paper presents some general formulas for random partitions of a finite set derived by Kingman's model of random sampling from an interval partition generated by subintervals whose lengths are the points of a Poisson point process. These lengths can also be interpreted as the jumps of a subordinator, that is, an increasing process with stationary independent increments. Examples include the two-parameter family of Poisson-Dirichlet models derived from the Poisson process of jumps of a stab...

  11. DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING

    Directory of Open Access Journals (Sweden)

    Javed Ali

    2012-04-01

    Parallel computing systems employ task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and processing units as computing resources, which leads to highly inter-process communication capabilities. The main part of the proposed algorithm is the resource management unit, which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network, focusing on real-time and non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of the costs of programming. The results show the benefit of the task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling and communication. Some important models for task partitioning are also discussed in the paper. We target an algorithm for task partitioning that improves inter-process communication between tasks and uses the resources of the system in an efficient manner. The proposed algorithm contributes to minimizing the inter-process communication cost among the executing processes.

  12. Incompatibility boundaries for properties of community partitions

    CERN Document Server

    Browet, Arnaud; Sarlette, Alain

    2016-01-01

    We prove the incompatibility of certain desirable properties of community partition quality functions. Our results generalize the impossibility result of [Kleinberg 2003] by considering sets of weaker properties. In particular, we use an alternative notion to solve the central issue of the consistency property. (The latter means that modifying the graph in a way consistent with a partition should not have counterintuitive effects). Our results clearly show that community partition methods should not be expected to perfectly satisfy all ideally desired properties. We then proceed to show that this incompatibility no longer holds when slightly relaxed versions of the properties are considered, and we provide in fact examples of simple quality functions satisfying these relaxed properties. An experimental study of these quality functions shows a behavior comparable to established methods in some situations, but more debatable results in others. This suggests that defining a notion of good partition in communitie...

  13. Simultaneous discovery, estimation and prediction analysis of complex traits using a bayesian mixture model.

    Directory of Open Access Journals (Sweden)

    Gerhard Moser

    2015-04-01

    Gene discovery, estimation of heritability captured by SNP arrays, inference on genetic architecture and prediction analyses of complex traits are usually performed using different statistical models and methods, leading to inefficiency and loss of power. Here we use a Bayesian mixture model that simultaneously allows variant discovery, estimation of genetic variance explained by all variants and prediction of unobserved phenotypes in new samples. We apply the method to simulated data of quantitative traits and Wellcome Trust Case Control Consortium (WTCCC) data on disease and show that it provides accurate estimates of SNP-based heritability, produces unbiased estimators of risk in new samples, and that it can estimate genetic architecture by partitioning variation across hundreds to thousands of SNPs. We estimated that, depending on the trait, 2,633 to 9,411 SNPs explain all of the SNP-based heritability in the WTCCC diseases. The majority of those SNPs (>96%) had small effects, confirming a substantial polygenic component to common diseases. The proportion of the SNP-based variance explained by large effects (each SNP explaining 1% of the variance) varied markedly between diseases, ranging from almost zero for bipolar disorder to 72% for type 1 diabetes. Prediction analyses demonstrate that for diseases with major loci, such as type 1 diabetes and rheumatoid arthritis, Bayesian methods outperform profile scoring or mixed model approaches.
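
    A sketch of the mixture prior such models place on each SNP effect $\beta_j$, in our notation (the record does not display the model, so the form below is schematic):

    $$\beta_j \sim \pi_0\,\delta_0 + \sum_{k=1}^{K} \pi_k\, N(0, \gamma_k \sigma_g^2),$$

    where $\delta_0$ is a point mass at zero, the mixing proportions $\pi_k$ are estimated from the data, and the constants $\gamma_k$ set the relative variance of each non-null component. The genetic architecture reported above is read off the posterior allocation of SNPs to these components.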

  14. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
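
    The "explicit expressions for the posterior expectation and covariance" are those of the standard Gaussian linear model. In generic notation (ours, not the thesis's), for data $d = Gm + e$ with prior $m \sim N(\mu_m, \Sigma_m)$ and noise $e \sim N(0, \Sigma_e)$,

    $$\mu_{m|d} = \mu_m + \Sigma_m G^{T}\left(G\Sigma_m G^{T} + \Sigma_e\right)^{-1}(d - G\mu_m),$$

    $$\Sigma_{m|d} = \Sigma_m - \Sigma_m G^{T}\left(G\Sigma_m G^{T} + \Sigma_e\right)^{-1} G\,\Sigma_m,$$

    which is what makes the inversion computationally fast: no iterative sampling is needed for the linearized problem.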

  15. Bayesian estimation and tracking a practical guide

    CERN Document Server

    Haug, Anton J

    2012-01-01

    A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noise. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation

  16. A Bayesian Analysis of Spectral ARMA Model

    Directory of Open Access Journals (Sweden)

    Manoel I. Silvestre Bezerra

    2012-01-01

    Full Text Available Bezerra et al. (2008) proposed a new method, based on Yule-Walker equations, to estimate the ARMA spectral model. In this paper, a Bayesian approach is developed for this model by using the noninformative prior proposed by Jeffreys (1967). The Bayesian computation, simulation via Markov chain Monte Carlo (MCMC), is carried out, and characteristics of the marginal posterior distributions, such as the Bayes estimator and confidence intervals for the parameters of the ARMA model, are derived. Both methods are also compared with the traditional least squares and maximum likelihood approaches, and a numerical illustration with two examples of the ARMA model is presented to evaluate the performance of the procedures.

  17. Hybrid Batch Bayesian Optimization

    CERN Document Server

    Azimi, Javad; Fern, Xiaoli

    2012-01-01

    Bayesian Optimization aims at optimizing an unknown non-convex/concave function that is costly to evaluate. We are interested in application scenarios where concurrent function evaluations are possible. Under such a setting, BO could choose to either sequentially evaluate the function, one input at a time and wait for the output of the function before making the next selection, or evaluate the function at a batch of multiple inputs at once. These two different settings are commonly referred to as the sequential and batch settings of Bayesian Optimization. In general, the sequential setting leads to better optimization performance as each function evaluation is selected with more information, whereas the batch setting has an advantage in terms of the total experimental time (the number of iterations). In this work, our goal is to combine the strength of both settings. Specifically, we systematically analyze Bayesian optimization using Gaussian process as the posterior estimator and provide a hybrid algorithm t...
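
    To make the sequential-versus-batch distinction concrete, here is a hedged numpy sketch of one hybrid-style step: a Gaussian-process posterior scores candidates by expected improvement, and a batch is assembled greedily by "fantasizing" each chosen point at its posterior mean (a kriging-believer heuristic, used here only to illustrate batch selection, not the authors' exact algorithm).

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(Xt, yt, Xs, noise=1e-6):
    """Posterior mean and standard deviation of a GP with an RBF kernel."""
    K = rbf(Xt, Xt) + noise * np.eye(len(Xt))
    Ks = rbf(Xt, Xs)
    mu = Ks.T @ np.linalg.solve(K, yt)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd          # minimization convention
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: np.sin(5 * x[:, 0]) + 0.5 * x[:, 0] ** 2   # toy "costly" function
rng = np.random.default_rng(2)
Xt = rng.uniform(0, 2, (4, 1)); yt = f(Xt)
cand = np.linspace(0, 2, 200)[:, None]

batch = []
Xf, yf = Xt.copy(), yt.copy()       # "fantasy" copies for greedy batch selection
for _ in range(3):                  # pick a batch of 3 points to evaluate in parallel
    mu, sd = gp_posterior(Xf, yf, cand)
    x_next = cand[np.argmax(expected_improvement(mu, sd, yf.min()))]
    batch.append(x_next)
    mu_next, _ = gp_posterior(Xf, yf, x_next[None, :])
    Xf = np.vstack([Xf, x_next]); yf = np.append(yf, mu_next)  # fantasize at the mean

print("batch to evaluate concurrently:", np.round(np.array(batch).ravel(), 3))
```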

  18. Low-Complexity Bayesian Estimation of Cluster-Sparse Channels

    KAUST Repository

    Ballal, Tarig

    2015-09-18

    This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as underwater acoustic channel impulse responses, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol-error-rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.
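
    The core computation behind such an estimator is the linear MMSE formula; the cluster trick then reuses small in-cluster solves in place of one large inverse. The sketch below shows only the generic MMSE step on a toy banded Toeplitz convolution matrix (the cluster partitioning itself is omitted; the matrix sizes and prior are illustrative assumptions).

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)

# Banded Toeplitz measurement matrix built from a training sequence (convolution).
L, N = 40, 60                            # channel length, number of observations
s = rng.choice([-1.0, 1.0], N + L - 1)
A = toeplitz(s[L - 1:], s[L - 1::-1])    # N x L convolution (training) matrix

# Cluster-sparse channel: a few short clusters of nonzero taps.
h = np.zeros(L)
h[3:6] = rng.normal(0, 1, 3)
h[25:28] = rng.normal(0, 1, 3)
sigma2 = 0.05
y = A @ h + rng.normal(0, np.sqrt(sigma2), N)

# Linear MMSE estimate with Gaussian prior h ~ N(0, Rh):
#   h_mmse = Rh A^T (A Rh A^T + sigma2 I)^{-1} y
Rh = np.eye(L)   # prior tap covariance (assumed; in practice shaped by cluster support)
h_mmse = Rh @ A.T @ np.linalg.solve(A @ Rh @ A.T + sigma2 * np.eye(N), y)
print("MSE of MMSE estimate:", np.mean((h_mmse - h) ** 2))
```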

  19. Classification of Maize and Weeds by Bayesian Networks

    Science.gov (United States)

    Chapron, Michel; Oprea, Alina; Sultana, Bogdan; Assemat, Louis

    2007-11-01

    Precision Agriculture is concerned with all sorts of within-field variability, spatially and temporally, that reduces the efficacy of agronomic practices applied in a uniform way all over the field. Because of these sources of heterogeneity, uniform management actions strongly reduce the efficiency of the resource input to the crop (i.e. fertilization, water) or of the agrochemicals used for pest control (i.e. herbicide). Moreover, this low efficacy means high environmental cost (pollution) and reduced economic return for the farmer. Weed plants are one of these sources of variability for the crop, as they occur in patches in the field. Detecting the location, size and internal density of these patches, along with identifying the main weed species involved, opens the way to a site-specific weed control strategy, where only patches of weeds would receive the appropriate herbicide (type and dose). Herein, an automatic method for recognizing plant species is described. First, the pixels are classified into two classes, soil and vegetation; then the vegetation part of the input image is segmented from the distance image using the watershed method; finally, the leaves of the vegetation are partitioned into maize and weeds by means of two Bayesian networks.

  20. An introduction to Gaussian Bayesian networks.

    Science.gov (United States)

    Grzegorczyk, Marco

    2010-01-01

    The extraction of regulatory networks and pathways from postgenomic data is important for drug discovery and development, as the extracted pathways reveal how genes or proteins regulate each other. Following up on the seminal paper of Friedman et al. (J Comput Biol 7:601-620, 2000), Bayesian networks have been widely applied as a popular tool to this end in systems biology research. Their popularity stems from the tractability of the marginal likelihood of the network structure, which is a consistent scoring scheme in the Bayesian context. This score is based on an integration over the entire parameter space, for which highly expensive computational procedures have to be applied when using more complex models based on differential equations; for example, see (Bioinformatics 24:833-839, 2008). This chapter gives an introduction to reverse engineering regulatory networks and pathways with Gaussian Bayesian networks, that is Bayesian networks with the probabilistic BGe scoring metric [see (Geiger and Heckerman 235-243, 1995)]. In the BGe model, the data are assumed to stem from a Gaussian distribution and a normal-Wishart prior is assigned to the unknown parameters. Gaussian Bayesian network methodology for analysing static observational, static interventional as well as dynamic (observational) time series data will be described in detail in this chapter. Finally, we apply these Bayesian network inference methods (1) to observational and interventional flow cytometry (protein) data from the well-known RAF pathway to evaluate the global network reconstruction accuracy of Bayesian network inference and (2) to dynamic gene expression time series data of nine circadian genes in Arabidopsis thaliana to reverse engineer the unknown regulatory network topology for this domain. PMID:20824469

  1. Learning Bayesian Networks from Correlated Data

    Science.gov (United States)

    Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola

    2016-05-01

    Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster and ignoring this correlation typically inflates the rate of false positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.

  2. Dynamic Bayesian Combination of Multiple Imperfect Classifiers

    CERN Document Server

    Simpson, Edwin; Psorakis, Ioannis; Smith, Arfon

    2012-01-01

    Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision making strategies, and show that natural groupings are formed. Finally we present ...

  3. Fault Bayesian network based method for risk assessment of slope collapse accidents

    Institute of Scientific and Technical Information of China (English)

    谢洪涛

    2012-01-01

    This paper aims to convert fault trees into fault Bayesian networks and to solve for the main causes and the probability of slope collapse accidents. For slope excavation and support systems, in which the causal relationships are uncertain, a probabilistic description is more appropriate. Fault tree logic gates lack this capability because they are expressed in deterministic logic, whereas Bayesian networks are regarded as one of the most effective theoretical models for representing and reasoning about uncertain knowledge. Referring to earlier qualitative analyses of slope collapse accidents with Fault Tree Analysis (FTA), a Bayesian network is derived from the fault tree, and a corresponding fault Bayesian network model of slope collapse is established. To build the network, the causal relationships between all variables are analyzed on the basis of their prior probabilities. When new evidence becomes available, the posterior probabilities of the variables describing the structural condition can be updated, which has practical value for evaluating the structure. Applying the fault Bayesian network model, the probability of the slope collapse risk event is calculated, and the basic events are ranked by importance to identify the potential factors with the greatest influence on the occurrence of a slope collapse accident. The results show that the fault Bayesian network based method extracts more information, and that the network can account for how changing conditions at one node propagate to any other node of the network, which the fault tree approach cannot do. The Bayesian network approach can therefore serve as a good substitute for the fault tree approach in hazard assessment, with a promising perspective of application.
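
    As a toy illustration of the conversion, the sketch below encodes a two-level fault tree with OR gates as a Bayesian network (deterministic OR conditional probability tables over binary nodes) and updates basic-event posteriors by brute-force enumeration after observing the top event. The event names and prior probabilities are invented for the example.

```python
import itertools

# Hypothetical basic events with prior failure probabilities (illustrative only).
priors = {"heavy_rain": 0.2, "poor_support": 0.1, "over_excavation": 0.15}

def or_gate(*inputs):
    return int(any(inputs))

def joint(assign):
    """Joint probability of one assignment of the basic events."""
    p = 1.0
    for e, v in assign.items():
        p *= priors[e] if v else 1 - priors[e]
    return p

# Enumerate all basic-event worlds; the (invented) fault tree is:
#   slope_collapse = OR(heavy_rain, OR(poor_support, over_excavation))
events = list(priors)
p_top, post = 0.0, dict.fromkeys(events, 0.0)
for values in itertools.product([0, 1], repeat=len(events)):
    assign = dict(zip(events, values))
    top = or_gate(assign["heavy_rain"],
                  or_gate(assign["poor_support"], assign["over_excavation"]))
    if top:  # condition on the evidence "collapse occurred"
        p = joint(assign)
        p_top += p
        for e in events:
            post[e] += p * assign[e]

print("P(collapse) =", round(p_top, 4))
for e in events:  # posterior importance ranking of basic events given collapse
    print(f"P({e} | collapse) = {post[e] / p_top:.3f}")
```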

  4. Bayesian multiple target tracking

    CERN Document Server

    Streit, Roy L

    2013-01-01

    This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements

  5. Bayesian and frequentist inequality tests

    OpenAIRE

    David M. Kaplan; Zhuo, Longhao

    2016-01-01

    Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (and normal). We compare Bayesian and frequentist hypothesis tests of inequality restrictions in such cases. For finite-dimensional parameters, if the null hypothesis is that the parameter vector lies in a certain half-space, then the Bayesian test has (frequentist) size $\\alpha$; if the null hypothesis is any other convex subspace, then the Bayesian test...

  6. Polymers as Reference Partitioning Phase: Polymer Calibration for an Analytically Operational Approach To Quantify Multimedia Phase Partitioning.

    Science.gov (United States)

    Gilbert, Dorothea; Witt, Gesine; Smedes, Foppe; Mayer, Philipp

    2016-06-01

    Polymers are increasingly applied for the enrichment of hydrophobic organic chemicals (HOCs) from various types of samples and media in many analytical partitioning-based measuring techniques. We propose using polymers as a reference partitioning phase and introduce polymer-polymer partitioning as the basis for a deeper insight into partitioning differences of HOCs between polymers, calibrating analytical methods, and consistency checking of existing and calculation of new partition coefficients. Polymer-polymer partition coefficients were determined for polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and organochlorine pesticides (OCPs) by equilibrating 13 silicones, including polydimethylsiloxane (PDMS) and low-density polyethylene (LDPE) in methanol-water solutions. Methanol as cosolvent ensured that all polymers reached equilibrium while its effect on the polymers' properties did not significantly affect silicone-silicone partition coefficients. However, we noticed minor cosolvent effects on determined polymer-polymer partition coefficients. Polymer-polymer partition coefficients near unity confirmed identical absorption capacities of several PDMS materials, whereas larger deviations from unity were indicated within the group of silicones and between silicones and LDPE. Uncertainty in polymer volume due to imprecise coating thickness or the presence of fillers was identified as the source of error for partition coefficients. New polymer-based (LDPE-lipid, PDMS-air) and multimedia partition coefficients (lipid-water, air-water) were calculated by applying the new concept of a polymer as reference partitioning phase and by using polymer-polymer partition coefficients as conversion factors. The present study encourages the use of polymer-polymer partition coefficients, recognizing that polymers can serve as a linking third phase for a quantitative understanding of equilibrium partitioning of HOCs between any two phases. PMID:27115830
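
    The "polymer as reference phase" idea boils down to combining partition coefficients multiplicatively, i.e. additively in log10 units. The snippet below shows that bookkeeping with invented example values; none of the numbers are from the paper.

```python
# Partition coefficients combine multiplicatively across a linking polymer phase:
#   K(lipid-water) = K(lipid-polymer) * K(polymer-water)
# so in log10 units they simply add. All values below are invented placeholders.

log_K_polymer_water = 4.8    # e.g., a PCB between PDMS and water (assumed)
log_K_lipid_polymer = 0.9    # same chemical between lipid and PDMS (assumed)

log_K_lipid_water = log_K_lipid_polymer + log_K_polymer_water
print(f"log K(lipid-water) = {log_K_lipid_water:.1f}")

# Polymer-polymer coefficients likewise act as conversion factors between methods
# calibrated to different polymers: K(LDPE-water) = K(PDMS-water) / K(PDMS-LDPE).
log_K_pdms_ldpe = -0.2       # PDMS-LDPE partition coefficient (assumed)
log_K_ldpe_water = log_K_polymer_water - log_K_pdms_ldpe
print(f"log K(LDPE-water) = {log_K_ldpe_water:.1f}")
```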

  8. Simulation based bayesian econometric inference: principles and some recent computational advances.

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); H.K. van Dijk (Herman); R.D. van Oest (Rutger)

    2007-01-01

    textabstractIn this paper we discuss several aspects of simulation based Bayesian econometric inference. We start at an elementary level on basic concepts of Bayesian analysis; evaluating integrals by simulation methods is a crucial ingredient in Bayesian inference. Next, the most popular and well-

  9. A New Method of Accelerated Bayesian Inference for Comparable Mass Binaries in both Ground and Space-Based Gravitational Wave Astronomy

    CERN Document Server

    Porter, Edward K

    2014-01-01

    With the advance in computational resources, Bayesian inference is increasingly becoming the standard tool of practise in GW astronomy. However, algorithms such as Markov Chain Monte Carlo (MCMC) require a large number of iterations to guarantee convergence to the target density. Each chain demands a large number of evaluations of the likelihood function, and in the case of a Hessian MCMC, calculations of the Fisher information matrix for use as a proposal distribution. As each iteration requires the generation of at least one gravitational waveform, we very quickly reach a point of exclusion for current Bayesian algorithms, especially for low mass systems where the length of the waveforms is large and the waveform generation time is on the order of seconds. This suddenly demands a timescale of many weeks for a single MCMC. As each likelihood and Fisher information matrix calculation requires the evaluation of noise-weighted scalar products, we demonstrate that by using the linearity of integration, and the f...

  10. Dimensionality reduction in Bayesian estimation algorithms

    Directory of Open Access Journals (Sweden)

    G. W. Petty

    2013-03-01

    Full Text Available An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument noise) component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals – whether Bayesian or not – lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.
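
    A simplified version of the pseudochannel construction can be sketched with plain principal component analysis: whiten the channels against the background covariance so it becomes diagonal with unit magnitude, then project onto the leading component(s). This is a condensed stand-in for the paper's two-stage procedure, with synthetic data and invented dimensions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "dependent database": N-channel brightness temperatures with a
# correlated geophysical background plus a low-dimensional precipitation signal.
n_obs, n_chan = 5000, 9
background = rng.normal(size=(n_obs, 3)) @ rng.normal(size=(3, n_chan))
signal = np.outer(rng.gamma(1.0, 1.0, n_obs), rng.normal(size=n_chan))
noise = rng.normal(0, 0.1, (n_obs, n_chan))
obs = background + signal + noise

# Stage 1: whiten with the background-plus-noise covariance, so the regularized
# background component becomes isotropic with unit magnitude.
C_bg = np.cov((background + noise).T)
evals, evecs = np.linalg.eigh(C_bg)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Stage 2: PCA of the whitened observations; keep M << N leading directions.
Z = obs @ W
U, S, Vt = np.linalg.svd(Z - Z.mean(0), full_matrices=False)
M = 1
pseudo = Z @ Vt[:M].T   # M pseudochannels, linear in the original N channels

print("pseudochannel shape:", pseudo.shape)
print("variance explained:", round(float(S[0] ** 2 / (S ** 2).sum()), 3))
```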

  11. Bayesian calibration for forensic age estimation.

    Science.gov (United States)

    Ferrante, Luigi; Skrami, Edlira; Gesuita, Rosaria; Cameriere, Roberto

    2015-05-10

    Forensic medicine is increasingly called upon to assess the age of individuals. Forensic age estimation is mostly required in relation to illegal immigration and identification of bodies or skeletal remains. A variety of age estimation methods are based on dental samples and use of regression models, where the age of an individual is predicted by morphological tooth changes that take place over time. From the medico-legal point of view, regression models, with age as the dependent random variable, entail that age tends to be overestimated in the young and underestimated in the old. To overcome this bias, we describe a new full Bayesian calibration method (asymmetric Laplace Bayesian calibration) for forensic age estimation that uses the asymmetric Laplace distribution as the probability model. The method was compared with three existing approaches (two Bayesian and a classical method) using simulated data. Although its accuracy was comparable with that of the other methods, the asymmetric Laplace Bayesian calibration appears to be significantly more reliable and robust in case of misspecification of the probability model. The proposed method was also applied to a real dataset of values of the pulp chamber of the right lower premolar measured on x-ray scans of individuals of known age. PMID:25645903
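
    The calibration idea, Bayes' rule on a grid of ages with an asymmetric Laplace likelihood for the tooth measurement, can be sketched directly. The location, scale, and asymmetry values below are invented placeholders, not the fitted model from the paper.

```python
import numpy as np

def asym_laplace_pdf(x, mu, sigma, tau):
    """Asymmetric Laplace density in the quantile-regression parameterization."""
    u = (x - mu) / sigma
    rho = u * (tau - (u < 0))
    return tau * (1 - tau) / sigma * np.exp(-rho)

ages = np.linspace(10, 80, 701)              # grid over candidate ages
prior = np.ones_like(ages) / len(ages)       # uniform prior (could encode demography)

# Hypothetical measurement model: pulp-chamber ratio shrinks with age (assumed).
mu = 0.9 - 0.008 * ages                      # location as a function of age
sigma, tau = 0.05, 0.4                       # scale and asymmetry (assumed)

x_observed = 0.55                            # measured ratio for one individual
posterior = asym_laplace_pdf(x_observed, mu, sigma, tau) * prior
posterior /= posterior.sum()

mean_age = (ages * posterior).sum()
cdf = np.cumsum(posterior)
ci = (ages[np.searchsorted(cdf, 0.025)], ages[np.searchsorted(cdf, 0.975)])
print(f"posterior mean age: {mean_age:.1f}, 95% interval: {ci[0]:.1f}-{ci[1]:.1f}")
```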

  12. Governance mechanisms, managerial's commitment bias and firm's investment decision escalation: Failure of firm's crises communication: Bayesian network method

    OpenAIRE

    Hamza Fadhila; Azouzi Mohamed Ali; Jarboui Anis

    2014-01-01

    This paper studies the role of governance mechanisms, the CEO's cognitive characteristics and firms' financial features in explaining the CEO's escalatory behavior in firms' investment decisions. This study aims to provide evidence as to whether managers consider the persuasive influence of governance mechanisms and the firm's financial indicators to persevere in their initial investment decision while exhibiting a high level of commitment bias. The proposed model of this paper uses Bayesian Network Meth...

  13. Bayesian Approach to Neuro-Rough Models for Modelling HIV

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes a new neuro-rough model for modelling the risk of HIV from demographic data. The model is formulated using a Bayesian framework and trained using the Markov chain Monte Carlo method and the Metropolis criterion. When the model was tested to estimate the risk of HIV infection given the demographic data, it was found to give an accuracy of 62%, as opposed to 58% obtained from a Bayesian formulated rough set model trained using Markov chain Monte Carlo and 62% obtained from a Bayesian formulated multi-layered perceptron (MLP) model trained using hybrid Monte Carlo. The proposed model is able to combine the accuracy of the Bayesian MLP model and the transparency of the Bayesian rough set model.

  14. Bayesian Models of Graphs, Arrays and Other Exchangeable Random Structures.

    Science.gov (United States)

    Orbanz, Peter; Roy, Daniel M

    2015-02-01

    The natural habitat of most Bayesian methods is data represented by exchangeable sequences of observations, for which de Finetti's theorem provides the theoretical foundation. Dirichlet process clustering, Gaussian process regression, and many other parametric and nonparametric Bayesian models fall within the remit of this framework; many problems arising in modern data analysis do not. This article provides an introduction to Bayesian models of graphs, matrices, and other data that can be modeled by random structures. We describe results in probability theory that generalize de Finetti's theorem to such data and discuss their relevance to nonparametric Bayesian modeling. With the basic ideas in place, we survey example models available in the literature; applications of such models include collaborative filtering, link prediction, and graph and network analysis. We also highlight connections to recent developments in graph theory and probability, and sketch the more general mathematical foundation of Bayesian methods for other types of data beyond sequences and arrays. PMID:26353253

  15. Bayesian Dark Knowledge

    NARCIS (Netherlands)

    A. Korattikara; V. Rathod; K. Murphy; M. Welling

    2015-01-01

    We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/ or where we need accurate posterior predictive densities p(y|x, D), e.g., for applications involving bandits or active learning. One simple ap

  16. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuissance parameters, the Jacobian transformation is an

  17. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  18. Matrix string partition function

    CERN Document Server

    Kostov, Ivan K; Kostov, Ivan K.; Vanhove, Pierre

    1998-01-01

    We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition function of the completely reduced SYM theory, including the overall numerical factor. This is evidence that our quasiclassical calculation might be exact.

  19. Bayesian Fusion of Multi-Band Images

    CERN Document Server

    Wei, Qi; Tourneret, Jean-Yves

    2013-01-01

    In this paper, a Bayesian fusion technique for remotely sensed multi-band images is presented. The observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics. The fusion problem is formulated within a Bayesian estimation framework. An appropriate prior distribution exploiting geometrical consideration is introduced. To compute the Bayesian estimator of the scene of interest from its posterior distribution, a Markov chain Monte Carlo algorithm is designed to generate samples asymptotically distributed according to the target distribution. To efficiently sample from this high-dimension distribution, a Hamiltonian Monte Carlo step is introduced in the Gibbs sampling strategy. The efficiency of the proposed fusion method is evaluated with respect to several state-of-the-art fusion techniques. In particular, low spatial resolution hyperspectral and mult...

  20. Bayesian inference of the metazoan phylogeny

    DEFF Research Database (Denmark)

    Glenner, Henrik; Hansen, Anders J; Sørensen, Martin V;

    2004-01-01

    been the only feasible combined approach but is highly sensitive to long-branch attraction. Recent development of stochastic models for discrete morphological characters and computationally efficient methods for Bayesian inference has enabled combined molecular and morphological data analysis...... with rigorous statistical approaches less prone to such inconsistencies. We present the first statistically founded analysis of a metazoan data set based on a combination of morphological and molecular data and compare the results with a traditional parsimony analysis. Interestingly, the Bayesian analyses...... such as the ecdysozoans and lophotrochozoans. Parsimony, on the contrary, shows conflicting results, with morphology being congruent to the Bayesian results and the molecular data set producing peculiarities that are largely reflected in the combined analysis....

  1. Event generator tuning using Bayesian optimization

    CERN Document Server

    Ilten, Philip; Yang, Yunjie

    2016-01-01

    Monte Carlo event generators contain a large number of parameters that must be determined by comparing the output of the generator with experimental data. Generating enough events with a fixed set of parameter values to enable making such a comparison is extremely CPU intensive, which prohibits performing a simple brute-force grid-based tuning of the parameters. Bayesian optimization is a powerful method designed for such black-box tuning applications. In this article, we show that Monte Carlo event generator parameters can be accurately obtained using Bayesian optimization and minimal expert-level physics knowledge. A tune of the PYTHIA 8 event generator using $e^+e^-$ events, where 20 parameters are optimized, can be run on a modern laptop in just two days. Combining the Bayesian optimization approach with expert knowledge should enable producing better tunes in the future, by making it faster and easier to study discrepancies between Monte Carlo and experimental data.
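
    At the usage level, such a tune can be expressed with an off-the-shelf Bayesian optimizer. The sketch below uses scikit-optimize's `gp_minimize` on a toy two-parameter discrepancy function standing in for the expensive generator-versus-data comparison; the objective, ranges, and budget are invented, and scikit-optimize is merely one convenient library choice, not the authors' tool.

```python
import numpy as np
from skopt import gp_minimize

# Toy stand-in for the generator-vs-data comparison: pretend the "generator" has
# two parameters and the discrepancy is a noisy quadratic surface with a known
# optimum (a real tune would generate events and compute a chi-square to data).
true_params = np.array([0.7, 2.1])

def discrepancy(params):
    params = np.asarray(params)
    return float(((params - true_params) ** 2).sum() + 0.01 * np.random.rand())

result = gp_minimize(
    discrepancy,                           # black-box objective
    dimensions=[(0.0, 2.0), (0.0, 4.0)],   # parameter ranges
    n_calls=40,                            # budget of expensive evaluations
    random_state=0,
)
print("best parameters:", np.round(result.x, 2), "objective:", round(result.fun, 4))
```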

  2. A Bayesian Concept Learning Approach to Crowdsourcing

    DEFF Research Database (Denmark)

    Viappiani, Paolo Renato; Zilles, Sandra; Hamilton, Howard J.;

    2011-01-01

    We develop a Bayesian approach to concept learning for crowdsourcing applications. A probabilistic belief over possible concept definitions is maintained and updated according to (noisy) observations from experts, whose behaviors are modeled using discrete types. We propose recommendation techniques, inference methods, and query selection strategies to assist a user charged with choosing a configuration that satisfies some (partially known) concept. Our model is able to simultaneously learn the concept definition and the types of the experts. We evaluate our model with simulations, showing that our Bayesian strategies are effective even in large concept spaces with many uninformative experts.

  3. A Nonparametric Bayesian Model for Nested Clustering.

    Science.gov (United States)

    Lee, Juhee; Müller, Peter; Zhu, Yitan; Ji, Yuan

    2016-01-01

    We propose a nonparametric Bayesian model for clustering where clusters of experimental units are determined by a shared pattern of clustering another set of experimental units. The proposed model is motivated by the analysis of protein activation data, where we cluster proteins such that all proteins in one cluster give rise to the same clustering of patients. That is, we define clusters of proteins by the way that patients group with respect to the corresponding protein activations. This is in contrast to (almost) all currently available models that use shared parameters in the sampling model to define clusters. This includes in particular model based clustering, Dirichlet process mixtures, product partition models, and more. We show results for two typical biostatistical inference problems that give rise to clustering. PMID:26519174

  4. Single channel signal component separation using Bayesian estimation

    Institute of Scientific and Technical Information of China (English)

    Cai Quanwei; Wei Ping; Xiao Xianci

    2007-01-01

    A Bayesian estimation method to separate multicomponent signals from a single channel observation is presented in this paper. By using basis function projection, the component separation becomes a problem of limited parameter estimation. Then, a Bayesian model for estimating the parameters is set up. The reversible jump MCMC (Markov chain Monte Carlo) algorithm is adopted to perform the Bayesian computation. The method can jointly estimate the parameters of each component and the number of components. Simulation results demonstrate that the method has a low SNR threshold and good performance.

  5. Bayesian Estimation Supersedes the "t" Test

    Science.gov (United States)

    Kruschke, John K.

    2013-01-01

    Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional "t" tests) when certainty in the estimate is…
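
    A stripped-down version of Bayesian two-group estimation can be obtained from conjugate results alone: under the Jeffreys prior, each group mean has a scaled-t posterior, and Monte Carlo draws give the full posterior of the difference. The sketch below omits Kruschke's t-distributed likelihood and outlier handling; it only illustrates the "distribution over credible differences" idea on toy data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two groups of observations (toy data).
g1 = rng.normal(1.0, 1.0, 40)
g2 = rng.normal(0.5, 1.2, 35)

def posterior_mean_draws(x, n_draws=100_000):
    """Draws from the posterior of the mean under the Jeffreys prior:
    (mu - xbar) / (s / sqrt(n)) ~ t_{n-1}."""
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)
    return xbar + s / np.sqrt(n) * rng.standard_t(n - 1, size=n_draws)

diff = posterior_mean_draws(g1) - posterior_mean_draws(g2)
lo, hi = np.percentile(diff, [2.5, 97.5])
print(f"posterior mean difference: {diff.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
print(f"P(mu1 > mu2 | data) = {np.mean(diff > 0):.3f}")
```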

  6. Bayesian Estimation of Thermonuclear Reaction Rates

    CERN Document Server

    Iliadis, Christian; Coc, Alain; Timmes, Frank; Starrfield, Sumner

    2016-01-01

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied in the past to this problem, all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extra-solar planets, gravitational waves, and type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present the first astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the d(p,$\\gamma$)$^3$He, $^3$He($^3$He,2p)$^4$He, and $^3$He($\\alpha$,$\\gamma$)$^7$Be reactions,...

  7. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...

  8. Partitions with Initial Repetitions

    Institute of Scientific and Technical Information of China (English)

    George E. ANDREWS

    2009-01-01

    A variety of interesting connections with modular forms, mock theta functions and Rogers-Ramanujan type identities arise in consideration of partitions in which the smaller integers are repeated as summands more often than the larger summands. In particular, this concept leads to new interpretations of the Rogers-Selberg identities and Bailey's modulus 9 identities.

  9. 3-Layered Bayesian Model Using in Text Classification

    Directory of Open Access Journals (Sweden)

    Chang Jiayu

    2013-01-01

    Full Text Available Naive Bayes is one of the most effective classification methods among text classification models. However, the computed result often deviates substantially from the true one, owing to attribute dependence and other factors. Starting from the degree of correlation, this study defines a node's degree as well as the relations between nodes and proposes a 3-layered Bayesian model. Theoretical support for the 3-layered Bayesian model is obtained from the recurrence formula for conditional probability. Both the theoretical analysis and an empirical comparison with naive Bayes show that the model provides better attribute grouping and classification. It can also be extended to a multi-layer Bayesian model for use in text classification.
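
    For reference, here is the plain naive Bayes text classifier that the paper takes as its baseline, in a minimal bag-of-words form with Laplace smoothing. The toy corpus is invented for the example.

```python
import math
from collections import Counter, defaultdict

# Tiny labeled corpus (illustrative).
docs = [("buy cheap pills now", "spam"),
        ("cheap pills cheap offer", "spam"),
        ("meeting agenda for monday", "ham"),
        ("project meeting notes", "ham")]

class_counts = Counter(label for _, label in docs)
word_counts = defaultdict(Counter)
vocab = set()
for text, label in docs:
    for w in text.split():
        word_counts[label][w] += 1
        vocab.add(w)

def predict(text):
    scores = {}
    for c in class_counts:
        # log P(c) + sum_w log P(w | c), with Laplace (add-one) smoothing.
        total = sum(word_counts[c].values())
        score = math.log(class_counts[c] / len(docs))
        for w in text.split():
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict("cheap meeting pills"))   # class chosen by naive Bayes
```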

  10. Semisupervised learning using Bayesian interpretation: application to LS-SVM.

    Science.gov (United States)

    Adankon, Mathias M; Cheriet, Mohamed; Biem, Alain

    2011-04-01

    Bayesian reasoning provides an ideal basis for representing and manipulating uncertain knowledge, with the result that many interesting algorithms in machine learning are based on Bayesian inference. In this paper, we use the Bayesian approach with one and two levels of inference to model the semisupervised learning problem and apply it to the successful kernel classifier support vector machine (SVM) and its variant, least-squares SVM (LS-SVM). Taking advantage of the Bayesian interpretation of LS-SVM, we develop a semisupervised learning algorithm for Bayesian LS-SVM using our approach based on two levels of inference. Experimental results on both artificial and real pattern recognition problems show the utility of our method.
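
    LS-SVM training reduces to a single linear system, which is part of what makes the Bayesian interpretation tractable. The sketch below solves one common form of that system for an RBF kernel on toy data (the standard supervised LS-SVM, not the paper's semisupervised extension; data and hyperparameters are invented).

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy binary classification data, labels in {-1, +1}.
X = np.vstack([rng.normal(-1, 0.6, (30, 2)), rng.normal(1, 0.6, (30, 2))])
y = np.array([-1.0] * 30 + [1.0] * 30)

def rbf_kernel(A, B, gamma_k=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_k * d2)

# LS-SVM dual system:  [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T
gamma = 10.0
n = len(y)
K = rbf_kernel(X, X)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate([[0.0], y])
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

def decision(Xnew):
    return rbf_kernel(Xnew, X) @ alpha + b

print("training accuracy:", np.mean(np.sign(decision(X)) == y))
```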

  11. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    ...troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his/her level of understanding. The techniques and methods presented for knowledge elicitation, model construction and verification, modeling techniques and tricks, learning models from data, and analyses of models have all been developed and refined...

  12. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new sections, in addition to fully-updated examples, tables, figures, and a revised appendix. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his or her level of understanding. The techniques and methods presented on model construction and verification, modeling techniques and tricks, learning...

  13. On partitions avoiding right crossings

    OpenAIRE

    Yan, Sherry H. F.; Xu, Yuexiao

    2011-01-01

    Recently, Chen et al. derived the generating function for partitions avoiding right nestings and posed the problem of finding the generating function for partitions avoiding right crossings. In this paper, we derive the generating function for partitions avoiding right crossings via an intermediate structure of partial matchings avoiding 2-right crossings and right nestings. We show that there is a bijection between partial matchings avoiding 2-right crossings and right nestings and partitions...

  14. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  15. Bayesian community detection

    DEFF Research Database (Denmark)

    Mørup, Morten; Schmidt, Mikkel N

    2012-01-01

    Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not exert this intuitive notion of communities. We formulate a nonparametric Bayesian model for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled.

  16. Bayesian Generalized Rating Curves

    OpenAIRE

    Helgi Sigurðarson 1985

    2014-01-01

    A rating curve is a curve or a model that describes the relationship between water elevation, or stage, and discharge in an observation site in a river. The rating curve is fit from paired observations of stage and discharge. The rating curve then predicts discharge given observations of stage and this methodology is applied as stage is substantially easier to directly observe than discharge. In this thesis a statistical rating curve model is proposed working within the framework of Bayesian...
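
    Rating curves are conventionally of the power-law form Q = a(h - c)^b. With the offset c fixed, the log-linear fit below gives least-squares estimates of a and b, which coincide with the posterior mode under flat priors and log-normal errors; it is a toy stand-in for the full Bayesian model of the thesis, with synthetic data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic paired observations of stage h (m) and discharge Q (m^3/s),
# generated from Q = a (h - c)^b with multiplicative log-normal error.
a_true, b_true, c = 5.0, 1.8, 0.2
h = rng.uniform(0.5, 3.0, 50)
Q = a_true * (h - c) ** b_true * np.exp(rng.normal(0, 0.05, 50))

# With c treated as known, log Q = log a + b log(h - c) is linear, and ordinary
# least squares gives the posterior mode under flat priors on (log a, b).
Xd = np.column_stack([np.ones_like(h), np.log(h - c)])
coef, *_ = np.linalg.lstsq(Xd, np.log(Q), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
print(f"a = {a_hat:.2f} (true {a_true}), b = {b_hat:.2f} (true {b_true})")

# Predict discharge from a newly observed stage.
h_new = 2.4
print(f"predicted Q at h = {h_new}: {a_hat * (h_new - c) ** b_hat:.1f} m^3/s")
```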

  17. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  18. Partition Function of Interacting Calorons Ensemble

    CERN Document Server

    Deldar, Sedigheh

    2015-01-01

    We present a method for computing the partition function of a caloron ensemble taking into account the interaction of calorons. We focus on the caloron-Dirac string interaction and show that the metric that Diakonov and Petrov offered works well in the limit where this interaction occurs. We suggest computing the correlation function of two Polyakov loops by applying Ewald's method.

  19. Partition function of interacting calorons ensemble

    Science.gov (United States)

    Deldar, S.; Kiamari, M.

    2016-01-01

    We present a method for computing the partition function of a caloron ensemble taking into account the interaction of calorons. We focus on the caloron-Dirac string interaction and show that the metric that Diakonov and Petrov offered works well in the limit where this interaction occurs. We suggest computing the correlation function of two Polyakov loops by applying Ewald's method.

  20. Partitioning SAT Instances for Distributed Solving

    Science.gov (United States)

    Hyvärinen, Antti E. J.; Junttila, Tommi; Niemelä, Ilkka

    In this paper we study the problem of solving hard propositional satisfiability problem (SAT) instances in a computing grid or cloud, where run times and communication between parallel running computations are limited. We study analytically an approach where the instance is partitioned iteratively into a tree of subproblems and each node in the tree is solved in parallel. We present new methods for constructing partitions which combine clause learning and lookahead. The methods are incorporated into the iterative approach and its performance is demonstrated with an extensive comparison against the best sequential solvers in the SAT competition 2009 as well as against two efficient parallel solvers.
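
    The simplest instance of this partitioning idea is a "cube" split: pick k branching variables and create 2^k subproblems, each being the original instance plus one assignment of those variables, so the subproblems can be solved in parallel and their children partitioned again. The sketch below builds such a partition for a CNF in list-of-clauses form; the unit-propagation-based lookahead and clause learning that the paper combines are omitted, and the frequency heuristic is a crude stand-in.

```python
from itertools import product

# CNF as a list of clauses; each clause is a list of nonzero ints (DIMACS style).
cnf = [[1, 2], [-1, 3], [-2, -3], [1, -3], [2, 3]]

def partition(cnf, branch_vars):
    """Split an instance into 2^k subproblems by assuming every assignment
    of the chosen branching variables (a static 'cube' partition)."""
    subproblems = []
    for signs in product([1, -1], repeat=len(branch_vars)):
        assumptions = [s * v for s, v in zip(signs, branch_vars)]
        # Each assumption becomes a unit clause constraining the subproblem.
        subproblems.append(cnf + [[lit] for lit in assumptions])
    return subproblems

# A real partitioner would pick branch_vars by lookahead scoring; here we just
# take the two most frequent variables as a crude placeholder heuristic.
freq = {}
for clause in cnf:
    for lit in clause:
        freq[abs(lit)] = freq.get(abs(lit), 0) + 1
branch_vars = sorted(freq, key=freq.get, reverse=True)[:2]

for i, sub in enumerate(partition(cnf, branch_vars)):
    print(f"subproblem {i}: {sub}")   # each can go to a parallel SAT solver
```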