WorldWideScience

Sample records for bayesian partition method

  1. Partitioning net ecosystem exchange of CO2: A comparison of a Bayesian/isotope approach to environmental regression methods

    Science.gov (United States)

    Zobitz, J. M.; Burns, S. P.; Ogée, J.; Reichstein, M.; Bowling, D. R.

    2007-09-01

    Separation of the net ecosystem exchange of CO2 (F) into its component fluxes of net photosynthesis (FA) and nonfoliar respiration (FR) is important in understanding the physical and environmental controls on these fluxes, and how these fluxes may respond to environmental change. In this paper, we evaluate a partitioning method based on a combination of stable isotopes of CO2 and Bayesian optimization in the context of partitioning methods based on regressions with environmental variables. We combined high-resolution measurements of stable carbon isotopes of CO2, ecosystem fluxes, and meteorological variables with a Bayesian parameter optimization approach to estimate FA and FR in a subalpine forest in Colorado, United States, over the course of 104 days during summer 2003. Results were generally in agreement with the independent environmental regression methods of Reichstein et al. (2005a) and Yi et al. (2004). Half-hourly posterior parameter estimates of FA and FR derived from the Bayesian/isotopic method showed a strong diurnal pattern in both fluxes, consistent with established gross photosynthesis (GEE) and total ecosystem respiration (TER) relationships. Isotope-derived FA was functionally dependent on light, but FR exhibited the expected temperature dependence only when the prior estimates for FR were temperature-based. Examination of the posterior correlation matrix revealed that the available data were insufficient to independently resolve all the Bayesian-estimated parameters in our model. This could be due to a small isotopic disequilibrium (D) between FA and FR, or to poor characterization of whole-canopy photosynthetic discrimination or the isotopic flux (isoflux, analogous to net ecosystem exchange of 13CO2). The positive sign of D indicates that FA was more enriched in 13C than FR. Possible reasons for this are discussed in the context of recent literature.
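
    The core of such an isotope-based partitioning is a two-equation mass balance: the net flux F = FA + FR and the isoflux dA*FA + dR*FR, which together constrain the two component fluxes. The sketch below is not the authors' model; it is a minimal random-walk Metropolis sampler for a single half-hour of synthetic data, with all flux values, isotopic signatures, noise levels, and priors invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "true" fluxes (umol m-2 s-1) and 13C signatures (permil).
    FA_true, FR_true = -12.0, 5.0   # photosynthesis (uptake < 0), respiration (> 0)
    dA, dR = -26.0, -24.5           # signatures of FA and FR (assumed known here)

    # Synthetic observations: net flux F = FA + FR and the isoflux dA*FA + dR*FR.
    F_obs = FA_true + FR_true + rng.normal(0, 0.5)
    iso_obs = dA * FA_true + dR * FR_true + rng.normal(0, 5.0)

    def log_post(FA, FR):
        """Gaussian likelihoods for both observations; sign constraints and weak
        Gaussian priors stand in for the paper's informative priors."""
        if FA > 0 or FR < 0:
            return -np.inf
        ll = -0.5 * ((F_obs - (FA + FR)) / 0.5) ** 2
        ll += -0.5 * ((iso_obs - (dA * FA + dR * FR)) / 5.0) ** 2
        return ll - 0.5 * (FA / 50.0) ** 2 - 0.5 * (FR / 50.0) ** 2

    # Random-walk Metropolis over (FA, FR).
    x = np.array([-5.0, 2.0])
    lp = log_post(*x)
    chain = []
    for _ in range(20000):
        prop = x + rng.normal(0, 0.5, size=2)
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())

    FA_s, FR_s = np.array(chain[5000:]).T   # drop burn-in
    print(f"FA: {FA_s.mean():.2f} +/- {FA_s.std():.2f}")
    print(f"FR: {FR_s.mean():.2f} +/- {FR_s.std():.2f}")
    ```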

  2. Development of partitioning method

    International Nuclear Information System (INIS)

    A partitioning method has been developed under the concepts of separation of nuclides in high-level nuclear fuel reprocessing liquid waste according to their half-lives and radioactive toxicity, and of disposal of them by suitable methods. In the partitioning process, which has been developed in JAERI, adoption of a solvent extraction process with DIDPA (di-isodecyl phosphoric acid) has been studied for actinide separation. The present paper mainly describes studies on the back-extraction behavior of Np(IV), Pu(IV) and U(VI) in DIDPA. Most experiments were carried out according to the following procedure. These actinides were extracted from 0.5 M nitric acid with DIDPA, where the nitric acid concentration in HLW is expected to be adjusted to this value prior to actinide extraction in the partitioning process, and back-extracted with various reagents such as oxalic acid. The experimental results show that the distribution ratios of Np(IV) and Pu(IV) can be reduced to less than unity with 1 M oxalic acid, and those of U(VI) and Np(IV) with 5 M phosphoric acid. From the results of these studies and previous research on Am and Cm, the following possibilities were confirmed: U, Pu, Np, Am and Cm, which are the major actinides in HLW, can be extracted simultaneously with DIDPA, and they can be removed from DIDPA with various reagents (nitric acid for Am and Cm, oxalic acid for Np and Pu, and phosphoric acid for U, respectively). (author)

  3. Development of partitioning method

    International Nuclear Information System (INIS)

    A partitioning method has been developed under the concepts of separating radioactive nuclides from a high-level waste according to their half-lives and radioactive toxicity, and of disposing of the waste safely. The partitioning test using about 18 liters (~220 Ci) of the fuel reprocessing waste prepared at PNC was started in October of 1982. In this test the behavior of radioactive nuclides was made clear. The present paper describes the chemical behavior of non-radioactive elements contained in the high-level liquid waste in the extraction with di-isodecyl phosphoric acid (DIDPA). Distribution ratios of most of the metal ions for DIDPA were less than 0.05, except that those of Mo, Zr and Fe were higher than 7. Ferric ion could not be back-extracted with 4 M HNO3, but could be with 0.5 M (COOH)2. In the extraction with DIDPA, the third phase, which causes clogging of the settling banks or the flow paths in a mixer-settler, was formed when the ferric ion concentration was over 0.02 M. This unfavorable phenomenon, however, was found to be suppressed by diluting the ferric ion concentration to lower than 0.01 M or by reducing ferric ion to ferrous ion. (author)

  4. Predicting mTOR inhibitors with a classifier using recursive partitioning and Naive Bayesian approaches.

    Directory of Open Access Journals (Sweden)

    Ling Wang

    BACKGROUND: Mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. Thus, there is a great deal of interest in developing clinical drugs based on mTOR. In this paper, in silico models based on multiple scaffolds were developed to predict mTOR inhibitors or non-inhibitors. METHODS: First, 1,264 diverse compounds were collected and categorized as mTOR inhibitors and non-inhibitors. Two methods, recursive partitioning (RP) and naïve Bayesian (NB), were used to build combinatorial classification models of mTOR inhibitors versus non-inhibitors using physicochemical descriptors, fingerprints, and atom center fragments (ACFs). RESULTS: A total of 253 models were constructed and the overall predictive accuracies of the best models were more than 90% for both the training set of 964 and the external test set of 300 diverse compounds. The scaffold hopping abilities of the best models were successfully evaluated through predicting 37 recently published mTOR inhibitors. Compared with the best RP and Bayesian models, the classifier based on ACFs and Bayesian methods showed comparable or slightly better performance and scaffold hopping ability. A web server was developed based on the ACFs and Bayesian method (http://rcdd.sysu.edu.cn/mtor/). This web server can be used to predict whether a compound is an mTOR inhibitor or non-inhibitor online. CONCLUSION: In silico models were constructed to predict mTOR inhibitors using recursive partitioning and naïve Bayesian methods, and a web server (mTOR Predictor) was also developed based on the best model results. Compound prediction or virtual screening can be carried out through our web server. Moreover, the favorable and unfavorable fragments for mTOR inhibitors obtained from the Bayesian classifiers will be helpful for lead optimization or the design of new mTOR inhibitors.
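
    The naïve Bayesian part of such a classifier is straightforward to reproduce on binary fingerprints. The sketch below uses scikit-learn's BernoulliNB on randomly generated fingerprint bits; the real study used curated mTOR data and ACF descriptors, which are not reproduced here. The per-bit log-odds at the end mirror the idea of extracting favorable and unfavorable fragments from a Bayesian classifier.

    ```python
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)

    # Hypothetical data: 1000 compounds x 512 binary fingerprint bits;
    # label 1 = inhibitor. Inhibitors set bits slightly more often.
    n, bits = 1000, 512
    y = rng.integers(0, 2, size=n)
    X = (rng.random((n, bits)) < np.where(y[:, None] == 1, 0.12, 0.08)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = BernoulliNB(alpha=1.0)            # Laplace smoothing
    clf.fit(X_tr, y_tr)
    print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))

    # Per-bit log-odds highlight "favorable" vs "unfavorable" bits, analogous
    # to the fragment analysis described in the abstract.
    log_odds = clf.feature_log_prob_[1] - clf.feature_log_prob_[0]
    print("most favorable bits:", np.argsort(log_odds)[-5:])
    ```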

  5. A Bayesian Approach to the Partitioning of Workflows

    CERN Document Server

    Chua, Freddy C

    2015-01-01

    When partitioning workflows in realistic scenarios, the knowledge of the processing units is often vague or unknown. A naive approach to addressing this issue is to perform many controlled experiments for different workloads, each consisting of multiple trials, in order to estimate the mean and variance of the specific workload. Since this controlled experimental approach can be quite costly in terms of time and resources, we propose a variant of the Gibbs sampling algorithm that uses a sequence of Bayesian inference updates to estimate the processing characteristics of the processing units. Using the inferred characteristics of the processing units, we are able to determine the best way to split a workflow for processing it in parallel, with the lowest expected completion time and least variance.
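
    A minimal version of the inference step can be written with conjugate normal updates instead of the paper's Gibbs-sampling variant: estimate each processing unit's mean task time from observed timings, then choose the split that minimizes a Monte Carlo estimate of the expected parallel completion time. The two-worker setup, timings, and noise levels below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical observed per-task processing times (seconds) for two workers.
    obs = {0: rng.normal(1.0, 0.2, 15), 1: rng.normal(1.6, 0.4, 15)}
    sigma = {0: 0.2, 1: 0.4}                 # assume known observation noise

    def posterior(times, sigma, mu0=1.0, tau0=1.0):
        """Conjugate normal update for the mean task time (known variance)."""
        prec = 1 / tau0**2 + len(times) / sigma**2
        mean = (mu0 / tau0**2 + times.sum() / sigma**2) / prec
        return mean, np.sqrt(1 / prec)

    post = {i: posterior(obs[i], sigma[i]) for i in obs}

    # Choose how many of N tasks to send to worker 0 by minimizing the
    # Monte Carlo estimate of the expected makespan under the posteriors.
    N, S = 100, 4000
    best = None
    for n0 in range(N + 1):
        mu0 = rng.normal(*post[0], S)        # posterior draws of mean task times
        mu1 = rng.normal(*post[1], S)
        exp_t = np.maximum(n0 * mu0, (N - n0) * mu1).mean()
        if best is None or exp_t < best[1]:
            best = (n0, exp_t)
    print(f"send {best[0]} of {N} tasks to worker 0; expected completion ~{best[1]:.1f}s")
    ```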

  6. Bayesian Methods and Universal Darwinism

    OpenAIRE

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of...

  7. Bayesian Methods and Universal Darwinism

    CERN Document Server

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...

  8. Development of partitioning method

    International Nuclear Information System (INIS)

    The present paper describes the examination of the possibility of improving the denitration and extraction processes by adding oxalic acid in the partitioning process, which has been developed for the purpose of separating high-level liquid waste (HLW) into a few groups of elements. First, the effect of oxalic acid in the denitration of HLW was examined with the aim of reducing the amount of precipitate formed during denitration. As a result, it was found to be possible to reduce the precipitation of molybdenum, zirconium and tellurium. However, some elements precipitated at any concentration of oxalic acid. The addition of oxalic acid increased the amounts of precipitates of neodymium, which was taken as representative of the transuranic elements, and strontium, which is a troublesome element because of its heat generation. In the extraction process with DIDPA (di-isodecyl phosphoric acid), oxalic acid was expected to prevent the third-phase formation caused by iron, by forming a complex with iron. However, the results showed that oxalic acid did not suppress the extraction of iron, and its addition had no effect on preventing third-phase formation. The influence of the presence of iron on the oxalate precipitation of rare earths was also examined in the present study. (author)

  9. Development of partitioning method

    International Nuclear Information System (INIS)

    A literature survey was carried out on the amounts of natural resources, the behavior in the reprocessing process, and the separation and recovery methods of the platinum group elements and technetium which are contained in spent fuel. The essential results are described below. (1) The platinum group elements contained in spent fuel are quantitatively limited compared with the total demand for them in Japan, and the estimated separation and recovery cost is rather high. In spite of that, development of these techniques is considered to be very important because these elements are supplied almost entirely from foreign resources in Japan. (2) For recovery of these elements, studies of recovery from undissolved residue and from high-level liquid waste (HLLW) also seem to be required. (3) As separation and recovery methods, the following techniques are considered to be effective: lead extraction, liquid metal extraction, solvent extraction, ion exchange, adsorption, precipitation, distillation, electrolysis, or their combination. (4) Each of these methods, however, has both advantages and disadvantages, so the development of such processes largely depends on future work. (author) 94 refs

  10. Bayesian methods for proteomic biomarker development

    Directory of Open Access Journals (Sweden)

    Belinda Hernández

    2015-12-01

    In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.

  11. Spatial Clustering of Curves with Functional Covariates: A Bayesian Partitioning Model with Application to Spectra Radiance in Climate Study

    OpenAIRE

    Zhang, Zhen; Lim, Chae Young; Maiti, Tapabrata; Kato, Seiji

    2016-01-01

    In climate change study, the infrared spectral signatures of climate change have recently been conceptually adopted, and widely applied to identifying and attributing atmospheric composition change. We propose a Bayesian hierarchical model for spatial clustering of the high-dimensional functional data based on the effects of functional covariates and local features. We couple the functional mixed-effects model with a generalized spatial partitioning method for: (1) producing spatially contigu...

  12. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process. The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software. Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
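
    As a small worked example of the book's subject matter (in Python rather than the book's WinBUGS), the Kappa coefficient for two raters has a simple Bayesian treatment: place a Dirichlet prior on the four cell probabilities of the 2x2 agreement table and push posterior draws through the kappa formula. The counts below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical 2x2 agreement table (rows: rater A, columns: rater B).
    counts = np.array([[40, 5],
                       [8, 47]])

    # Dirichlet(1,1,1,1) prior on the cell probabilities -> conjugate posterior.
    samples = rng.dirichlet(counts.ravel() + 1, size=20000).reshape(-1, 2, 2)

    po = samples[:, 0, 0] + samples[:, 1, 1]            # observed agreement
    pe = (samples.sum(2) * samples.sum(1)).sum(1)       # chance agreement
    kappa = (po - pe) / (1 - pe)

    print(f"posterior mean kappa: {kappa.mean():.3f}")
    print("95% credible interval:", np.percentile(kappa, [2.5, 97.5]).round(3))
    ```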

  13. Bayesian Methods and Universal Darwinism

    Science.gov (United States)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints in the form of adaptations imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the operations of a succession of Darwinian processes.

  14. Genomic prediction and genomic variance partitioning of daily and residual feed intake in pigs using Bayesian Power Lasso models

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, L. L. G.; Strathe, Anders Bjerring;

    Improvement of feed efficiency is essential in pig breeding and selection for reduced residual feed intake (RFI) is an option. The study applied Bayesian Power LASSO (BPL) models with different power parameters to investigate genetic architecture, to predict genomic breeding values, and to partition genomic variance for RFI and daily feed intake (DFI). A total of 1272 Duroc pigs had both genotypic and phenotypic records for these traits. Significant SNPs were detected on chromosome 1 (SSC 1) and SSC 14 for RFI and on SSC 1 for DFI. BPL models had similar accuracy and bias as the GBLUP method but use of...

  15. Approximate path integral methods for partition functions

    International Nuclear Information System (INIS)

    We review several approximate methods for evaluating quantum mechanical partition functions, with the goal of obtaining a method that is easy to implement for multidimensional systems but accurately incorporates quantum mechanical corrections to classical partition functions. A particularly promising method is one based upon an approximation to the path integral expression of the partition function. In this method, the partition-function expression has the ease of evaluation of a classical partition function, and quantum mechanical effects are included by a weight function. Anharmonicity is included exactly in the classical Boltzmann average, and local quadratic expansions around the centroid of the quantum paths yield a simple analytic form for the quantum weight function. We discuss the relationship between this expression and previous approximate methods and present numerical comparisons for model one-dimensional potentials and for accurate three-dimensional vibrational force fields for H2O and SO2
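
    The flavor of such approximations can be illustrated on the 1-D harmonic oscillator, where the exact quantum partition function is known. The sketch below compares the classical partition function with a quadratic (Feynman-Hibbs-type) local correction playing the role of the quantum weight function described above; it is a stand-in illustration under reduced units (hbar = m = omega = kB = 1), not the centroid-based method of the paper.

    ```python
    import numpy as np

    def trapz(y, dx):
        """Trapezoidal rule on a uniform grid."""
        return (y.sum() - 0.5 * (y[0] + y[-1])) * dx

    def partition_functions(beta):
        # 1-D harmonic oscillator, V(x) = x^2/2, in reduced units.
        x = np.linspace(-12, 12, 4801)
        dx = x[1] - x[0]
        V = 0.5 * x**2
        Vpp = np.ones_like(x)                       # V''(x) = 1

        # Momentum integral of exp(-beta p^2/2) / (2 pi hbar).
        mom = np.sqrt(2 * np.pi / beta) / (2 * np.pi)

        z_cl = mom * trapz(np.exp(-beta * V), dx)   # classical Boltzmann average
        # Quantum weight via the Feynman-Hibbs effective potential V + beta*V''/24.
        z_fh = mom * trapz(np.exp(-beta * (V + beta * Vpp / 24)), dx)
        z_exact = 1.0 / (2.0 * np.sinh(beta / 2.0)) # exact quantum result
        return z_cl, z_fh, z_exact

    for beta in (0.1, 1.0, 5.0):
        z_cl, z_fh, z_ex = partition_functions(beta)
        print(f"beta={beta:4.1f}  Z_cl={z_cl:8.4f}  Z_FH={z_fh:8.4f}  Z_exact={z_ex:8.4f}")
    ```

    At high temperature (small beta) all three agree; as beta grows, the corrected value tracks the exact quantum partition function noticeably better than the purely classical one.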

  16. Spatially Partitioned Embedded Runge-Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2013-10-30

    We study spatially partitioned embedded Runge-Kutta (SPERK) schemes for partial differential equations (PDEs), in which each of the component schemes is applied over a different part of the spatial domain. Such methods may be convenient for problems in which the smoothness of the solution or the magnitudes of the PDE coefficients vary strongly in space. We focus on embedded partitioned methods as they offer greater efficiency and avoid the order reduction that may occur in nonembedded schemes. We demonstrate that the lack of conservation in partitioned schemes can lead to nonphysical effects and propose conservative additive schemes based on partitioning the fluxes rather than the ordinary differential equations. A variety of SPERK schemes are presented, including an embedded pair suitable for the time evolution of fifth-order weighted essentially nonoscillatory spatial discretizations. Numerical experiments are provided to support the theory.

  17. Bayesian Methods for Medical Test Accuracy

    Directory of Open Access Journals (Sweden)

    Lyle D. Broemeling

    2011-05-01

    Bayesian methods for medical test accuracy are presented, beginning with the basic measures for tests with binary scores: true positive fraction, false positive fraction, positive predictive value, and negative predictive value. The Bayesian approach is taken because of its efficient use of prior information, and the analysis is executed with the Bayesian software package WinBUGS®. The ROC (receiver operating characteristic) curve gives the intrinsic accuracy of medical tests that have ordinal or continuous scores, and the Bayesian approach is illustrated with many examples from cancer and other diseases. Medical tests include X-ray, mammography, ultrasound, computed tomography, magnetic resonance imaging, nuclear medicine, and tests based on biomarkers, such as blood glucose values for diabetes. The presentation continues with more specialized methods suitable for measuring the accuracies of clinical studies that have verification bias, and medical tests without a gold standard. Lastly, the review is concluded with Bayesian methods for measuring the accuracy of the combination of two or more tests.
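
    For the binary-score measures listed above, the Bayesian computation reduces to Beta posteriors. The sketch below uses plain Monte Carlo in Python as a stand-in for the review's WinBUGS code, estimating sensitivity, specificity, and the positive predictive value at an assumed prevalence; all counts are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical verified results: 90 diseased (81 test +), 110 healthy (99 test -).
    tp, fn = 81, 9
    tn, fp = 99, 11

    # Beta(1,1) priors give Beta posteriors for sensitivity (TPF) and specificity.
    sens = rng.beta(1 + tp, 1 + fn, 20000)
    spec = rng.beta(1 + tn, 1 + fp, 20000)

    prev = 0.10                              # assumed disease prevalence
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))

    for name, s in [("sensitivity", sens), ("specificity", spec), ("PPV", ppv)]:
        lo, hi = np.percentile(s, [2.5, 97.5])
        print(f"{name}: {s.mean():.3f} (95% CrI {lo:.3f}-{hi:.3f})")
    ```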

  18. Bayesian Methods for Radiation Detection and Dosimetry

    CERN Document Server

    Groer, Peter G

    2002-01-01

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied stochastic processes to develop a method for obtaining Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed comp...

  19. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation of... inference algorithms based on the proposed prior representation for sparse channel estimation in orthogonal frequency-division multiplexing receivers. The inference algorithms, which are mainly obtained from variational Bayesian methods, exploit the underlying sparse structure of wireless channel responses...

  20. Self-complementary plane partitions by Proctor's minuscule method

    OpenAIRE

    Kuperberg, Greg

    1994-01-01

    A method of Proctor [European J. Combin. 5 (1984), no. 4, 331-350] realizes the set of arbitrary plane partitions in a box and the set of symmetric plane partitions as bases of linear representations of Lie groups. We extend this method by realizing transposition and complementation of plane partitions as natural linear transformations of the representations, thereby enumerating symmetric plane partitions, self-complementary plane partitions, and transpose-complement plane partitions in a new...

  1. Advanced Bayesian Method for Planetary Surface Navigation

    Science.gov (United States)

    Center, Julian

    2015-01-01

    Autonomous Exploration, Inc., has developed an advanced Bayesian statistical inference method that leverages current computing technology to produce a highly accurate surface navigation system. The method combines dense stereo vision and high-speed optical flow to implement visual odometry (VO) to track faster rover movements. The Bayesian VO technique improves performance by using all image information rather than corner features only. The method determines what can be learned from each image pixel and weighs the information accordingly. This capability improves performance in shadowed areas that yield only low-contrast images. The error characteristics of the visual processing are complementary to those of a low-cost inertial measurement unit (IMU), so the combination of the two capabilities provides highly accurate navigation. The method increases NASA mission productivity by enabling faster rover speed and accuracy. On Earth, the technology will permit operation of robots and autonomous vehicles in areas where the Global Positioning System (GPS) is degraded or unavailable.

  2. Bayesian Methods for Radiation Detection and Dosimetry

    International Nuclear Information System (INIS)

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied stochastic processes to develop a method for obtaining Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities of compartmental activities for a two-compartment catenary model at different times. We also calculated the average activities and their standard deviation for a simple two-compartment model.
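
    The net-activity problem mentioned above has a compact Bayesian formulation: gross and background counts are Poisson, and the net rate is a derived parameter whose uncertainty comes out as a full probability density. The grid-posterior sketch below is a generic illustration of that idea, not the report's actual method; counts, times, and the flat priors are invented.

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Hypothetical counts: gross (source + background) and background-only runs.
    gross, t_g = 48, 60.0      # counts in 60 s with the sample present
    bkg,   t_b = 30, 60.0      # counts in 60 s background-only

    # Grid posterior over net rate s and background rate b with flat priors.
    s = np.linspace(0, 2.0, 400)[:, None]
    b = np.linspace(0, 2.0, 400)[None, :]
    like = poisson.pmf(gross, (s + b) * t_g) * poisson.pmf(bkg, b * t_b)

    ds = s[1, 0] - s[0, 0]
    post_s = like.sum(axis=1)              # marginalize over the background rate
    post_s /= post_s.sum() * ds            # normalize the density over s

    mean_s = np.sum(post_s * s[:, 0]) * ds
    print(f"posterior mean net rate: {mean_s:.3f} counts/s")
    print("P(net rate > 0.2):", np.sum(post_s[s[:, 0] > 0.2]) * ds)
    ```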

  3. The first complete mitochondrial genome from Bostrychus genus (Bostrychus sinensis) and partitioned Bayesian analysis of Eleotridae fish phylogeny

    Indian Academy of Sciences (India)

    Tao Wei; Xiao Xiao Jin; Tian Jun Xu

    2013-08-01

    To understand the phylogenetic position of Bostrychus sinensis in Eleotridae and the phylogenetic relationships of the family, we determined the nucleotide sequence of the mitochondrial (mt) genome of Bostrychus sinensis. It is the first complete mitochondrial genome sequence of the genus Bostrychus. The entire mtDNA sequence was 16508 bp in length with a standard set of 13 protein-coding genes, 22 transfer RNA genes (tRNAs), two ribosomal RNA genes (rRNAs) and a noncoding control region. The mitochondrial genome of B. sinensis had features in common with those of other bony fishes with respect to gene arrangement, base composition, and tRNA structures. Phylogenetic hypotheses within Eleotridae have been controversial at the genus level. We used the mitochondrial cytochrome b (cytb) gene sequence to examine the phylogenetic relationships of Eleotridae using a partitioned Bayesian method. When specific models and parameter estimates were assumed for the partitions of the total data, the harmonic mean -lnL was improved. The phylogenetic analysis supported the monophyly of Hypseleotris and Gobiomorphus. In addition, Bostrychus was most closely related to Ophiocara, and Philypnodon is the sister group of Microphlypnus, based on the current datasets. Further, extensive taxonomic sampling and more molecular information are needed to confirm the phylogenetic relationships within Eleotridae.

  4. Bayesian methods in risk Assessment

    International Nuclear Information System (INIS)

    The need for a consistent framework for the analysis of the safety of large nuclear power plants, not provided by conventional methods of statistics and reliability theory, prompted the present article. The quantification of uncertainties depends crucially on the particular way that the assessor views probability. Two principal schools of thought are the subjectivistic approach advocated by de Finetti, and the frequentist school advocated by von Mises. The point of view of the author is the subjective one. The foundations of the proposed approach and a discussion of several topics relevant to risk assessment follow, with applications to the specialization of generic data for site-specific risk studies, the assessment of the frequency of fires in nuclear plant compartments, and the use of expert opinion in risk assessments.

  5. parallelMCMCcombine: an R package for Bayesian methods for big data and analytics.

    Directory of Open Access Journals (Sweden)

    Alexey Miroshnikov

    Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.
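
    One well-known technique in this family is consensus Monte Carlo: precision-weighted averaging of subposterior draws. The sketch below illustrates only that averaging step, in Python rather than R, on synthetic Gaussian subposterior draws; parallelMCMCcombine itself implements this and related methods and expects the user's own subposterior samples as input.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Pretend three machines produced subposterior draws for a scalar parameter
    # (in practice these come from independent MCMC runs on data subsets).
    subs = [rng.normal(mu, sd, 5000)
            for mu, sd in [(1.9, 0.30), (2.1, 0.25), (2.0, 0.35)]]

    # Consensus averaging: combine the m-th draw across subsets with
    # precision weights w_j = 1 / var_j (motivated by the Gaussian case).
    w = np.array([1 / s.var() for s in subs])
    draws = np.vstack(subs)                       # shape (3, 5000)
    consensus = (w[:, None] * draws).sum(0) / w.sum()

    print(f"combined posterior mean: {consensus.mean():.3f}")
    print(f"combined posterior sd:   {consensus.std():.3f}")
    ```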

  6. Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods

    OpenAIRE

    Zhu, Weixuan

    2016-01-01

    The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of two-parameter Poisson-Dirichlet processes. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...

  7. Nested partitions method, theory and applications

    CERN Document Server

    Shi, Leyuan

    2009-01-01

    There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...

  8. New parallel SOR method by domain partitioning

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Dexuan [Courant Inst. of Mathematical Sciences, New York Univ., NY (United States)]

    1996-12-31

    In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be 'consistently ordered'. So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is 'consistently ordered'. Finally, we demonstrate the parallel performance of the PSOR method on four different message-passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
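
    The strip-partition ordering is easy to emulate sequentially: each strip of rows is swept in turn, exactly where PSOR would assign one strip per processor and exchange boundary rows between sweeps. The sketch below applies this to the 5-point Poisson problem on the unit square; it is a toy sequential illustration, not a parallel implementation, and the grid size, strip count, and relaxation factor are arbitrary.

    ```python
    import numpy as np

    # 5-point SOR for -Laplace(u) = f on the unit square, u = 0 on the boundary.
    n, strips, omega = 32, 4, 1.8
    h = 1.0 / (n + 1)
    f = np.ones((n, n))                      # right-hand side f = 1
    u = np.zeros((n + 2, n + 2))             # interior points plus boundary

    rows = np.array_split(np.arange(1, n + 1), strips)
    for sweep in range(300):
        for strip in rows:                   # one "processor" per strip in PSOR
            for i in strip:
                for j in range(1, n + 1):
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                 + h * h * f[i-1, j-1])
                    u[i, j] += omega * (gs - u[i, j])

    # For this problem the continuous solution peaks at about 0.074 in the center.
    print("u near center:", round(u[n // 2, n // 2], 5))
    ```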

  9. Bayesian individualization via sampling-based methods.

    Science.gov (United States)

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
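
    The decision-theoretic step described above is easy to sketch: given posterior samples of a patient's pharmacokinetic parameters, evaluate the Monte Carlo expected loss for each candidate dose and take the minimizer. The example below assumes a one-compartment steady-state model, a squared-error loss around a target concentration, and lognormal stand-in draws in place of a real fitted posterior; all values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Pretend we already have posterior samples of the patient's clearance CL
    # (L/h), e.g. from fitting sparse concentration data.
    cl = rng.lognormal(np.log(5.0), 0.3, 4000)

    target = 15.0                            # target steady-state conc. (mg/L)
    tau = 12.0                               # dosing interval (h)

    def expected_loss(dose):
        css = dose / (cl * tau)              # average steady-state concentration
        return np.mean((css - target) ** 2)  # Monte Carlo squared-error loss

    doses = np.arange(100, 2001, 25)         # candidate doses (mg)
    best = min(doses, key=expected_loss)
    print(f"dose minimizing expected loss: {best} mg every {tau:.0f} h")
    ```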

  10. Bayesian non- and semi-parametric methods and applications

    CERN Document Server

    Rossi, Peter

    2014-01-01

    This book reviews and develops Bayesian non-parametric and semi-parametric methods for applications in microeconometrics and quantitative marketing. Most econometric models used in microeconomics and marketing applications involve arbitrary distributional assumptions. As more data becomes available, a natural desire to provide methods that relax these assumptions arises. Peter Rossi advocates a Bayesian approach in which specific distributional assumptions are replaced with more flexible distributions based on mixtures of normals. The Bayesian approach can use either a large but fixed number

  11. Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.

    Science.gov (United States)

    Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka

    2014-02-01

    In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping needs typically to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM-algorithm with a greedy search. This is demonstrated to be faster and yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering of other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain. PMID:24246289
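
    The conjugacy mentioned above gives a closed-form marginal likelihood: with independent Dirichlet priors on the rows of the transition matrix, each row contributes a Dirichlet-multinomial term. The sketch below computes this for first-order chains over the DNA alphabet and compares a one-cluster against a two-cluster partition of toy sequences; the Dirichlet process prior over partitions and the paper's EM/greedy search are omitted.

    ```python
    import numpy as np
    from scipy.special import gammaln

    ALPHABET = "ACGT"

    def transition_counts(seqs, k=4):
        counts = np.zeros((k, k))
        for s in seqs:
            idx = [ALPHABET.index(c) for c in s]
            for a, b in zip(idx[:-1], idx[1:]):
                counts[a, b] += 1
        return counts

    def log_marginal(counts, alpha=1.0):
        """Dirichlet-multinomial marginal likelihood of a first-order Markov
        chain, integrating out each transition-matrix row analytically."""
        k = counts.shape[0]
        lm = 0.0
        for row in counts:
            lm += gammaln(k * alpha) - gammaln(k * alpha + row.sum())
            lm += np.sum(gammaln(alpha + row) - gammaln(alpha))
        return lm

    # Toy sequences from two hypothetical "organisms".
    a = ["ACGTACGTAC", "ACGTACGAAC"]
    b = ["GGGCGGCGGC", "GGCGGGCGGG"]

    together = log_marginal(transition_counts(a + b))
    split = log_marginal(transition_counts(a)) + log_marginal(transition_counts(b))
    print(f"log marginal, one cluster:  {together:.2f}")
    print(f"log marginal, two clusters: {split:.2f}  (higher => favour splitting)")
    ```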

  12. Bayesian method for system reliability assessment of overlapping pass/fail data

    Institute of Scientific and Technical Information of China (English)

    Zhipeng Hao; Shengkui Zeng; Jianbin Guo

    2015-01-01

    For high reliability and long life systems, system pass/fail data are often rare. Integrating lower-level data, such as data drawn from the subsystem or component pass/fail testing, the Bayesian analysis can improve the precision of the system reli-ability assessment. If the multi-level pass/fail data are overlapping, one chal enging problem for the Bayesian analysis is to develop a likelihood function. Since the computation burden of the existing methods makes them infeasible for multi-component systems, this paper proposes an improved Bayesian approach for the system reliability assessment in light of overlapping data. This approach includes three steps: fristly searching for feasible paths based on the binary decision diagram, then screening feasible points based on space partition and constraint decomposition, and final y sim-plifying the likelihood function. An example of a satel ite rol ing control system demonstrates the feasibility and the efficiency of the proposed approach.

  13. Computational methods for Bayesian model choice

    OpenAIRE

    Robert, Christian P.; Wraith, Darren

    2009-01-01

    In this note, we shortly survey some recent approaches on the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.

  14. Diet reconstruction and resource partitioning of a Caribbean marine mesopredator using stable isotope bayesian modelling.

    Directory of Open Access Journals (Sweden)

    Alexander Tilley

    The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of (15)N and (13)C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinus perezi on Glovers Reef Atoll, Belize. Bayesian stable isotope mixing models were used to investigate prey choice as well as viable Diet-Tissue Discrimination Factors for use with stingrays. Stingray δ(15)N values showed the greatest variation and a positive relationship with size, with an isotopic niche width approximately twice that of sympatric species. Shark species exhibited comparatively restricted δ(15)N values and greater δ(13)C variation, with very little overlap of stingray niche space. Mixing models suggest bivalves and annelids are proportionally more important prey in the stingray diet than crustaceans and teleosts at Glovers Reef, in contrast to all but one published diet study using stomach contents from other locations. Incorporating gut contents information from the literature, we suggest diet-tissue discrimination factor values of Δ(15)N ≈ 2.7‰ and Δ(13)C ≈ 0.9‰ for stingrays in the absence of validation experiments. The wide trophic niche and lower trophic level exhibited by stingrays compared to sympatric sharks supports their putative role as important base stabilisers in benthic systems, with the potential to absorb trophic perturbations through numerous opportunistic prey interactions.
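
    A stripped-down version of such a mixing model treats the consumer signature as a proportion-weighted average of discrimination-corrected source signatures. The Metropolis sketch below infers diet proportions for three hypothetical sources in (δ15N, δ13C) space; the source values, measurement noise, and flat prior on the transformed proportions are all simplifying assumptions, not the study's model (which used dedicated mixing-model software).

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical source signatures (mean d15N, mean d13C), already shifted by
    # discrimination factors (e.g. +2.7 and +0.9 permil as suggested above).
    sources = np.array([[8.0, -18.0],    # bivalves
                        [9.5, -16.5],    # annelids
                        [11.0, -15.0]])  # teleosts
    sd = 1.0                             # common mixture sd (assumption)

    # Hypothetical consumer tissue measurements.
    obs = rng.normal([9.0, -17.0], 0.8, size=(12, 2))

    def log_post(z):
        p = np.exp(z - z.max()); p /= p.sum()   # softmax -> diet proportions
        mix = p @ sources                        # mixture mean signature
        return -0.5 * np.sum((obs - mix) ** 2) / sd**2

    z = np.zeros(3)
    lp = log_post(z)
    keep = []
    for it in range(30000):
        zp = z + rng.normal(0, 0.3, 3)
        lpp = log_post(zp)
        if np.log(rng.uniform()) < lpp - lp:
            z, lp = zp, lpp
        if it > 10000:                           # discard burn-in
            p = np.exp(z - z.max()); keep.append(p / p.sum())

    print("posterior mean diet proportions:", np.mean(keep, 0).round(2))
    ```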

  15. Diet reconstruction and resource partitioning of a Caribbean marine mesopredator using stable isotope bayesian modelling.

    Science.gov (United States)

    Tilley, Alexander; López-Angarita, Juliana; Turner, John R

    2013-01-01

    The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of (15)N and (13)C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinus perezi on Glovers Reef Atoll, Belize. Bayesian stable isotope mixing models were used to investigate prey choice as well as viable Diet-Tissue Discrimination Factors for use with stingrays. Stingray δ(15)N values showed the greatest variation and a positive relationship with size, with an isotopic niche width approximately twice that of sympatric species. Shark species exhibited comparatively restricted δ(15)N values and greater δ(13)C variation, with very little overlap of stingray niche space. Mixing models suggest bivalves and annelids are proportionally more important prey in the stingray diet than crustaceans and teleosts at Glovers Reef, in contrast to all but one published diet study using stomach contents from other locations. Incorporating gut contents information from the literature, we suggest diet-tissue discrimination factor values of Δ(15)N ≈ 2.7‰ and Δ(13)C ≈ 0.9‰ for stingrays in the absence of validation experiments. The wide trophic niche and lower trophic level exhibited by stingrays compared to sympatric sharks supports their putative role as important base stabilisers in benthic systems, with the potential to absorb trophic perturbations through numerous opportunistic prey interactions. PMID:24236144

  16. Bayesian regularisation methods in a hybrid MLP-HMM system.

    OpenAIRE

    Renals, Steve; MacKay, David

    1993-01-01

    We have applied Bayesian regularisation methods to multi-layer perceptron (MLP) training in the context of a hybrid MLP-HMM (hidden Markov model) continuous speech recognition system. The Bayesian framework adopted here allows an objective setting of the regularisation parameters, according to the training data. Experiments have been carried out on the ARPA Resource Management database.

  17. A Bayesian method for microseismic source inversion

    Science.gov (United States)

    Pugh, D. J.; White, R. S.; Christie, P. A. F.

    2016-08-01

    Earthquake source inversion is highly dependent on location determination and velocity models. Uncertainties in both the model parameters and the observations need to be rigorously incorporated into an inversion approach. Here, we show a probabilistic Bayesian method that allows formal inclusion of the uncertainties in the moment tensor inversion. This method allows the combination of different sets of far-field observations, such as P-wave and S-wave polarities and amplitude ratios, into one inversion. Additional observations can be included by deriving a suitable likelihood function from the uncertainties. This inversion produces samples from the source posterior probability distribution, including a best-fitting solution for the source mechanism and associated probability. The inversion can be constrained to the double-couple space or allowed to explore the gamut of moment tensor solutions, allowing volumetric and other non-double-couple components. The posterior probability of the double-couple and full moment tensor source models can be evaluated from the Bayesian evidence, using samples from the likelihood distributions for the two source models, producing an estimate of whether or not a source is double-couple. Such an approach is ideally suited to microseismic studies where there are many sources of uncertainty and it is often difficult to produce reliability estimates of the source mechanism, although this can be true of many other cases. Using full-waveform synthetic seismograms, we also show the effects of noise, location, network distribution and velocity model uncertainty on the source probability density function. The noise has the largest effect on the results, especially as it can affect other parts of the event processing. This uncertainty can lead to erroneous non-double-couple source probability distributions, even when no other uncertainties exist. Although including amplitude ratios can improve the constraint on the source probability

  18. Bayesian data analysis in population ecology: motivations, methods, and benefits

    Science.gov (United States)

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  19. Approximation methods for efficient learning of Bayesian networks

    CERN Document Server

    Riggelsen, C

    2008-01-01

    This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.

  20. Genomic prediction and genomic variance partitioning of daily and residual feed intake in pigs using Bayesian Power Lasso models

    DEFF Research Database (Denmark)

    Do, Duy Ngoc; Janss, Luc L G; Strathe, Anders B;

    Improvement of feed efficiency is essential in pig breeding and selection for reduced residual feed intake (RFI) is an option. The study applied Bayesian Power LASSO (BPL) models with different power parameters to investigate genetic architecture, to predict genomic breeding values, and to partition genomic variance for RFI and daily feed intake (DFI). A total of 1272 Duroc pigs had both genotypic and phenotypic records for these traits. Significant SNPs were detected on chromosome 1 (SSC 1) and SSC 14 for RFI and on SSC 1 for DFI. BPL had similar accuracy and bias as GBLUP but power parameters had...

  1. An Efficient Bayesian Iterative Method for Solving Linear Systems

    Institute of Scientific and Technical Information of China (English)

    Deng DING; Kin Sio FONG; Ka Hou CHAN

    2012-01-01

    This paper is concerned with statistical methods for solving general linear systems. After a brief review of the Bayesian perspective on inverse problems, a new and efficient iterative method for general linear systems is proposed from a Bayesian perspective. The convergence of this iterative method is proved, and the corresponding error analysis is studied. Finally, numerical experiments are given to support the efficiency of this iterative method, and some conclusions are obtained.

  2. HEURISTIC DISCRETIZATION METHOD FOR BAYESIAN NETWORKS

    Directory of Open Access Journals (Sweden)

    Mariana D.C. Lima

    2014-01-01

    Bayesian Network (BN) is a classification technique widely used in Artificial Intelligence. Its structure is a Directed Acyclic Graph (DAG) used to model the association of categorical variables. However, in cases where the variables are numerical, a prior discretization is necessary. Discretization methods are usually based on a statistical approach using the data distribution, such as division by quartiles. In this article we present a discretization using a heuristic that identifies events called peak and valley. A genetic algorithm was used to identify these events, with the minimization of the error between the average estimated by the BN and the actual value of the numeric output variable as the objective function. The BN was modeled from a database of drill-bit rate of penetration in the Brazilian pre-salt layer, with five numerical variables and one categorical variable, using the proposed discretization and the division of the data by quartiles. The results show that the proposed heuristic discretization has higher accuracy than the quartile discretization.

  3. Application of Bayesian Network Learning Methods to Land Resource Evaluation

    Institute of Scientific and Technical Information of China (English)

    HUANG Jiejun; HE Xiaorong; WAN Youchuan

    2006-01-01

    Bayesian networks have a powerful ability for reasoning and semantic representation; they combine qualitative analysis with quantitative analysis and prior knowledge with observed data, and provide an effective way to deal with prediction, classification and clustering. Firstly, this paper presents an overview of Bayesian networks and their characteristics, and discusses how to learn a Bayesian network structure from given data. It then constructs a Bayesian network model for land resource evaluation using expert knowledge and the dataset. The experimental results based on the test dataset show an evaluation accuracy of 87.5% and a Kappa index of 0.826. All these prove the method is feasible and efficient, and indicate that Bayesian networks are a promising approach for land resource evaluation.

  4. A novel multimode process monitoring method integrating LDRSKM with Bayesian inference

    Institute of Scientific and Technical Information of China (English)

    Shi-jin REN; Yin LIANG; Xiang-jun ZHAO; Mao-yun YANG

    2015-01-01

    A local discriminant regularized soft k-means (LDRSKM) method with Bayesian inference is proposed for multimode process monitoring. LDRSKM extends the regularized soft k-means algorithm by exploiting the local and non-local geometric information of the data and generalized linear discriminant analysis to provide a better and more meaningful data partition. LDRSKM can perform clustering and subspace selection simultaneously, enhancing the separability of data residing in different clusters. With the data partition obtained, kernel support vector data description (KSVDD) is used to establish the monitoring statistics and control limits. Two Bayesian inference based global fault detection indicators are then developed using the local monitoring results associated with principal and residual subspaces. Based on clustering analysis, Bayesian inference and manifold learning methods, the within and cross-mode correlations, and local geometric information can be exploited to enhance monitoring performances for nonlinear and non-Gaussian processes. The effectiveness and efficiency of the proposed method are evaluated using the Tennessee Eastman benchmark process.

  5. Advanced Bayesian Methods for Lunar Surface Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project is the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with an...

  6. Advanced Bayesian Methods for Lunar Surface Navigation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The key innovation of this project will be the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with...

  7. A survey of current Bayesian gene mapping methods

    OpenAIRE

    Molitor John; Marjoram Paul; Conti David; Thomas Duncan

    2004-01-01

    Abstract Recently, there has been much interest in the use of Bayesian statistical methods for performing genetic analyses. Many of the computational difficulties previously associated with Bayesian analysis, such as multidimensional integration, can now be easily overcome using modern high-speed computers and Markov chain Monte Carlo (MCMC) methods. Much of this new technology has been used to perform gene mapping, especially through the use of multi-locus linkage disequilibrium techniques. ...

  8. Proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology

    CERN Document Server

    Hortúa, Héctor J

    2014-01-01

    These are the proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology, held in Bogotá D.C., Colombia, June 9-13, 2014. The first astrostatistics school was the first event in Colombia where statisticians and cosmologists from several universities in Bogotá met to discuss the statistical methods applied to cosmology, especially the use of Bayesian statistics in the study of the Cosmic Microwave Background (CMB), Baryonic Acoustic Oscillations (BAO), Large Scale Structure (LSS) and weak lensing.

  9. A new method for counting trees with vertex partition

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A direct and elementary method is provided in this paper for counting trees with vertex partition, instead of the recursion, generating function, functional equation, Lagrange inversion, and matrix methods used before.

  10. Algebraic methods for evaluating integrals In Bayesian statistics

    OpenAIRE

    Lin, Shaowei

    2011-01-01

    The accurate evaluation of marginal likelihood integrals is a difficult fundamental problem in Bayesian inference that has important applications in machine learning and computational biology. Following the recent success of algebraic statistics in frequentist inference and inspired by Watanabe's foundational approach to singular learning theory, the goal of this dissertation is to study algebraic, geometric and combinatorial methods for computing Bayesian integrals effectively, and to explor...

  11. The partition problem: case studies in Bayesian screening for time-varying model structure

    OpenAIRE

    Liu, Zesong; Windle, Jesse; Scott, James G.

    2011-01-01

    This paper presents two case studies of data sets where the main inferential goal is to characterize time-varying patterns in model structure. Both of these examples are seen to be general cases of the so-called "partition problem," where auxiliary information (in this case, time) defines a partition over sample space, and where different models hold for each element of the partition. In the first case study, we identify time-varying graphical structure in the covariance matrix of asset retur...

  12. The bootstrap and Bayesian bootstrap method in assessing bioequivalence

    International Nuclear Information System (INIS)

    Parametric methods for assessing individual bioequivalence (IBE) typically rest on the hypothesis that the PK responses are normal. A nonparametric alternative for evaluating IBE is the bootstrap method. In 2001, the United States Food and Drug Administration (FDA) proposed a draft guidance. The purpose of this article is to evaluate the IBE between test drug and reference drug by the bootstrap and Bayesian bootstrap methods. We study the power of bootstrap test procedures and of the parametric test procedures in FDA (2001), and find that the Bayesian bootstrap method performs best.
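
    A minimal sketch of the Bayesian bootstrap idea behind such a comparison, assuming hypothetical log-AUC responses rather than the paper's actual pharmacokinetic data: observation weights are drawn from a flat Dirichlet distribution instead of being resampled with replacement.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_bootstrap_mean_diff(test, ref, n_draws=5000):
    """Posterior draws of the mean difference under Rubin's Bayesian
    bootstrap: observation weights come from a flat Dirichlet prior
    rather than resampling with replacement."""
    test, ref = np.asarray(test, float), np.asarray(ref, float)
    draws = np.empty(n_draws)
    for i in range(n_draws):
        w_t = rng.dirichlet(np.ones(len(test)))  # random weights summing to 1
        w_r = rng.dirichlet(np.ones(len(ref)))
        draws[i] = w_t @ test - w_r @ ref
    return draws

# Hypothetical log-AUC values for test and reference formulations.
test = rng.normal(4.00, 0.25, size=24)
ref = rng.normal(4.05, 0.25, size=24)
diff = bayesian_bootstrap_mean_diff(test, ref)
lo, hi = np.percentile(diff, [5, 95])  # 90% interval, as in bioequivalence work
print(f"90% interval for mean log-AUC difference: [{lo:.3f}, {hi:.3f}]")
```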

  13. Baltic sea algae analysis using Bayesian spatial statistics methods

    Directory of Open Access Journals (Sweden)

    Eglė Baltmiškytė

    2013-03-01

    Spatial statistics is one of the fields of statistics dealing with the analysis of spatially spread data. Recently, Bayesian methods have often been applied for statistical data analysis. A spatial data model for predicting algae quantity in the Baltic Sea is built and described in this article. Black Carrageen is the dependent variable, and depth, sand, pebble and boulders are independent variables in the described model. Two models with different covariance functions (Gaussian and exponential) are built to determine the best-fitting model for algae quantity prediction. Unknown model parameters are estimated and the Bayesian kriging predictive posterior distribution is computed in the OpenBUGS modeling environment using Bayesian spatial statistics methods.

  14. Diet Reconstruction and Resource Partitioning of a Caribbean Marine Mesopredator Using Stable Isotope Bayesian Modelling

    OpenAIRE

    Tilley, Alexander; López-Angarita, Juliana; Turner, John R.

    2013-01-01

    The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of 15N and 13C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinu...

  16. Constructing Bayesian formulations of sparse kernel learning methods.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2005-01-01

    We present here a simple technique that simplifies the construction of Bayesian treatments of a variety of sparse kernel learning algorithms. An incomplete Cholesky factorisation is employed to modify the dual parameter space, such that the Gaussian prior over the dual model parameters is whitened. The regularisation term then corresponds to the usual weight-decay regulariser, allowing the Bayesian analysis to proceed via the evidence framework of MacKay. There is, in addition, a useful by-product of the incomplete Cholesky factorisation algorithm: it also identifies a subset of the training data forming an approximate basis for the entire dataset in the kernel-induced feature space, resulting in a sparse model. Bayesian treatments of the kernel ridge regression (KRR) algorithm, with both constant and heteroscedastic (input dependent) variance structures, and kernel logistic regression (KLR) are provided as illustrative examples of the proposed method, which we hope will be more widely applicable. PMID:16085387

  17. Modelling LGD for unsecured retail loans using Bayesian methods

    OpenAIRE

    Katarzyna Bijak; Thomas, Lyn C

    2015-01-01

    Loss Given Default (LGD) is the loss borne by the bank when a customer defaults on a loan. LGD for unsecured retail loans is often found difficult to model. In the frequentist (non-Bayesian) two-step approach, two separate regression models are estimated independently, which can be considered potentially problematic when trying to combine them to make predictions about LGD. The result is a point estimate of LGD for each loan. Alternatively, LGD can be modelled using Bayesian methods. In the B...

  18. Approximation methods for the partition functions of anharmonic systems

    International Nuclear Information System (INIS)

    The analytical approximations for the classical, quantum mechanical and reduced partition functions of the diatomic molecule oscillating internally under the influence of the Morse potential have been derived, and their convergence has been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on the 2-D and 3-D model systems agree well with exact numerical calculations.
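
    As an illustration of the direct-summation baseline against which such analytical approximations are checked, the sketch below sums over the finite set of bound states of a Morse oscillator and compares the result with the harmonic approximation; the spectroscopic constants are hypothetical, not taken from the paper.

```python
import numpy as np

KB_CM = 0.695035  # Boltzmann constant in cm^-1 per kelvin

def morse_partition(we, wexe, T):
    """Quantum vibrational partition function of a Morse oscillator,
    summed directly over its finite set of bound states."""
    kT = KB_CM * T
    vmax = int(we / (2.0 * wexe) - 0.5)         # index of the last bound level
    v = np.arange(vmax + 1)
    E = we * (v + 0.5) - wexe * (v + 0.5) ** 2  # term values in cm^-1
    E -= E[0]                                   # measure from the ground state
    return np.exp(-E / kT).sum()

def harmonic_partition(we, T):
    """Harmonic partition function referenced to the zero-point level."""
    return 1.0 / (1.0 - np.exp(-we / (KB_CM * T)))

# Hypothetical spectroscopic constants (cm^-1), purely illustrative.
we, wexe = 2000.0, 20.0
for T in (300.0, 1000.0, 3000.0):
    print(T, morse_partition(we, wexe, T), harmonic_partition(we, T))
```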

  19. Symplectic trigonometrically fitted partitioned Runge-Kutta methods

    International Nuclear Information System (INIS)

    The numerical integration of Hamiltonian systems is considered in this Letter. Trigonometrically fitted symplectic partitioned Runge-Kutta methods of second, third and fourth orders are constructed. The methods are tested on the numerical integration of the harmonic oscillator, the two-body problem and an orbital problem studied by Stiefel and Bettis.
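
    For readers unfamiliar with symplectic partitioned Runge-Kutta schemes, here is a minimal sketch of the plain second-order Störmer-Verlet method applied to the harmonic oscillator; the trigonometrically fitted variants of the record modify the coefficients, which this sketch does not attempt.

```python
import numpy as np

def verlet(q, p, force, h, n_steps):
    """Störmer-Verlet: a second-order symplectic partitioned RK method."""
    traj = [(q, p)]
    for _ in range(n_steps):
        p_half = p + 0.5 * h * force(q)  # half kick on momenta
        q = q + h * p_half               # drift on positions
        p = p_half + 0.5 * h * force(q)  # half kick on momenta
        traj.append((q, p))
    return np.array(traj)

force = lambda q: -q  # harmonic oscillator with unit frequency
traj = verlet(q=1.0, p=0.0, force=force, h=0.1, n_steps=10000)
energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
# Symplecticity shows up as a bounded, non-drifting energy error.
print("max |energy error| over 10000 steps:", np.abs(energy - energy[0]).max())
```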

  20. Effective classification of 3D image data using partitioning methods

    Science.gov (United States)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.

  1. Complexity analysis of accelerated MCMC methods for Bayesian inversion

    Science.gov (United States)

    Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M.

    2013-08-01

    The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
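
    A minimal sketch of the 'plain' MCMC building block analyzed in the paper, with the expensive elliptic PDE solve replaced by a cheap stand-in forward map; the random-walk Metropolis structure below is the standard one, not the paper's multi-level scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

G = lambda u: u**3 + u  # hypothetical stand-in for the PDE forward solve
u_true, sigma = 0.8, 0.1
y_obs = G(u_true) + rng.normal(0.0, sigma)

def log_post(u):
    """Gaussian prior N(0, 2^2) plus Gaussian likelihood for y_obs."""
    return -0.5 * (u / 2.0) ** 2 - 0.5 * ((y_obs - G(u)) / sigma) ** 2

u, samples = 0.0, []
lp = log_post(u)
for _ in range(20000):
    u_prop = u + 0.3 * rng.normal()  # random-walk proposal
    lp_prop = log_post(u_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        u, lp = u_prop, lp_prop      # Metropolis accept
    samples.append(u)

posterior = np.array(samples[5000:])  # discard burn-in
print("posterior mean:", posterior.mean(), "posterior sd:", posterior.std())
```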

  2. Application of an efficient Bayesian discretization method to biomedical data

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Vanathi

    2011-07-01

    Background: Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD) method for optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components: a Bayesian score to evaluate discretizations, and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI) discretization method, which is commonly used for discretization. Results: On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performances of the C4.5 classifier and the naïve Bayes classifier were statistically significantly better when the predictor variables were discretized using EBD rather than FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI, and produced slightly more complex discretizations than FI. Conclusions: On a range of biomedical datasets, a Bayesian discretization method (EBD) yielded better classification performance and stability but was less robust than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.

  3. Methods for Bayesian power spectrum inference with galaxy surveys

    CERN Document Server

    Jasche, Jens

    2013-01-01

    We derive and implement a full Bayesian large scale structure inference method aiming at precision recovery of the cosmological power spectrum from galaxy redshift surveys. Our approach improves over previous Bayesian methods by performing a joint inference of the three dimensional density field, the cosmological power spectrum, luminosity dependent galaxy biases and corresponding normalizations. We account for all joint and correlated uncertainties between all inferred quantities. Classes of galaxies with different biases are treated as separate sub samples. The method therefore also allows the combined analysis of more than one galaxy survey. In particular, it solves the problem of inferring the power spectrum from galaxy surveys with non-trivial survey geometries by exploring the joint posterior distribution with efficient implementations of multiple block Markov chain and Hybrid Monte Carlo methods. Our Markov sampler achieves high statistical efficiency in low signal to noise regimes by using a determini...

  4. Involving Stakeholders in Building Integrated Fisheries Models Using Bayesian Methods

    Science.gov (United States)

    Haapasaari, Päivi; Mäntyniemi, Samu; Kuikka, Sakari

    2013-06-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem of the herring fishery and elucidate what kind of causalities the different views involve. The paper combines these two tasks to assess the suitability of the methodological choices to participatory modeling in terms of both a modeling tool and participation mode. The paper also assesses the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology provides a flexible tool that can be adapted to different kinds of needs and challenges of participatory modeling. The ability of the approach to deal with small data sets makes it cost-effective in participatory contexts. However, the BMA methodology used in modeling the biological uncertainties is so complex that it needs further development before it can be introduced to wider use in participatory contexts.

  5. Gas/Aerosol partitioning: a simplified method for global modeling

    NARCIS (Netherlands)

    Metzger, S.M.

    2001-01-01

    The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures, partl

  6. Bayesian Biclustering on Discrete Data: Variable Selection Methods

    OpenAIRE

    Guo, Lei

    2013-01-01

    Biclustering is a technique for clustering rows and columns of a data matrix simultaneously. Over the past few years, we have seen its applications in biology-related fields, as well as in many data mining projects. As opposed to classical clustering methods, biclustering groups objects that are similar only on a subset of variables. Many biclustering algorithms on continuous data have emerged over the last decade. In this dissertation, we will focus on two Bayesian biclustering algorithms we...

  7. A nonparametric Bayesian method for estimating a response function

    OpenAIRE

    Brown, Scott; Meeden, Glen

    2012-01-01

    Consider the problem of estimating a response function which depends upon a non-stochastic independent variable under our control. The data are independent Bernoulli random variables where the probabilities of success are given by the response function at the chosen values of the independent variable. Here we present a nonparametric Bayesian method for estimating the response function. The only prior information assumed is that the response function can be well approximated by a mixture of st...
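
    To make the setting concrete, here is a minimal sketch under a simplifying assumption the paper does not make: independent conjugate Beta priors at each design point, rather than the paper's mixture prior over the whole response function.

```python
import numpy as np

# Bernoulli trials at chosen values of a controlled covariate x; with an
# independent Beta(1, 1) prior at each point, the pointwise posterior of
# the response probability is Beta(1 + s, 1 + n - s). Counts are fake.
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])  # chosen design points
n = np.array([20, 20, 20, 20, 20])       # trials at each point
s = np.array([2, 5, 9, 15, 18])          # hypothetical success counts

post_mean = (1 + s) / (2 + n)
post_sd = np.sqrt(post_mean * (1 - post_mean) / (3 + n))
for xi, m, sd in zip(x, post_mean, post_sd):
    print(f"x={xi:.1f}  E[p|data]={m:.3f}  sd={sd:.3f}")
```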

  8. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    ...efficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among ... other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown ..., and exact and optimisation-based heuristic solution methods for the model are described. All these methods are centred around the well-known column generation technique. Different practical applications of crew scheduling are presented, and some of these applications are considered in detail in four included...

  9. Approach to the Correlation Discovery of Chinese Linguistic Parameters Based on Bayesian Method

    Institute of Scientific and Technical Information of China (English)

    WANG Wei(王玮); CAI LianHong(蔡莲红)

    2003-01-01

    The Bayesian approach is an important method in statistics. The Bayesian belief network is a powerful knowledge representation and reasoning tool under conditions of uncertainty. It is a graphical model that encodes probabilistic relationships among variables of interest. In this paper, an approach to Bayesian network construction is given for discovering relationships among Chinese linguistic parameters in the corpus.

  10. Methods for Bayesian Power Spectrum Inference with Galaxy Surveys

    Science.gov (United States)

    Jasche, Jens; Wandelt, Benjamin D.

    2013-12-01

    We derive and implement a full Bayesian large scale structure inference method aiming at precision recovery of the cosmological power spectrum from galaxy redshift surveys. Our approach improves upon previous Bayesian methods by performing a joint inference of the three-dimensional density field, the cosmological power spectrum, luminosity dependent galaxy biases, and corresponding normalizations. We account for all joint and correlated uncertainties between all inferred quantities. Classes of galaxies with different biases are treated as separate subsamples. This method therefore also allows the combined analysis of more than one galaxy survey. In particular, it solves the problem of inferring the power spectrum from galaxy surveys with non-trivial survey geometries by exploring the joint posterior distribution with efficient implementations of multiple block Markov chain and Hybrid Monte Carlo methods. Our Markov sampler achieves high statistical efficiency in low signal-to-noise regimes by using a deterministic reversible jump algorithm. This approach reduces the correlation length of the sampler by several orders of magnitude, turning the otherwise numerically unfeasible problem of joint parameter exploration into a numerically manageable task. We test our method on an artificial mock galaxy survey, emulating characteristic features of the Sloan Digital Sky Survey data release 7, such as its survey geometry and luminosity-dependent biases. These tests demonstrate the numerical feasibility of our large scale Bayesian inference framework when the parameter space has millions of dimensions. This method reveals and correctly treats the anti-correlation between bias amplitudes and power spectrum, which are not taken into account in current approaches to power spectrum estimation, a 20% effect across large ranges in k space. In addition, this method results in constrained realizations of density fields obtained without assuming the power spectrum or bias parameters

  11. PARALLEL COMPOUND METHODS FOR SOLVING PARTITIONED STIFF SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li-rong Chen; De-gui Liu

    2001-01-01

    This paper deals with the solution of partitioned systems of nonlinear stiff differential equations. Given a differential system, the user may specify some equations to be stiff and others to be nonstiff. For the numerical solution of such a system, Parallel Compound Methods (PCMs) are studied. Nonstiff equations are integrated by a parallel explicit RK method, while a parallel Rosenbrock method is used for the stiff part of the system. Their order conditions, their convergence and their numerical stability are discussed, and numerical tests are conducted on a personal computer and a parallel computer.

  12. Distance and extinction determination for APOGEE stars with Bayesian method

    Science.gov (United States)

    Wang, Jianling; Shi, Jianrong; Pan, Kaike; Chen, Bingqiu; Zhao, Yongheng; Wicker, James

    2016-08-01

    Using a Bayesian technique, we derived distances and extinctions for over 100 000 red giant stars observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey by taking into account spectroscopic constraints from the APOGEE stellar parameters and photometric constraints from the Two Micron All-Sky Survey, as well as prior knowledge on the Milky Way. Derived distances are compared with those from four other independent methods: the Hipparcos parallaxes, star clusters, APOGEE red clump stars, and asteroseismic distances from the APOKASC and Strömgren survey for Asteroseismology and Galactic Archaeology catalogues. These comparisons cover four orders of magnitude in the distance scale, from 0.02 to 20 kpc. The results show that our distances agree very well with those from other methods: the mean relative difference between our Bayesian distances and those derived from other methods ranges from -4.2 per cent to +3.6 per cent, and the dispersion ranges from 15 per cent to 25 per cent. The extinctions towards all stars are also derived and compared with those from several other independent methods: the Rayleigh-Jeans Colour Excess (RJCE) method, Gonzalez's 2D extinction map, as well as 3D extinction maps and models. The comparisons reveal that, overall, estimated extinctions agree very well, but RJCE tends to overestimate extinctions for cool stars and objects with low log g.

  13. Space-partition method for the variance-based sensitivity analysis: Optimal partition scheme and comparative study

    International Nuclear Information System (INIS)

    Variance-based sensitivity analysis has been widely studied and has established itself among practitioners. Monte Carlo simulation methods are well developed for calculating variance-based sensitivity indices, but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method for estimating variance-based sensitivity indices, and its convergence and other performance characteristics are investigated. Since the method depends heavily on the partition scheme, the influence of the partition scheme is discussed and an optimal partition scheme is proposed based on minimizing the estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher-order sensitivity indices. The proposed space-partition method is compared with the more traditional method, and test cases show that it outperforms the traditional one.
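
    A minimal sketch of the scatter-plot partitioning idea, assuming equal-probability slices as the partition scheme and an Ishigami-style test function rather than the paper's test cases: the first-order index is estimated as the variance of the conditional means across slices, divided by the total variance.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_order_index(x, y, n_bins=20):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) from a single sample bunch
    by partitioning the range of x into equal-probability slices."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    means = np.array([y[idx == b].mean() for b in range(n_bins)])
    sizes = np.array([(idx == b).sum() for b in range(n_bins)])
    var_cond_mean = np.average((means - y.mean()) ** 2, weights=sizes)
    return var_cond_mean / y.var()

# Ishigami-like toy model evaluated once; all indices reuse the same runs.
n = 20000
x1, x2, x3 = (rng.uniform(-np.pi, np.pi, n) for _ in range(3))
y = np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)
for name, xi in (("x1", x1), ("x2", x2), ("x3", x3)):
    print(name, round(first_order_index(xi, y), 3))
```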

  14. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involved through a Bayesian method. Abstract: The methodology for dynamic indicator model construction and forecasting of indicators for the assessment of the energy security level is presented in this article. An indicator is a special index which provides numerical values of important factors for the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicators' variation taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of changes of the new system are not exactly known, information about their influence on indicators could not be incorporated in the model by deterministic methods. Thus, the dynamic indicators' model based on historical data is adjusted by a probabilistic model of the influence of new factors on indicators using the Bayesian method.

  15. Distance and extinction determination for APOGEE stars with Bayesian method

    CERN Document Server

    Wang, Jianling; Pan, Kaike; Chen, Bingqiu; Zhao, Yongheng; Wicker, James

    2016-01-01

    Using a Bayesian technique we derived distances and extinctions for over 100,000 red giant stars observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey by taking into account spectroscopic constraints from the APOGEE stellar parameters and photometric constraints from 2MASS, as well as prior knowledge on the Milky Way. Derived distances are compared with those from four other independent methods: the Hipparcos parallaxes, star clusters, APOGEE red clump stars, and asteroseismic distances from the APOKASC (Rodrigues et al. 2014) and SAGA (Casagrande et al. 2014) catalogues. These comparisons cover four orders of magnitude in the distance scale from 0.02 kpc to 20 kpc. The results show that our distances agree very well with those from other methods: the mean relative difference between our Bayesian distances and those derived from other methods ranges from -4.2% to +3.6%, and the dispersion ranges from 15% to 25%. The extinctions toward all stars are also derived and compared wi...

  16. Internal dosimetry of uranium isotopes using bayesian inference methods

    International Nuclear Information System (INIS)

    A group of personnel at Los Alamos National Laboratory is routinely monitored for the presence of uranium isotopes by urine bioassay. Samples are analysed by alpha spectroscopy, and the results are examined for evidence of an intake of uranium. Because the measurement uncertainties are often comparable to the quantities of material we wish to detect, statistical considerations are crucial for the proper interpretation of the data. The problem is further complicated by the significant, but highly non-uniform, presence of uranium in local drinking water and, in some cases, food supply. Software originally developed for internal dosimetry of plutonium has been adapted to the problem of uranium dosimetry. The software uses an unfolding algorithm to calculate an approximate Bayesian solution to the problem of characterising any intakes which may have occurred, given the history of urine bioassay results for each individual in the monitored population. The program uses biokinetic models from ICRP Publications 68 and later, and a prior probability distribution derived empirically from the body of uranium bioassay data collected at Los Alamos over the operating history of the Laboratory. For each individual, the software creates a posterior probability distribution of intake quantity and solubility type as a function of time. From this distribution, estimates are made of the cumulative committed dose (CEDE) to each individual. Results of the method are compared with those obtained using an earlier classical (non-Bayesian) algorithm for uranium dosimetry. We also discuss the problem of distinguishing occupational intakes from intake of environmental uranium, within a Bayesian framework. (author)

  17. ObStruct: a method to objectively analyse factors driving population structure using Bayesian ancestry profiles.

    Directory of Open Access Journals (Sweden)

    Velimir Gayevskiy

    Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods is a set of ancestry profiles of sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin) correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and, crucially, determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.

  18. Computational Methods for Domain Partitioning of Protein Structures

    Science.gov (United States)

    Veretnik, Stella; Shindyalov, Ilya

    Analysis of protein structures typically begins with decomposition of the structure into more basic units, called "structural domains". The underlying goal is to reduce a complex protein structure to a set of simpler yet structurally meaningful units, each of which can be analyzed independently. Structural semi-independence of domains is their hallmark: domains often have compact structure and can fold or function independently. Domains can undergo so-called "domain shuffling" when they reappear in different combinations in different proteins, thus implementing different biological functions (Doolittle, 1995). Proteins can then be conceived as being built of such basic blocks: some, especially small proteins, usually consist of just one domain, while other proteins possess a more complex architecture containing multiple domains. Therefore, the methods for partitioning a structure into domains are of critical importance: their outcome defines the set of basic units upon which structural classifications are built and evolutionary analysis is performed. This is especially true nowadays in the era of structural genomics. Today there are many methods that decompose the structure into domains: some of them are manual (i.e., based on human judgment), others are semiautomatic, and still others are completely automatic (based on algorithms implemented as software). Overall there is a high level of consistency and robustness in the process of partitioning a structure into domains (for ~80% of proteins), at least for structures where domain location is obvious. The picture is less bright when we consider proteins with more complex architectures: neither human experts nor computational methods can reach consistent partitioning in many such cases. This is a rather accurate reflection of biological phenomena in general, since domains are formed by different mechanisms; hence it is nearly impossible to come up with a set of well-defined rules that captures all of the observed cases.

  19. Bayesian Monte Carlo method for nuclear data evaluation

    International Nuclear Information System (INIS)

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight. (orig.)
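
    A minimal sketch of the weighted Monte Carlo step, with TALYS replaced by a toy two-parameter model and EXFOR by fabricated data points; only the weighting logic is illustrated, not the evaluation pipeline itself.

```python
import numpy as np

rng = np.random.default_rng(3)

E = np.array([1.0, 2.0, 5.0, 10.0])     # energies, arbitrary units
y_exp = np.array([2.7, 2.1, 1.3, 0.8])  # fabricated 'experimental' points
dy = 0.2 * np.ones_like(y_exp)          # reported uncertainties

def model(E, a, b):
    """Toy two-parameter stand-in for the nuclear model code."""
    return a * np.exp(-b * E) + 0.5

n = 10000
a = rng.uniform(1.0, 4.0, n)            # prior space of model solutions
b = rng.uniform(0.05, 0.5, n)
chi2 = np.array([(((model(E, ai, bi) - y_exp) / dy) ** 2).sum()
                 for ai, bi in zip(a, b)])
w = np.exp(-0.5 * (chi2 - chi2.min()))  # experiment-based weights
w /= w.sum()
print("weighted posterior a:", (w * a).sum())
print("weighted posterior b:", (w * b).sum())
```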

  20. ANALYSIS OF CLIQUE BY MATRIX FACTORIZATION AND PARTITION METHODS

    Directory of Open Access Journals (Sweden)

    Raghunath Kar

    2011-10-01

    In real life, clustering of high dimensional data is a big problem, and finding the dense regions in increasing dimensions is one part of it. We have already studied the clustering techniques for low dimensional data sets, like k-means, k-medoid, BIRCH, CLARANS, CURE, DBScan, PAM etc. If a region is dense, then it consists of a number of data points with a minimum support of the input parameter ø; otherwise it cannot be taken into clustering. So in this approach we have implemented CLIQUE to find out the clusters from multidimensional data sets. In dimension-growth subspace clustering, the clustering process starts at single dimensional subspaces and grows upward to higher dimensional ones. It is a partition method where each dimension is divided like a grid structure. In this paper, the elimination of redundant objects from the regions by matrix factorization and partition methods is implemented. Comparisons between CLIQUE and these two methods are studied. Which region a redundant data point belongs to when forming a cluster is also studied.

  1. Bayesian statistic methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.

  2. Common before-after accident study on a road site: a low-informative Bayesian method

    OpenAIRE

    Brenac, Thierry

    2009-01-01

    This note aims at providing a Bayesian methodological basis for routine before-after accident studies, often applied to a single road site, and in conditions of limited resources in terms of time and expertise. Methods: A low-informative Bayesian method is proposed for before-after accident studies using a comparison site or group of sites. As compared to conventional statistics, the Bayesian approach is less subject to misuse and misinterpretation by practitioners. The low-informative framew...
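
    A minimal sketch of a low-informative before-after analysis with a comparison group, assuming independent Poisson counts with Jeffreys-type Gamma priors and hypothetical accident counts; the note's exact prior choices may differ.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical counts; equal before and after observation periods assumed.
k_b, k_a = 24, 11    # treated site: accidents before / after
c_b, c_a = 180, 168  # comparison group: accidents before / after
n = 100_000

# With a Jeffreys-type Gamma(1/2, 0) prior, each Poisson mean has a
# Gamma(count + 1/2, 1) posterior (scale 1 for unit exposure).
lam_b = rng.gamma(k_b + 0.5, 1.0, n)
lam_a = rng.gamma(k_a + 0.5, 1.0, n)
mu_b = rng.gamma(c_b + 0.5, 1.0, n)
mu_a = rng.gamma(c_a + 0.5, 1.0, n)

effect = (lam_a / lam_b) / (mu_a / mu_b)  # site change net of the general trend
print("posterior median effect:", np.median(effect))
print("P(effect < 1):", (effect < 1.0).mean())
```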

  3. Metainference: A Bayesian Inference Method for Heterogeneous Systems

    CERN Document Server

    Bonomi, Massimiliano; Cavalli, Andrea; Vendruscolo, Michele

    2015-01-01

    Modelling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model, and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system populates simultaneously an ensemble of different states and experimental data are measured as averages over such states. To address this problem we present a method, called metainference, that combines Bayesian inference, which is a powerful strategy to deal with errors in experimental measurements, with the maximum entropy principle, which represents a rigorous approach to deal with experimental measurements averaged over multiple states. To illustrate the method we present its application to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to model complex systems with...

  4. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noise which may contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type, which minimizes the Kullback-Leibler divergence between the true posterior distribution and a separable approximation, is developed. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.

  5. Bayesian methods in the search for MH370

    CERN Document Server

    Davey, Sam; Holland, Ian; Rutten, Mark; Williams, Jason

    2016-01-01

    This book demonstrates how nonlinear/non-Gaussian Bayesian time series estimation methods were used to produce a probability distribution of potential MH370 flight paths. It provides details of how the probabilistic models of aircraft flight dynamics, satellite communication system measurements, environmental effects and radar data were constructed and calibrated. The probability distribution was used to define the search zone in the southern Indian Ocean. The book describes particle-filter based numerical calculation of the aircraft flight-path probability distribution and validates the method using data from several of the involved aircraft’s previous flights. Finally it is shown how the Reunion Island flaperon debris find affects the search probability distribution.
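
    A minimal sketch of the bootstrap particle filter cycle (propagate, weight, resample) on a one-dimensional random-walk state; the book's flight-dynamics and satellite measurement models are far richer, but the cycle is the same.

```python
import numpy as np

rng = np.random.default_rng(5)

n_steps, n_particles = 50, 2000
q, r = 0.5, 1.0  # process / measurement noise standard deviations

truth = np.cumsum(rng.normal(0.0, q, n_steps))  # hidden random-walk state
obs = truth + rng.normal(0.0, r, n_steps)       # noisy observations

particles = rng.normal(0.0, 5.0, n_particles)   # diffuse initial belief
estimates = []
for y in obs:
    particles += rng.normal(0.0, q, n_particles)     # propagate dynamics
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)    # weight by likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)  # resample
    particles = particles[idx]
    estimates.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print("filter RMSE:", rmse)
```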

  6. Chain ladder method: Bayesian bootstrap versus classical bootstrap

    OpenAIRE

    Peters, Gareth W.; Mario V. W\\"uthrich; Shevchenko, Pavel V.

    2010-01-01

    The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure utilising Markov chain Monte Carlo (MCMC), ABC and a Bayesian bootstrap procedure was developed in a truly distribution-free setting. T...

  7. Emulation: A fast stochastic Bayesian method to eliminate model space

    Science.gov (United States)

    Roberts, Alan; Hobbs, Richard; Goldstein, Michael

    2010-05-01

    Joint inversion of large 3D datasets has been the goal of geophysicists ever since such datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes, and more recently developed Bayesian search methods such as MCMC (Markov chain Monte Carlo). However, both kinds of scheme have proved prohibitively expensive, in both computing power and time, due to the normally very large model space which needs to be searched using forward model simulators that take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as astronomy, where the evolution of the universe has been modelled using this technique, and the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much
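
    A minimal sketch of the emulation-and-screening idea, assuming a toy one-parameter 'simulator', a polynomial surrogate, and a 3-sigma implausibility cut; real applications replace each of these with problem-specific choices.

```python
import numpy as np

rng = np.random.default_rng(6)

simulator = lambda m: np.sin(3.0 * m) + 0.5 * m  # expensive forward model stand-in

m_train = np.linspace(0.0, 2.0, 8)               # a few design runs
y_train = simulator(m_train)
coef = np.polyfit(m_train, y_train, deg=4)       # cheap polynomial emulator
emulate = lambda m: np.polyval(coef, m)

# Calibrate the emulator error against extra simulator runs.
m_test = rng.uniform(0.0, 2.0, 200)
em_sd = np.std(emulate(m_test) - simulator(m_test))

y_obs, obs_sd = simulator(1.3), 0.05             # observed dataset
m_cand = np.linspace(0.0, 2.0, 100000)           # large candidate model space
implaus = np.abs(emulate(m_cand) - y_obs) / np.hypot(em_sd, obs_sd)
kept = m_cand[implaus < 3.0]                     # 3-sigma plausibility screen
print(f"kept {kept.size / m_cand.size:.1%} of model space")
```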

  8. A high-resolution direction-of-arrival estimation based on Bayesian method

    Institute of Scientific and Technical Information of China (English)

    HUANG Jianguo; SUN Yi; XU Pu; LU Ying; LIU Kewei

    2004-01-01

    A Bayesian high-resolution direction-of-arrival (DOA) estimator is proposed based on the maximum a posteriori principle. The statistical performance of the Bayesian high-resolution DOA estimator is also investigated. Comparison with MUSIC and the maximum likelihood estimator (MLE) shows that the Bayesian method has higher resolution and more accurate estimates for either incoherent or coherent sources. It is also more robust in the case of low SNR.

  9. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided. PMID:26019004

  10. Bayesian Analysis of Multiple Populations I: Statistical and Computational Methods

    CERN Document Server

    Stenning, D C; Robinson, E; van Dyk, D A; von Hippel, T; Sarajedini, A; Stein, N

    2016-01-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations (van Dyk et al. 2009; Stein et al. 2013). Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties (age, metallicity, helium abundance, distance, absorption, and initial mass) are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and al...

  11. Partition wall structure in spent fuel storage pool and construction method for the partition wall

    International Nuclear Information System (INIS)

    A partition wall for forming cask pits as radiation-shielding regions by partitioning the inside of a spent fuel storage pool is prepared by covering both surfaces of a concrete body with shielding metal plates. The metal plate comprises opposed plate units integrated by welding while sandwiching a metal frame as a reinforcing material for the concrete body; the lower end of the units is connected to the floor of the pool by fastening members, and concrete is cast while using the metal plates of the units as a form to create the concrete body. The shielding metal plate has a double-walled structure formed by welding a lining plate disposed on the outer surface of the partition wall and a shield plate disposed on the inner side. The construction period can thus be shortened, and the capacity for storing spent fuels can be increased. (N.H.)

  12. Developments from Programming the Partition Method for a Power Series Expansion

    CERN Document Server

    Kowalenko, Victor

    2012-01-01

    Recently, a novel method based on coding partitions [1]-[4] has been used to derive power series expansions for previously intractable problems. In this method the coefficients at $k$ are determined by summing the contributions made by each partition whose elements sum to $k$. These contributions are found by assigning values to each element and multiplying by an appropriate multinomial factor. This work presents a theoretical framework for the partition method for a power series expansion. To overcome the complexity due to the contributions, a programming methodology is created, allowing more general problems to be studied than envisaged originally. The methodology uses the bi-variate recursive central partition (BRCP) algorithm, which is based on a tree-diagram approach to scanning partitions. Its main advantage is that partitions are generated in the multiplicity representation. During the development of the theoretical framework, scanning over partitions was seen as a discrete operation with an operator $L_...
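
    A minimal sketch of the partition method itself, using a generic recursive generator of partitions in the multiplicity representation (not the BRCP algorithm) together with the classical fact that the coefficients of exp(f(x)) are sums over partitions with multinomial-type factors.

```python
from math import factorial
from fractions import Fraction

def partitions(k, max_part=None):
    """Yield the partitions of k in the multiplicity representation,
    i.e. as dictionaries {part: multiplicity}."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield {}
        return
    for part in range(min(k, max_part), 0, -1):
        for rest in partitions(k - part, part):
            out = dict(rest)
            out[part] = out.get(part, 0) + 1
            yield out

def exp_series_coeff(a, k):
    """Coefficient c_k of exp(sum_j a_j x^j): each partition {m_j}
    of k contributes prod_j a_j^{m_j} / m_j!."""
    total = Fraction(0)
    for mult in partitions(k):
        term = Fraction(1)
        for part, m in mult.items():
            term *= Fraction(a.get(part, 0)) ** m / factorial(m)
        total += term
    return total

# Check against exp(x): with f(x) = x, c_k should equal 1/k!.
a = {1: 1}
print([exp_series_coeff(a, k) for k in range(6)])
```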

  13. Dirichlet Methods for Bayesian Source Detection in Radio Astronomy Images

    Science.gov (United States)

    Friedlander, A. M.

    2014-02-01

    The sheer volume of data to be produced by the next generation of radio telescopes - exabytes of data on hundreds of millions of objects - makes automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are low surface brightness objects, which are not well found by current automated methods. This thesis explores Bayesian methods for source detection that use Dirichlet or multinomial models for pixel intensity distributions in discretised radio astronomy images. A novel image discretisation method that incorporates uncertainty about how the image should be discretised is developed. Latent Dirichlet allocation - a method originally developed for inferring latent topics in document collections - is used to estimate source and background distributions in radio astronomy images. A new Dirichlet-multinomial ratio, indicating how well a region conforms to a well-specified model of background versus a loosely-specified model of foreground, is derived. Finally, latent Dirichlet allocation and the Dirichlet-multinomial ratio are combined for source detection in astronomical images. The methods developed in this thesis perform source detection well in comparison to two widely-used source detection packages and, importantly, find dim sources not well found by other algorithms.
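
    A minimal sketch of a Dirichlet-multinomial ratio in this spirit, comparing a tightly specified background model against a loosely specified foreground model on fabricated intensity histograms; the thesis' exact statistic may differ.

```python
import numpy as np
from scipy.special import gammaln

def log_dm(counts, alpha):
    """Log marginal likelihood of counts under a Dirichlet(alpha)-multinomial."""
    A, N = alpha.sum(), counts.sum()
    return (gammaln(A) - gammaln(A + N)
            + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

# Background concentrated in low-intensity bins; foreground left vague.
alpha_bg = np.array([80.0, 15.0, 4.0, 1.0])  # well-specified background
alpha_fg = np.ones(4)                        # loosely specified source model

region_empty = np.array([41, 7, 2, 0])       # mostly low intensities
region_source = np.array([20, 10, 12, 8])    # excess of bright pixels

for name, counts in (("empty", region_empty), ("source", region_source)):
    ratio = log_dm(counts, alpha_bg) - log_dm(counts, alpha_fg)
    print(name, "log background-vs-source ratio:", round(ratio, 2))
```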

  14. Domain decomposition by the advancing-partition method for parallel unstructured grid generation

    Science.gov (United States)

    Pirzadeh, Shahyar Z. (Inventor); Banihashemi, legal representative, Soheila (Inventor)

    2012-01-01

    In a method for domain decomposition for generating unstructured grids, a surface mesh is generated for a spatial domain. A location of a partition plane dividing the domain into two sections is determined. Triangular faces on the surface mesh that intersect the partition plane are identified. A partition grid of tetrahedral cells, dividing the domain into two sub-domains, is generated using a marching process in which a front comprises only faces of new cells which intersect the partition plane. The partition grid is generated until no active faces remain on the front. Triangular faces on each side of the partition plane are collected into two separate subsets. Each subset of triangular faces is renumbered locally and a local/global mapping is created for each sub-domain. A volume grid is generated for each sub-domain. The partition grid and volume grids are then merged using the local-global mapping.

  15. A Non-Parametric Bayesian Method for Inferring Hidden Causes

    OpenAIRE

    Wood, Frank; Griffiths, Thomas; Ghahramani, Zoubin

    2012-01-01

    We present a non-parametric Bayesian approach to structure learning with hidden causes. Previous Bayesian treatments of this problem define a prior over the number of hidden causes and use algorithms such as reversible jump Markov chain Monte Carlo to move between solutions. In contrast, we assume that the number of hidden causes is unbounded, but only a finite number influence observable variables. This makes it possible to use a Gibbs sampler to approximate the distribution over causal stru...

  16. Metainference: A Bayesian inference method for heterogeneous systems.

    Science.gov (United States)

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  17. ESTIMATE OF THE HYPSOMETRIC RELATIONSHIP WITH NONLINEAR MODELS FITTED BY EMPIRICAL BAYESIAN METHODS

    Directory of Open Access Journals (Sweden)

    Monica Fabiana Bento Moreira

    2015-09-01

    In this paper we propose a Bayesian approach to solving the inference problem with restrictions on the parameters of nonlinear models used to represent the hypsometric relationship in clones of Eucalyptus sp. The Bayesian estimates are calculated using the Markov chain Monte Carlo (MCMC) method. The proposed method was applied to different groups of actual data, from which two were selected to show the results. These results were compared to those achieved by the least squares method, highlighting the superiority of the Bayesian approach, since this approach always generates biologically consistent results for the hypsometric relationship.

  18. CEO emotional bias and dividend policy: Bayesian network method

    Directory of Open Access Journals (Sweden)

    Azouzi Mohamed Ali

    2012-10-01

    This paper assumes that managers, investors, or both behave irrationally. In addition, even though scholars have investigated behavioral irrationality from three angles, investor sentiment, investor biases and managerial biases, we focus on the relationship between one of the managerial biases, overconfidence, and dividend policy. Previous research investigating the relationship between overconfidence and financial decisions has studied investment, financing decisions and firm values. However, there are only a few exceptions examining how managerial emotional biases (optimism, loss aversion and overconfidence) affect dividend policies. This stream of research contends that whether to distribute dividends or not depends on how managers perceive the company's future. We use the Bayesian network method to examine this relation. Emotional bias has been measured by means of a questionnaire comprising several items. As for the selected sample, it has been composed of some 100 Tunisian executives. Our results reveal that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts dividend policy choices based on his or her ability to assess alternatives (optimism and overconfidence) and risk perception (loss aversion) in order to create shareholder value and ensure his or her place at the head of the management team.

  19. CEO emotional bias and investment decision, Bayesian network method

    Directory of Open Access Journals (Sweden)

    Jarboui Anis

    2012-08-01

    This research examines the determinants of firms' investment, introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: investment decisions are influenced not only by their fundamentals but also by some other factors. One such factor is the bias of the CEO toward the investment; this bias depends on cognition and emotions, because some leaders use these as heuristics for the investment decision instead of fundamentals. This paper shows how CEO emotional biases (optimism, loss aversion and overconfidence) affect investment decisions. The proposed model uses the Bayesian network method to examine this relationship. Emotional bias has been measured by means of a questionnaire comprising several items. As for the selected sample, it has been composed of some 100 Tunisian executives. Our results reveal that the behavioral analysis of investment decisions implies that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts investment choices based on his or her ability to assess alternatives (optimism and overconfidence) and risk perception (loss aversion) in order to create shareholder value and ensure his or her place at the head of the management team.

  20. The characterization of petroleum contamination in heterogenous media using partitioning tracer method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, E.; Rhee, S.; Park, J. [Seoul National Univ. (Korea, Republic of). Dept. of Civil and Environmental Engineering

    2009-07-01

    A partitioning tracer method for characterizing petroleum contamination in heterogenous media was discussed. The average saturation level of nonaqueous phase liquids (NAPLs) was calculated by comparing the transport of the partitioning tracers to a conservative tracer. The NAPL saturation level represented a continuous value throughout the contaminated site. Experiments were conducted in a 2-D sandbox divided into 4 parts using different-sized sands. Soils were contaminated with a mixture of kerosene and diesel. Partitioning tracer tests were conducted both before and after contamination. A partitioning batch test was conducted to determine the partition coefficient (K) of the tracer between the NAPL and water. Breakthrough curves were obtained, and a retardation factor (R) was calculated. Results of the study showed that the calculated NAPL saturation was in good agreement with determined values. It was concluded that the partitioning tracer test is an accurate method of locating and quantifying NAPLs.

  1. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples

  2. PREDICTION OF OCTANOL/WATER PARTITION COEFFICIENT OF SELECTED FERROCENE DERIVATIVES USING REKKER METHOD

    OpenAIRE

    R. Ahmedi; T. Lanez

    2015-01-01

    In this work we present a theoretical approach for the determination of octanol/water partition coefficient of selected ferrocenes bearing different substituents, the calculation is based on the adaptation of the Rekker method. Our prediction of obtained theoretical partition coefficients values of logP for all studied substituted ferrocene was confirmed by comparison with known experimental values obtained mainly from literature. The results obtained show that calculated partition coefficien...

  3. Bayesian data augmentation methods for the synthesis of qualitative and quantitative research findings

    OpenAIRE

    Crandell, Jamie L.; Voils, Corrine I.; Chang, YunKyung; Sandelowski, Margarete

    2011-01-01

    The possible utility of Bayesian methods for the synthesis of qualitative and quantitative research has been repeatedly suggested but insufficiently investigated. In this project, we developed and used a Bayesian method for synthesis, with the goal of identifying factors that influence adherence to HIV medication regimens. We investigated the effect of 10 factors on adherence. Recognizing that not all factors were examined in all studies, we considered standard methods for dealing with missin...

  4. Errata: A survey of Bayesian predictive methods for model assessment, selection and comparison

    Directory of Open Access Journals (Sweden)

    Aki Vehtari

    2014-03-01

    Full Text Available Errata for “A survey of Bayesian predictive methods for model assessment, selection and comparison” by A. Vehtari and J. Ojanen, Statistics Surveys, 6 (2012, 142–228. doi:10.1214/12-SS102.

  5. A Bayesian Assignment Method for Ambiguous Bisulfite Short Reads.

    Directory of Open Access Journals (Sweden)

    Hong Tran

    Full Text Available DNA methylation is an epigenetic modification critical for normal development and diseases. The determination of genome-wide DNA methylation at single-nucleotide resolution is made possible by sequencing bisulfite treated DNA with next generation high-throughput sequencing. However, aligning bisulfite short reads to a reference genome remains challenging as only a limited proportion of them (around 50-70% can be aligned uniquely; a significant proportion, known as multireads, are mapped to multiple locations and thus discarded from downstream analyses, causing financial waste and biased methylation inference. To address this issue, we develop a Bayesian model that assigns multireads to their most likely locations based on the posterior probability derived from information hidden in uniquely aligned reads. Analyses of both simulated data and real hairpin bisulfite sequencing data show that our method can effectively assign approximately 70% of the multireads to their best locations with up to 90% accuracy, leading to a significant increase in the overall mapping efficiency. Moreover, the assignment model shows robust performance with low coverage depth, making it particularly attractive considering the prohibitive cost of bisulfite sequencing. Additionally, results show that longer reads help improve the performance of the assignment model. The assignment model is also robust to varying degrees of methylation and varying sequencing error rates. Finally, incorporating prior knowledge on mutation rate and context specific methylation level into the assignment model increases inference accuracy. The assignment model is implemented in the BAM-ABS package and freely available at https://github.com/zhanglabvt/BAM_ABS.

  6. An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers

    OpenAIRE

    Chen, S.; Hanzo, L.

    2000-01-01

    An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.

  7. Symplectic Partitioned Runge-Kutta Methods with Minimum Phase Lag - Case of 5 Stages

    International Nuclear Information System (INIS)

    In this work we consider explicit Symplectic Partitioned Runge-Kutta methods (SPRK) with five stages for problems with separable Hamiltonian. We construct a new method with constant coefficients third algebraic order and eighth phase-lag order.

  8. Application of Bayesian methods to Dark Matter searches with XENON100

    International Nuclear Information System (INIS)

    The XENON100 experiment located in the LNGS Underground Lab in Italy, aims at the direct detection of WIMP dark matter (DM). It is currently the most sensitive detector for spin-independent WIMP-nucleus interaction. The DM analysis of XENON100 data is currently performed with a profile likelihood method after several cuts and data selection methods have been applied. A different model for the statistical analysis of data is the Bayesian interpretation. In the Bayesian approach to probability a prior probability (state of knowledge) is defined and updated for new sets of data to reject or accept a hypothesis. As an alternative approach a framework is being developed to implement Bayesian reasoning in the analysis. For this task the ''Bayesian Analysis Toolkit (BAT)'' will be used. Different models have to be implemented to identify background and (if there is a discovery) signal. We report on the current status of this work.

  9. Updating reliability data using feedback analysis: feasibility of a Bayesian subjective method

    International Nuclear Information System (INIS)

    For years, EDF has used Probabilistic Safety Assessment to evaluate a global indicator of the safety of its nuclear power plants and to optimize the performance while ensuring a certain safety level. Therefore, robustness and relevancy of PSA are very important. That is the reason why EDF wants to improve the relevancy of the reliability parameters used in these models. This article aims to propose a Bayesian approach to build PSA parameters when feedback data is not large enough to use the frequentist method. Our method is called subjective because its purpose is to give engineers pragmatic criteria to apply Bayesian in a controlled and consistent way. Using Bayesian is quite common for example in the United States, because the nuclear power plants are less standardized. Bayesian is often used with generic data as prior. So we have to adapt the general methodology within EDF context. (authors)

  10. Review of bayesian statistical analysis methods for cytogenetic radiation biodosimetry, with a practical example

    International Nuclear Information System (INIS)

    Classical methods of assessing the uncertainty associated with radiation doses estimated using cytogenetic techniques are now extremely well defined. However, several authors have suggested that a Bayesian approach to uncertainty estimation may be more suitable for cytogenetic data, which are inherently stochastic in nature. The Bayesian analysis framework focuses on identification of probability distributions (for yield of aberrations or estimated dose), which also means that uncertainty is an intrinsic part of the analysis, rather than an 'afterthought'. In this paper Bayesian, as well as some more advanced classical, data analysis methods for radiation cytogenetics are reviewed that have been proposed in the literature. A practical overview of Bayesian cytogenetic dose estimation is also presented, with worked examples from the literature. (authors)

  11. Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.

    Science.gov (United States)

    Reyes Santos, Joost; Haimes, Yacov Y

    2004-06-01

    The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1992 Nobel Prize in Economics. A typical approach in measuring a portfolio's expected return is based on the historical returns of the assets included in a portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that on October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE) are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of the possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver-a software that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model

  12. Bayesian Methods for Neural Networks and Related Models

    OpenAIRE

    Titterington, D.M.

    2004-01-01

    Models such as feed-forward neural networks and certain other structures investigated in the computer science literature are not amenable to closed-form Bayesian analysis. The paper reviews the various approaches taken to overcome this difficulty, involving the use of Gaussian approximations, Markov chain Monte Carlo simulation routines and a class of non-Gaussian but “deterministic” approximations called variational approximations.

  13. Construction of symplectic (partitioned) Runge-Kutta methods with continuous stage

    OpenAIRE

    Tang, Wensheng; Lang, Guangming; Luo, Xuqiong

    2015-01-01

    Hamiltonian systems are one of the most important class of dynamical systems with a geometric structure called symplecticity and the numerical algorithms which can preserve such geometric structure are of interest. In this article we study the construction of symplectic (partitioned) Runge-Kutta methods with continuous stage, which provides a new and simple way to construct symplectic (partitioned) Runge-Kutta methods in classical sense. This line of construction of symplectic methods relies ...

  14. Bayesian network modeling method based on case reasoning for emergency decision-making

    Directory of Open Access Journals (Sweden)

    XU Lei

    2013-06-01

    Full Text Available Bayesian network has the abilities of probability expression, uncertainty management and multi-information fusion.It can support emergency decision-making, which can improve the efficiency of decision-making.Emergency decision-making is highly time sensitive, which requires shortening the Bayesian Network modeling time as far as possible.Traditional Bayesian network modeling methods are clearly unable to meet that requirement.Thus, a Bayesian network modeling method based on case reasoning for emergency decision-making is proposed.The method can obtain optional cases through case matching by the functions of similarity degree and deviation degree.Then,new Bayesian network can be built through case adjustment by case merging and pruning.An example is presented to illustrate and test the proposed method.The result shows that the method does not have a huge search space or need sample data.The only requirement is the collection of expert knowledge and historical case models.Compared with traditional methods, the proposed method can reuse historical case models, which can reduce the modeling time and improve the efficiency.

  15. A survey of Bayesian predictive methods for model assessment, selection and comparison

    Directory of Open Access Journals (Sweden)

    Aki Vehtari

    2012-01-01

    Full Text Available To date, several methods exist in the statistical literature formodel assessment, which purport themselves specifically as Bayesian predictive methods. The decision theoretic assumptions on which these methodsare based are not always clearly stated in the original articles, however.The aim of this survey is to provide a unified review of Bayesian predictivemodel assessment and selection methods, and of methods closely related tothem. We review the various assumptions that are made in this context anddiscuss the connections between different approaches, with an emphasis onhow each method approximates the expected utility of using a Bayesianmodel for the purpose of predicting future data.

  16. Hardware-software partitioning for the design of system on chip by neural network optimization method

    Science.gov (United States)

    Pan, Zhongliang; Li, Wei; Shao, Qingyi; Chen, Ling

    2011-12-01

    In the design procedure of system on chip (SoC), it is needed to make use of hardware-software co-design technique owing to the great complexity of SoC. One of main steps in hardware-software co-design is how to carry out the partitioning of a system into hardware and software components. The efficient approaches for hardware-software partitioning can achieve good system performance, which is superior to the techniques that use software only or use hardware only. In this paper, a method based on neural networks is presented for the hardware-software partitioning of system on chip. The discrete Hopfield neural networks corresponding to the problem of hardware-software partitioning is built, the states of neural neurons are able to represent whether the required components or functionalities are to be implemented in hardware or software. An algorithm based on the principle of simulated annealing is designed, which can be used to compute the minimal energy states of neural networks, therefore the optimal partitioning schemes are obtained. The experimental results show that the hardware-software partitioning method proposed in this paper can obtain the near optimal partitioning for a lot of example circuits.

  17. Analyzing bioassay data using Bayesian methods-A primer

    International Nuclear Information System (INIS)

    The classical statistics approach used in health physics for the interpretation of measurements is deficient in that it does not allow for the consideration of needle in a haystack effects, where events that are rare in a population are being detected. In fact, this is often the case in health physics measurements, and the false positive fraction is often very large using the prescriptions of classical statistics. Bayesian statistics provides an objective methodology to ensure acceptably small false positive fractions. The authors present the basic methodology and a heuristic discussion. Examples are given using numerically generated and real bioassay data (Tritium). Various analytical models are used to fit the prior probability distribution, in order to test the sensitivity to choice of model. Parametric studies show that the normalized Bayesian decision level kα-Lc/σ0, where σ0 is the measurement uncertainty for zero true amount, is usually in the range from 3 to 5 depending on the true positive rate. Four times σ0 rather than approximately two times σ0, as in classical statistics, would often seem a better choice for the decision level

  18. Bayesian methods for the conformational classification of eight-membered rings

    DEFF Research Database (Denmark)

    Pérez, J.; Nolsøe, Kim; Kessler, M.;

    2005-01-01

    Two methods for the classification of eight-membered rings based on a Bayesian analysis are presented. The two methods share the same probabilistic model for the measurement of torsion angles, but while the first method uses the canonical forms of cyclooctane and, given an empirical sequence of e...... Structural Database (CSD)....

  19. Recursive method for the Nekrasov partition function for classical Lie groups

    International Nuclear Information System (INIS)

    The Nekrasov partition function for supersymmetric gauge theories with general Lie groups is, so far, not known in a closed form, while there is a definition in terms of the integral. In this paper, as an intermediate step to derive the closed form, we give a recursion formula among partition functions, which can be derived from the integral. We apply the method to a toy model that reflects the basic structure of partition functions for BCD-type Lie groups and obtain a closed expression for the factor associated with the generalized Young diagram

  20. Bayesian Belief Network Method for Predicting Asphaltene Precipitation in Light Oil Reservoirs

    Directory of Open Access Journals (Sweden)

    Jeffrey O. Oseh (M.Sc.

    2015-04-01

    Full Text Available Asphaltene precipitation is caused by a number of factors including changes in pressure, temperature, and composition. The two most prevalent causes of asphaltene precipitation in light oil reservoirs are decreasing pressure and mixing oil with injected solvent in improved oil recovery processes. This study focused on predicting the amount of asphaltene precipitation with increasing Gas-Oil Ratio in a light oil reservoir using Bayesian Belief Network Method. These Artificial Intelligence-Bayesian Belief Network Method employed were validated and tested by unseen data to determine their accuracy and trend stability and were also compared with the findings obtained from Scaling equations. The obtained Bayesian Belief Network results indicated that the method showed an improved performance of predicting the amount of asphaltene precipitated in light oil reservoirs thus reducing the number of experiments required.

  1. CEO Emotional Intelligence and Firms’ Financial Policies. Bayesian Network Method

    Directory of Open Access Journals (Sweden)

    Mohamed Ali Azouzi

    2014-03-01

    Full Text Available The aim of this paper is to explore the determinants of firms’ financial policies according to the manager’s psychological characteristics. More specifically, it examines the links between emotional intelligence, decision biases and the effectiveness of firms’ financial policies. The article finds that the main cause of an organization’s problems is the CEO’s emotional intelligence level. We introduce an approach based on Bayesian network techniques with a series of semi-directive interviews. The research paper represents an original approach because it characterizes behavioral corporate policy choices in emerging markets. To the best of our knowledge, this is the first study in the Tunisian context to explore this area of research. Our results show that Tunisian leaders adjust their decisions (on investments and distributions to minimize the risk of loss of compensation or reputation. They opt for decisions that minimize agency costs, transaction costs, and cognitive costs.

  2. Comparison between standard unfolding and Bayesian methods in Bonner spheres neutron spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Medkour Ishak-Boushaki, G., E-mail: gmedkour@yahoo.com [Laboratoire SNIRM-Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumediene, BP 32 El-Alia BabEzzouar, Algiers (Algeria); Allab, M. [Laboratoire SNIRM-Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumediene, BP 32 El-Alia BabEzzouar, Algiers (Algeria)

    2012-10-11

    This paper compares the use of both standard unfolding and Bayesian methods to analyze data extracted from neutron spectrometric measurements with a view to deriving some integral quantities characterizing a neutron field. We consider, as an example, the determination of the total neutron fluence and dose in the vicinity of an Am-Be source from Bonner spheres measurements. It is shown that the Bayesian analysis provides a rigorous estimation of these quantities and their correlated uncertainties and overcomes difficulties encountered in the standard unfolding methods.

  3. Comparison between standard unfolding and Bayesian methods in Bonner spheres neutron spectrometry

    International Nuclear Information System (INIS)

    This paper compares the use of both standard unfolding and Bayesian methods to analyze data extracted from neutron spectrometric measurements with a view to deriving some integral quantities characterizing a neutron field. We consider, as an example, the determination of the total neutron fluence and dose in the vicinity of an Am–Be source from Bonner spheres measurements. It is shown that the Bayesian analysis provides a rigorous estimation of these quantities and their correlated uncertainties and overcomes difficulties encountered in the standard unfolding methods.

  4. An overview of component qualification using Bayesian statistics and energy methods.

    Energy Technology Data Exchange (ETDEWEB)

    Dohner, Jeffrey Lynn

    2011-09-01

    The below overview is designed to give the reader a limited understanding of Bayesian and Maximum Likelihood (MLE) estimation; a basic understanding of some of the mathematical tools to evaluate the quality of an estimation; an introduction to energy methods and a limited discussion of damage potential. This discussion then goes on to presented a limited presentation as to how energy methods and Bayesian estimation are used together to qualify components. Example problems with solutions have been supplied as a learning aid. Bold letters are used to represent random variables. Un-bolded letter represent deterministic values. A concluding section presents a discussion of attributes and concerns.

  5. A novel Bayesian imaging method for probabilistic delamination detection of composite materials

    International Nuclear Information System (INIS)

    A probabilistic framework for location and size determination for delamination in carbon–carbon composites is proposed in this paper. A probability image of delaminated area using Lamb wave-based damage detection features is constructed with the Bayesian updating technique. First, the algorithm for the probabilistic delamination detection framework using the proposed Bayesian imaging method (BIM) is presented. Next, a fatigue testing setup for carbon–carbon composite coupons is described. The Lamb wave-based diagnostic signal is then interpreted and processed. Next, the obtained signal features are incorporated in the Bayesian imaging method for delamination size and location detection, as well as the corresponding uncertainty bounds prediction. The damage detection results using the proposed methodology are compared with x-ray images for verification and validation. Finally, some conclusions are drawn and suggestions made for future works based on the study presented in this paper. (paper)

  6. Localized operator partitioning method for electronic excitation energies in the time-dependent density functional formalism

    CERN Document Server

    Nagesh, Jayashree; Brumer, Paul; Izmaylov, Artur F

    2016-01-01

    We extend the localized operator partitioning method (LOPM) [J. Nagesh, A.F. Izmaylov, and P. Brumer, J. Chem. Phys. 142, 084114 (2015)] to the time-dependent density functional theory (TD-DFT) framework to partition molecular electronic energies of excited states in a rigorous manner. A molecular fragment is defined as a collection of atoms using Stratman-Scuseria-Frisch atomic partitioning. A numerically efficient scheme for evaluating the fragment excitation energy is derived employing a resolution of the identity to preserve standard one- and two-electron integrals in the final expressions. The utility of this partitioning approach is demonstrated by examining several excited states of two bichromophoric compounds: 9-((1-naphthyl)-methyl)-anthracene and 4-((2-naphthyl)-methyl)-benzaldehyde. The LOPM is found to provide nontrivial insights into the nature of electronic energy localization that are not accessible using simple density difference analysis.

  7. PREDICTION OF OCTANOL/WATER PARTITION COEFFICIENT OF SELECTED FERROCENE DERIVATIVES USING REKKER METHOD

    Directory of Open Access Journals (Sweden)

    R. Ahmedi

    2015-07-01

    Full Text Available In this work we present a theoretical approach for the determination of octanol/water partition coefficient of selected ferrocenes bearing different substituents, the calculation is based on the adaptation of the Rekker method. Our prediction of obtained theoretical partition coefficients values of logP for all studied substituted ferrocene was confirmed by comparison with known experimental values obtained mainly from literature. The results obtained show that calculated partition coefficients are in good agreement with experimental values. For estimation of the octanol/water partition coefficients of the selected compounds, the average absolute error of log P is 0.13, and The correlation coefficient is  R2 = 0.966.

  8. A method for partitioning cadmium bioaccumulated in small aquatic organisms

    Energy Technology Data Exchange (ETDEWEB)

    Siriwardena, S.N.; Rana, K.J.; Baird, D.J. [Univ. of Stirling (United Kingdom). Institute of Aquaculture

    1995-09-01

    A series of laboratory experiments was conducted to evaluate bioaccumulation and surface adsorption of aqueous cadmium (Cd) by sac-fry of the African tilapia Oreochromis niloticus. In the first experiment, the design consisted of two cadmium treatments: 15 {micro}g Cd{center_dot}L{sup {minus}1} in dilution water and a Cd-ethylenediaminetetraacetic acid (Cd-EDTA) complex at 15 {micro}m{center_dot}L{sup {minus}1}, and a water-only control. There were five replicates per treatment and 40 fish per replicate. It was found that EDTA significantly reduced the bioaccumulation of cadmium by tilapia sac-fry by 34%. Based on the results, a second experiment was conducted to evaluate four procedures: a no-rinse control; rinsing in EDTA; rinsing in distilled water; and rinsing in 5% nitric acid, for removing surface-bound Cd from exposed sac-fry. In this experiment, 30 fish in each of five replicates were exposed to 15 {micro}g Cd{center_dot}L{sup {minus}1} for 72 h, processed through the rinse procedures, and analyzed for total Cd. The EDTA rinse treatment significantly reduced (p<0.05) Cd concentrations of the exposed fish relative to those receiving no rinse. It was concluded that the EDTA rinse technique may be useful in studies evaluating the partitioning of surface-bound and accumulated cadmium in small aquatic organisms.

  9. Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Ettler, P.

    Portugalsko: INSTICC – Institute for Systems and Technologies of Information, Control and Communication, 2013 - (Ferrier, Gusikhin, Madani, Sasiadek), s. 380-387 ISBN 978-989-8565-70-9. [10th international conference on informatics in control, automation and robotics (ICINCO 2013). Reykjavík (IS), 29.07.2013-31.07.2013] R&D Projects: GA MŠk 7D12004 Institutional support: RVO:67985556 Keywords : statistical analysis * Bayesian analysis * Truncated distributions Subject RIV: IN - Informatics, Computer Science http://library.utia.cas.cz/separaty/2013/AS/dedecius-overview of bounded support distributions and methods for bayesian treatment of industrial data.pdf

  10. Finding the Most Distant Quasars Using Bayesian Selection Methods

    CERN Document Server

    Mortlock, Daniel

    2014-01-01

    Quasars, the brightly glowing disks of material that can form around the super-massive black holes at the centres of large galaxies, are amongst the most luminous astronomical objects known and so can be seen at great distances. The most distant known quasars are seen as they were when the Universe was less than a billion years old (i.e., $\\sim\\!7%$ of its current age). Such distant quasars are, however, very rare, and so are difficult to distinguish from the billions of other comparably-bright sources in the night sky. In searching for the most distant quasars in a recent astronomical sky survey (the UKIRT Infrared Deep Sky Survey, UKIDSS), there were $\\sim\\!10^3$ apparently plausible candidates for each expected quasar, far too many to reobserve with other telescopes. The solution to this problem was to apply Bayesian model comparison, making models of the quasar population and the dominant contaminating population (Galactic stars) to utilise the information content in the survey measurements. The result wa...

  11. The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2011-01-01

    This paper presents the Relevance VoxelMachine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to...

  12. A General and Flexible Approach to Estimating the Social Relations Model Using Bayesian Methods

    Science.gov (United States)

    Ludtke, Oliver; Robitzsch, Alexander; Kenny, David A.; Trautwein, Ulrich

    2013-01-01

    The social relations model (SRM) is a conceptual, methodological, and analytical approach that is widely used to examine dyadic behaviors and interpersonal perception within groups. This article introduces a general and flexible approach to estimating the parameters of the SRM that is based on Bayesian methods using Markov chain Monte Carlo…

  13. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming; Fan, Zhun; Li, Youfeng; Feng, Shan

    2010-01-01

    Modular neural network is a popular neural network model which has many successful applications. In this paper, a sequential Bayesian learning (SBL) is proposed for modular neural networks aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight ...... benchmark problems have demonstrated that the proposed method can perform information aggregation efficiently in data modeling....

  14. An efficient implementation of the localized operator partitioning method for electronic energy transfer

    CERN Document Server

    Nagesh, Jayashree; Brumer, Paul

    2014-01-01

    The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning we reformulate the method in a computationally e?cient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)-methyl)-anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH2-naphthyl groups as subsystems, and examine their electronic energies and populations for several excited states using Configuration Interaction Singles method. The implemented approach shows a wide variety o...

  15. Landslide hazards mapping using uncertain Naïve Bayesian classification method

    Institute of Scientific and Technical Information of China (English)

    毛伊敏; 张茂省; 王根龙; 孙萍萍

    2015-01-01

    Landslide hazard mapping is a fundamental tool for disaster management activities in Loess terrains. Aiming at major issues with these landslide hazard assessment methods based on Naïve Bayesian classification technique, which is difficult in quantifying those uncertain triggering factors, the main purpose of this work is to evaluate the predictive power of landslide spatial models based on uncertain Naïve Bayesian classification method in Baota district of Yan’an city in Shaanxi province, China. Firstly, thematic maps representing various factors that are related to landslide activity were generated. Secondly, by using field data and GIS techniques, a landslide hazard map was performed. To improve the accuracy of the resulting landslide hazard map, the strategies were designed, which quantified the uncertain triggering factor to design landslide spatial models based on uncertain Naïve Bayesian classification method named NBU algorithm. The accuracies of the area under relative operating characteristics curves (AUC) in NBU and Naïve Bayesian algorithm are 87.29%and 82.47%respectively. Thus, NBU algorithm can be used efficiently for landslide hazard analysis and might be widely used for the prediction of various spatial events based on uncertain classification technique.

  16. Bayesians in Space: Using Bayesian Methods to Inform Choice of Spatial Weights Matrix in Hedonic Property Analyses

    OpenAIRE

    Mueller, Julie M.; Loomis, John B.

    2010-01-01

    The choice of weights is a non-nested problem in most applied spatial econometric models. Despite numerous recent advances in spatial econometrics, the choice of spatial weights remains exogenously determined by the researcher in empirical applications. Bayesian techniques provide statistical evidence regarding the simultaneous choice of model specification and spatial weights matrices by using posterior probabilities. This paper demonstrates the Bayesian estimation approach in a spatial hedo...

  17. OPTIMAL ERROR ESTIMATES OF THE PARTITION OF UNITY METHOD WITH LOCAL POLYNOMIAL APPROXIMATION SPACES

    Institute of Scientific and Technical Information of China (English)

    Yun-qing Huang; Wei Li; Fang Su

    2006-01-01

    In this paper, we provide a theoretical analysis of the partition of unity finite element method(PUFEM), which belongs to the family of meshfree methods. The usual error analysis only shows the order of error estimate to the same as the local approximations[12].Using standard linear finite element base functions as partition of unity and polynomials as local approximation space, in 1-d case, we derive optimal order error estimates for PUFEM interpolants. Our analysis show that the error estimate is of one order higher than the local approximations. The interpolation error estimates yield optimal error estimates for PUFEM solutions of elliptic boundary value problems.

  18. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  19. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.

    Science.gov (United States)

    Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J

    2015-07-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564

  20. Comparasion of prediction and measurement methods for sound insulation of lightweight partitions

    Directory of Open Access Journals (Sweden)

    Praščević Momir

    2012-01-01

    Full Text Available It is important to know the sound insulation of partitions in order to be able to compare different constructions, calculate acoustic comfort in apartments or noise levels from outdoor sources such as road traffic, and find engineer optimum solutions to noise problems. The use of lightweight partitions as party walls between dwellings has become common because sound insulation requirements can be achieved with low overall surface weights. However, they need greater skill to design and construct, because the overall design is much more complex. It is also more difficult to predict and measure of sound transmission loss of lightweight partitions. There are various methods for predicting and measuring sound insulation of partitions and some of them will be described in this paper. Also, this paper presents a comparison of experimental results of the sound insulation of lightweight partitions with results obtained using different theoretical models for single homogenous panels and double panels with and without acoustic absorption in the cavity between the panels. [Projekat Ministarstva nauke Republike Srbije, br. TR-37020: Development of methodology and means for noise protection from urban areas i br. III-43014: Improvement of the monitoring system and the assessment of a long-term population exposure to pollutant substances in the environment using neural networks

  1. An Indoor Space Partition Method and its Fingerprint Positioning Optimization Considering Pedestrian Accessibility

    Science.gov (United States)

    Xu, Yue; Shi, Yong; Zheng, Xingyu; Long, Yi

    2016-06-01

    Fingerprint positioning method is generally the first choice in indoor navigation system due to its high accuracy and low cost. The accuracy depends on partition density to the indoor space. The accuracy will be higher with higher grid resolution. But the high grid resolution leads to significantly increasing work of the fingerprint data collection, processing and maintenance. This also might decrease the performance, portability and robustness of the navigation system. Meanwhile, traditional fingerprint positioning method use equational grid to partition the indoor space. While used for pedestrian navigation, sometimes a person can be located at the area where he or she cannot access. This paper studied these two issues, proposed a new indoor space partition method considering pedestrian accessibility, which can increase the accuracy of pedestrian position, and decrease the volume of the fingerprint data. Based on this proposed partition method, an optimized algorithm for fingerprint position was also designed. A across linker structure was used for fingerprint point index and matching. Experiment based on the proposed method and algorithm showed that the workload of fingerprint collection and maintenance were effectively decreased, and poisoning efficiency and accuracy was effectively increased

  2. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Jin; Yu, Yaming [Department of Statistics, University of California, Irvine, Irvine, CA 92697-1250 (United States); Van Dyk, David A. [Statistics Section, Imperial College London, Huxley Building, South Kensington Campus, London SW7 2AZ (United Kingdom); Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Connors, Alanna; Meng, Xiao-Li, E-mail: jinx@uci.edu, E-mail: yamingy@ics.uci.edu, E-mail: dvandyk@imperial.ac.uk, E-mail: vkashyap@cfa.harvard.edu, E-mail: asiemiginowska@cfa.harvard.edu, E-mail: jdrake@cfa.harvard.edu, E-mail: pratzlaff@cfa.harvard.edu, E-mail: meng@stat.harvard.edu [Department of Statistics, Harvard University, 1 Oxford Street, Cambridge, MA 02138 (United States)

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  3. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    International Nuclear Information System (INIS)

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.

  4. Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning

    OpenAIRE

    N D

    2009-01-01

    There has been a lot of recent work on Bayesian methods for reinforcement learning exhibiting near-optimal online performance. The main obstacle facing such methods is that in most problems of interest, the optimal solution involves planning in an infinitely large tree. However, it is possible to obtain stochastic lower and upper bounds on the value of each tree node. This enables us to use stochastic branch and bound algorithms to search the tree efficiently. This paper proposes two such alg...

  5. An Exact Method for Partitioning Dichotomous Items Within the Framework of the Monotone Homogeneity Model.

    Science.gov (United States)

    Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas

    2015-12-01

    The monotone homogeneity model (MHM-also known as the unidimensional monotone latent variable model) is a nonparametric IRT formulation that provides the underpinning for partitioning a collection of dichotomous items to form scales. Ellis (Psychometrika 79:303-316, 2014, doi: 10.1007/s11336-013-9341-5 ) has recently derived inequalities that are implied by the MHM, yet require only the bivariate (inter-item) correlations. In this paper, we incorporate these inequalities within a mathematical programming formulation for partitioning a set of dichotomous scale items. The objective criterion of the partitioning model is to produce clusters of maximum cardinality. The formulation is a binary integer linear program that can be solved exactly using commercial mathematical programming software. However, we have also developed a standalone branch-and-bound algorithm that produces globally optimal solutions. Simulation results and a numerical example are provided to demonstrate the proposed method. PMID:25850618

  6. A Family of Trigonometrically-fitted Partitioned Runge-Kutta Symplectic Methods

    International Nuclear Information System (INIS)

    We are presenting a family of trigonometrically fitted partitioned Runge-Kutta symplectic methods of fourth order with six stages. The solution of the one dimensional time independent Schroedinger equation is considered by trigonometrically fitted symplectic integrators. The Schroedinger equation is first transformed into a Hamiltonian canonical equation. Numerical results are obtained for the one-dimensional harmonic oscillator and the exponential potential

  7. Surveillance system and method having an operating mode partitioned fault classification model

    Science.gov (United States)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  8. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    Science.gov (United States)

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness to detect genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. ?? 2011 by the authors; licensee MDPI, Basel, Switzerland.

  9. Method for chemical amplification based on fluid partitioning in an immiscible liquid

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.

    2015-06-02

    A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.

  10. Using hierarchical Bayesian methods to examine the tools of decision-making

    Directory of Open Access Journals (Sweden)

    Michael D. Lee

    2011-12-01

    Full Text Available Hierarchical Bayesian methods offer a principled and comprehensive way to relate psychological models to data. Here we use them to model the patterns of information search, stopping and deciding in a simulated binary comparison judgment task. The simulation involves 20 subjects making 100 forced choice comparisons about the relative magnitudes of two objects (which of two German cities has more inhabitants. Two worked-examples show how hierarchical models can be developed to account for and explain the diversity of both search and stopping rules seen across the simulated individuals. We discuss how the results provide insight into current debates in the literature on heuristic decision making and argue that they demonstrate the power and flexibility of hierarchical Bayesian methods in modeling human decision-making.

  11. A Bayesian belief nets based quantification method of qualitative software reliability assessment for PSA

    International Nuclear Information System (INIS)

    Current reliability assessments of safety critical software embedded in the digital systems in nuclear power plants are based on the rule-based qualtitative assessment methods. But practical needs require the quantitative features of software reliability for Probabilistic Safety Assessment (PSA) that is one of important methods being used in assessing the whole safety of nuclear power plant. This paper discusses a Bayesian Belief Nets(BBN) based quantification method that models current qualitative software assessment in formal way and produces quantitative results required for PSA. Commercial Off-The-Shelf(COTS) software dedication process was applied to the discussed BBN based method for evaluating the plausibility of the method in PSA

  12. A Bayesian hybrid method for context-sensitive spelling correction

    CERN Document Server

    Golding, A R

    1996-01-01

    Two classes of methods have been shown to be useful for resolving lexical ambiguity. The first relies on the presence of particular words within some distance of the ambiguous target word; the second uses the pattern of words and part-of-speech tags around the target word. These methods have complementary coverage: the former captures the lexical ``atmosphere'' (discourse topic, tense, etc.), while the latter captures local syntax. Yarowsky has exploited this complementarity by combining the two methods using decision lists. The idea is to pool the evidence provided by the component methods, and to then solve a target problem by applying the single strongest piece of evidence, whatever type it happens to be. This paper takes Yarowsky's work as a starting point, applying decision lists to the problem of context-sensitive spelling correction. Decision lists are found, by and large, to outperform either component method. However, it is found that further improvements can be obtained by taking into account not ju...

  13. An Improved Approximate-Bayesian Model-choice Method for Estimating Shared Evolutionary History

    OpenAIRE

    Oaks, Jamie R.

    2014-01-01

    Background To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergences times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa...

  14. Discovering Emergent Behaviors from Tracks Using Hierarchical Non-parametric Bayesian Methods

    OpenAIRE

    Chiron, Guillaume; Gomez-Krämer, Petra; Ménard, Michel

    2014-01-01

    In video-surveillance, non-parametric Bayesian approaches based on a Hierarchical Dirichlet Process (HDP) have recently shown their efficiency for modeling crowded scene activities. This paper follows this track by proposing a method for detecting and clustering emergent behaviors across different captures made of numerous unconstrained trajectories. Most HDP applications for crowded scenes (e.g. traffic, pedestrians) are based on flow motion features. In contrast, we ...

  15. WHY ENTREPRENEUR OVERCONFIDENCE AFFECT ITS PROJECT FINANCIAL CAPABILITY: EVIDENCE FROM TUNISIA USING THE BAYESIAN NETWORK METHOD

    OpenAIRE

    Salima TAKTAK; AZOUZI Mohamed Ali; Triki, Mohamed

    2013-01-01

    This article discusses the effect of the entrepreneur's profile on financing his creative project. It analyzes the impact of overconfidence on improving perceptions of the financing capacity of the project. To analyze this relationship we used Bayesian networks as the data analysis method. Our sample is composed of 200 entrepreneurs. Our results show that a high level of entrepreneur's overconfidence positively affects the evaluation of the financing capacity of the project.

  16. Bayesian Belief Network Method for Predicting Asphaltene Precipitation in Light Oil Reservoirs

    OpenAIRE

    Jeffrey O. Oseh (M.Sc.); Olugbenga A. Falode (Ph.D)

    2015-01-01

    Asphaltene precipitation is caused by a number of factors including changes in pressure, temperature, and composition. The two most prevalent causes of asphaltene precipitation in light oil reservoirs are decreasing pressure and mixing oil with injected solvent in improved oil recovery processes. This study focused on predicting the amount of asphaltene precipitation with increasing Gas-Oil Ratio in a light oil reservoir using Bayesian Belief Network Method. These Artificial Intelligence-Baye...

  17. A New Method for E-Government Procurement Using Collaborative Filtering and Bayesian Approach

    OpenAIRE

    Shuai Zhang; Chengyu Xi; Yan Wang; Wenyu Zhang; Yanhong Chen

    2013-01-01

    Nowadays, as the Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and Bayesian approach are used to evaluate and select the candidate services to get the top-M re...

  18. Impact of Frequentist and Bayesian Methods on Survey Sampling Practice: A Selective Appraisal

    OpenAIRE

    Rao, J. N. K.

    2011-01-01

    According to Hansen, Madow and Tepping [J. Amer. Statist. Assoc. 78 (1983) 776-793], "Probability sampling designs and randomization inference are widely accepted as the standard approach in sample surveys." In this article, reasons are advanced for the wide use of this design-based approach, particularly by federal agencies and other survey organizations conducting complex large scale surveys on topics related to public policy. Impact of Bayesian methods in survey sampling is also discussed...

  19. The evaluation of the equilibrium partitioning method using sensitivity distributions of species in water and soil or sediment

    NARCIS (Netherlands)

    Beelen P van; Verbruggen EMJ; Peijnenburg WJGM; ECO

    2002-01-01

    The equilibrium partitioning method (EqP-method) can be used to derive environmental quality standards (like the Maximum Permissible Concentration or the intervention value) for soil or sediment, from aquatic toxicity data and a soil/water or sediment/water partitioning coefficient. The validity of

  20. Baltic sea algae analysis using Bayesian spatial statistics methods

    OpenAIRE

    Eglė Baltmiškytė; Kęstutis Dučinskas

    2013-01-01

    Spatial statistics is one of the fields in statistics dealing with the analysis of spatially distributed data. Recently, Bayesian methods have often been applied for statistical data analysis. A spatial data model for predicting algae quantity in the Baltic Sea is made and described in this article. Black Carrageen is a dependent variable and depth, sand, pebble, boulders are independent variables in the described model. Two models with different covariation functions (Gaussian and exponential) are built to estima...

  1. Distinguishing niche and neutral processes: Issues in variation partitioning statistical methods and further perspectives

    Directory of Open Access Journals (Sweden)

    Youhua Chen

    2015-06-01

    Full Text Available Variance partitioning methods, which are built upon multivariate statistics, have been widely applied to different taxa and habitats in community ecology. Here, I performed a literature review on the development and application of the methods, and then discussed the limitations of available methods and the difficulties involved in sampling schemes. The central goal of the work is then to propose some potential practical methods that might help to overcome different issues of traditional least-square-based regression modeling. A variety of regression models has been considered for comparison. In initial simulations, I identified that the generalized additive model (GAM) has the highest accuracy in predicting variation components. Therefore, I argued that other advanced regression techniques, including the GAM and related models, could be utilized in variation partitioning for better quantifying the aggregation scenarios of species distribution.
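
    For reference, the classical least-squares variation partitioning that the paper starts from can be sketched as follows (simulated environmental and spatial predictors; adjusted R² and the GAM extension are omitted):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Simulated community response driven by environment (E), space (S), noise.
n = 300
E = rng.normal(size=(n, 2))          # environmental predictors
S = rng.normal(size=(n, 2))          # spatial predictors (e.g., MEM axes)
y = E @ [1.0, 0.5] + S @ [0.8, 0.0] + rng.normal(scale=1.0, size=n)

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

ab  = r2(E, y)                       # [a] + [b]: environment incl. overlap
bc  = r2(S, y)                       # [b] + [c]: space incl. overlap
abc = r2(np.hstack([E, S]), y)       # [a] + [b] + [c]: full model

a = abc - bc                         # pure environmental fraction
c = abc - ab                         # pure spatial fraction
b = ab + bc - abc                    # shared fraction
print(f"[a]={a:.2f} [b]={b:.2f} [c]={c:.2f} residual={1 - abc:.2f}")
```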

  2. Computationally intensive methods in Bayesian model-structure identification

    Czech Academy of Sciences Publication Activity Database

    Tesař, Ludvík

    Adelaide: Advanced Knowledge International, 2004 - (Andrýsek, J.; Kárný, M.; Kracík, J.), s. 75-79. (International Series on Advanced Intelligence; 9). ISBN 0-9751004-5-9. [Workshop on Computer-Intensive Methods in Control and Data Processing 2004. Prague (CZ), 12.05.2004-14.05.2004] R&D Projects: GA ČR GA102/03/0049; GA AV ČR IBS1075351 Institutional research plan: CEZ:AV0Z1075907 Keywords: structure identification * system identification * structure estimation Subject RIV: BD - Theory of Information

  3. Blending Bayesian and frequentist methods according to the precision of prior information with an application to hypothesis testing

    CERN Document Server

    Bickel, David R

    2011-01-01

    The following zero-sum game between nature and a statistician blends Bayesian methods with frequentist methods such as p-values and confidence intervals. Nature chooses a posterior distribution consistent with a set of possible priors. At the same time, the statistician selects a parameter distribution for inference with the goal of maximizing the minimum Kullback-Leibler information gained over a confidence distribution or other benchmark distribution. An application to testing a simple null hypothesis leads the statistician to report a posterior probability of the hypothesis that is informed by both Bayesian and frequentist methodology, each weighted according to how well the prior is known. Since neither the Bayesian approach nor the frequentist approach is entirely satisfactory in situations involving partial knowledge of the prior distribution, the proposed procedure reduces to a Bayesian method given complete knowledge of the prior, to a frequentist method given complete ignorance about the prior, and to a...

  4. Liver segmentation in MRI: a fully automatic method based on stochastic partitions

    OpenAIRE

    López-Mir, Fernando; Naranjo Ornedo, Valeriana; Angulo, J.; Alcañiz Raya, Mariano Luis; Luna, L.

    2014-01-01

    There are few fully automated methods for liver segmentation in magnetic resonance images (MRI) despite the benefits of this type of acquisition in comparison to other radiology techniques such as computed tomography (CT). Motivated by medical requirements, liver segmentation in MRI has been carried out. For this purpose, we present a new method for liver segmentation based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced using a marke...

  5. A generalized bayesian inference method for constraining the interiors of super Earths and sub-Neptunes

    CERN Document Server

    Dorn, C; Khan, A; Heng, K; Alibert, Y; Helled, R; Rivoldini, A; Benz, W

    2016-01-01

    We aim to present a generalized Bayesian inference method for constraining interiors of super Earths and sub-Neptunes. Our methodology succeeds in quantifying the degeneracy and correlation of structural parameters for high dimensional parameter spaces. Specifically, we identify what constraints can be placed on composition and thickness of core, mantle, ice, ocean, and atmospheric layers given observations of mass, radius, and bulk refractory abundance constraints (Fe, Mg, Si) from observations of the host star's photospheric composition. We employed a full probabilistic Bayesian inference analysis that formally accounts for observational and model uncertainties. Using a Markov chain Monte Carlo technique, we computed joint and marginal posterior probability distributions for all structural parameters of interest. We included state-of-the-art structural models based on self-consistent thermodynamics of core, mantle, high-pressure ice, and liquid water. Furthermore, we tested and compared two different atmosp...

  6. Bayesian Blocks, A New Method to Analyze Structure in Photon Counting Data

    CERN Document Server

    Scargle, J D

    1997-01-01

    I describe a new time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes, and generally characterizing intensity variations. The input is raw counting data, in any of three forms: time-tagged photon events (TTE), binned counts, or time-to-spill (TTS) data. The output is the most likely segmentation of the observation into time intervals during which the photon arrival rate is perceptibly constant -- i.e. has a fixed intensity without statistically significant variations. Since the analysis is based on Bayesian statistics, I call the resulting structures Bayesian Blocks. Unlike most, this method does not stipulate time bins -- instead the data themselves determine a piecewise constant representation. Therefore the analysis procedure itself does not impose a lower limit to the time scale on which variability can be detected. Locations, amplitudes, and rise and decay times of pulses within a time series can be estimated, independent of any pulse-shape model -- but only if they d...
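
    The algorithm is available in astropy (astropy.stats.bayesian_blocks); a hedged usage sketch on simulated time-tagged events, with an assumed false-alarm parameter p0, might look like this:

```python
import numpy as np
from astropy.stats import bayesian_blocks  # Scargle-style implementation

rng = np.random.default_rng(2)

# Simulated time-tagged events: a quiescent rate with a burst in [5, 6).
quiet = rng.uniform(0, 10, 200)
burst = rng.uniform(5, 6, 150)
t = np.sort(np.concatenate([quiet, burst]))

# The routine returns the edges of the optimal piecewise-constant
# segmentation -- no time bins are imposed on the data.
edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(edges)   # change points should bracket the burst near t=5 and t=6
```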

  7. Bayesian approach to color-difference models based on threshold and constant-stimuli methods.

    Science.gov (United States)

    Brusola, Fernando; Tortajada, Ignacio; Lengua, Ismael; Jordá, Begoña; Peris, Guillermo

    2015-06-15

    An alternative approach based on statistical Bayesian inference is presented to deal with the development of color-difference models and the precision of parameter estimation. The approach was applied to simulated data and real data, the latter published by selected authors involved with the development of color-difference formulae using traditional methods. Our results show very good agreement between the Bayesian and classical approaches. Among other benefits, our proposed methodology allows one to determine the marginal posterior distribution of each random individual parameter of the color-difference model. In this manner, it is possible to analyze the effect of individual parameters on the statistical significance calculation of a color-difference equation. PMID:26193510

  8. A New Method for E-Government Procurement Using Collaborative Filtering and Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2013-01-01

    Full Text Available Nowadays, as the Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain and static values but can easily be represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.
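
    One common trapezoidal fuzzy number similarity measure (a simple vertex-based definition from the literature; the paper's exact algorithm may differ) can be sketched as:

```python
# A trapezoidal fuzzy number is given by its four vertices (a1, a2, a3, a4),
# here normalized to [0, 1].

def trapezoid_similarity(A, B):
    """S(A, B) = 1 - mean absolute difference of the four vertices;
    S = 1 means the two fuzzy numbers are identical."""
    return 1.0 - sum(abs(a - b) for a, b in zip(A, B)) / 4.0

# Example: a service attribute such as "delivery time" rated as fuzzy values.
required = (0.6, 0.7, 0.8, 0.9)
offered  = (0.5, 0.6, 0.8, 1.0)
print(trapezoid_similarity(required, offered))   # -> 0.925
```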

  9. A new method for E-government procurement using collaborative filtering and Bayesian approach.

    Science.gov (United States)

    Zhang, Shuai; Xi, Chengyu; Wang, Yan; Zhang, Wenyu; Chen, Yanhong

    2013-01-01

    Nowadays, as the Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain and static values but can easily be represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach. PMID:24385869

  10. A method for calculating Bayesian uncertainties on internal doses resulting from complex occupational exposures

    International Nuclear Information System (INIS)

    Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and the 0.025 and 0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)

  11. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    Science.gov (United States)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so that the underlying dynamics of a physical system are modelled precisely. In this situation, Bayesian inference is a powerful tool to combine observed data with prior knowledge to gain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to the observable data should be found.
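
    A plain (non-sequential) ABC rejection sketch conveys the core idea; the forward model below is an invented toy decay profile, not OLAD dispersion physics, and the paper's S-ABC adds sequential populations on top of this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model (an assumption): sensors at fixed positions read
# q * exp(-|x - x0|) plus noise, for source location x0 and rate q.
sensors = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
def forward(x0, q):
    return q * np.exp(-np.abs(sensors - x0))

x0_true, q_true = 3.2, 5.0
data = forward(x0_true, q_true) + rng.normal(scale=0.1, size=sensors.size)

# ABC rejection: draw from the prior, simulate, keep draws whose
# simulated output is within eps of the observed data.
n_draw, eps = 200_000, 0.5
x0s = rng.uniform(0, 10, n_draw)
qs = rng.uniform(0, 10, n_draw)
sims = qs[:, None] * np.exp(-np.abs(sensors[None, :] - x0s[:, None]))
dist = np.linalg.norm(sims - data, axis=1)
keep = dist < eps
print(f"accepted {keep.sum()} draws; "
      f"x0 ~ {x0s[keep].mean():.2f}, q ~ {qs[keep].mean():.2f}")
```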

  12. A novel method to augment extraction of mangiferin by application of microwave on three phase partitioning

    Directory of Open Access Journals (Sweden)

    Vrushali M. Kulkarni

    2015-06-01

    Full Text Available This work reports a novel approach in which three phase partitioning (TPP) was combined with microwave irradiation for extraction of mangiferin from leaves of Mangifera indica. Soxhlet extraction was used as the reference method, which yielded 57 mg/g in 5 h. Under optimal conditions, namely microwave irradiation time 5 min, ammonium sulphate concentration 40% w/v, power 272 W, solute to solvent ratio 1:20, slurry to t-butanol ratio 1:1, soaking time 5 min and duty cycle 50%, the mangiferin yield obtained was 54 mg/g by microwave assisted three phase partitioning extraction (MTPP). The extraction method developed thus gave a high extraction yield in a much shorter time, making it an interesting alternative prior to down-stream processing.

  13. Probabilistic divide-and-conquer: a new exact simulation method, with integer partitions as an example

    OpenAIRE

    Arratia, Richard; DeSalvo, Stephen

    2011-01-01

    We propose a new method, probabilistic divide-and-conquer, for improving the success probability in rejection sampling. For the example of integer partitions, there is an ideal recursive scheme which improves the rejection cost from asymptotically order $n^{3/4}$ to a constant. We show other examples for which a non--recursive, one--time application of probabilistic divide-and-conquer removes a substantial fraction of the rejection sampling cost. We also present a variation of probabilistic d...
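
    The baseline that probabilistic divide-and-conquer improves on can be sketched directly: sample independent geometric multiplicities tilted so the expected total is about n, and accept only when the total hits n exactly, which yields a uniform random partition (the classic Fristedt/Boltzmann scheme):

```python
import math
import random

def random_partition(n, seed=None):
    """Uniform random partition of n by rejection sampling.
    Multiplicities Z_i ~ Geometric with P(Z_i = k) = (1 - x^i) * x^(i*k);
    conditioned on sum(i * Z_i) == n the result is uniform.  The tilt
    x = exp(-pi / sqrt(6 n)) makes the expected total roughly n."""
    rng = random.Random(seed)
    x = math.exp(-math.pi / math.sqrt(6 * n))
    while True:
        parts, total = [], 0
        for i in range(1, n + 1):
            z = 0
            while rng.random() < x ** i:   # geometric trials for part size i
                z += 1
            parts += [i] * z
            total += i * z
            if total > n:                  # overshoot: restart early
                break
        if total == n:
            return sorted(parts, reverse=True)

print(random_partition(30, seed=7))
```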

  14. An efficient implementation of the localized operator partitioning method for electronic energy transfer

    Energy Technology Data Exchange (ETDEWEB)

    Nagesh, Jayashree; Brumer, Paul [Chemical Physics Theory Group, Department of Chemistry, University of Toronto, Toronto, Ontario M5S 3H6 (Canada); Izmaylov, Artur F. [Chemical Physics Theory Group, Department of Chemistry, University of Toronto, Toronto, Ontario M5S 3H6 (Canada); Department of Physical and Environmental Sciences, University of Toronto, Scarborough, Toronto, Ontario M1C 1A4 (Canada)

    2015-02-28

    The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning, we reformulate the method in a computationally efficient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)-methyl)-anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH{sub 2}-naphthyl groups as subsystems, and examine their electronic energies and populations for several excited states using the configuration interaction singles method. The implemented approach shows a wide variety of different behaviors amongst the excited electronic states.

  15. Bayesian inference for data assimilation using Least-Squares Finite Element methods

    International Nuclear Information System (INIS)

    It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow. The approach was shown to be effective without regularization terms, and can handle substantial noise in the experimental data without filtering. Of great practical importance is that - unlike other data assimilation techniques - it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse problem framework, so that, for example, the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise, and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code do not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. Also, the Bayesian interpretation better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.
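
    A toy analogue of the idea, with finite differences standing in for FEM and an arbitrary observation weight, is to stack PDE-residual rows and observation rows into one least-squares solve:

```python
import numpy as np

# Assimilate two pointwise observations into -u'' = f on (0,1),
# u(0) = u(1) = 0, via one least-squares problem over
# [PDE residual; observation residual].
n, h = 49, 1.0 / 50
x = np.linspace(h, 1 - h, n)
f = np.ones(n)                                # source term

# Second-difference operator: the "governing equation" rows.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

# Observation operator H: noisy measurements of u at two interior nodes.
obs_idx = [12, 37]
H = np.zeros((2, n))
H[[0, 1], obs_idx] = 1.0
u_exact = 0.5 * x * (1 - x)                   # exact solution for f = 1
d = u_exact[obs_idx] + np.random.default_rng(4).normal(scale=1e-3, size=2)

w = 50.0                                      # observation weight (a choice)
lhs = np.vstack([A, w * H])
rhs = np.concatenate([f, w * d])
u, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
print("max error vs exact:", np.abs(u - u_exact).max())
```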

  16. The method of evaluating quantum partition function for the Hubbard model

    International Nuclear Information System (INIS)

    A method for evaluating the quantum partition function (QPF) in some four-fermion models is proposed. The calculations are carried out by the path integral method. The integral is evaluated by introducing additional fields (the Hubbard-Stratonovich transformation in some models), integrating over the fermionic variables, and considering a finite-dimensional approximation of the remaining integral over the bosonic fields in the infinite limit. The result can be represented as a sum of functional derivatives, with respect to an arbitrary bosonic field, of the quantum partition function of the free fermionic theory in the external bosonic field. This expression can be treated in a mean field approximation in closed form (the determinants corresponding to the arbitrary external field are substituted by their mean values corresponding to the mean value of the external fields). An integral representation of the QPF is thus obtained. The approximation for the QPF of the free theory is considered, and the corresponding answer for the QPF is studied. A convenient perturbation expansion for ln Z is developed. (author). 6 refs, 1 fig

  17. Comparison of Two Partitioning Methods in a Fuzzy Time Series Model for Composite Index Forecasting

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2011-04-01

    Full Text Available The study of fuzzy time series has increasingly attracted attention due to its salient capability of tackling vague and incomplete data. A variety of forecasting models have been devised to improve forecasting accuracy. Recently, a fuzzy time series model based on the Fibonacci sequence has been proposed, which incorporates the concept of the Fibonacci sequence, the framework of the basic fuzzy time series model and the weighted method. However, the issue of interval lengths has not been investigated by this highly acclaimed model, despite it having been affirmed that the length of intervals can affect forecasting results. Therefore, the purpose of this paper is to propose two methods of defining interval lengths for the Fibonacci-sequence-based fuzzy time series model and compare their performances. Frequency-density-based partitioning and randomly chosen interval lengths were tested in the model using stock index data, and their performances were compared. Two years of weekly Kuala Lumpur Composite Index data were employed as the experimental data set. The results show that frequency-density-based partitioning outperforms randomly chosen interval lengths. This result reaffirms the importance of defining appropriate interval lengths for fuzzy time series forecasting performance.

  18. Bayesian and maximum entropy methods for fusion diagnostic measurements with compact neutron spectrometers

    Science.gov (United States)

    Reginatto, Marcel; Zimbal, Andreas

    2008-02-01

    In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements.

  19. The study of partition and solidification with Super High Temperature Method

    International Nuclear Information System (INIS)

    In the study of fission products (FPs) management, cold experiments on Cs partitioning, noble metal recovery and solidification of the residual FPs were carried out with the Super High Temperature Method. FPs are melted or sintered without many additives in this method. Cs is separated by vaporization at 1000°C. Noble metals are reduced to metal at 1800°C and recovered. The other FPs form very small ceramics containing alkali earth elements, zirconium (Zr) and actinides (U, Pu, TRU). This process does not produce new types of waste such as spent extraction solvent. The rationalization of FPs management may be achieved by this method. (author)

  20. A Bayesian calibration model for combining different pre-processing methods in Affymetrix chips

    Directory of Open Access Journals (Sweden)

    Richardson Sylvia

    2008-12-01

    Full Text Available Abstract Background In gene expression studies a key role is played by the so-called "pre-processing", a series of steps designed to extract the signal and account for the sources of variability due to the technology used rather than to biological differences between the RNA samples. At the moment there is no commonly agreed gold standard pre-processing method and each researcher has the responsibility to choose one method, incurring the risk of false positive and false negative features arising from the particular method chosen. Results We propose a Bayesian calibration model that makes use of the information provided by several pre-processing methods and we show that this model gives a better assessment of the 'true' unknown differential expression between two conditions. We demonstrate how to estimate the posterior distribution of the differential expression values of interest from the combined information. Conclusion On simulated data and on the spike-in Latin Square dataset from Affymetrix the Bayesian calibration model proves to have more power than each pre-processing method. Its biological interest is demonstrated through an experimental example on publicly available data.

  1. An objective method for partitioning the entire flood season into multiple sub-seasons

    Science.gov (United States)

    Chen, Lu; Singh, Vijay P.; Guo, Shenglian; Zhou, Jianzhong; Zhang, Junhong; Liu, Pan

    2015-09-01

    Information on flood seasonality is required in many practical applications, such as seasonal frequency analysis and reservoir operation. Several statistical methods for identifying flood seasonality have been widely used, such as the directional method (DS) and the relative frequency (RF) method. However, using these methods, flood seasons are identified subjectively by visually assessing the temporal distribution of flood occurrences. In this study, a new method is proposed to identify flood seasonality and partition the entire flood season into multiple sub-seasons objectively. A statistical experiment was carried out to evaluate the performance of the proposed method. Results demonstrated that the proposed method performed satisfactorily. The proposed approach was then applied to the Geheyan and Baishan Reservoirs, China, which have different flood regimes. It is shown that the proposed method performs extremely well for the observed data, and is more objective than the traditional methods.

  2. Using Markov Chain Monte Carlo methods to solve full Bayesian modeling of PWR vessel flaw distributions

    International Nuclear Information System (INIS)

    We present a hierarchical Bayesian method for estimating the density and size distribution of subclad-flaws in French Pressurized Water Reactor (PWR) vessels. This model takes into account in-service inspection (ISI) data, a flaw size-dependent probability of detection (different functions are considered) with a threshold of detection, and a flaw sizing error distribution (different distributions are considered). The resulting model is identified through a Markov Chain Monte Carlo (MCMC) algorithm. The article includes a discussion of choosing the prior distribution parameters, and an illustrative application is presented highlighting the model's ability to provide good parameter estimates even when a small number of flaws are observed.
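
    A stripped-down, single-level version of such an estimation (invented numbers, no size dependence or sizing error) can be sampled with a random-walk Metropolis-Hastings chain:

```python
import numpy as np

rng = np.random.default_rng(5)

# Flaws occur with density lam per unit volume; inspection covers
# volume V with detection probability eps, so the observed count is
# Poisson(eps * lam * V).  MH samples the posterior p(lam | count).
V, eps, count = 10.0, 0.6, 7

def log_post(lam):
    if lam <= 0:
        return -np.inf
    mean = eps * lam * V
    # Poisson log-likelihood (up to a constant) + Exponential(0.1) prior
    return count * np.log(mean) - mean - 0.1 * lam

chain, lam = [], 1.0
for _ in range(20000):
    prop = lam + rng.normal(scale=0.5)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop
    chain.append(lam)

post = np.array(chain[5000:])                   # drop burn-in
print(f"posterior mean {post.mean():.2f}, "
      f"95% CI ({np.quantile(post, .025):.2f}, {np.quantile(post, .975):.2f})")
```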

  3. Study On Method For Simulation Of Partitioning Tracers In Double Porosity Model Of Fractured Basement Formations

    International Nuclear Information System (INIS)

    The single well tracer test (SWTT) has been widely used and accepted as a standard method for residual oil saturation (SOR) measurement in the field. The test involves injecting partitioning tracers into the reservoir, producing them back and matching their profiles using a suitable simulation program. Most simulation programs were first developed for sandstone reservoirs using a single-porosity model and cannot be applied to highly heterogeneous reservoirs such as fractured basement and carbonate reservoirs. Therefore, a simulation code based on a double-porosity model is needed to simulate tracer flow in our fractured basement reservoirs. In this project, a finite-difference simulation code was developed following the Tang mathematical model to simulate partitioning tracers in a double-porosity medium. The code was matched with several field tracer data sets and compared with results of the University of Texas chemical simulator, showing an acceptable agreement between our program and the well-known UTChem simulator. Besides, several experiments were conducted to measure residual oil saturation in a 1D column and a 2D sandpad model. Results of the experiments show that the partitioning tracers can measure residual oil saturation in glass bead models with a relatively high accuracy when the flow velocity of the tracer is sufficiently low. (author)

  4. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Using data from the train driver schedule of the Danish passenger railway operator DSB S-tog A/S, a solution method to the Train Driver Recovery Problem (TDRP) is developed. The TDRP is formulated as a set partitioning problem. The LP relaxation of the set partitioning formulation of the TDRP possesses strong integer properties. The proposed model is therefore solved via the LP relaxation and Branch & Price. Starting with a small set of drivers and train tasks assigned to the drivers within a certain time period, the LP relaxation of the set partitioning model is solved with column generation. If a feasible solution is not found, further drivers are gradually added to the problem or the optimization time period is increased. Fractions are resolved with a constraint branching strategy using the depth-first search of the Branch & Bound tree. Preliminary results are encouraging, showing that nearly all tested real-life instances produce integer solutions to the LP relaxation and solutions are found within a few seconds.

  5. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    Science.gov (United States)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.
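
    Classic (uncoupled) SBL is available as ARD regression in scikit-learn; the sketch below recovers a clustered sparse vector with it, whereas the paper's pattern-coupled variant additionally ties neighbouring hyperparameters together to favour exactly such clusters:

```python
import numpy as np
from sklearn.linear_model import ARDRegression  # standard (uncoupled) SBL

rng = np.random.default_rng(6)

# Toy sparse recovery: y = Phi w + noise, with a few nonzero coefficients
# standing in for strong scatterers in an ISAR scene.
n_samples, n_features = 80, 120
Phi = rng.normal(size=(n_samples, n_features))
w_true = np.zeros(n_features)
w_true[[10, 11, 12, 60, 61]] = [3.0, 2.5, 2.0, -2.0, -2.5]  # two clusters
y = Phi @ w_true + 0.05 * rng.normal(size=n_samples)

model = ARDRegression().fit(Phi, y)
recovered = np.flatnonzero(np.abs(model.coef_) > 0.5)
print("true support:     ", np.flatnonzero(w_true))
print("recovered support:", recovered)
```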

  6. A Study of New Method for Weapon System Effectiveness Evaluation Based on Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    YAN Dai-wei; GU Liang-xian; PAN Lei

    2008-01-01

    As weapon system effectiveness is affected by many factors, its evaluation is essentially a complex multi-criterion decision making problem. The evaluation model of the effectiveness is established on the basis of the metrics architecture of the effectiveness. The Bayesian network used to evaluate the effectiveness is established based on the metrics architecture and the evaluation models. To obtain the weights of the metrics with the Bayesian network, subjective initial values of the weights are given, a gradient ascent algorithm is adopted, and reasonable values of the weights are achieved. The effectiveness of every weapon system project is then obtained, and the weapon system whose effectiveness is the relative maximum is the optimal system. The research result shows that this method can solve the problem of the AHP method, whose evaluation results are not compatible with practical results, and overcome the shortcomings of neural networks in multilayer, multi-criterion decision making. The method offers a new approach for evaluating the effectiveness.

  7. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, good SPs that have high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  8. An Advanced Bayesian Method for Short-Term Probabilistic Forecasting of the Generation of Wind Power

    Directory of Open Access Journals (Sweden)

    Antonio Bracale

    2015-09-01

    Full Text Available Currently, among renewable distributed generation systems, wind generators are receiving a great deal of interest due to the economic, technological, and environmental incentives they involve. However, the uncertainties due to the intermittent nature of wind energy make it difficult to operate electrical power systems optimally and to make decisions that satisfy the needs of all the stakeholders of the electricity energy market. Thus, there is increasing interest in determining how to forecast wind power production accurately. Most of the methods that have been published in the relevant literature provide deterministic forecasts, even though great interest has focused recently on probabilistic forecast methods. In this paper, an advanced probabilistic method is proposed for short-term forecasting of wind power production. A mixture of two Weibull distributions was used as the probability function to model the uncertainties associated with wind speed. Then, a Bayesian inference approach with a particularly effective autoregressive integrated moving average model was used to determine the parameters of the mixture Weibull distribution. Numerical applications are also presented to provide evidence of the forecasting performance of the Bayesian-based approach.
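
    Given a fitted two-component Weibull mixture for wind speed, a probabilistic power forecast follows by Monte Carlo through a turbine power curve; the parameters and the simplified cubic power curve below are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)

# Assume the Bayesian step produced this mixture for next-hour wind speed.
weights = [0.6, 0.4]
shapes  = [2.2, 3.5]     # Weibull k
scales  = [6.0, 11.0]    # Weibull lambda (m/s)

def sample_speed(n):
    comp = rng.choice(2, size=n, p=weights)
    return weibull_min.rvs(c=np.array(shapes)[comp],
                           scale=np.array(scales)[comp], random_state=rng)

def power_curve(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2.0):
    """Simplified turbine curve (MW): cubic ramp between cut-in and rated."""
    return np.where(v < cut_in, 0.0,
           np.where(v < rated_v, rated_p * ((v - cut_in) / (rated_v - cut_in))**3,
           np.where(v < cut_out, rated_p, 0.0)))

power = power_curve(sample_speed(100_000))
q10, q50, q90 = np.quantile(power, [0.1, 0.5, 0.9])
print(f"P10={q10:.2f} MW  P50={q50:.2f} MW  P90={q90:.2f} MW")
```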

  9. Quantile pyramids for Bayesian nonparametrics

    OpenAIRE

    2009-01-01

    P\\'{o}lya trees fix partitions and use random probabilities in order to construct random probability measures. With quantile pyramids we instead fix probabilities and use random partitions. For nonparametric Bayesian inference we use a prior which supports piecewise linear quantile functions, based on the need to work with a finite set of partitions, yet we show that the limiting version of the prior exists. We also discuss and investigate an alternative model based on the so-called substitut...

  10. Cache-Efficient Block-Matrix Solvers for the Partition of Unity Method

    OpenAIRE

    Gründer, Patrick

    2012-01-01

    The Partition of Unity Method is used in meshfree discretization schemes for solving elliptic partial differential equations. The linear systems arising from the discretization have a block structure, which can be solved asymptotically optimally by means of the Multilevel Partition of Unity Method. Preconditioned Krylov subspace methods are an alternative approach to solving these systems. In this thesis, a solver based on the ILU decom...

  11. Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method

    Energy Technology Data Exchange (ETDEWEB)

    Nikhil V. Bhagwat

    2005-12-17

    In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near optimal solutions (optimal in some cases). Besides, the execution time is quite short as compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer to execute. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples can be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of some significance to industries wherein the cost associated with the creation of a new station is not the same. For such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion costs or a combination of those can be effectively used as alternate objective functions.

  12. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
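
    The decision rule can be illustrated generically: compare the likelihood ratio of the observed discrepancy under the two hypotheses against a threshold set by the priors and decision costs (all numbers below are invented, not the paper's case studies):

```python
from scipy.stats import norm

# H0: model valid (small prediction-experiment discrepancy);
# H1: model invalid (larger discrepancy).  Numbers are illustrative.
prior_h0, prior_h1 = 0.7, 0.3
cost_accept_bad = 5.0      # cost of accepting an invalid model
cost_reject_good = 1.0     # cost of rejecting a valid model

h0 = norm(loc=0.0, scale=1.0)
h1 = norm(loc=3.0, scale=1.5)

# Minimizing expected cost: accept H0 when the likelihood ratio
# p(x|H0)/p(x|H1) exceeds (cost_accept_bad*prior_h1)/(cost_reject_good*prior_h0).
threshold = (cost_accept_bad * prior_h1) / (cost_reject_good * prior_h0)

def decide(discrepancy):
    lr = h0.pdf(discrepancy) / h1.pdf(discrepancy)
    return "accept model" if lr > threshold else "reject model"

for d in (0.5, 2.0, 4.0):
    print(f"discrepancy {d}: {decide(d)}")
```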

  13. Online probabilistic operational safety assessment of multi-mode engineering systems using Bayesian methods

    International Nuclear Information System (INIS)

    In the past decades, engineering systems have become more and more complex, and they generally work in different operational modes. Since incipient faults can lead to dangerous accidents, it is crucial to develop strategies for online operational safety assessment. However, the existing online assessment methods for multi-mode engineering systems commonly assume that samples are independent, which does not hold in practical cases. This paper proposes a probabilistic framework for online operational safety assessment of multi-mode engineering systems with sample dependency. To begin with, a Gaussian mixture model (GMM) is used to characterize the multiple operating modes. Then, based on the definition of the safety index (SI), the SI for a single mode is calculated. At last, a Bayesian method is presented to calculate the posterior probabilities of belonging to each operating mode with sample dependency. The proposed assessment strategy is applied in two examples: an aircraft gas turbine and an industrial dryer. Both examples illustrate the efficiency of the proposed method.
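
    The GMM step can be sketched with scikit-learn; the sample-dependency correction is omitted here, and mapping component indices to physical modes (and their safety indices) is assumed to have been done offline:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)

# Toy 1-D sensor data from two operating modes.
mode_a = rng.normal(loc=50.0, scale=2.0, size=(500, 1))   # e.g., idle
mode_b = rng.normal(loc=80.0, scale=4.0, size=(500, 1))   # e.g., full load
X = np.vstack([mode_a, mode_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Suppose offline analysis assigned a safety index to each component
# (component order is arbitrary; map indices to modes after fitting).
safety_index = {0: 0.99, 1: 0.90}
x_new = np.array([[62.0]])
post = gmm.predict_proba(x_new)[0]             # P(mode | sample)
overall = sum(post[m] * safety_index[m] for m in range(2))
print("mode posteriors:", post.round(3), "-> combined SI:", round(overall, 3))
```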

  14. Some bounds on quantum partition functions by path-integral methods

    International Nuclear Information System (INIS)

    Equilibrium statistical mechanics requires the computation of the partition function. The density matrix, and hence the quantum partition function, may be expressed as an integral with an integrand which can be given explicitly, namely as a (Wiener-) path integral. Techniques especially designed for path integrals provide inequalities for density matrices, partition functions and spectral densities. Some of these inequalities related to density matrices and partition functions are reviewed in this paper. 39 refs

  15. A Bayesian Method For Finding Galaxies That Cause Quasar Absorption Lines

    Science.gov (United States)

    Shoemaker, Emileigh Suzanne; Laubner, David Andrew; Scott, Jennifer E.

    2016-01-01

    We present a study of candidate absorber-galaxy pairs for 39 low redshift quasar sightlines (0.06 [...] Digital Sky Survey (SDSS). We downloaded the COS linelists for these quasar spectra from MAST and queried the SDSS DR12 database for photometric data on all galaxies within 1 Mpc of each of these quasar lines of sight. We calculated photometric redshifts for all the SDSS galaxies using the Bayesian Photometric Redshift code. We used all these absorber and galaxy data as input into an absorber-galaxy matching code which also employs a Bayesian scheme, along with known statistics of the intergalactic medium and circumgalactic media of galaxies, for finding the most probable galaxy match for each absorber. We compare our candidate absorber-galaxy matches to existing studies in the literature and explore trends in the absorber and galaxy properties among the matched and non-matched populations. This method of matching absorbers and galaxies can be used to find targets for follow-up spectroscopic studies.

  16. Evolutionary Analysis of Dengue Serotype 2 Viruses Using Phylogenetic and Bayesian Methods from New Delhi, India.

    Science.gov (United States)

    Afreen, Nazia; Naqvi, Irshad H; Broor, Shobha; Ahmed, Anwar; Kazim, Syed Naqui; Dohare, Ravins; Kumar, Manoj; Parveen, Shama

    2016-03-01

    Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported predominance of dengue virus serotype 2 during 2011-2014 in Delhi. In the present study, we report molecular characterization and evolutionary analysis of dengue serotype 2 viruses which were detected in 2011-2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages; Lineage I, II and III. Lineage III replaced lineage I during dengue fever outbreak of 2013. Further, a novel mutation Thr404Ile was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. Nucleotide substitution rate and time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in effective population size of Indian DENV-2 viruses was investigated through Bayesian skyline plot. The study will be a vital road map for investigation of epidemiology and evolutionary pattern of dengue viruses in India. PMID:26977703

  17. Evolutionary Analysis of Dengue Serotype 2 Viruses Using Phylogenetic and Bayesian Methods from New Delhi, India.

    Directory of Open Access Journals (Sweden)

    Nazia Afreen

    2016-03-01

    Full Text Available Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported predominance of dengue virus serotype 2 during 2011-2014 in Delhi. In the present study, we report molecular characterization and evolutionary analysis of dengue serotype 2 viruses which were detected in 2011-2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages; Lineage I, II and III. Lineage III replaced lineage I during dengue fever outbreak of 2013. Further, a novel mutation Thr404Ile was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. Nucleotide substitution rate and time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in effective population size of Indian DENV-2 viruses was investigated through Bayesian skyline plot. The study will be a vital road map for investigation of epidemiology and evolutionary pattern of dengue viruses in India.

  18. Comparison of prediction methods for octanol-air partition coefficients of diverse organic compounds.

    Science.gov (United States)

    Fu, Zhiqiang; Chen, Jingwen; Li, Xuehua; Wang, Ya'nan; Yu, Haiying

    2016-04-01

    The octanol-air partition coefficient (KOA) is needed for assessing multimedia transport and bioaccumulability of organic chemicals in the environment. As experimental determination of KOA for various chemicals is costly and laborious, development of KOA estimation methods is necessary. We investigated three methods for KOA prediction: conventional quantitative structure-activity relationship (QSAR) models based on molecular structural descriptors, group contribution models based on atom-centered fragments, and a novel model that predicts KOA via the solvation free energy from air to octanol phase (ΔGO(0)), with a collection of 939 experimental KOA values for 379 compounds at different temperatures (263.15-323.15 K) as validation or training sets. The developed models were evaluated with the OECD guidelines on QSAR model validation and applicability domain (AD) description. Results showed that although the ΔGO(0) model is theoretically sound and has a broad AD, the prediction accuracy of the model is the poorest. The QSAR models perform better than the group contribution models, and have similar predictability and accuracy to the conventional method that estimates KOA from the octanol-water partition coefficient and Henry's law constant. One QSAR model, which can predict KOA at different temperatures, was recommended for application to assess the long-range transport potential of chemicals. PMID:26802270

  19. Partition method for impact dynamics of flexible multibody systems based on contact constraint

    Institute of Scientific and Technical Information of China (English)

    段玥晨; 章定国; 洪嘉振

    2013-01-01

    The impact dynamics of a flexible multibody system is investigated. By using a partition method, the system is divided into two parts, the local impact region and the region away from the impact. The two parts are connected by specific boundary conditions, and the system after partition is equivalent to the original system. According to the rigid-flexible coupling dynamic theory of multibody systems, the system's rigid-flexible coupling dynamic equations without impact are derived. A local impulse method for establishing the initial impact conditions is proposed. It satisfies the compatibility conditions for contact constraints and the actual physical situation of the impact process of flexible bodies. Based on the contact constraint method, the system's impact dynamic equations are derived in a differential-algebraic form. The contact/separation criterion and the algorithm are given. An impact dynamic simulation is given. The results show that the system's dynamic behaviors, including the energy, the deformations, the displacements, and the impact force during the impact process, change dramatically. The impact has great effects on the global dynamics of the system during and after impact.

  20. An urban flood risk assessment method using the Bayesian Network approach

    DEFF Research Database (Denmark)

    Åström, Helena Lisa Alexandra

    Flooding is one of the most damaging natural hazards to human societies. Recent decades have shown that flooding constitutes major threats worldwide, and due to anticipated climate change the occurrence of damaging flood events is expected to increase. Urban areas are especially vulnerable to flooding. Flood risk management (FRM) comprises flood risk scoping, flood risk assessment (FRA), and adaptation implementation, and involves an ongoing process of assessment, reassessment, and response. This thesis mainly focuses on the FRA phase of FRM. FRA includes hazard analysis and impact assessment (combined called a risk analysis), adaptation... In this thesis, an urban flood risk assessment method using a Bayesian Network (BN) approach is developed, and the method is exemplified in an urban catchment. BNs have become an increasingly popular method for describing complex systems and aiding decision-making under uncertainty. In environmental management, BNs have mainly been utilized in ecological assessments...

  1. Spectral energy distribution modelling of Southern candidate massive protostars using the Bayesian inference method

    CERN Document Server

    Hill, T; Minier, V; Burton, M G; Cunningham, M R

    2008-01-01

    Concatenating data from the millimetre regime to the infrared, we have performed spectral energy distribution modelling for 227 of the 405 millimetre continuum sources of Hill et al. (2005) which are thought to contain young massive stars in the earliest stages of their formation. Three main parameters are extracted from the fits: temperature, mass and luminosity. The method employed was Bayesian inference, which allows a statistically probable range of suitable values for each parameter to be drawn for each individual protostellar candidate. This is the first application of this method to massive star formation. The cumulative distribution plots of the SED modelled parameters in this work indicate that collectively, the sources without methanol maser and/or radio continuum associations (MM-only cores) display similar characteristics to those of high mass star formation regions. Attributing significance to the marginal distinctions between the MM-only cores and the high-mass star formation sample we draw hypo...

  2. The Bayesian Bootstrap

    OpenAIRE

    Rubin, Donald B.

    1981-01-01

    The Bayesian bootstrap is the Bayesian analogue of the bootstrap. Instead of simulating the sampling distribution of a statistic estimating a parameter, the Bayesian bootstrap simulates the posterior distribution of the parameter; operationally and inferentially the methods are quite similar. Because both methods of drawing inferences are based on somewhat peculiar model assumptions and the resulting inferences are generally sensitive to these assumptions, neither method should be applied wit...
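
    Operationally the method is a few lines: instead of resampling observations, draw Dirichlet(1, ..., 1) weights over them and recompute the weighted statistic, for example for a mean:

```python
import numpy as np

rng = np.random.default_rng(9)

# Bayesian bootstrap for the mean (Rubin 1981): each replicate draws
# flat Dirichlet weights over the observations instead of resampling.
data = rng.exponential(scale=2.0, size=50)

n_rep = 10_000
weights = rng.dirichlet(np.ones(data.size), size=n_rep)  # one row per draw
posterior_means = weights @ data

lo, hi = np.quantile(posterior_means, [0.025, 0.975])
print(f"posterior mean of the mean: {posterior_means.mean():.3f}, "
      f"95% interval ({lo:.3f}, {hi:.3f})")
```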

  3. Development of partitioning method. Back-extraction of uranium from DIDPA solvent

    International Nuclear Information System (INIS)

    A partitioning method has been developed under the concepts of separating elements in high level liquid waste generated from nuclear fuel reprocessing according to their half-lives and radiological toxicity, and of disposing of them by suitable methods. In the partitioning process developed in JAERI, solvent extraction with DIDPA (di-isodecyl phosphoric acid) was adopted for actinide separation. The present paper describes the results of a study on back-extraction of hexavalent uranium from DIDPA. Most experiments were carried out to select a suitable reagent for back-extraction of U(VI) extracted from 0.5M nitric acid with DIDPA. The experimental results show that the distribution ratio of U(VI) is less than 0.1 in back-extractions with 1.5M sodium carbonate-15 vol% alcohol or 20wt% hydrazine carbonate-10 vol% alcohol. Uranium in the sodium carbonate solution was recovered by anion exchange with strong-base resins and eluted with NH4NO3 and other reagents. The results of the present study confirm the validity of the DIDPA extraction process: U, Pu, Np, Am and Cm in HLW are extracted simultaneously with DIDPA, and they are recovered from DIDPA with various reagents: nitric acid for Am and Cm, oxalic acid for Np and Pu, and sodium carbonate or hydrazine carbonate for U. (author)

  4. Development of partitioning method. Back-extraction of uranium from DIDPA solvent

    Energy Technology Data Exchange (ETDEWEB)

    Tatsugae, Ryozo; Kubota, Masumitsu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Shirahashi, Koichi

    1995-03-01

    A partitioning method has been developed under the concepts of separating elements in high level liquid waste generated from nuclear fuel reprocessing according to their half-lives and radiological toxicity, and of disposing of them by suitable methods. In the partitioning process developed in JAERI, solvent extraction with DIDPA (di-isodecyl phosphoric acid) was adopted for actinide separation. The present paper describes the results of a study on back-extraction of hexavalent uranium from DIDPA. Most experiments were carried out to select a suitable reagent for back-extraction of U(VI) extracted from 0.5M nitric acid with DIDPA. The experimental results show that the distribution ratio of U(VI) is less than 0.1 in back-extractions with 1.5M sodium carbonate-15 vol% alcohol or 20wt% hydrazine carbonate-10 vol% alcohol. Uranium in the sodium carbonate solution was recovered by anion exchange with strong-base resins and eluted with NH4NO3 and other reagents. The results of the present study confirm the validity of the DIDPA extraction process: U, Pu, Np, Am and Cm in HLW are extracted simultaneously with DIDPA, and they are recovered from DIDPA with various reagents: nitric acid for Am and Cm, oxalic acid for Np and Pu, and sodium carbonate or hydrazine carbonate for U. (author).

  5. A new statistical precipitation downscaling method with Bayesian model averaging: a case study in China

    Science.gov (United States)

    Zhang, Xianliang; Yan, Xiaodong

    2015-11-01

    A new statistical downscaling method was developed and applied to downscale monthly total precipitation from 583 stations in China. Generally, there are two steps involved in statistical downscaling: first, the predictors (large-scale variables) are selected and transformed; and second, a model between the predictors and the predictand (in this case, precipitation) is established. In the first step, a selection process for the predictor domain, called the optimum correlation method (OCM), was developed to transform the predictors. The transformed series obtained by the OCM showed much better correlation with the predictand than those obtained by the traditional transform method for the same predictor. Moreover, the method combining OCM and linear regression obtained better downscaling results than the traditional linear regression method, suggesting that the OCM could be used to improve the results of statistical downscaling. In the second step, Bayesian model averaging (BMA) was adopted as an alternative to linear regression. The method combining the OCM and BMA showed much better performance than the method combining the OCM and linear regression. Thus, BMA could be used as an alternative to linear regression in the second step of statistical downscaling. In conclusion, the downscaling method combining OCM and BMA produces more accurate results than the multiple linear regression method when used to statistically downscale large-scale variables.
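
    For the second step, a common way to realize BMA over candidate regressions is to weight each model by an approximation to its posterior probability. The sketch below uses BIC weights over predictor subsets, a standard device but not necessarily the exact weighting of the paper, and plain predictors rather than the OCM-transformed series:

```python
import numpy as np
from itertools import combinations

def bma_predict(X, y, X_new, max_size=2):
    """Toy Bayesian model averaging: fit least squares on every small
    predictor subset, weight models by exp(-BIC/2), and average the
    predictions under those weights."""
    n, p = X.shape
    preds, logws = [], []
    for k in range(1, max_size + 1):
        for cols in combinations(range(p), k):
            A = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            bic = n * np.log(rss / n) + A.shape[1] * np.log(n)
            logws.append(-0.5 * bic)
            A_new = np.column_stack([np.ones(len(X_new)), X_new[:, cols]])
            preds.append(A_new @ beta)
    w = np.exp(np.array(logws) - max(logws))
    w /= w.sum()
    return w @ np.array(preds)  # BMA-weighted prediction for X_new
```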

  6. Suspected pulmonary embolism and lung scan interpretation: Trial of a Bayesian reporting method

    International Nuclear Information System (INIS)

    The objective of this research is to determine whether a Bayesian method of lung scan (LS) reporting could influence the management of patients with suspected pulmonary embolism (PE). The study consisted of: (1) a descriptive study of the diagnostic process for suspected PE using the new reporting method; (2) a non-experimental evaluation of the reporting method comparing prospective patients with historical controls; and (3) a survey of physicians' reactions to the reporting innovation. Of 148 consecutive patients enrolled at the time of LS, 129 were completely evaluated; 75 patients scanned the previous year served as controls. The LS results of patients with suspected PE were reported as posttest probabilities of PE, calculated from physician-provided pretest probabilities and the likelihood ratios for PE of the LS interpretations. Despite the Bayesian intervention, the confirmation or exclusion of PE was often based on inconclusive evidence. PE was considered by the clinician to be ruled out in 98% of patients with posttest probabilities less than 25% and ruled in for 95% of patients with posttest probabilities greater than 75%. Prospective patients and historical controls were similar in terms of tests ordered after the LS (e.g., pulmonary angiography). Patients with intermediate or indeterminate lung scan results had the highest proportion of subsequent testing. Most physicians (80%) found the reporting innovation helpful, either because it confirmed clinical judgement (94 cases) or because it led to additional testing (7 cases). Despite the probabilistic guidance provided by the study, the diagnosis of PE was often neither clearly established nor excluded. While physicians appreciated the innovation and were not confused by the terminology, their clinical decision making was not clearly enhanced.
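
    The reporting scheme rests on Bayes' theorem in odds form: posttest odds equal pretest odds times the likelihood ratio of the scan reading. A minimal sketch in Python (the likelihood ratio below is a hypothetical value, not one from the study):

```python
def posttest_probability(pretest_p: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability and a scan likelihood ratio into a
    posttest probability via Bayes' theorem in odds form."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# e.g. 30% pretest probability and a scan reading with an assumed LR of 15
print(posttest_probability(0.30, 15.0))  # ~0.87, "ruled in" under the 75% cut-off
```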

  7. From "weight of evidence" to quantitative data integration using multicriteria decision analysis and Bayesian methods.

    Science.gov (United States)

    Linkov, Igor; Massey, Olivia; Keisler, Jeff; Rusyn, Ivan; Hartung, Thomas

    2015-01-01

    "Weighing" available evidence in the process of decision-making is unavoidable, yet it is one step that routinely raises suspicions: what evidence should be used, how much does it weigh, and whose thumb may be tipping the scales? This commentary aims to evaluate the current state and future roles of various types of evidence for hazard assessment as it applies to environmental health. In its recent evaluation of the US Environmental Protection Agency's Integrated Risk Information System assessment process, the National Research Council committee singled out the term "weight of evidence" (WoE) for critique, deeming the process too vague and detractive to the practice of evaluating human health risks of chemicals. Moving the methodology away from qualitative, vague and controversial methods towards generalizable, quantitative and transparent methods for appropriately managing diverse lines of evidence is paramount for both regulatory and public acceptance of the hazard assessments. The choice of terminology notwithstanding, a number of recent Bayesian WoE-based methods, the emergence of multi criteria decision analysis for WoE applications, as well as the general principles behind the foundational concepts of WoE, show promise in how to move forward and regain trust in the data integration step of the assessments. We offer our thoughts on the current state of WoE as a whole and while we acknowledge that many WoE applications have been largely qualitative and subjective in nature, we see this as an opportunity to turn WoE towards a quantitative direction that includes Bayesian and multi criteria decision analysis. PMID:25592482

  8. Prediction of imipramine serum levels in enuretic children by a Bayesian method: comparison with two other conventional dosing methods.

    Science.gov (United States)

    Fernández de Gatta, M M; Tamayo, M; García, M J; Amador, D; Rey, F; Gutiérrez, J R; Domínguez-Gil Hurlé, A

    1989-11-01

    The aim of the present study was to characterize the kinetic behavior of imipramine (IMI) and desipramine in enuretic children and to evaluate the performance of different methods for dosage prediction based on individual and/or population data. The study was carried out in 135 enuretic children (93 boys) ranging in age between 5 and 13 years undergoing treatment with IMI in variable single doses (25-75 mg/day) administered at night. Sampling time was one-half the dosage interval at steady state. The number of data available for each patient varied (1-4) and was essentially limited by clinical criteria. Pharmacokinetic calculations were performed using a simple proportional relationship (method 1) and a multiple nonlinear regression program (MULTI 2 BAYES) with two different options: using the ordinary least-squares method (method 2) and the least-squares method based on the Bayesian algorithm (method 3). The results obtained point to a coefficient of variation for the level/dose ratio of the drug (58%) that is significantly lower than that of the metabolite (101.4%). The forecasting capacity of method 1 is deficient both regarding accuracy [mean prediction error (MPE) = -5.48 +/- 69.15] and precision (root mean squared error = 46.42 +/- 51.39). The standard deviation of the MPE (69) makes the method unacceptable from the clinical point of view. The more information that is available concerning the serum levels, the greater are the accuracy and precision of methods (2 and 3). With the Bayesian method, less information on drug serum levels is needed to achieve clinically acceptable predictions. PMID:2595743

  9. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    Burgess, Stephen; Thompson, Simon G; Andrews, G;

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context of...... multiple genetic markers measured in multiple studies, based on the analysis of individual participant data. First, for a single genetic marker in one study, we show that the usual ratio of coefficients approach can be reformulated as a regression with heterogeneous error in the explanatory variable. This...... can be implemented using a Bayesian approach, which is next extended to include multiple genetic markers. We then propose a hierarchical model for undertaking a meta-analysis of multiple studies, in which it is not necessary that the same genetic markers are measured in each study. This provides an...

  10. Model Based Beamforming and Bayesian Inversion Signal Processing Methods for Seismic Localization of Underground Source

    DEFF Research Database (Denmark)

    Oh, Geok Lian

    This PhD study examines the use of seismic technology for the problem of detecting underground facilities, whereby a seismic source such as a sledgehammer is used to generate seismic waves through the ground, sensed by an array of seismic sensors on the ground surface, and recorded by the digital...... device. The concept is similar to the techniques used in exploration seismology, in which explosions (that occur at or below the surface) or vibration wave-fronts generated at the surface reflect and refract off structures at the ground depth, so as to generate the ground profile of the elastic material...... density values of the discretized ground medium, which leads to time-consuming computations and instability behaviour of the inversion process. In addition, the geophysics inverse problem is generally ill-posed due to non-exact forward model that introduces errors. The Bayesian inversion method through...

  11. Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods

    CERN Document Server

    Ferraioli, Luigi; Plagnol, Eric

    2012-01-01

    We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...
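
    The heating strategy shared by the two algorithms can be sketched generically: temper the posterior by a temperature that decays to one over the early part of the chain. The toy sampler below uses isotropic Gaussian proposals in coordinate directions (the first variant); the Fisher-eigenspace proposals of the second variant are not reproduced, and all tuning constants are illustrative:

```python
import numpy as np

def mh_annealed(log_post, x0, n_steps=5000, step=0.1, t0=10.0, n_heat=1000, seed=0):
    """Metropolis-Hastings with a simple annealing (heating) stage: the
    acceptance ratio is tempered by a temperature decaying from t0 to 1
    over the first n_heat steps to encourage early exploration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for i in range(n_steps):
        temp = max(1.0, t0 * (1.0 - i / n_heat))
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < (lp_prop - lp) / temp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# toy target: a standard 2-D Gaussian posterior
chain = mh_annealed(lambda v: -0.5 * float(v @ v), [5.0, -5.0])
```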

  12. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference together with physics-model-based signal processing in which a radionuclide is represented as a decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
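
    The final decision stage is a Wald sequential probability ratio test. A generic sketch (the physics-based per-photon processing that produces the log-likelihood ratios is assumed to have happened upstream, and the error rates are illustrative):

```python
import math

def sprt(llr_stream, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test: accumulate per-event
    log-likelihood ratios until one of the two thresholds is crossed."""
    upper = math.log((1 - beta) / alpha)   # accept "target radionuclide"
    lower = math.log(beta / (1 - alpha))   # accept "not the target"
    total = 0.0
    for llr in llr_stream:
        total += llr
        if total >= upper:
            return "target"
        if total <= lower:
            return "not target"
    return "undecided"  # keep processing further photon events
```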

  13. Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring

    Science.gov (United States)

    Huff, Daniel W.

    Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate probabilities of different loading modes and incorporate this information in the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion and observations are obtained using the predicted, optimal observation modes based on these characteristics. Calculation of the predicted mean square error metric can be computationally intensive, especially if performed in real time, and an approximation method is proposed. With this approach, the real time

  14. Development of a full automation solid phase microextraction method for investigating the partition coefficient of organic pollutant in complex sample.

    Science.gov (United States)

    Jiang, Ruifen; Lin, Wei; Wen, Sijia; Zhu, Fang; Luan, Tiangang; Ouyang, Gangfeng

    2015-08-01

    A fully automated solid phase microextraction (SPME) depletion method was developed to study the partition coefficients of organic compounds between a complex matrix and a water sample. The SPME depletion process was conducted by pre-loading the fiber with a specific amount of organic compounds from a proposed standard gas generation vial, and then desorbing the fiber into the targeted samples. Based on the proposed method, the partition coefficients (Kmatrix) of 4 polyaromatic hydrocarbons (PAHs) between humic acid (HA)/hydroxypropyl-β-cyclodextrin (β-HPCD) and aqueous sample were determined. The results showed that the logKmatrix values of the 4 PAHs with HA and β-HPCD ranged from 3.19 to 4.08 and from 2.45 to 3.15, respectively. In addition, the logKmatrix values decreased by about 0.12-0.27 log units for the different PAHs for every 10°C increase in temperature. The effect of temperature on the partition coefficient followed the van't Hoff relation, so the partition coefficient at any temperature can be predicted from the plot. Furthermore, the proposed method was applied to real biological fluid analysis. The partition coefficients of 6 PAHs between the complex matrices in fetal bovine serum and water were determined and compared to those obtained from the SPME extraction method. The results demonstrated that the proposed method can be applied to determine the sorption coefficients of hydrophobic compounds between complex matrices and water in a variety of samples. PMID:26118804
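
    The temperature extrapolation follows from the linearity of log K in reciprocal absolute temperature. A small sketch, with hypothetical values chosen only to mimic the reported 0.12-0.27 log-unit decrease per 10 °C:

```python
import numpy as np

def vant_hoff_fit(temps_c, log_k):
    """Fit log K against 1/T (a van't Hoff plot) and return a predictor
    for log K at any other temperature."""
    inv_t = 1.0 / (np.asarray(temps_c, dtype=float) + 273.15)
    slope, intercept = np.polyfit(inv_t, np.asarray(log_k, dtype=float), 1)
    return lambda t_c: slope / (t_c + 273.15) + intercept

# hypothetical measurements: log K drops ~0.2 per 10 degC
predict = vant_hoff_fit([15.0, 25.0, 35.0], [3.60, 3.40, 3.21])
print(predict(30.0))  # interpolated log K at 30 degC
```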

  15. Efficient Methods for Bayesian Uncertainty Analysis and Global Optimization of Computationally Expensive Environmental Models

    Science.gov (United States)

    Shoemaker, Christine; Espinet, Antoine; Pang, Min

    2015-04-01

    Models of complex environmental systems can be computationally expensive because they describe the dynamic interactions of many components over a sizeable time period. Diagnostics of these systems can include forward simulations of calibrated models under uncertainty and analysis of alternatives for systems management. This discussion will focus on applications of new surrogate optimization and uncertainty analysis methods to environmental models that can enhance our ability to extract information and understanding. For complex models, optimization and especially uncertainty analysis can require a large number of model simulations, which is not feasible for computationally expensive models. Surrogate response surfaces can be used in global optimization and uncertainty methods to obtain accurate answers with far fewer model evaluations, which makes the methods practical for computationally expensive models for which conventional methods are not feasible. In this paper we will discuss the application of the SOARS surrogate method for estimating Bayesian posterior density functions of model parameters for a TOUGH2 model of geologic carbon sequestration. We will also briefly discuss a new parallel surrogate global optimization algorithm, implemented on a supercomputer with up to 64 processors, that was applied to two groundwater remediation sites. The applications will illustrate the use of these methods to predict the impact of monitoring and management on subsurface contaminants.

  16. A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.

    Directory of Open Access Journals (Sweden)

    Kai Zhang

    In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid) method, for history matching in reservoir models. History matching is the process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application combining the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we obtain the relatively 'important' elements of the gradient by comparing the magnitudes of the components of the stochastic gradient; these are then substituted by values from the finite difference method, forming a new gradient with which we iterate. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations showing that the method is accurate and computationally efficient.

  17. Locating disease genes using Bayesian variable selection with the Haseman-Elston method

    Directory of Open Access Journals (Sweden)

    He Qimei

    2003-12-01

    Background: We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures that incorporate the relationships among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results: In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions: We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it does a smart search over the entire model space.
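
    A minimal spike-and-slab Gibbs sampler in the spirit of George and McCulloch's SSVS is sketched below; it returns the marginal posterior inclusion probabilities used to rank markers. The Haseman-Elston regression setup, the structured priors for interactions, and all hyperparameters of the paper are not reproduced here:

```python
import numpy as np

def ssvs_inclusion_probs(X, y, n_iter=4000, tau0=0.01, tau1=1.0, pi=0.5, seed=0):
    """Minimal stochastic search variable selection: a spike-and-slab
    Gibbs sampler that returns, for each predictor, the marginal
    posterior probability of being in the model."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    gamma = np.ones(p, dtype=bool)
    sigma2 = 1.0
    incl = np.zeros(p)
    XtX, Xty = X.T @ X, X.T @ y
    for it in range(n_iter):
        # beta | gamma, sigma2 ~ Normal(m, V)
        d = np.where(gamma, tau1, tau0) ** 2
        V = np.linalg.inv(XtX / sigma2 + np.diag(1.0 / d))
        beta = rng.multivariate_normal(V @ Xty / sigma2, V)
        # gamma_j | beta_j ~ Bernoulli(slab density vs spike density)
        slab = pi * np.exp(-0.5 * beta**2 / tau1**2) / tau1
        spike = (1 - pi) * np.exp(-0.5 * beta**2 / tau0**2) / tau0
        gamma = rng.random(p) < slab / (slab + spike)
        # sigma2 | beta ~ Inverse-Gamma(n/2, ||y - X beta||^2 / 2)
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / float(resid @ resid))
        if it >= n_iter // 2:            # keep draws after burn-in
            incl += gamma
    return incl / (n_iter - n_iter // 2)
```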

  18. A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification

    Directory of Open Access Journals (Sweden)

    Yongjun Piao

    2015-01-01

    Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, the dimensionality of data is increasing rapidly, and this trend poses various challenges because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered in dividing the original feature space. Then, a support vector machine is trained on each generated feature subset, and the results of the classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
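
    A compact sketch of the pipeline, using scikit-learn and substituting a k-means clustering of feature correlation profiles for the paper's redundancy-based partitioning (class labels are assumed to be non-negative integers for the vote count):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def partitioned_ensemble_predict(X, y, X_test, n_parts=3, seed=0):
    """Split the feature space into subsets, train one linear SVM per
    subset, and combine the per-subset predictions by majority voting."""
    corr = np.nan_to_num(np.corrcoef(X.T))  # feature-by-feature correlations
    labels = KMeans(n_clusters=n_parts, n_init=10, random_state=seed).fit_predict(corr)
    votes = []
    for k in range(n_parts):
        cols = np.where(labels == k)[0]
        if cols.size == 0:
            continue
        clf = SVC(kernel="linear").fit(X[:, cols], y)
        votes.append(clf.predict(X_test[:, cols]))
    votes = np.asarray(votes, dtype=int)
    # majority vote over the per-partition classifiers
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```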

  19. Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach

    Science.gov (United States)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.

    2015-04-01

    Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors and have not evaluated the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach to the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events. The probability distributions of the data deduced from the performance of this classification are used as input. As output, it provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
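
    The deterministic core of the FFM is the power-law acceleration of the precursor rate; for the common exponent value of 2, the inverse rate decays linearly and the failure time is its extrapolated zero crossing. A sketch with hypothetical data (the paper's actual contribution, the Bayesian treatment of partial sequences, is not reproduced):

```python
import numpy as np

def ffm_forecast(times, rates):
    """Classic FFM forecast for power-law exponent 2: fit a line to the
    inverse precursor rate and extrapolate it to zero."""
    inv_rate = 1.0 / np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(np.asarray(times, dtype=float), inv_rate, 1)
    return -intercept / slope  # time at which 1/rate reaches zero

# hypothetical accelerating seismicity: event rate doubles as failure nears
print(ffm_forecast([0.0, 1.0, 2.0, 3.0], [1.0, 1.5, 3.0, 6.0]))
```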

  20. Analytical Classification of Multimedia Index Structures by Using a Partitioning Method-Based Framework

    CERN Document Server

    keyvanpour, Mohammadreza

    2011-01-01

    Due to advances in hardware technology and the increase in the production of multimedia data in many applications, multimedia databases have become increasingly important during the last decades. Content-based multimedia retrieval is one of the important research areas in the field of multimedia databases. Extensive research in this field has led to the proposition of different kinds of index structures to support fast and efficient similarity search for retrieving multimedia data from these databases. Due to the variety and abundance of proposed index structures, we suggest a systematic framework based on the partitioning methods used in these structures to classify multimedia index structures, and we then evaluate these structures based on important functional measures. We hope this proposed framework will lead to empirical and technical comparison of multimedia index structures and the development of more efficient structures in the future.

  1. Surveillance system and method having parameter estimation and operating mode partitioning

    Science.gov (United States)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.

  2. Breast histopathology image segmentation using spatio-colour-texture based graph partition method.

    Science.gov (United States)

    Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N

    2016-06-01

    This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of the nuclear arrangement in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate breast histology image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with a texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used to generate the graph, and the final segmentation is obtained using the normalized cuts method. Extensive experiments show that the proposed algorithm can segment the nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation, a ground-truth database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed segmentation was performed, showing that the proposed method outperforms other methods. PMID:26708167

  3. Self-Organizing Genetic Algorithm Based Method for Constructing Bayesian Networks from Databases

    Institute of Scientific and Technical Information of China (English)

    郑建军; 刘玉树; 陈立潮

    2003-01-01

    A typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of the others; as a result, the learning of BN structures by general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. The method uses a self-organizing mechanism to develop a genetic algorithm that extends the crossover operator from one to two operators, places them in mutual competition, and even adjusts the number of parents in recombination (crossover/recomposition) schemes. Together with the K2 algorithm, the method also optimizes the genetic operators and makes adequate use of domain knowledge. As a result, the method is able to find a global optimum of the BN topology, avoiding premature convergence to a local extremum. Experimental results demonstrated the method's effectiveness, and the convergence of the SGA is discussed.

  4. Effective updating process of seismic fragilities using Bayesian method and information entropy

    International Nuclear Information System (INIS)

    Seismic probabilistic safety assessment (SPSA) is an effective method for evaluating the overall seismic safety performance of a plant. Seismic fragilities are estimated to quantify the seismically induced accident sequences. It is a great concern that the SPSA results involve uncertainties, a part of which comes from the uncertainty in the seismic fragility of equipment and systems. A straightforward approach to reduce the uncertainty is to perform a seismic qualification test and to reflect the results in the seismic fragility estimate. In this paper, we propose a figure-of-merit for finding the most cost-effective conditions for seismic qualification tests, in terms of the acceleration level and the number of components tested. A mathematical method to reflect the test results in the fragility update is then developed. A Bayesian method is used for the fragility update procedure. Since the lognormal distribution used for the fragility model does not have a conjugate prior, a parameterization method is proposed so that the posterior distribution captures the characteristics of the fragility. Information entropy is used as the figure-of-merit to express the importance of the obtained evidence. It is found that the information entropy is strongly associated with the uncertainty of the fragility. (author)

  5. Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems

    Science.gov (United States)

    Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark

    2016-03-01

    The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort, which, despite their effectiveness, limits the applicability of these geometrically-based Monte Carlo methods. In this paper we explore one way to address this issue by the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper provide a demonstration of the significant improvement possible in terms of computational loading, suggesting this is a promising avenue of further development.

  6. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given

  7. Stress partitioning behavior in an fcc alloy evaluated by the in situ/ex situ EBSD-Wilkinson method

    International Nuclear Information System (INIS)

    Hierarchical stress partitioning behavior among grains in the elasto-plastic region of a polycrystalline material was studied by a combined technique of in situ/ex situ electron backscatter diffraction based local strain measurement (the EBSD-Wilkinson method) and neutron diffraction measurements during tensile deformation. Elastic strains parallel to the tensile direction both during loading (e11) and after unloading (e'11) were measured. The volume-averaged stress partitioning among [hkl] family grains measured by the EBSD-Wilkinson method was in good agreement with that measured by neutron diffraction, but a more complicated strain distribution occurred microscopically because of constraints imposed by the surrounding grains.

  8. Bayesian methods outperform parsimony but at the expense of precision in the estimation of phylogeny from discrete morphological data.

    Science.gov (United States)

    O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J

    2016-04-01

    Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal-weights and implied-weights implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, while implied-weights parsimony performs most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. PMID:27095266

  9. Post hoc Analysis for Detecting Individual Rare Variant Risk Associations Using Probit Regression Bayesian Variable Selection Methods in Case-Control Sequencing Studies.

    Science.gov (United States)

    Larson, Nicholas B; McDonnell, Shannon; Albright, Lisa Cannon; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham; MacInnis, Robert; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catolona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J

    2016-09-01

    Rare variants (RVs) have been shown to be significant contributors to complex disease risk. By definition, these variants have very low minor allele frequencies and traditional single-marker methods for statistical analysis are underpowered for typical sequencing study sample sizes. Multimarker burden-type approaches attempt to identify aggregation of RVs across case-control status by analyzing relatively small partitions of the genome, such as genes. However, it is generally the case that the aggregative measure would be a mixture of causal and neutral variants, and these omnibus tests do not directly provide any indication of which RVs may be driving a given association. Recently, Bayesian variable selection approaches have been proposed to identify RV associations from a large set of RVs under consideration. Although these approaches have been shown to be powerful at detecting associations at the RV level, there are often computational limitations on the total quantity of RVs under consideration and compromises are necessary for large-scale application. Here, we propose a computationally efficient alternative formulation of this method using a probit regression approach specifically capable of simultaneously analyzing hundreds to thousands of RVs. We evaluate our approach to detect causal variation on simulated data and examine sensitivity and specificity in instances of high RV dimensionality as well as apply it to pathway-level RV analysis results from a prostate cancer (PC) risk case-control sequencing study. Finally, we discuss potential extensions and future directions of this work. PMID:27312771

  10. An automated method for estimating reliability of grid systems using Bayesian networks

    International Nuclear Information System (INIS)

    Grid computing has become relevant due to its applications to large-scale resource sharing, wide-area information transfer, and multi-institutional collaboration. In general, in grid computing a service requests the use of a set of resources, available in a grid, to complete certain tasks. Although analysis tools and techniques for these types of systems have been studied, grid reliability is generally computation-intensive to obtain due to the complexity of the system. Moreover, conventional reliability models rest on common assumptions that cannot be applied to grid systems. Therefore, new analytical methods are needed for effective and accurate assessment of grid reliability. This study presents a new method for estimating grid service reliability which, unlike previous studies, does not require prior knowledge about the grid system structure. Moreover, the proposed method does not rely on any assumptions about the link and node failure rates. The approach is based on a data-mining algorithm, the K2, to discover the grid system structure from raw historical system data; this makes it possible to find minimum resource spanning trees (MRST) within the grid, after which Bayesian networks (BN) are used to model the MRST and estimate grid service reliability.

  11. Quantifying and reducing uncertainty in life cycle assessment using the Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    The traditional life cycle assessment (LCA) does not perform quantitative uncertainty analysis. However, without characterizing the associated uncertainty, the reliability of assessment results cannot be understood or ascertained. In this study, the Bayesian method, in combination with the Monte Carlo technique, is used to quantify and update the uncertainty in LCA results. A case study applying the method to a comparison of alternative waste treatment options, in terms of global warming potential due to greenhouse gas emissions, is presented. In the case study, the prior distributions of the parameters used for estimating the emission inventory and environmental impact in LCA were based on expert judgment from the Intergovernmental Panel on Climate Change (IPCC) guideline and were subsequently updated using the likelihood distributions resulting from both national statistics and site-specific data. The posterior uncertainty distribution of the LCA results was generated using Monte Carlo simulations with the posterior parameter probability distributions. The results indicated that the incorporation of quantitative uncertainty analysis into LCA reveals more information than the deterministic LCA method, and the resulting decision may thus be different. In addition, in combination with the Monte Carlo simulation, calculation of correlation coefficients facilitated the identification of important parameters that have a major influence on the LCA results. Finally, by using national statistics and site-specific information to update the prior uncertainty distribution, the resulting uncertainty associated with the LCA results could be reduced. A better informed decision can therefore be made based on a clearer and more complete comparison of options.
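
    The updating scheme can be sketched as importance weighting of prior Monte Carlo draws by the likelihood of the observed data; everything below (the emission-factor prior, the site data, and the impact model) is hypothetical and chosen only to illustrate the mechanics:

```python
import numpy as np

def bayesian_monte_carlo_lca(n=100_000, seed=0):
    """Weight prior Monte Carlo draws of an emission factor by the
    likelihood of site-specific measurements, then propagate the
    posterior-weighted draws through a simple impact model."""
    rng = np.random.default_rng(seed)
    site_obs = np.array([1.02, 0.95, 1.10])           # measured factors (hypothetical)
    ef = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # prior from expert judgment
    loglik = -0.5 * ((site_obs[:, None] - ef) ** 2 / 0.05**2).sum(axis=0)
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    impact = 1000.0 * ef                              # kg CO2-eq per draw
    mean = np.sum(w * impact)
    sd = np.sqrt(np.sum(w * (impact - mean) ** 2))
    return mean, sd

print(bayesian_monte_carlo_lca())
```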

  12. Evaluating propagation method performance over time with Bayesian updating: an application to incubator testing

    Science.gov (United States)

    Converse, Sarah J.; Chandler, J. N.; Olsen, G.H.; Shafer, C. C.

    2010-01-01

    In captive-rearing programs, small sample sizes can limit the quality of information on the performance of propagation methods. Bayesian updating can be used to increase information on method performance over time. We demonstrate an application to incubator testing at the USGS Patuxent Wildlife Research Center. A new type of incubator was purchased for use in the whooping crane (Grus americana) propagation program, which produces birds for release. We tested the new incubator for reliability, using sandhill crane (Grus canadensis) eggs as surrogates. We determined that the new incubator should result in hatching rates no more than 5% lower than the available incubators, with 95% confidence, before it would be used to incubate whooping crane eggs. In 2007, 5 healthy chicks hatched from 12 eggs in the new incubator, and 2 hatched from 5 in an available incubator. Hatching success was judged by a veterinarian, who determined whether eggs produced chicks that, at hatching, had no apparent health problems that would impede future release. We used the 2007 estimates as priors in the 2008 analysis. In 2008, 7 normal chicks hatched from 15 eggs in the new incubator, and 11 hatched from 15 in an available incubator, for a median posterior difference of 19%, with 95% credible interval (-8%, 44%). The increased sample size has increased our understanding of incubator performance. While additional data will be collected, at this time the new incubator does not appear adequate for use with whooping crane eggs.
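
    The sequential updating lends itself to a conjugate beta-binomial sketch: each year's hatch/fail counts are added to the Beta parameters, so the 2007 posterior becomes the 2008 prior. Assuming flat Beta(1, 1) starting priors (an assumption, not stated in the record), the toy below roughly reproduces the 19% median difference quoted for 2008:

```python
import numpy as np

def hatch_rate_difference(a_new, b_new, a_avail, b_avail, n=100_000, seed=0):
    """Posterior of (available - new) incubator hatching rates under
    independent Beta posteriors; returns the median and a 95% interval."""
    rng = np.random.default_rng(seed)
    diff = rng.beta(a_avail, b_avail, n) - rng.beta(a_new, b_new, n)
    return np.median(diff), np.percentile(diff, [2.5, 97.5])

# new incubator: 5/12 hatched in 2007, then 7/15 in 2008
# available incubator: 2/5 in 2007, then 11/15 in 2008
print(hatch_rate_difference(1 + 5 + 7, 1 + 7 + 8, 1 + 2 + 11, 1 + 3 + 4))
```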

  13. Development of a Bayesian method for the analysis of inertial confinement fusion experiments on the NIF

    International Nuclear Information System (INIS)

    The complex nature of inertial confinement fusion (ICF) experiments results in a very large number of experimental parameters which, when combined with the myriad physical models that govern target evolution, make the reliable extraction of physics from experimental campaigns very difficult. We develop an inference method that allows all important experimental parameters, and previous knowledge, to be taken into account when investigating underlying microphysics models. The result is framed as a modified χ2 analysis which is easy to implement in existing analyses, and quite portable. We present a first application to a recent convergent ablator experiment performed at the National Ignition Facility (NIF), and investigate the effect of variations in all physical dimensions of the target (very difficult to do using other methods). We show that for well characterized targets in which dimensions vary at the 0.5% level there is little effect, but 3% variations change the results of inferences dramatically. Our Bayesian method allows particular inference results to be associated with prior errors in microphysics models; in our example, tuning the carbon opacity to match experimental data (i.e. ignoring prior knowledge) is equivalent to an assumed prior error of 400% in the tabop opacity tables. This large error is unreasonable, underlining the importance of including prior knowledge in the analysis of these experiments. (paper)

  14. Using Bayesian methods to predict climate impacts on groundwater availability and agricultural production in Punjab, India

    Science.gov (United States)

    Russo, T. A.; Devineni, N.; Lall, U.

    2015-12-01

    Lasting success of the Green Revolution in Punjab, India relies on the continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; therefore, we use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state and the resolution of the input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.

  15. IAEA Coordinated studies on minimization of process losses in pyro-chemical partitioning methods with view to minimize environmental impact

    International Nuclear Information System (INIS)

    Partitioning and Transmutation (P and T) of Minor Actinide elements (MAs) arising from the back-end of the fuel cycle would be one of the key steps in any future sustainable nuclear fuel cycle. Several Member States and international organizations have recently been evaluating various fuel cycle scenarios that incorporate P and T techniques as improvements to conventional fuel cycles. Assessments of different fuel cycle strategies incorporating P and T, in comparison with the once-through fuel cycle, are being carried out globally, and crucial future requirements are being identified. Pyro-chemical separation methods would form a critical stage of some advanced fuel cycles (e.g. the so-called 'double strata'), providing for the separation of long-lived fissionable elements and thereby reducing the potential environmental impact and proliferation risk of the back-end of the fuel cycle. Minimization of MAs in the disposable waste fraction during the separation process can contribute considerably to improved protection of the environment. The Agency has initiated a Coordinated Research Programme on this subject with the aim of coordinating R and D activities in the area of the minimization of process losses in pyro-chemical partitioning methods. The presentation focuses on the identification of development needs in various pertinent areas, such as: i) appraisal of the minimization of process losses in separation processes and subsequent flow-sheet adjustment; ii) advanced characterization methods to characterize process losses; iii) establishing the criteria for the separation processes; iv) development of a list of critical radionuclide inventories; v) basic studies to compare dry partitioning (pyro-processing) with aqueous partitioning processes; vi) assessment of the environmental impact associated with partitioning processes; and vii) identification of proliferation-resistance attributes of partitioning methods.

  16. Bayesian Word Sense Induction

    OpenAIRE

    Brody, Samuel; Lapata, Mirella

    2009-01-01

    Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...

  17. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2016-01-01

    This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. In particular, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool for encoding dependence relationships among variables under uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer functional interaction patterns based on the corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, namely the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more refined Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come.
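
    As a much simpler cousin of the reviewed models, a single mean-shift change point can be located by putting a uniform prior over interior locations and scoring each with a Gaussian likelihood. The sketch below plugs in segment-mean estimates rather than integrating them out, so it is only a crude illustration, far simpler than BMCPM, BCCPM, or DBVPM:

```python
import numpy as np

def changepoint_posterior(x, sigma=1.0):
    """Posterior over the location of one mean-shift change point,
    with a uniform prior over interior locations, known noise sigma,
    and segment means plugged in at their sample estimates."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loglik = np.full(n, -np.inf)
    for k in range(2, n - 1):          # at least two points per segment
        left, right = x[:k], x[k:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        loglik[k] = -0.5 * rss / sigma**2
    w = np.exp(loglik - loglik.max())
    return w / w.sum()

rng = np.random.default_rng(1)
series = np.r_[rng.normal(0, 1, 50), rng.normal(2, 1, 50)]
print(changepoint_posterior(series).argmax())   # near the true boundary, 50
```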

  18. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  19. The Method of Oilfield Development Risk Forecasting and Early Warning Using Revised Bayesian Network

    Directory of Open Access Journals (Sweden)

    Yihua Zhong

    2016-01-01

    Oilfield development aiming at crude oil production is an extremely complex process involving many uncertain risk factors that affect oil output. Risk prediction and early warning for oilfield development can therefore help oilfields be operated and managed efficiently, so as to meet the country's oil production plan and ensure the sustainable development of oilfields. However, scholars and practitioners worldwide have seldom addressed the risk problem of oilfield block development. An early warning index system for block development, comprising monitoring indexes and planning indexes, was refined and formulated on the basis of research and analysis of risk forecasting and early warning theory as well as oilfield development. Based on the warning-situation indexes predicted by a neural network, a method for dividing the intervals of warning degrees was presented using the “3σ” rule, and a new method for risk forecasting and early warning was proposed by introducing neural networks into Bayesian networks. A case study shows that the results obtained in this paper are correct and helpful for the management of oilfield development risk.
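
    One plausible reading of the “3σ” interval division is sketched below; the thresholds and warning labels are illustrative assumptions chosen here, not taken from the paper:

```python
import numpy as np

def warning_degree(index_history, predicted_index):
    """Grade a predicted warning index with the '3 sigma' rule: intervals
    at the historical mean +/- 1, 2 and 3 standard deviations partition
    the index range into warning degrees."""
    mu = np.mean(index_history)
    sigma = np.std(index_history)
    z = abs(predicted_index - mu) / sigma
    if z < 1.0:
        return "normal"
    if z < 2.0:
        return "light warning"
    if z < 3.0:
        return "medium warning"
    return "severe warning"

history = [0.42, 0.45, 0.40, 0.47, 0.44, 0.43]
print(warning_degree(history, 0.55))
```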

  20. Development of Bayesian-based transformation method of Landsat imagery into pseudo-hyperspectral imagery

    Science.gov (United States)

    Hoang, Nguyen Tien; Koike, Katsuaki

    2015-10-01

    It has been generally accepted that hyperspectral remote sensing is more effective and provides greater accuracy than multispectral remote sensing in many application fields. EO-1 Hyperion, a representative hyperspectral sensor, has many more spectral bands, while Landsat data has a much wider image scene and a longer continuous space-based record of Earth's land. This study aims to develop a new method, the Pseudo-Hyperspectral Image Synthesis Algorithm (PHISA), to transform Landsat imagery into pseudo-hyperspectral imagery using the correlation between Landsat and EO-1 Hyperion data. First, the Hyperion scene was precisely pre-processed and co-registered to the Landsat scene, and both data sets were corrected for atmospheric effects. The Bayesian model averaging (BMA) method was applied to select the best model from a class of several possible models. Subsequently, this best model was used to calculate the pseudo-hyperspectral data in R. Based on the selection results of BMA, we transform Landsat imagery into 155 bands of pseudo-hyperspectral imagery. Most models have multiple R-squared values higher than 90%, which assures the high accuracy of the models. There are no significant visual differences between the pseudo- and original data. Most bands have high Pearson's coefficients, with only a few bands showing coefficients < 0.93 as outliers in the data sets. Similarly, most Root Mean Square Error values are considerably low, smaller than 0.014. These observations strongly support that the proposed PHISA is valid for transforming Landsat data into pseudo-hyperspectral data from a statistical standpoint.

  1. Introduction to the Restoration of Astrophysical Images by Multiscale Transforms and Bayesian Methods

    Science.gov (United States)

    Bijaoui, A.

    2013-03-01

    Image restoration is today an important part of astrophysical data analysis, and denoising and deblurring can be performed efficiently using multiscale transforms. Multiresolution analysis constitutes the fundamental pillar of these transforms. The discrete wavelet transform is introduced from the theory of approximation by translated functions. The continuous wavelet transform generalizes multiscale representations through translated and dilated wavelets, and the à trous algorithm furnishes its discrete redundant transform. Image denoising is first considered without any hypothesis on the signal distribution, on the basis of a contrario detection. Different softening functions are introduced, and the introduction of a regularization constraint may improve the results. The application of Bayesian methods leads to an automated adaptation of the softening function to the signal distribution. The MAP principle leads to basis pursuit, a sparse decomposition on redundant dictionaries, whereas the posterior expectation minimizes, scale per scale, the quadratic error. The proposed deconvolution algorithm is based on coupling wavelet denoising with an iterative inversion algorithm. The different methods are illustrated by numerical experiments on a simulated image similar to images of the deep sky, to which white Gaussian stationary noise was added at three levels. In the conclusion, several important related problems are discussed.
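
    To make the multiscale denoising idea concrete, here is a small self-contained Python sketch of redundant à trous wavelet denoising with soft thresholding. The B3-spline kernel is standard for the à trous transform, but the per-scale threshold 3σ/2^j is only a rough heuristic, not the chapter's prescription.

```python
import numpy as np

def soft_threshold(w, t):
    """Soft 'softening' of wavelet coefficients toward zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def atrous_smooth(img, level):
    """One a trous smoothing pass (separable B3-spline kernel, holes 2**level).

    np.roll gives periodic boundaries, which is enough for a sketch.
    """
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    step = 2 ** level
    out = img.copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for i, c in enumerate(k):
            acc += c * np.roll(out, (i - 2) * step, axis=axis)
        out = acc
    return out

def atrous_denoise(img, levels=4, sigma=1.0):
    """Redundant a trous denoising; 3*sigma/2**j is a rough per-scale threshold."""
    smooth, details = img.astype(float), []
    for j in range(levels):
        s = atrous_smooth(smooth, j)
        details.append(soft_threshold(smooth - s, 3.0 * sigma / 2 ** j))
        smooth = s
    return smooth + sum(details)

noisy = np.random.default_rng(2).normal(0.0, 1.0, (64, 64))
print(atrous_denoise(noisy).std())  # well below 1.0: the noise is suppressed
```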

  2. Assessment of Agricultural Water Management in Punjab, India using Bayesian Methods

    Science.gov (United States)

    Russo, T. A.; Devineni, N.; Lall, U.; Sidhu, R.

    2013-12-01

    The success of the Green Revolution in Punjab, India is threatened by the declining water table (approx. 1 m/yr). Punjab, a major agricultural supplier for the rest of India, supports irrigation with a canal system and groundwater, which is vastly over-exploited. Groundwater development in many districts is greater than 200% of the annual recharge rate. The hydrologic data required to complete a mass-balance model are not available for this region; we therefore use Bayesian methods to estimate hydrologic properties and irrigation requirements. Using the known values of precipitation, total canal water delivery, crop yield, and water table elevation, we solve for each unknown parameter (often a coefficient) using a Markov chain Monte Carlo (MCMC) algorithm. Results provide regional estimates of irrigation requirements and groundwater recharge rates under observed climate conditions (1972 to 2002). Model results are used to estimate future water availability and demand to help inform agricultural management decisions under projected climate conditions. We find that changing cropping patterns for the region can maintain food production while balancing groundwater pumping with natural recharge. This computational method can be applied in data-scarce regions across the world, where agricultural water management is required to resolve competition between food security and changing resource availability.
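
    A minimal sketch of the MCMC idea in this setting: a random-walk Metropolis sampler infers a single recharge coefficient from synthetic water-table observations generated by a toy mass balance. The balance equation, the noise level, and all parameter names are hypothetical stand-ins for the study's model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy mass balance: water-table change = (c * rain + canal - pumping) / Sy.
# Only the recharge coefficient c is treated as unknown here.
rain  = rng.uniform(400.0, 900.0, 30)      # mm/yr
canal = rng.uniform(100.0, 200.0, 30)
pump  = rng.uniform(500.0, 800.0, 30)
dh_obs = (0.25 * rain + canal - pump) / 150.0 + rng.normal(0.0, 0.2, 30)

def log_post(c):
    """Uniform prior on (0, 1) plus Gaussian likelihood of observed changes."""
    if not 0.0 < c < 1.0:
        return -np.inf
    dh = (c * rain + canal - pump) / 150.0
    return -0.5 * np.sum((dh_obs - dh) ** 2 / 0.2 ** 2)

c, samples = 0.5, []
for _ in range(20000):                     # random-walk Metropolis
    prop = c + rng.normal(0.0, 0.02)
    if np.log(rng.uniform()) < log_post(prop) - log_post(c):
        c = prop
    samples.append(c)
post = np.array(samples[5000:])            # discard burn-in
print(post.mean(), post.std())             # should bracket the true 0.25
```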

  3. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters. I. Statistical and Computational Methods

    Science.gov (United States)

    Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.

    2016-07-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).

  4. From low-dimensional model selection to high-dimensional inference: tailoring Bayesian methods to biological dynamical systems

    OpenAIRE

    Hug, Sabine Carolin

    2015-01-01

    In this thesis we use differential equations for mathematically representing biological processes. For this we have to infer the associated parameters for fitting the differential equations to measurement data. If the structure of the ODE itself is uncertain, model selection methods have to be applied. We refine several existing Bayesian methods, ranging from an adaptive scheme for the computation of high-dimensional integrals to multi-chain Metropolis-Hastings algorithms for high-dimensional...

  5. Blending Bayesian and frequentist methods according to the precision of prior information with an application to hypothesis testing

    OpenAIRE

    Bickel, David R.

    2011-01-01

    The following zero-sum game between nature and a statistician blends Bayesian methods with frequentist methods such as p-values and confidence intervals. Nature chooses a posterior distribution consistent with a set of possible priors. At the same time, the statistician selects a parameter distribution for inference with the goal of maximizing the minimum Kullback-Leibler information gained over a confidence distribution or other benchmark distribution. An application to testing a simple null...

  6. Bayesian regression models outperform partial least squares methods for predicting milk components and technological properties using infrared spectral data.

    Science.gov (United States)

    Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G

    2015-11-01

    The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm(-1) were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from
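
    To illustrate the kind of comparison reported above, the sketch below fits PLS and a Bayesian shrinkage model to synthetic high-dimensional "spectra" with scikit-learn. It is a stand-in, not the paper's BGLR/WinISI pipeline: Bayesian ridge is the closest readily available analogue of Bayes RR, and the data are simulated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Stand-in spectra: 1264 samples x 1060 "wavenumbers"; the trait depends
# sparsely on a few spectral regions, the setting where shrinkage helps.
X = rng.normal(size=(1264, 1060))
y = X[:, 100] - 0.5 * X[:, 640] + rng.normal(0.0, 0.5, 1264)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=10).fit(Xtr, ytr)
bay = BayesianRidge().fit(Xtr, ytr)        # evidence-driven shrinkage

for name, m in [("PLS", pls), ("Bayesian ridge", bay)]:
    pred = np.ravel(m.predict(Xte))
    print(name, "r =", round(np.corrcoef(pred, yte)[0, 1], 3))
```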

  7. An Approximate Bayesian Method Applied to Estimating the Trajectories of Four British Grey Seal (Halichoerus grypus) Populations from Pup Counts

    Directory of Open Access Journals (Sweden)

    Mike Lonergan

    2011-01-01

    For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. We present an approximate Bayesian method for fitting pup trajectories, estimating adult population size and investigating alternative biological models. The method is equivalent to fitting a density-dependent Leslie matrix model, within a Bayesian framework, but with the forms of the density-dependent effects as outputs rather than assumptions. It requires fewer assumptions than the state space models currently used and produces similar estimates. We discuss the potential and limitations of the method and suggest that this approach provides a useful tool for at least the preliminary analysis of similar datasets.

  8. Estimation of octanol/water partition coefficient and aqueous solubility of environmental chemicals using molecular fingerprints and machine learning methods

    Science.gov (United States)

    Octanol/water partition coefficient (logP) and aqueous solubility (logS) are two important parameters in pharmacology and toxicology studies, and experimental measurements are usually time-consuming and expensive. In the present research, novel methods are presented for the estim...

  9. Alternative method of highway traffic safety analysis for developing countries using delphi technique and Bayesian network.

    Science.gov (United States)

    Mbakwe, Anthony C; Saka, Anthony A; Choi, Keechoo; Lee, Young-Jae

    2016-08-01

    Highway traffic accidents all over the world result in more than 1.3 million fatalities annually, and an alarming number of these fatalities occur in developing countries. Many risk factors are associated with frequent accidents, heavy loss of lives, and property damage in developing countries. Unfortunately, poor record-keeping practices are a very difficult obstacle to overcome in striving to obtain near-accurate casualty and safety data. In light of the fact that there are numerous accident causes, any attempt to curb the escalating death and injury rates in developing countries must include the identification of the primary accident causes. This paper therefore seeks to show that the Delphi Technique is a suitable alternative method for generating highway traffic accident data through which the major accident causes can be identified. To authenticate the technique, Korea, a country that underwent similar problems in the early stages of its development and that has excellent highway safety records in its database, is chosen and utilized for this purpose. Validation of the methodology confirms that the technique is suitable for application in developing countries. Furthermore, the Delphi Technique, in combination with the Bayesian Network Model, is utilized in modeling highway traffic accidents and forecasting accident rates in the countries of research. PMID:27183516

  10. Rapid purification method for fumonisin B1 using centrifugal partition chromatography.

    Science.gov (United States)

    Szekeres, A; Lorántfy, L; Bencsik, O; Kecskeméti, A; Szécsi, Á; Mesterházy, Á; Vágvölgyi, Cs

    2013-01-01

    Fumonisin B1 (FB1) is a highly toxic mycotoxin produced by fungal strains belonging to the Fusarium genus; it is found mainly in maize products and is gaining attention in food safety. To produce large amounts of pure FB1, a novel purification method was developed using centrifugal partition chromatography, a prominent member of the liquid-liquid chromatographic techniques. Rice cultured with Fusarium verticillioides was extracted with a mixture of methanol/water and found to contain 0.87 mg of FB1 per gram. The crude extracts were purified on a strong anion-exchange column and then separated using a biphasic solvent system consisting of methyl tert-butyl ether-acetonitrile-0.1% formic acid in water. The collected fractions were analysed by flow injection-mass spectrometry and high-performance liquid chromatography coupled with a Corona charged aerosol detector, and identified by congruent retention times on high-performance liquid chromatography and mass spectrometric data. This method produced approximately 120 mg of FB1 with a purity of more than 98% from 200 g of the rice culture. The whole purification process is able to produce a large amount of pure FB1 for analytical applications or for toxicological studies. PMID:23043634

  11. A Unified Method for Inference of Tokamak Equilibria and Validation of Force-Balance Models Based on Bayesian Analysis

    CERN Document Server

    von Nessi, G T

    2012-01-01

    A new method, based on Bayesian analysis, is presented which unifies the inference of plasma equilibrium parameters in a Tokamak with the ability to quantify differences between inferred equilibria and Grad-Shafranov force-balance solutions. At the heart of this technique is the new method of observation splitting, which allows multiple forward models to be associated with a single diagnostic observation. This new idea subsequently provides a means by which the space of GS solutions can be efficiently characterised via a prior distribution. Moreover, by folding force-balance directly into one set of forward models and utilising simple Biot-Savart responses in another, the Bayesian inference of the plasma parameters itself produces an evidence (a normalisation constant of the inferred posterior distribution) which is sensitive to the relative consistency between both sets of models. This evidence can then be used to help determine the relative accuracy of the tested force-balance model across several discha...

  12. A Laplace method for under-determined Bayesian optimal experimental designs

    KAUST Repository

    Long, Quan

    2014-12-17

    In Long et al. (2013), a new method based on the Laplace approximation was developed to accelerate the estimation of the post-experimental expected information gains (Kullback–Leibler divergence) in model parameters and predictive quantities of interest in the Bayesian framework. A closed-form asymptotic approximation of the inner integral and the order of the corresponding dominant error term were obtained in the cases where the parameters are determined by the experiment. In this work, we extend that method to the general case where the model parameters cannot be determined completely by the data from the proposed experiments. We carry out the Laplace approximations in the directions orthogonal to the null space of the Jacobian matrix of the data model with respect to the parameters, so that the information gain can be reduced to an integration against the marginal density of the transformed parameters that are not determined by the experiments. Furthermore, the expected information gain can be approximated by an integration over the prior, where the integrand is a function of the posterior covariance matrix projected over the aforementioned orthogonal directions. To deal with the issue of dimensionality in a complex problem, we use either Monte Carlo sampling or sparse quadratures for the integration over the prior probability density function, depending on the regularity of the integrand function. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear under-determined test cases. They include the designs of the scalar parameter in a one dimensional cubic polynomial function with two unidentifiable parameters forming a linear manifold, and the boundary source locations for impedance tomography in a square domain, where the unknown parameter is the conductivity, which is represented as a random field.
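
    The under-determined setting can be checked against the one case with a closed form. For a linear model with Gaussian prior and noise, the expected information gain is 0.5 logdet(I + S GᵀG/σ²) regardless of how ill-posed G is; the short sketch below evaluates it for a design whose null space leaves one parameter combination unidentified. This is a textbook special case for intuition, not the paper's Laplace algorithm.

```python
import numpy as np

# Linear-Gaussian special case: prior N(0, S), noise N(0, s2*I), data y = G x.
# Expected information gain has the closed form 0.5*logdet(I + S G^T G / s2),
# valid even when G has a null space (an under-determined experiment).
S, s2 = np.eye(3), 0.1 ** 2
G = np.array([[1.0, 1.0, 0.0],   # only x1 + x2 and x3 are observable;
              [0.0, 0.0, 1.0]])  # the direction x1 - x2 stays unidentified
eig = 0.5 * np.linalg.slogdet(np.eye(3) + S @ G.T @ G / s2)[1]
print("expected information gain (nats):", eig)
```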

  13. An Approximate Bayesian Method Applied to Estimating the Trajectories of Four British Grey Seal (Halichoerus grypus) Populations from Pup Counts

    OpenAIRE

    Mike Lonergan; Dave Thompson; Len Thomas; Callan Duck

    2011-01-01

    1. For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially, but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. 2. We present an approximate Bayesian method for fitting pup trajectories, estimat...

  14. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic: While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data: Emphasizing probability as an alternative to Boolean

  15. Numerical modeling of undersea acoustics using a partition of unity method with plane waves enrichment

    Science.gov (United States)

    Hospital-Bravo, Raúl; Sarrate, Josep; Díez, Pedro

    2016-05-01

    A new 2D numerical model to predict underwater acoustic propagation is obtained by exploring the potential of the Partition of Unity Method (PUM) enriched with plane waves. The aim of the work is to obtain sound pressure level distributions when multiple operational noise sources are present, in order to assess the acoustic impact on marine fauna. The model takes advantage of the suitability of the PUM for solving the Helmholtz equation, especially in the practical case of large domains and medium frequencies. The seawater acoustic absorption and the acoustic reflectance of the sea surface and sea bottom are explicitly considered, and perfectly matched layers (PML) are placed at the lateral artificial boundaries to avoid spurious reflections. The model includes semi-analytical integration rules adapted to highly oscillatory integrands, with the aim of reducing the computational cost of the integration step. In addition, we develop a novel strategy to mitigate the ill-conditioning of the elemental and global system matrices. Specifically, we compute a low-rank approximation of the local space of solutions, which in turn reduces the number of degrees of freedom, the CPU time and the memory footprint. Numerical examples are presented to illustrate the capabilities of the model and to assess its accuracy.

  16. Bayesian meta-analytical methods to incorporate multiple surrogate endpoints in drug development process.

    Science.gov (United States)

    Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R

    2016-03-30

    A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed, first, using an unstructured between-study covariance matrix by assuming the treatment effects on all outcomes are correlated and second, using a structured between-study covariance matrix by assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for the summary data on a study level, the individual-level association is taken into account by the use of the Prentice's criteria (obtained from individual patient data) to inform the within study correlations in the models. The modelling techniques are investigated using an example in relapsing remitting multiple sclerosis where the disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates to the disability progression. PMID:26530518

  17. A Bayesian method for comparing and combining binary classifiers in the absence of a gold standard

    Directory of Open Access Journals (Sweden)

    Keith Jonathan M

    2012-07-01

    Background: Many problems in bioinformatics involve classification based on features such as sequence, structure or morphology. Given multiple classifiers, two crucial questions arise: how does their performance compare, and how can they best be combined to produce a better classifier? A classifier can be evaluated in terms of sensitivity and specificity using benchmark, or gold standard, data, that is, data for which the true classification is known. However, a gold standard is not always available. Here we demonstrate that a Bayesian model for comparing medical diagnostics without a gold standard can be successfully applied in the bioinformatics domain, to genomic scale data sets. We present a new implementation, which unlike previous implementations is applicable to any number of classifiers. We apply this model, for the first time, to the problem of finding the globally optimal logical combination of classifiers. Results: We compared three classifiers of protein subcellular localisation, and evaluated our estimates of sensitivity and specificity against estimates obtained using a gold standard. The method overestimated sensitivity and specificity with only a small discrepancy, and correctly ranked the classifiers. Diagnostic tests for swine flu were then compared on a small data set. Lastly, classifiers for a genome-wide association study of macular degeneration with 541094 SNPs were analysed. In all cases, run times were feasible, and results precise. The optimal logical combination of classifiers was also determined for all three data sets. Code and data are available from http://bioinformatics.monash.edu.au/downloads/. Conclusions: The examples demonstrate the methods are suitable for both small and large data sets, applicable to the wide range of bioinformatics classification problems, and robust to dependence between classifiers. In all three test cases, the globally optimal logical combination of the classifiers was found to be
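
    The core trick, estimating sensitivities and specificities with no gold standard, can be illustrated with a maximum-likelihood EM simplification of such a latent-class model (the paper itself uses a Bayesian formulation; everything below, including the simulated classifiers, is a stand-in):

```python
import numpy as np

rng = np.random.default_rng(5)
truth = rng.uniform(size=5000) < 0.3               # hidden true labels
sens, spec = [0.9, 0.7, 0.8], [0.95, 0.85, 0.9]    # simulated classifiers
votes = np.stack([np.where(truth,
                           rng.uniform(size=5000) < se,
                           rng.uniform(size=5000) > sp)
                  for se, sp in zip(sens, spec)], axis=1)

# EM for prevalence, sensitivity and specificity without any gold standard.
pi, se, sp = 0.5, np.full(3, 0.8), np.full(3, 0.8)
for _ in range(200):
    l1 = pi * np.prod(np.where(votes, se, 1 - se), axis=1)
    l0 = (1 - pi) * np.prod(np.where(votes, 1 - sp, sp), axis=1)
    r = l1 / (l1 + l0)                             # P(label = 1 | votes)
    pi = r.mean()
    se = (r[:, None] * votes).sum(0) / r.sum()
    sp = ((1 - r)[:, None] * (1 - votes)).sum(0) / (1 - r).sum()
print(round(pi, 3), se.round(3), sp.round(3))      # near the simulated values
```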

  18. Influence of Incorporation Methods on Partitioning Behavior of Lipophilic Drugs into Various Phases of a Parenteral Lipid Emulsion

    OpenAIRE

    Sila-on, Warisada; Vardhanabhuti, Nontima; Ongpipattanakul, Boonsri; Kulvanich, Poj

    2008-01-01

    The purpose of this study was to investigate the effect of drug incorporation methods on the partitioning behavior of lipophilic drugs in parenteral lipid emulsions. Four lipophilic benzodiazepines, alprazolam, clonazepam, diazepam, and lorazepam, were used as model drugs. Two methods were used to incorporate drugs into an emulsion: dissolving the compound in the oil phase prior to emulsification (de novo emulsification), and directly adding a concentrated solution of drug in a solubilizer to...

  19. Estimates of European emissions of methyl chloroform using a Bayesian inversion method

    Directory of Open Access Journals (Sweden)

    M. Maione

    2014-03-01

    Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the Montreal Protocol on Substances that Deplete the Ozone Layer. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline in the background mixing ratios of MCF. However, we observe persistent, non-negligible mixing ratio enhancements of MCF in pollution episodes, suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to estimate the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified south-eastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an eleven-year data set, from January 2002 to December 2012. Overall emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr−1 in 2002 to 0.32 Gg yr−1 in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr−1 in 2002 and 0.20 Gg yr−1 in 2012. The European estimates are a significant fraction of the total semi-hemisphere (30–90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible for a minimum of 2.6% (in 2003) to a maximum of 10.3% (in 2009) of the global MCF emissions.

  20. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    Science.gov (United States)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

    Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level 2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). Comparing the merged AOD to the Aerosol Robotic Network AOD records, the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those of the MODIS AOD (0.82, 0.19, and 0.059, respectively). In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of the MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is close to the accuracy of the regions where both have valid observations.
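
    The role that product uncertainties play in the merge can be seen in the simplest possible special case: inverse-variance (precision-weighted) fusion of two collocated Gaussian estimates. This ignores the spatiotemporal autocorrelation that BME exploits, and the error levels below are assumptions loosely inspired by the RMSE values quoted above.

```python
import numpy as np

# Two collocated AOD estimates with assumed Gaussian error levels.
aod = np.array([0.35, 0.28])          # MODIS-like, SeaWiFS-like values
sd  = np.array([0.19, 0.29])          # assumed per-product error std. devs
w = (1 / sd ** 2) / np.sum(1 / sd ** 2)       # precision weights
merged = np.sum(w * aod)
merged_sd = np.sqrt(1 / np.sum(1 / sd ** 2))  # fused uncertainty
print(merged, merged_sd)
```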

  1. A simulated annealing-based method for learning Bayesian networks from statistical data

    Czech Academy of Sciences Publication Activity Database

    Janžura, Martin; Nielsen, Jan

    2006-01-01

    Roč. 21, č. 3 (2006), s. 335-348. ISSN 0884-8173 R&D Projects: GA ČR GA201/03/0478 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian network * simulated annealing * Markov Chain Monte Carlo Subject RIV: BA - General Mathematics Impact factor: 0.429, year: 2006
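
    The record gives only keywords, but the combination it names is easy to illustrate: score-based structure learning over directed acyclic graphs with simulated annealing as the search strategy. The sketch below uses a BIC score for binary variables and a single-edge toggle move; it is a generic illustration, not the authors' algorithm.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
# Toy binary data generated from the chain A -> B -> C.
A = rng.uniform(size=2000) < 0.5
B = np.where(A, rng.uniform(size=2000) < 0.8, rng.uniform(size=2000) < 0.2)
C = np.where(B, rng.uniform(size=2000) < 0.7, rng.uniform(size=2000) < 0.1)
data = np.stack([A, B, C], axis=1).astype(int)
n, d = data.shape

def family_bic(child, parents):
    """BIC contribution of one node given its parent set (binary variables)."""
    ll, params = 0.0, 0
    for combo in product([0, 1], repeat=len(parents)):
        mask = (np.all(data[:, parents] == combo, axis=1)
                if parents else np.ones(n, bool))
        m = mask.sum()
        if m == 0:
            continue
        p1 = np.clip(data[mask, child].mean(), 1e-6, 1 - 1e-6)
        ll += m * (p1 * np.log(p1) + (1 - p1) * np.log(1 - p1))
        params += 1
    return ll - 0.5 * params * np.log(n)

def score(adj):
    return sum(family_bic(j, [i for i in range(d) if adj[i, j]])
               for j in range(d))

def is_dag(adj):
    # Acyclic iff the diagonal of (I + A)^d stays all ones.
    return np.trace(np.linalg.matrix_power(np.eye(d) + adj, d)) == d

adj, T = np.zeros((d, d), dtype=int), 2.0
cur = score(adj)
for _ in range(4000):                     # simulated annealing over edges
    i, j = rng.integers(d), rng.integers(d)
    if i == j:
        continue
    cand = adj.copy()
    cand[i, j] ^= 1                       # toggle a single edge
    if not is_dag(cand):
        continue
    s = score(cand)
    if s > cur or rng.uniform() < np.exp((s - cur) / T):
        adj, cur = cand, s
    T *= 0.999                            # geometric cooling
print(adj)                                # should recover the chain's edges
```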

  2. Optimum Inductive Methods. A study in Inductive Probability, Bayesian Statistics, and Verisimilitude.

    NARCIS (Netherlands)

    Festa, Roberto

    1992-01-01

    According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes' theorem. One of the most important

  3. A Simple Bayesian Climate Index Weighting Method for Seasonal Ensemble Forecasting

    Science.gov (United States)

    Bradley, A.; Habib, M. A.; Schwartz, S. S.

    2014-12-01

    Climate information — in the form of a measure of climate state or a climate forecast — can be an important predictor of future hydrologic conditions. For instance, streamflow variability for many locations around the globe is related to large-scale atmospheric oscillations, like the El Niño Southern Oscillation (ENSO) or the Pacific Decadal Oscillation (PDO). Furthermore, climate forecast models are growing more skillful in their predictions of future climate variables on seasonal time scales. Finding effective ways to translate this climate information into improved hydrometeorological predictions is an area of ongoing research. In ensemble streamflow forecasting, where historical weather inputs or streamflow observations are used to generate the ensemble, climate index weighting is one way to represent the influence of current climate information. Using a climate index, each forecast variable member of the ensemble is selectively weighted to reflect climate conditions at the time of the forecast. A simple Bayesian climate index weighting of ensemble forecasts is presented. The original hydrologic ensemble members define a sample of the prior distribution; the relationship between the climate index and the ensemble member forecast variable is used to estimate a likelihood function. Given an observation of the climate index at the time of the forecast, the estimated likelihood function is then used to assign weights to each ensemble member. The weighted ensemble forecast is then used to estimate the posterior distribution of the forecast variable conditioned on the climate index. The proposed approach has several advantages over traditional climate index weighting methods. The weights assigned to the ensemble members accomplish the updating of the (prior) ensemble forecast distribution based on Bayes' theorem, so the method is theoretically sound. The method also automatically adapts to the strength of the relationship between the climate index and the
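
    The weighting step can be written in a few lines. The sketch below assumes, purely for illustration, that the climate index is Gaussian around a linear regression on the forecast variable; the historical pairs are synthetic and this likelihood form is one plausible choice, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
# Historical pairs: a climate index (e.g. an ENSO-like index) and the
# seasonal flow of the corresponding year; flows double as ensemble members.
index_hist = rng.normal(0.0, 1.0, 60)
flow_hist = 80.0 + 25.0 * index_hist + rng.normal(0.0, 15.0, 60)

# Likelihood model (an assumption): index | flow is Gaussian around a line.
slope, intercept = np.polyfit(flow_hist, index_hist, 1)
resid_sd = np.std(index_hist - (intercept + slope * flow_hist))

index_now = 1.2                        # index observed at forecast time
w = norm.pdf(index_now, loc=intercept + slope * flow_hist, scale=resid_sd)
w /= w.sum()                           # Bayes: posterior member weights

print("prior mean flow:    ", flow_hist.mean())
print("posterior mean flow:", np.sum(w * flow_hist))
```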

  4. Application of collocation spectral domain decomposition method to solve radiative heat transfer in 2D partitioned domains

    International Nuclear Information System (INIS)

    A collocation spectral domain decomposition method (CSDDM) based on the influence matrix technique is developed to solve radiative transfer problems within a participating medium of 2D partitioned domains. In this numerical approach, the spatial domains of interest are decomposed into rectangular sub-domains. The radiative transfer equation (RTE) in each sub-domain is angularly discretized by the discrete ordinates method (DOM) with the SRAPN quadrature scheme and then solved by the CSDDM directly. Three test geometries, including a square enclosure and two enclosures with one baffle and one centered obstruction, are used to validate the accuracy of the developed method, and the numerical results are compared to data obtained by other researchers. These comparisons indicate that the CSDDM has good accuracy for all solutions and can therefore be considered a useful approach for the solution of radiative heat transfer problems in 2D partitioned domains. - Highlights: • A collocation spectral domain decomposition method (CSDDM) for the 2D RTE. • The CSDDM for the 2D RTE in partitioned domains. • Continuity at the interface by the influence matrix technique. • Direct solving by the CSDDM in the iterative process. • High-order and accurate solution in complex geometries

  5. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit

  6. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    International Nuclear Information System (INIS)

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit reveals a flipped-sign solution in addition to a standard-model-like solution for the couplings Ci. The two solutions are related

  7. On the partitioning method and the perturbation quantum theory - discrete spectra

    International Nuclear Information System (INIS)

    Lower and upper bounds to eigenvalues of the Schroedinger equation H Ψ = E Ψ (H = H0 + V), and the convergence condition in Schonberg's perturbation theory, are presented. These results are obtained using the partitioning technique. A perturbation treatment obtained when the reference function in the partitioning technique is chosen to be a true eigenfunction Ψ is presented for the first time. The convergence condition and upper and lower bounds for the true eigenvalues E are derived in this formulation. The concepts of the reaction and wave operators are also discussed. (author)

  8. Genetic Properties of Some Economic Traits in Isfahan Native Fowl Using Bayesian and REML Methods

    Directory of Open Access Journals (Sweden)

    Salehinasab M

    2015-12-01

    The objective of the present study was to estimate heritability values for some performance and egg quality traits of native fowl in the Isfahan breeding center using REML and Bayesian approaches. There were about 51521 and 975 records for performance and egg quality traits, respectively. In the first step, variance components were estimated for body weight at hatch (BW0), body weight at 8 weeks of age (BW8), weight at sexual maturity (WSM), egg yolk weight (YW), egg Haugh unit and eggshell thickness, via the REML approach using the ASREML software. In the second step, the same traits were analyzed via the Bayesian approach using the Gibbs3f90 software. In both approaches six different animal models were applied, and the best model was determined using the likelihood ratio test (LRT) and the deviance information criterion (DIC) for the REML and Bayesian approaches, respectively. Heritability estimates for BW0, WSM and shell thickness were the same in both approaches. For BW0, the LRT and DIC indexes confirmed that the model consisting of maternal genetic, permanent environmental and direct genetic effects was significantly better than the other models. For WSM, a model consisting of a maternal permanent environmental effect in addition to the direct genetic effect was the best. For shell thickness, the basic model consisting of the direct genetic effect was the best. The results for BW8, YW and Haugh unit differed between the two approaches. The reason behind these small differences was that convergence could not be achieved for some models in the REML approach; thus, for these traits the Bayesian approach estimated the variance components more accurately. The results indicated that ignoring maternal effects overestimates the direct genetic variance and heritability for most of the traits, and that the Bayesian-based software could take more variance components into account.

  9. The Selection of DNA Aptamers for Two Different Epitopes of Thrombin Was Not Due to Different Partitioning Methods

    OpenAIRE

    Wilson, Robert; Cossins, Andrew; Nicolau, Dan V.; Missailidis, Sotiris

    2013-01-01

    Nearly all aptamers identified so far for any given target molecule have been specific for the same binding site (epitope). The most notable exception to the 1 aptamer per target molecule rule is the pair of DNA aptamers that bind to different epitopes of thrombin. This communication refutes the suggestion that these aptamers exist because different partitioning methods were used when they were selected. The possibility that selection of these aptamers was biased by conflicting secondary stru...

  10. Practical Bayesian Tomography

    CERN Document Server

    Granade, Christopher; Cory, D G

    2015-01-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  11. Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre

    2007-05-01

    Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.

  12. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar-looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problem of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID, we evaluate the accuracy of the Bayesian-modified (posterior) probabilities. However, in most cases the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.

  13. The Power of Principled Bayesian Methods in the Study of Stellar Evolution

    CERN Document Server

    von Hippel, Ted; Stenning, David C; Robinson, Elliot; Jeffery, Elizabeth; Stein, Nathan; Jefferys, William H; O'Malley, Erin

    2016-01-01

    It takes years of effort employing the best telescopes and instruments to obtain high-quality stellar photometry, astrometry, and spectroscopy. Stellar evolution models contain the experience of lifetimes of theoretical calculations and testing. Yet most astronomers fit these valuable models to these precious datasets by eye. We show that a principled Bayesian approach to fitting models to stellar data yields substantially more information over a range of stellar astrophysics. We highlight advances in determining the ages of star clusters, mass ratios of binary stars, limitations in the accuracy of stellar models, post-main-sequence mass loss, and the ages of individual white dwarfs. We also outline a number of unsolved problems that would benefit from principled Bayesian analyses.

  14. Bayesian Network Assessment Method for Civil Aviation Safety Based on Flight Delays

    OpenAIRE

    Huawei Wang; Jun Gao

    2013-01-01

    Flight delays and safety are the principal contradictions in the sound development of civil aviation. Flight delays often occur and simultaneously induce civil aviation safety risk. Based on flight delays, the random characteristics of civil aviation safety risk are analyzed. Flight delays are treated as a potential safety hazard, and the change rules and characteristics of civil aviation safety risk based on flight delays are analyzed. Bayesian networks (BN) have been used to build ...

  15. The evolutionary relationships and age of Homo naledi: An assessment using dated Bayesian phylogenetic methods.

    Science.gov (United States)

    Dembo, Mana; Radovčić, Davorka; Garvin, Heather M; Laird, Myra F; Schroeder, Lauren; Scott, Jill E; Brophy, Juliet; Ackermann, Rebecca R; Musiba, Charles M; de Ruiter, Darryl J; Mooers, Arne Ø; Collard, Mark

    2016-08-01

    Homo naledi is a recently discovered species of fossil hominin from South Africa. A considerable amount is already known about H. naledi but some important questions remain unanswered. Here we report a study that addressed two of them: "Where does H. naledi fit in the hominin evolutionary tree?" and "How old is it?" We used a large supermatrix of craniodental characters for both early and late hominin species and Bayesian phylogenetic techniques to carry out three analyses. First, we performed a dated Bayesian analysis to generate estimates of the evolutionary relationships of fossil hominins including H. naledi. Then we employed Bayes factor tests to compare the strength of support for hypotheses about the relationships of H. naledi suggested by the best-estimate trees. Lastly, we carried out a resampling analysis to assess the accuracy of the age estimate for H. naledi yielded by the dated Bayesian analysis. The analyses strongly supported the hypothesis that H. naledi forms a clade with the other Homo species and Australopithecus sediba. The analyses were more ambiguous regarding the position of H. naledi within the (Homo, Au. sediba) clade. A number of hypotheses were rejected, but several others were not. Based on the available craniodental data, Homo antecessor, Asian Homo erectus, Homo habilis, Homo floresiensis, Homo sapiens, and Au. sediba could all be the sister taxon of H. naledi. According to the dated Bayesian analysis, the most likely age for H. naledi is 912 ka. This age estimate was supported by the resampling analysis. Our findings have a number of implications. Most notably, they support the assignment of the new specimens to Homo, cast doubt on the claim that H. naledi is simply a variant of H. erectus, and suggest H. naledi is younger than has been previously proposed. PMID:27457542

  16. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    Science.gov (United States)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin2 field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables, and the Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets and found that the most important limitation of the method arises when faint sources are located in the vicinity of bright, spatially resolved galaxies that cannot be approximated by a Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  17. An efficient analytical Bayesian method for reliability and system response updating based on Laplace and inverse first-order reliability computations

    International Nuclear Information System (INIS)

    This paper presents an efficient analytical Bayesian method for reliability and system response updating without using simulations. The method includes additional information such as measurement data via Bayesian modeling to reduce estimation uncertainties. Laplace approximation method is used to evaluate Bayesian posterior distributions analytically. An efficient algorithm based on inverse first-order reliability method is developed to evaluate system responses given a reliability index or confidence interval. Since the proposed method involves no simulations such as Monte Carlo or Markov chain Monte Carlo simulations, the overall computational efficiency improves significantly, particularly for problems with complicated performance functions. A practical fatigue crack propagation problem with experimental data, and a structural scale example are presented for methodology demonstration. The accuracy and computational efficiency of the proposed method are compared with traditional simulation-based methods.
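
    The Laplace step is the analytically tractable heart of the approach: find the posterior mode, then approximate the posterior by a Gaussian whose covariance is the inverse Hessian at that mode. The toy crack-growth-like model, prior, and noise level below are assumptions for illustration, not the paper's example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
# Hypothetical crack-growth-like data: y = exp(theta) * t + Gaussian noise.
t = np.linspace(1.0, 10.0, 25)
y = np.exp(0.3) * t + rng.normal(0.0, 0.5, 25)

def neg_log_post(theta):
    """N(0, 1) prior on theta plus Gaussian likelihood with sd 0.5."""
    resid = y - np.exp(theta[0]) * t
    return 0.5 * theta[0] ** 2 + 0.5 * np.sum(resid ** 2 / 0.5 ** 2)

opt = minimize(neg_log_post, x0=[0.0])      # posterior mode (MAP)
h = 1e-4                                    # finite-difference Hessian
f = lambda x: neg_log_post([x])
hess = (f(opt.x[0] + h) - 2.0 * f(opt.x[0]) + f(opt.x[0] - h)) / h ** 2
sd = np.sqrt(1.0 / hess)                    # Laplace posterior std. dev.
print("posterior approx: N(%.3f, %.3f^2)" % (opt.x[0], sd))
```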

  18. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  19. Simple Method to Determine the Partition Coefficient of Naphthenic Acid in Oil/Water

    DEFF Research Database (Denmark)

    Bitsch-Larsen, Anders; Andersen, Simon Ivar

    2008-01-01

    The partition coefficient for technical-grade naphthenic acid in water/n-decane at 295 K has been determined (K(wo) = 2.1 · 10(-4)) using a simple experimental technique with large extraction volumes (0.09 m(3) of water). Furthermore, nonequilibrium values at different pH values are presented. Analysis of the acid content in the oil phase was conducted by FT-IR and colorimetric titration, and the two methods were found to be equivalent.

  20. A hybrid Bayesian-SVD based method to detect false alarms in PERSIANN precipitation estimation product using related physical parameters

    Science.gov (United States)

    Ghajarnia, Navid; Arasteh, Peyman D.; Araghinejad, Shahab; Liaghat, Majid A.

    2016-07-01

    Incorrect estimation of rainfall occurrence, the so-called false alarm (FA), is one of the major sources of bias error in satellite-based precipitation estimation products and may even cause problems during the bias reduction and calibration processes. In this paper, a hybrid statistical method is introduced to detect FA events in the PERSIANN dataset over the Urmia Lake basin in northwest Iran. The main FA detection model is based on Bayes' theorem, in which four predictor parameters, PERSIANN rainfall estimates, brightness temperature (Tb), precipitable water (PW) and near-surface air temperature (Tair), form the input dataset. In order to reduce the dimensionality of the input dataset by summarizing its most important modes of variability and its correlations to the reference dataset, a technique named singular value decomposition (SVD) is used. The application of the Bayesian-SVD method to FA detection in the Urmia Lake basin resulted in a trade-off between FA detection and the loss of hit events. The results show the success of the proposed method in detecting about 30% of FA events in return for a loss of about 12% of hit events, with better capability observed in cold seasons.
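
    As a rough stand-in for the Bayes-theorem classification step (without the SVD dimension reduction), a Gaussian naive Bayes classifier can turn the four predictors into a posterior probability of false alarm. Everything below, including the synthetic labelling rule, is hypothetical.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(9)
# Stand-in predictors per "rain" pixel: PERSIANN rate, Tb, PW, Tair
# (standardized); the labelling rule below is synthetic, not PERSIANN's.
X = rng.normal(size=(3000, 4))
fa = (X[:, 1] > 0.5) & (X[:, 2] < 0.0)       # synthetic false-alarm rule
fa ^= rng.uniform(size=3000) < 0.1           # plus label noise

clf = GaussianNB().fit(X[:2000], fa[:2000])  # Bayes' theorem per pixel
p_fa = clf.predict_proba(X[2000:])[:, 1]     # posterior P(FA | predictors)
print("flagged fraction:", (p_fa > 0.5).mean())
```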

  1. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guannan [ORNL; Webster, Clayton G [ORNL; Gunzburger, Max D [ORNL

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered by the computational cost associated with the numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using a first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use the hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional

  2. Modifications to the Patient Rule-Induction Method that utilize non-additive combinations of genetic and environmental effects to define partitions that predict ischemic heart disease

    DEFF Research Database (Denmark)

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G;

    2009-01-01

    This article extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (Genet Epidemiol 31:515-527) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination......, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method to construct combinations of predictors, permutation testing to assess the...... that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors....

  3. Support agnostic Bayesian matching pursuit for block sparse signals

    KAUST Repository

    Masood, Mudassir

    2013-05-01

    A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimation of block-sparse signals even when the distribution of active blocks is non-Gaussian or unknown. It is agnostic to the distribution of active blocks in the signal and utilizes a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data, so that no user intervention is required. The method requires a priori knowledge of the block partition and uses a greedy approach with order-recursive updates of its metrics to find the most dominant sparse supports and determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator. © 2013 IEEE.
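
    For intuition, a plain greedy block pursuit can be sketched as follows (a generic block-OMP skeleton, not the authors' Bayesian MMSE metric; A, y and the block partition are assumed inputs):

      import numpy as np

      def block_omp(A, y, blocks, n_active):
          # Greedy block selection: pick the block whose columns correlate most
          # with the residual, refit least squares on the support, repeat.
          remaining, support, r, x_s = list(blocks), [], y.copy(), None
          for _ in range(n_active):
              scores = [np.linalg.norm(A[:, b].T @ r) for b in remaining]
              support.append(remaining.pop(int(np.argmax(scores))))
              cols = np.concatenate(support)
              x_s, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
              r = y - A[:, cols] @ x_s
          return support, x_s

    The Bayesian version replaces the correlation score with a posterior-based metric and averages over the dominant supports to approximate the MMSE estimate.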

  4. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updates to virus information using the normal pattern-matching method are increasingly unable to keep up with the speed at which viruses appear, since it takes time to extract the characteristic patterns for each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code of an obfuscated computer virus. This paper therefore proposes a method that unpacks compressed computer viruses automatically, independent of the compression format. The proposed method unpacks the common compression formats accurately 80% of the time, and unknown compression formats can also be unpacked. The proposed method is effective against unknown viruses when combined with an existing known-virus detection system such as Paul Graham's Bayesian Virus Filter.

  5. A case study of an enhanced eutrophication model with stoichiometric zooplankton growth sub-model calibrated by Bayesian method.

    Science.gov (United States)

    Yang, Likun; Peng, Sen; Sun, Jingmei; Zhao, Xinhua; Li, Xia

    2016-05-01

    Urban lakes in China have suffered from severe eutrophication over the past several years, particularly lakes with relatively small areas and closed watersheds. Many efforts have been made to improve the understanding of eutrophication physiology with advanced mathematical models. However, several eutrophication models ignore zooplankton behavior and treat zooplankton as particles, which leads to systematic errors. In this study, a eutrophication model was enhanced with a stoichiometric zooplankton growth sub-model that simulates the zooplankton predation process and the interplay among the nitrogen, phosphorus, and oxygen cycles. A case study, in which the Bayesian method was used to calibrate the enhanced eutrophication model parameters and to calculate the model simulation results, was carried out in an urban lake in Tianjin, China. Finally, a water quality assessment was also conducted for eutrophication management. Our results suggest that (1) integration of the Bayesian method and the enhanced eutrophication model with a zooplankton feeding behavior sub-model can effectively depict the change in water quality and (2) the nutrients carried by rainwater runoff laid the foundation for the phytoplankton bloom. PMID:26780061

  6. Improving the reliability of POD curves in NDI methods using a Bayesian inversion approach for uncertainty quantification

    Science.gov (United States)

    Ben Abdessalem, A.; Jenson, F.; Calmon, P.

    2016-02-01

    This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterized from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability of detection curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood formulation and priors from expert knowledge. However, the resulting inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model with a surrogate in order to speed up model evaluation and make the problem computationally feasible. Least squares support vector regression is adopted as the metamodelling technique due to its robustness in dealing with non-linear problems. We illustrate the usefulness of this methodology through the control of a tube with an enclosed defect using an ultrasonic inspection method.
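
    Least squares support vector regression is closely related to kernel ridge regression, so a minimal surrogate of an expensive forward model can be sketched as follows (the training data and hyperparameters here are hypothetical stand-ins, not the paper's setup):

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(2)
      theta_train = rng.uniform(0.0, 1.0, size=(200, 3))   # sampled model parameters
      y_train = np.sin(theta_train).sum(axis=1)            # stand-in for the expensive model

      surrogate = KernelRidge(kernel="rbf", alpha=1e-3, gamma=10.0)
      surrogate.fit(theta_train, y_train)
      # The Bayesian inversion then calls surrogate.predict(...) inside the
      # likelihood instead of running the full ultrasonic simulation.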

  7. The Partition of Unity Method for High-Order Finite Volume Schemes Using Radial Basis Functions Reconstruction

    Institute of Scientific and Technical Information of China (English)

    Serena Morigi; Fiorella Sgallari

    2009-01-01

    This paper introduces the use of partition of unity method for the development of a high order finite volume discretization scheme on unstructured grids for solving diffusion models based on partial differential equations. The unknown function and its gradient can be accurately reconstructed using high order optimal recovery based on radial basis functions. The methodology proposed is applied to the noise removal problem in functional surfaces and images. Numerical results demonstrate the effectiveness of the new numerical approach and provide experimental order of convergence.
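
    The reconstruction step can be sketched with a plain RBF interpolant (a minimal sketch; the multiquadric kernel and shape parameter are illustrative choices, not the optimal-recovery machinery of the paper):

      import numpy as np

      def rbf_interpolant(centers, values, eps=1.0):
          # Multiquadric RBF fit: solve the symmetric collocation system once,
          # then return a callable that evaluates the reconstruction anywhere.
          phi = lambda r: np.sqrt(1.0 + (eps * r) ** 2)
          dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
          weights = np.linalg.solve(phi(dists), values)
          def f(x):
              return phi(np.linalg.norm(x - centers, axis=-1)) @ weights
          return f

    In a finite volume setting, centers would be neighboring cell centroids and f would supply the high-order point values needed for the flux quadrature.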

  8. Assessing Vermont's stream health and biological integrity using artificial neural networks and Bayesian methods

    Science.gov (United States)

    Rizzo, D. M.; Fytilis, N.; Stevens, L.

    2012-12-01

    Environmental managers are increasingly required to monitor and forecast long-term effects and vulnerability of biophysical systems to human-generated stresses. Ideally, a study involving both physical and biological assessments conducted concurrently (in space and time) could provide a better understanding of the mechanisms and complex relationships. However, costs and resources associated with monitoring the complex linkages between the physical, geomorphic and habitat conditions and the biological integrity of stream reaches are prohibitive. Researchers have used classification techniques to place individual streams and rivers into a broader spatial context (hydrologic or health condition). Such efforts require environmental managers to gather multiple forms of information - quantitative, qualitative and subjective. We research and develop a novel classification tool that combines self-organizing maps with a Naïve Bayesian classifier to direct resources to stream reaches most in need. The Vermont Agency of Natural Resources has developed and adopted protocols for physical stream geomorphic and habitat assessments throughout the state of Vermont. Separate from these assessments, the Vermont Department of Environmental Conservation monitors the biological communities and the water quality in streams. Our initial hypothesis is that the geomorphic reach assessments and water quality data may be leveraged to reduce error and uncertainty associated with predictions of biological integrity and stream health. We test our hypothesis using over 2500 Vermont stream reaches (~1371 stream miles) assessed by the two agencies. In the development of this work, we combine a Naïve Bayesian classifier with a modified Kohonen Self-Organizing Map (SOM). The SOM is an unsupervised artificial neural network that autonomously analyzes inherent dataset properties using input data only. It is typically used to cluster data into similar categories when a priori classes do not exist. The

  9. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Directory of Open Access Journals (Sweden)

    Corsaro Enrico

    2015-01-01

    Full Text Available The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars’ power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role for studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes for single oscillation modes, together with acoustic glitches, are therefore presented.

  10. Bayesian deconvolution as a method for the spectroscopy of X-rays with highly pixelated photon counting detectors

    Science.gov (United States)

    Sievers, P.; Weber, T.; Michel, T.; Klammer, J.; Büermann, L.; Anton, G.

    2012-03-01

    The energy deposition spectrum of highly pixelated photon-counting pixel detectors with a semiconductor sensor layer (e.g. silicon) differs significantly from the impinging X-ray spectrum. This is mainly due to Compton scattering, charge sharing, an energy-dependent sensor efficiency, fluorescence photons and back-scattered photons from detector parts. Therefore, the determination of the impinging X-ray spectrum from the measured distribution of the energy deposition in the detector is a non-trivial task. For the deconvolution of the measured distribution into the impinging spectrum, a set of monoenergetic response functions is needed. Those have been calculated with the Monte Carlo simulation framework ROSI, utilizing EGS4 and including all relevant physical processes in the sensor layer. We have investigated the uncertainties that spectrum reconstruction algorithms, like spectrum stripping, impose on reconstruction results. We can show that applying the Bayesian deconvolution method significantly improves the stability of the deconvolved spectrum. This results in a reduced minimum radiation flux needed for reconstruction. In this paper, we present our investigations and measurements on spectrum reconstruction for polychromatic X-ray spectra at low flux with a focus on Bayesian deconvolution.

  11. Bayesian deconvolution as a method for the spectroscopy of X-rays with highly pixelated photon counting detectors

    International Nuclear Information System (INIS)

    The energy deposition spectrum of highly pixelated photon-counting pixel detectors with a semiconductor sensor layer (e.g. silicon) differs significantly from the impinging X-ray spectrum. This is mainly due to Compton scattering, charge sharing, an energy-dependent sensor efficiency, fluorescence photons and back-scattered photons from detector parts. Therefore, the determination of the impinging X-ray spectrum from the measured distribution of the energy deposition in the detector is a non-trivial task. For the deconvolution of the measured distribution into the impinging spectrum, a set of monoenergetic response functions is needed. Those have been calculated with the Monte Carlo simulation framework ROSI, utilizing EGS4 and including all relevant physical processes in the sensor layer. We have investigated the uncertainties that spectrum reconstruction algorithms, like spectrum stripping, impose on reconstruction results. We can show that applying the Bayesian deconvolution method significantly improves the stability of the deconvolved spectrum. This results in a reduced minimum radiation flux needed for reconstruction. In this paper, we present our investigations and measurements on spectrum reconstruction for polychromatic X-ray spectra at low flux with a focus on Bayesian deconvolution.

  12. Bayesian methods to restore and rebuild images: application to gammagraphy and to photofission tomography

    International Nuclear Information System (INIS)

    Bayesian algorithms are developed to solve inverse problems in gamma imaging and photofission tomography. The first part of this work is devoted to the modeling of our measurement systems. Two models have been found for both applications: the first is a simple conventional model and the second is a cascaded point-process model. EM and MCMC Bayesian algorithms for image restoration and image reconstruction have been developed for these models and compared. The cascaded point-process model does not significantly improve the results previously obtained with the classical model. Two original approaches have then been proposed, which improve on these results. The first uses an inhomogeneous Markov random field as a prior law and lets the regularization parameter vary spatially; however, the problem of estimating the hyper-parameters has not been solved. For the deconvolution of point sources, a second approach has been proposed, which introduces a high-level prior model: the picture is modeled as a list of objects whose parameters and number are unknown. The results obtained with this method are more accurate than those obtained with the conventional Markov random field prior model and require less computational cost. (author)

  13. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods

    Energy Technology Data Exchange (ETDEWEB)

    Opel, Oliver, E-mail: opel@uni.leuphana.de [Leuphana University of Lueneburg, Institute for Ecology and Environmental Chemistry, Scharnhorststrasse 1/13, 21335 Lueneburg (Germany); Palm, Wolf-Ulrich [Leuphana University of Lueneburg, Institute for Ecology and Environmental Chemistry, Scharnhorststrasse 1/13, 21335 Lueneburg (Germany); Steffen, Dieter [Lower Saxony Water Management, Coastal Defence and Nature Conservation Agency, Division Hannover-Hildesheim, An der Scharlake 39, 31135 Hildesheim (Germany); Ruck, Wolfgang K.L. [Leuphana University of Lueneburg, Institute for Ecology and Environmental Chemistry, Scharnhorststrasse 1/13, 21335 Lueneburg (Germany)

    2011-04-15

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon based normalization is sufficient to obtain comparable results. We are showing the interdependency of grain-size effects with inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van-Veen grab were sieved and analyzed for 16 PAH, 6 PCB and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at <125 μm is best suited and can be further improved by additional organic-carbon normalization. - Research highlights: → New method for the comparison of heterogeneous sets of sediment samples. → Assessment of organic pollutants partitioning mechanisms in sediments. → Proposed method for more comparable sediment sampling. - Inside-sediment partitioning mechanisms are shown using a new mathematical approach and discussed in terms of sediment sampling and normalization.

  14. The Fragment Constant Method for Predicting Octanol-Air Partition Coefficients of Persistent Organic Pollutants at Different Temperatures

    Science.gov (United States)

    Li, Xuehua; Chen, Jingwen; Zhang, Li; Qiao, Xianliang; Huang, Liping

    2006-09-01

    The octanol-air partition coefficient (KOA) is a key physicochemical parameter for describing the partition of organic pollutants between air and environmental organic phases. Experimental determination of KOA is costly and time consuming, and is sometimes restricted by the lack of sufficiently pure chemicals. There is thus a need for a simple but accurate method to estimate KOA. In the present study, a fragment constant model based on five fragment constants and one structural correction factor was developed for predicting logKOA at temperatures ranging from 10 to 40°C. The model was validated by statistical analysis and by external experimental logKOA data. Compared to other quantitative structure-property relationship methods, the present model has the advantage of being much easier to implement. As aromatic compounds containing C, H, O, Cl, and Br atoms were included in the training set used to develop the model, the current fragment model applies to a wide range of chlorinated and brominated aromatic pollutants, such as chlorobenzenes, polychlorinated naphthalenes, polychlorinated biphenyls, polychlorinated dibenzo-p-dioxins and dibenzofurans, polycyclic aromatic hydrocarbons, and polybrominated diphenyl ethers, all of which are typical persistent organic pollutants. Further study is necessary to expand the utility of the method to all halogenated aliphatic and aromatic compounds.
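
    The fragment-constant idea reduces to a weighted sum plus a correction term. The sketch below shows only the bookkeeping; the fragment values are invented for illustration (the paper's fitted constants are temperature-dependent and are not reproduced here):

      # Illustrative fragment constants -- NOT the fitted values from the paper.
      FRAGMENTS = {"C_aromatic": 0.35, "H": 0.15, "Cl": 0.40, "Br": 0.55, "O": 0.10}

      def log_koa(fragment_counts, correction=0.0):
          # logKOA ~ sum_i n_i * f_i + structural correction factor
          return sum(n * FRAGMENTS[f] for f, n in fragment_counts.items()) + correction

      # Hypothetical chlorobenzene-like fragment count:
      print(log_koa({"C_aromatic": 6, "H": 5, "Cl": 1}))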

  15. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods

    International Nuclear Information System (INIS)

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon based normalization is sufficient to obtain comparable results. We are showing the interdependency of grain-size effects with inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van-Veen grab were sieved and analyzed for 16 PAH, 6 PCB and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at <125 μm is best suited and can be further improved by additional organic-carbon normalization. - Research highlights: → New method for the comparison of heterogeneous sets of sediment samples. → Assessment of organic pollutants partitioning mechanisms in sediments. → Proposed method for more comparable sediment sampling. - Inside-sediment partitioning mechanisms are shown using a new mathematical approach and discussed in terms of sediment sampling and normalization.

  16. A Bayesian method for inferring transmission chains in a partially observed epidemic.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M.; Ray, Jaideep

    2008-10-01

    We present a Bayesian approach for estimating transmission chains and rates in the Abakaliki smallpox epidemic of 1967. The epidemic affected 30 individuals in a community of 74; only the dates of appearance of symptoms were recorded. Our model assumes stochastic transmission of the infections over a social network. Distinct binomial random graphs model intra- and inter-compound social connections, while disease transmission over each link is treated as a Poisson process. Link probabilities and rate parameters are objects of inference. Dates of infection and recovery comprise the remaining unknowns. Distributions for smallpox incubation and recovery periods are obtained from historical data. Using Markov chain Monte Carlo, we explore the joint posterior distribution of the scalar parameters and provide an expected connectivity pattern for the social graph and infection pathway.
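
    The generative model lends itself to a compact forward simulation (a sketch under simplifying assumptions: a directed contact graph, a single fixed infectious period, and epidemic arrival times computed as shortest paths; the inference itself would wrap a simulator like this in MCMC):

      import numpy as np

      rng = np.random.default_rng(3)

      def simulate_outbreak(n, p_link, beta, infectious_days):
          # Binomial (Erdos-Renyi) contact graph over n individuals.
          contact = rng.random((n, n)) < p_link
          np.fill_diagonal(contact, False)
          # Poisson-process transmission on each link: exponential delay with
          # rate beta, effective only while the source is still infectious.
          delay = rng.exponential(1.0 / beta, size=(n, n))
          delay[~contact | (delay >= infectious_days)] = np.inf
          t = np.full(n, np.inf)
          t[0] = 0.0                       # index case
          for _ in range(n):               # Bellman-Ford style relaxation
              t = np.minimum(t, (t[:, None] + delay).min(axis=0))
          return t                         # infection time per individual (inf = escaped)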

  17. Bayesian methods for the physical sciences learning from examples in astronomy and physics

    CERN Document Server

    Andreon, Stefano

    2015-01-01

    Statistical literacy is critical for the modern researcher in Physics and Astronomy. This book empowers researchers in these disciplines by providing the tools they will need to analyze their own data. Chapters in this book provide a statistical base from which to approach new problems, including numerical advice and a profusion of examples. The examples are engaging analyses of real-world problems taken from modern astronomical research. The examples are intended to be starting points for readers as they learn to approach their own data and research questions. Acknowledging that scientific progress now hinges on the availability of data and the possibility to improve previous analyses, data and code are distributed throughout the book. The JAGS symbolic language used throughout the book makes it easy to perform Bayesian analysis and is particularly valuable as readers may use it in a myriad of scenarios through slight modifications.

  18. Hierarchical Bayesian method for mapping biogeochemical hot spots using induced polarization imaging

    Science.gov (United States)

    Wainwright, Haruko M.; Flores Orozco, Adrian; Bücker, Matthias; Dafflon, Baptiste; Chen, Jinsong; Hubbard, Susan S.; Williams, Kenneth H.

    2016-01-01

    In floodplain environments, a naturally reduced zone (NRZ) is considered to be a common biogeochemical hot spot, having distinct microbial and geochemical characteristics. Although important for understanding their role in mediating floodplain biogeochemical processes, mapping the subsurface distribution of NRZs over the dimensions of a floodplain is challenging, as conventional wellbore data are typically spatially limited and the distribution of NRZs is heterogeneous. In this study, we present an innovative methodology for the probabilistic mapping of NRZs within a three-dimensional (3-D) subsurface domain using induced polarization imaging, which is a noninvasive geophysical technique. Measurements consist of surface geophysical surveys and drilling-recovered sediments at the U.S. Department of Energy field site near Rifle, CO (USA). Inversion of surface time domain-induced polarization (TDIP) data yielded 3-D images of the complex electrical resistivity, in terms of magnitude and phase, which are associated with mineral precipitation and other lithological properties. By extracting the TDIP data values colocated with wellbore lithological logs, we found that the NRZs have a different distribution of resistivity and polarization from the other aquifer sediments. To estimate the spatial distribution of NRZs, we developed a Bayesian hierarchical model to integrate the geophysical and wellbore data. In addition, the resistivity images were used to estimate hydrostratigraphic interfaces under the floodplain. Validation results showed that the integration of electrical imaging and wellbore data using a Bayesian hierarchical model was capable of mapping spatially heterogeneous interfaces and NRZ distributions thereby providing a minimally invasive means to parameterize a hydrobiogeochemical model of the floodplain.

  19. A Bayesian Method for Short-Term Probabilistic Forecasting of Photovoltaic Generation in Smart Grid Operation and Control

    Directory of Open Access Journals (Sweden)

    Gabriella Ferruzzi

    2013-02-01

    Full Text Available A new short-term probabilistic forecasting method is proposed to predict the probability density function of the hourly active power generated by a photovoltaic system. First, the probability density function of the hourly clearness index is forecast using a Bayesian autoregressive time series model; the model takes into account the dependence of the solar radiation on meteorological variables such as cloud cover and humidity. Then, a Monte Carlo simulation procedure is used to evaluate the predictive probability density function of the hourly active power by applying the photovoltaic system model to random samples of the clearness index distribution. A numerical application demonstrates the effectiveness and advantages of the proposed forecasting method.
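
    The Monte Carlo step is straightforward to sketch (the truncated-Gaussian predictive density, the clear-sky irradiance value and the toy PV model below are hypothetical stand-ins for the Bayesian AR forecast and the real system model):

      import numpy as np

      rng = np.random.default_rng(4)

      def pv_power_samples(kt_mu, kt_sigma, g_clear, pv_model, n=10_000):
          # Sample the forecast clearness-index distribution and push each draw
          # through the PV system model to get samples of hourly active power.
          kt = np.clip(rng.normal(kt_mu, kt_sigma, n), 0.0, 1.0)
          return pv_model(kt * g_clear)

      # Toy PV model: efficiency * panel area * plane-of-array irradiance.
      power = pv_power_samples(0.6, 0.1, 900.0, lambda g: 0.18 * 25.0 * g)
      print(np.percentile(power, [5, 50, 95]))   # forecast quantiles in watts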

  20. Bayesian statistics

    OpenAIRE

    Draper, D.

    2001-01-01

    © 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary Definition of the Subject and Introduction The Bayesian Statistical Paradigm Three Examples Comparison with the Frequentist Statistical Paradigm Future Directions Bibliography

  1. Development of the method for partitioning high-level liquid waste

    International Nuclear Information System (INIS)

    A facility has been completed in which about 1,000 Ci of high-level waste can be treated for R and D on the partitioning process. A β,γ-concrete cave was remodeled into an α-cave by applying an airtight stainless steel lining inside the cave. Waste storage tanks and cassette-type filter cases were installed under the cave. The apparatus consists of a denitration-concentration vessel, a filter case, mixer-settlers, pressurized ion-exchange resin columns and 10 terminal storage tanks. All of this equipment is made of stainless steel except the mixer-settlers. Air-pressure, remotely operated valves are employed so that the equipment can be arranged in a narrow space. Because of the limited space of the cave, the denitration vessel, pumps and other components serve multiple purposes. In this facility, long-lived nuclides, i.e., transuranium elements, 90Sr and 137Cs, will be separated from the waste. (author)

  2. A model partitioning method based on dynamic decoupling for the efficient simulation of multibody systems

    International Nuclear Information System (INIS)

    The presence of different time scales in a dynamic model significantly hampers the efficiency of its simulation. In multibody systems the fact is particularly relevant, as the mentioned time scales may be very different, due, for example, to the coexistence of mechanical components controlled by electronic drive units, and may also appear in conjunction with significant nonlinearities. This paper proposes a systematic technique, based on the principles of dynamic decoupling, to partition a model based on the time scales that are relevant for the particular simulation studies to be performed and as transparently as possible for the user. In accordance with said purpose, peculiar to the technique is its neat separation into two parts: a structural analysis of the model, which is general with respect to any possible simulation scenario, and a subsequent decoupled integration, which can conversely be (easily) tailored to the study at hand. Also, since the technique does not aim at reducing but rather at partitioning the model, the state space and the physical interpretation of the dynamic variables are inherently preserved. Moreover, the proposed analysis allows us to define some novel indices relative to the separability of the system, thereby extending the idea of “stiffness” in a way that is particularly suited to its use for the improvement of simulation efficiency, be the envisaged integration scheme monolithic, parallel, or even based on cosimulation. Finally, thanks to the way the analysis phase is conceived, the technique is naturally applicable to both linear and nonlinear models. The paper contains a methodological presentation of the proposed technique, which is related to alternatives available in the literature so as to evidence the peculiarities just sketched, and some application examples illustrating the achieved advantages and motivating the major design choice from an operational viewpoint.

  3. A model partitioning method based on dynamic decoupling for the efficient simulation of multibody systems

    Energy Technology Data Exchange (ETDEWEB)

    Papadopoulos, Alessandro Vittorio, E-mail: alessandro.papadopoulos@control.lth.se [Lund University, Department of Automatic Control (Sweden); Leva, Alberto, E-mail: alberto.leva@polimi.it [Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria (Italy)

    2015-06-15

    The presence of different time scales in a dynamic model significantly hampers the efficiency of its simulation. In multibody systems the fact is particularly relevant, as the mentioned time scales may be very different, due, for example, to the coexistence of mechanical components controlled by electronic drive units, and may also appear in conjunction with significant nonlinearities. This paper proposes a systematic technique, based on the principles of dynamic decoupling, to partition a model based on the time scales that are relevant for the particular simulation studies to be performed and as transparently as possible for the user. In accordance with said purpose, peculiar to the technique is its neat separation into two parts: a structural analysis of the model, which is general with respect to any possible simulation scenario, and a subsequent decoupled integration, which can conversely be (easily) tailored to the study at hand. Also, since the technique does not aim at reducing but rather at partitioning the model, the state space and the physical interpretation of the dynamic variables are inherently preserved. Moreover, the proposed analysis allows us to define some novel indices relative to the separability of the system, thereby extending the idea of “stiffness” in a way that is particularly suited to its use for the improvement of simulation efficiency, be the envisaged integration scheme monolithic, parallel, or even based on cosimulation. Finally, thanks to the way the analysis phase is conceived, the technique is naturally applicable to both linear and nonlinear models. The paper contains a methodological presentation of the proposed technique, which is related to alternatives available in the literature so as to evidence the peculiarities just sketched, and some application examples illustrating the achieved advantages and motivating the major design choice from an operational viewpoint.

  4. An estimation method for inference of gene regulatory network using Bayesian network with uniting of partial problems

    Directory of Open Access Journals (Sweden)

    Watanabe Yukito

    2012-01-01

    Full Text Available Abstract Background Bayesian networks (BNs) have been widely used to estimate gene regulatory networks. Many BN methods have been developed to estimate networks from microarray data. However, two serious problems reduce the effectiveness of current BN methods. The first problem is that BN-based methods require huge computational time to estimate large-scale networks. The second is that the estimated network cannot have cyclic structures, even if the actual network has such structures. Results In this paper, we present a novel BN-based deterministic method with reduced computational time that allows cyclic structures. Our approach generates all the combinational triplets of genes, estimates networks of the triplets by BN, and unites the networks into a single network containing all genes. This method decreases the search space for predicting gene regulatory networks without degrading the solution accuracy compared with the greedy hill climbing (GHC) method. The order of computational time is the cube of the number of genes. In addition, the network estimated by our method can include cyclic structures. Conclusions We verified the effectiveness of the proposed method for all known gene regulatory networks and their expression profiles. The results demonstrate that this approach can predict regulatory networks with reduced computational time without degrading the solution accuracy compared with the GHC method.
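
    The uniting step can be sketched in a few lines (estimate_triplet is a hypothetical callable standing in for the per-triplet Bayesian network search):

      from itertools import combinations

      def unite_triplet_networks(genes, estimate_triplet):
          # Estimate a small network for every gene triplet -- C(n, 3) of them,
          # hence the O(n^3) cost -- and take the union of the directed edges.
          # Cycles may appear in the union even though each triplet network is acyclic.
          edges = set()
          for triplet in combinations(genes, 3):
              edges |= estimate_triplet(triplet)   # returns {(regulator, target), ...}
          return edges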

  5. Applications of Bayesian Phylodynamic Methods in a Recent U.S. Porcine Reproductive and Respiratory Syndrome Virus Outbreak

    Science.gov (United States)

    Alkhamis, Mohammad A.; Perez, Andres M.; Murtaugh, Michael P.; Wang, Xiong; Morrison, Robert B.

    2016-01-01

    Classical phylogenetic methods, such as neighbor-joining or maximum likelihood trees, provide limited inferences about the evolution of important pathogens and ignore important evolutionary parameters and uncertainties, which in turn limits decision making related to surveillance, control, and prevention resources. Bayesian phylodynamic models have recently been used to test research hypotheses related to evolution of infectious agents. However, few studies have attempted to model the evolutionary dynamics of porcine reproductive and respiratory syndrome virus (PRRSV) and, to the authors' knowledge, no attempt has been made to use large volumes of routinely collected data, sometimes referred to as big data, in the context of animal disease surveillance. The objective of this study was to explore and discuss the applications of Bayesian phylodynamic methods for modeling the evolution and spread of a notable 1-7-4 RFLP-type PRRSV between 2014 and 2015. A convenience sample of 288 ORF5 sequences was collected from 5 swine production systems in the United States between September 2003 and March 2015. Using coalescence and discrete trait phylodynamic models, we were able to infer population growth and demographic history of the virus, identified the most likely ancestral system (root state posterior probability = 0.95) and revealed significant dispersal routes (Bayes factor > 6) of viral exchange among systems. Results indicate that currently circulating viruses are evolving rapidly, and show a higher level of relative genetic diversity over time, when compared to earlier relatives. Biological soundness of model results is supported by the finding that sow farms were responsible for PRRSV spread within the systems. Such results cannot be obtained by traditional phylogenetic methods, and therefore, our results provide a methodological framework for molecular epidemiological modeling of new PRRSV outbreaks and demonstrate the prospects of phylodynamic models to inform

  6. Applications of Bayesian Phylodynamic Methods in a Recent U.S. Porcine Reproductive and Respiratory Syndrome Virus Outbreak

    Directory of Open Access Journals (Sweden)

    Mohammad A. Alkhamis

    2016-02-01

    Full Text Available Classical phylogenetic methods, such as neighbor-joining or maximum likelihood trees, provide limited inferences about the evolution of important pathogens and ignore important evolutionary parameters and uncertainties, which in turn limits decision making related to surveillance, control and prevention resources. Bayesian phylodynamic models have recently been used to test research hypotheses related to evolution of infectious agents. However, few studies have attempted to model the evolutionary dynamics of porcine reproductive and respiratory syndrome virus (PRRSV) and, to the authors’ knowledge, no attempt has been made to use large volumes of routinely collected data, sometimes referred to as big data, in the context of animal disease surveillance. The objective of this study was to explore and discuss the applications of Bayesian phylodynamic methods for modeling the evolution and spread of a notable 1-7-4 RFLP-type PRRSV between 2014 and 2015. A convenience sample of 288 ORF5 sequences was collected from 5 swine production systems in the United States between September 2003 and March 2015. Using coalescence and discrete trait phylodynamic models, we were able to infer population growth and demographic history of the virus, identified the most likely ancestral system (root state posterior probability = 0.95) and revealed significant dispersal routes (Bayes factor > 6) of viral exchange among systems. Results indicate that currently circulating viruses are evolving rapidly, and show a higher level of relative genetic diversity over time, when compared to earlier relatives. Biological soundness of model results is supported by the finding that sow farms were responsible for PRRSV spread within the systems. Such results cannot be obtained by traditional phylogenetic methods, and therefore, our results provide a methodological framework for molecular epidemiological modeling of new PRRSV outbreaks and demonstrate the prospects of phylodynamic

  7. Applications of Bayesian Phylodynamic Methods in a Recent U.S. Porcine Reproductive and Respiratory Syndrome Virus Outbreak.

    Science.gov (United States)

    Alkhamis, Mohammad A; Perez, Andres M; Murtaugh, Michael P; Wang, Xiong; Morrison, Robert B

    2016-01-01

    Classical phylogenetic methods, such as neighbor-joining or maximum likelihood trees, provide limited inferences about the evolution of important pathogens and ignore important evolutionary parameters and uncertainties, which in turn limits decision making related to surveillance, control, and prevention resources. Bayesian phylodynamic models have recently been used to test research hypotheses related to evolution of infectious agents. However, few studies have attempted to model the evolutionary dynamics of porcine reproductive and respiratory syndrome virus (PRRSV) and, to the authors' knowledge, no attempt has been made to use large volumes of routinely collected data, sometimes referred to as big data, in the context of animal disease surveillance. The objective of this study was to explore and discuss the applications of Bayesian phylodynamic methods for modeling the evolution and spread of a notable 1-7-4 RFLP-type PRRSV between 2014 and 2015. A convenience sample of 288 ORF5 sequences was collected from 5 swine production systems in the United States between September 2003 and March 2015. Using coalescence and discrete trait phylodynamic models, we were able to infer population growth and demographic history of the virus, identified the most likely ancestral system (root state posterior probability = 0.95) and revealed significant dispersal routes (Bayes factor > 6) of viral exchange among systems. Results indicate that currently circulating viruses are evolving rapidly, and show a higher level of relative genetic diversity over time, when compared to earlier relatives. Biological soundness of model results is supported by the finding that sow farms were responsible for PRRSV spread within the systems. Such results cannot be obtained by traditional phylogenetic methods, and therefore, our results provide a methodological framework for molecular epidemiological modeling of new PRRSV outbreaks and demonstrate the prospects of phylodynamic models to inform

  8. A Bayesian Calibration-Prediction Method for Reducing Model-Form Uncertainties with Application in RANS Simulations

    CERN Document Server

    Wu, J -L; Xiao, H

    2015-01-01

    Model-form uncertainties in complex mechanics systems are a major obstacle for predictive simulations. Reducing these uncertainties is critical for stake-holders to make risk-informed decisions based on numerical simulations. For example, Reynolds-Averaged Navier-Stokes (RANS) simulations are increasingly used in mission-critical systems involving turbulent flows. However, for many practical flows the RANS predictions have large model-form uncertainties originating from the uncertainty in the modeled Reynolds stresses. Recently, a physics-informed Bayesian framework has been proposed to quantify and reduce model-form uncertainties in RANS simulations by utilizing sparse observation data. However, in the design stage of engineering systems, measurement data are usually not available. In the present work we extend the original framework to scenarios where there are no available data on the flow to be predicted. In the proposed method, we first calibrate the model discrepancy on a related flow with available dat...

  9. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    Full Text Available The Bayesian approach is becoming increasingly popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been used to address astronomical problems since the very birth of Bayes' probability in the eighteenth century. Today, Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds and strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational-wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration through a couple of simplified examples.

  10. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study

    Science.gov (United States)

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang

    2016-01-01

    Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multipollutant model from the two methods are similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
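
    The averaging step itself is compact; a common BIC-based approximation can be sketched as follows (a generic sketch of BMA weighting, not the paper's GAMM machinery):

      import numpy as np

      def bma_weights(bic):
          # Posterior model probabilities approximated from BIC differences,
          # assuming equal prior odds: w_k proportional to exp(-0.5 * (BIC_k - BIC_min)).
          b = np.asarray(bic, dtype=float)
          w = np.exp(-0.5 * (b - b.min()))
          return w / w.sum()

      def bma_estimate(betas, variances, w):
          # Model-averaged effect and a variance inflated by between-model spread.
          betas, variances = np.asarray(betas), np.asarray(variances)
          beta_bar = np.sum(w * betas)
          var_bar = np.sum(w * (variances + (betas - beta_bar) ** 2))
          return beta_bar, var_bar

    The inflated variance term is what widens the BMA confidence intervals relative to any single optimal model, as reported above.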

  11. An Efficient Method for Assessing Water Quality Based on Bayesian Belief Networks

    Directory of Open Access Journals (Sweden)

    Khalil Shihab

    2014-08-01

    Full Text Available A new methodology is developed to analyse existing water quality monitoring networks. This methodology incorporates different aspects of monitoring, including vulnerability/probability assessment, environmental health risk, the value of information, and redundancy reduction. The work starts with the formulation of a conceptual framework for groundwater quality monitoring to represent the methodology’s context. This work presents the development of Bayesian techniques for the assessment of groundwater quality. The primary aim is to develop a predictive model and a computer system to assess and predict the impact of pollutants on the water column. The analysis begins by postulating a model in light of all available knowledge of the relevant phenomena. The previous knowledge, as represented by the prior distribution of the model parameters, is then combined with the new data through Bayes’ theorem to yield the current knowledge, represented by the posterior distribution of the model parameters. This process of updating information about the unknown model parameters is then repeated sequentially as more and more new information becomes available
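
    The sequential prior-to-posterior cycle described above is easiest to see in conjugate form (a toy Beta-Binomial sketch, not the system's actual belief network):

      def beta_update(a, b, exceedances, compliant):
          # Beta(a, b) prior on the probability that a sample exceeds a quality
          # threshold; each monitoring batch turns the posterior into the next prior.
          return a + exceedances, b + compliant

      a, b = 1, 1                                  # uniform prior
      for batch in [(2, 18), (0, 20), (5, 15)]:    # hypothetical monitoring batches
          a, b = beta_update(a, b, *batch)
      print(a / (a + b))                           # posterior mean exceedance probability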

  12. Using Bayesian network and AHP method as a marketing approach tools in defining tourists’ preferences

    Directory of Open Access Journals (Sweden)

    Nataša Papić-Blagojević

    2012-04-01

    Full Text Available The marketing approach is associated with market conditions and with achieving long-term profitability for a company by satisfying consumers’ needs. In tourism, this approach does not have to be related only to promoting one tourist destination; it also concerns the relationship between a travel agency and its clients, in the sense that travel agencies adjust their offers to their clients’ needs. It is therefore important to analyze the behavior of tourists in earlier periods with consideration of their preferences. Using a Bayesian network, the connections between tourists who have similar tastes and the relationships among them can be displayed graphically. On the other hand, the analytic hierarchy process (AHP) is used to rank tourist attractions, also relying on past experience. In this paper we examine possible applications of these two models in tourism in Serbia. The example is hypothetical, but it will serve as a base for future research. Three types of tourism are chosen as representative of Vojvodina: cultural, rural and business tourism, because they are the bright spots of tourism development in this area. Applied to these forms, the analytic hierarchy process has shown its strength in predicting tourists’ preferences.

  13. Removal of radionuclides from partitioning waste solutions by adsorption and catalytic oxidation methods

    Energy Technology Data Exchange (ETDEWEB)

    Yamagishi, Isao; Yamaguchi, Isoo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kubota, Masumitsu [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2000-09-01

    Adsorption of radionuclides with inorganic ion exchangers and catalytic oxidation of a complexant were studied for the decontamination of waste solutions generated in past partitioning tests with high-level liquid waste. Granulated ferrocyanide and titanic acid were used for adsorption of Cs and Sr, respectively, from an alkaline solution resulting from direct neutralization of an acidic waste solution. Both Na and Ba inhibited adsorption of Sr, but Na did not inhibit that of Cs. These exchangers adsorbed Cs and Sr at low concentration with distribution coefficients of more than 10⁴ ml/g from a 2M Na solution of pH 11. Overall decontamination factors (DFs) of Cs and total β nuclides exceeded 10⁵ and 10³, respectively, at the neutralization-adsorption step for actual waste solutions free from a complexant. The DF of total α nuclides was less than 10³ for a waste solution containing diethylenetriaminepentaacetic acid (DTPA). DTPA was rapidly oxidized by nitric acid in the presence of a platinum catalyst, and radionuclides were removed as precipitates by neutralization of the resultant solution. The DF of α nuclides increased to 8×10⁴ with the addition of the oxidation step. The DFs of Sb and Co were quite low through the adsorption step. A synthesized Ti-base exchanger (PTC) could remove Sb with a DF of more than 4×10³. (author)

  14. Nonparametric Bayesian Classification

    CERN Document Server

    Coram, M A

    2002-01-01

    A Bayesian approach to the classification problem is proposed in which random partitions play a central role. It is argued that the partitioning approach has the capacity to take advantage of a variety of large-scale spatial structures, if they are present in the unknown regression function $f_0$. An idealized one-dimensional problem is considered in detail. The proposed nonparametric prior uses random split points to partition the unit interval into a random number of pieces. This prior is found to provide a consistent estimate of the regression function in the $L^p$ topology, for any $1 \leq p < \infty$, and for arbitrary measurable $f_0:[0,1] \rightarrow [0,1]$. A Markov chain Monte Carlo (MCMC) implementation is outlined and analyzed. Simulation experiments are conducted to show that the proposed estimate compares favorably with a variety of conventional estimators. A striking resemblance between the posterior mean estimate and the bagged CART estimate is noted and discussed. For higher dimensions, a ...

  15. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
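
    For reference, the posterior mean in a Gaussian-process linear model has a closed form that captures the essence of the approach (generic GP algebra under the model z = W v + n, not the authors' released code; W is the line-pattern matrix and K the GP prior covariance of the LSD profile):

      import numpy as np

      def lsd_posterior_mean(W, z, K, noise_var):
          # E[v | z] = K W^T (W K W^T + s2 I)^{-1} z, for v ~ N(0, K), n ~ N(0, s2 I).
          S = W @ K @ W.T + noise_var * np.eye(len(z))
          return K @ W.T @ np.linalg.solve(S, z)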

  16. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  17. Development of partitioning method. Adsorption of cesium with mordenite in acidic media

    International Nuclear Information System (INIS)

    Adsorption of cesium with mordenite from an acidic solution, typically a 0.5 mol/L nitric acid solution, was studied to examine the possibility of designing a new separation scheme for partitioning of high-level liquid waste. Batch adsorption experiments showed that the three mordenites examined (natural mordenite and two synthetic mordenites, Zeolon 900Na and 900H) behave very similarly with respect to adsorption kinetics, the saturation capacity by the Langmuir equation, the distribution coefficient of Cs, and the adsorption of other elements. In the Cs adsorption with natural mordenite at 0.5 mol/L nitric acid, the distribution coefficient was 1150 ml/g and the saturation capacity was 0.64 mmol/g. In column adsorption of Cs using natural mordenite, the flow rate of the Cs solution affected only the 5% breakthrough point and had no influence on the total Cs capacity. Column experiments with a mixed solution of Cs, Rb, Na, Ba, Sr, Cr, Ni, Ru, Rh and Pd showed that cesium was adsorbed very selectively. Only about 4% of rubidium in molar ratio was retained in the column. The total quantity of Cs and Rb adsorbed was 0.51 mmol/g at 0.5 mol/L nitric acid. Elution of Cs (and Rb) with 4 mol/L nitric acid was performed on the column of loaded natural mordenite. The adsorbed Cs and Rb were well eluted, and a good mass balance was obtained between the quantity adsorbed according to the breakthrough curves and the quantity found in the eluate. (author)

  18. Three-dimensional conjugate heat transfer in partitioned enclosures: Determination of geometrical and thermal properties by an inverse method

    International Nuclear Information System (INIS)

    In this study, we have developed a simplified three-dimensional numerical model of coupled heat transfer (conduction, convection, and radiation) in building components with air-filled vertical cavities closed at top and bottom. An experimental study was conducted using a modular block with removable separations, which allowed different patterns to be assembled and which was part of an insulated separation wall mounted between hot and cold controlled chambers. Measured temperatures and experimental thermal resistances were found to be very close to the theoretical values, thereby validating our model. The model is then used to conduct a parametric analysis by means of a design of experiments. The thermal conductivity and the emissivity of the solid material are the two parameters that most affect the thermal resistance of the enclosure. The influence of the other thermophysical properties and of the pattern on the thermal resistance is also shown. Finally, an inverse method is proposed to determine the geometrical and thermophysical properties of any three-dimensional partitioned enclosure with a targeted equivalent thermal resistance and equivalent volumetric specific heat. We demonstrate that this method, based on a particle swarm optimization algorithm, is very efficient and finds several suitable solutions differentiated by their thermophysical and geometrical properties. -- Highlights: ► Development of a model of conjugate heat transfer in 3D partitioned enclosures. ► Parametrization of geometry and thermophysical properties. ► Design of experiments indicates the most influential parameters on thermal resistance. ► Particle swarm optimization algorithm is used as an inverse method. ► Solutions of geometries and thermophysical properties leading to a desired thermal resistance and volumetric specific heat

  19. Development of partitioning method: engineering-scale test on partitioning process. 1. Demonstration of TRU separation by solvent extraction process experimental apparatus

    International Nuclear Information System (INIS)

    A solvent extraction process experimental apparatus was designed as an engineering-scale test facility for the TRU extraction process of the four-group partitioning process. A feature of this apparatus is a variable bank-stage mixer-settler, assembled from banks of 2 or 4 stages, which makes it possible to examine solvent extraction characteristics with various combinations of organic and aqueous solutions. A demonstration test using a simulated solution containing La and Nd showed that the variable bank-stage mixer-settler had the expected extractability of TRU from the solution, irrespective of the number of bank stages. In addition, no leakage or discontinuity of the flow patterns of the organic and aqueous solutions was found at the joints of the banks in the mixer-settler. We may conclude that this apparatus is useful for examining the extraction characteristics of TRU with various organic solvents on an engineering scale. (author)

  20. [Determination of six main components in compound theophylline tablet by convolution curve method after prior separation by column partition chromatography]

    Science.gov (United States)

    Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)

    1993-01-01

    On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of the compound theophylline tablet were eluted simultaneously by chloroform, and the three other components were eluted simultaneously by ammonia-saturated chloroform. The two mixtures were then determined separately by a computer-aided convolution curve method. The average recoveries and relative standard deviations of the six components were: 101.6% (RSD 1.46%) for caffeine; 99.7% (0.10%) for phenacetin; 100.9% (1.31%) for phenobarbitone; 100.2% (0.81%) for theophylline; 99.9% (0.81%) for theobromine; and 100.8% (0.48%) for aminopyrine.

  1. Partition Equilibrium

    Science.gov (United States)

    Feldman, Michal; Tennenholtz, Moshe

    We introduce partition equilibrium and study its existence in resource selection games (RSG). In partition equilibrium the agents are partitioned into coalitions, and only deviations by the prescribed coalitions are considered. This contrasts with the classical concept of strong equilibrium, according to which any subset of the agents may deviate. In resource selection games, each agent selects a resource from a set of resources, and its payoff is an increasing (or non-decreasing) function of the number of agents selecting its resource. While it has been shown that strong equilibrium exists in resource selection games, these games do not possess super-strong equilibrium, in which a fruitful deviation benefits at least one deviator without hurting any other deviator, even in the case of two identical resources with increasing cost functions. Similarly, strong equilibrium does not exist in that restricted setting of two identical resources when the game is played repeatedly. We prove that for any given partition there exists a super-strong equilibrium for resource selection games with identical resources and increasing cost functions; we also show similar existence results for a variety of other classes of resource selection games. For the case of repeated games we identify partitions that guarantee the existence of strong equilibrium. Together, our work introduces a natural concept which turns out to lead to positive and applicable results in one of the basic domains studied in the literature.
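
    The deviation check that distinguishes these equilibrium notions can be made concrete with a small brute force. The sketch below, a toy two-resource RSG with increasing cost functions and a fixed partition into coalitions, tests whether a profile admits a fruitful coalition deviation; the game size, cost function and partition are invented for illustration.

```python
# Toy check of "partition equilibrium" in a resource selection game (RSG):
# agents pick one of two identical resources whose cost increases with the
# number of users; only coalitions of the prescribed partition may deviate.
# This is an illustrative brute force, not the paper's construction.
from itertools import product

cost = lambda load: load           # increasing cost function: c(k) = k
partition = [(0, 1), (2, 3)]       # prescribed coalitions over 4 agents

def costs(profile):
    load = [profile.count(r) for r in (0, 1)]
    return [cost(load[r]) for r in profile]

def is_partition_equilibrium(profile):
    base = costs(profile)
    for coalition in partition:
        # try every joint deviation by this coalition
        for moves in product((0, 1), repeat=len(coalition)):
            trial = list(profile)
            for agent, r in zip(coalition, moves):
                trial[agent] = r
            new = costs(trial)
            # fruitful deviation: no deviator hurt, at least one strictly better
            if all(new[a] <= base[a] for a in coalition) and \
               any(new[a] < base[a] for a in coalition):
                return False
    return True

print(is_partition_equilibrium([0, 1, 0, 1]))  # balanced profile -> True
```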

  2. Bayesian Adaptive Exploration

    CERN Document Server

    Loredo, T J

    2004-01-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation--Inference--Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data--measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object--show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models.

  3. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  4. A novel method for measuring the diffusion, partition and convective mass transfer coefficients of formaldehyde and VOC in building materials.

    Directory of Open Access Journals (Sweden)

    Jianyin Xiong

    The diffusion coefficient (Dm) and the material/air partition coefficient (K) are two key parameters characterizing the sorption behavior of formaldehyde and volatile organic compounds (VOC) in building materials. By virtue of the sorption process in an airtight chamber, this paper proposes a novel method to measure these two key parameters, as well as the convective mass transfer coefficient (hm). Compared with traditional methods, it has the following merits: (1) K, Dm and hm can be obtained simultaneously, which makes the method convenient to use; (2) it is time-saving, since just one sorption process in an airtight chamber is required; (3) the determination of hm is based on the formaldehyde and VOC concentration data in the test chamber rather than the commonly used empirical correlations obtained from the heat and mass transfer analogy, and is thus more accurate, which can be regarded as a significant improvement. The present method is applied to determine the three parameters from experimental data in the literature, and good results are obtained, validating its effectiveness. The new method also provides a potential pathway for measuring hm of semi-volatile organic compounds (SVOC) by using that of VOC.

  5. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimenta...

  6. Phycas: software for Bayesian phylogenetic analysis.

    Science.gov (United States)

    Lewis, Paul O; Holder, Mark T; Swofford, David L

    2015-05-01

    Phycas is open source, freely available Bayesian phylogenetics software written primarily in C++ but with a Python interface. Phycas specializes in Bayesian model selection for nucleotide sequence data, particularly the estimation of marginal likelihoods, central to computing Bayes Factors. Marginal likelihoods can be estimated using newer methods (Thermodynamic Integration and Generalized Steppingstone) that are more accurate than the widely used Harmonic Mean estimator. In addition, Phycas supports two posterior predictive approaches to model selection: Gelfand-Ghosh and Conditional Predictive Ordinates. The General Time Reversible family of substitution models, as well as a codon model, are available, and data can be partitioned with all parameters unlinked except tree topology and edge lengths. Phycas provides for analyses in which the prior on tree topologies allows polytomous trees as well as fully resolved trees, and provides for several choices for edge length priors, including a hierarchical model as well as the recently described compound Dirichlet prior, which helps avoid overly informative induced priors on tree length. PMID:25577605

  7. Symplectic partitioned Runge-Kutta method based on the eighth-order nearly analytic discrete operator and its wavefield simulations

    Institute of Scientific and Technical Information of China (English)

    Zhang Chao-Yuan; Ma Xiao; Yang Lei; Song Guo-Jie

    2014-01-01

    We propose a symplectic partitioned Runge-Kutta (SPRK) method with eighth-order spatial accuracy based on the extended Hamiltonian system of the acoustic wave equation. Known as the eighth-order NSPRK method, this technique uses an eighth-order accurate nearly analytic discrete (NAD) operator to discretize high-order spatial differential operators and employs a second-order SPRK method to discretize temporal derivatives. The stability criteria and numerical dispersion relations of the eighth-order NSPRK method are given by a semi-analytical method and are tested by numerical experiments. We also show the differences in numerical dispersion between the eighth-order NSPRK method and conventional numerical methods such as the fourth-order NSPRK method, the eighth-order Lax-Wendroff correction (LWC) method and the eighth-order staggered-grid (SG) method. The results show that the ability of the eighth-order NSPRK method to suppress numerical dispersion is clearly superior to that of the conventional numerical methods. In the same computational environment, to eliminate visible numerical dispersion, the eighth-order NSPRK is approximately 2.5 times faster than the fourth-order NSPRK and 3.4 times faster than the fourth-order SPRK, and its memory requirement is only approximately 47.17% of that of the fourth-order NSPRK method and 49.41% of that of the fourth-order SPRK method, indicating the highest computational efficiency. Modeling examples for two-layer models, such as the heterogeneous and Marmousi models, show that the wavefields generated by the eighth-order NSPRK method are very clear, with no visible numerical dispersion. These numerical experiments illustrate that the eighth-order NSPRK method can effectively suppress numerical dispersion when coarse grids are adopted. Therefore, this method can greatly decrease computer memory requirements and accelerate forward modeling productivity. In general, the eighth-order NSPRK method has tremendous potential value.
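
    To make the time-stepping idea concrete, here is a minimal sketch of a second-order symplectic partitioned (leapfrog/Störmer-Verlet) step for the 1D acoustic wave equation; a plain second-order centred difference stands in for the eighth-order NAD spatial operator, and all grid parameters are illustrative.

```python
# Sketch of a second-order symplectic partitioned Runge-Kutta (leapfrog /
# Stoermer-Verlet) step for the 1D acoustic wave equation u_tt = c^2 u_xx.
import numpy as np

nx, dx, c, dt = 400, 10.0, 3000.0, 1e-3     # grid, velocity, step (illustrative)
x = np.arange(nx) * dx
u = np.exp(-((x - x.mean()) / (20 * dx)) ** 2)  # initial displacement pulse
v = np.zeros(nx)                                 # initial velocity

def laplacian(u):
    # 2nd-order centred difference; the paper uses an 8th-order NAD operator
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return lap                                   # fixed (zero) boundaries

for _ in range(500):                             # CFL number c*dt/dx = 0.3
    v += 0.5 * dt * c**2 * laplacian(u)          # half kick
    u += dt * v                                  # drift
    v += 0.5 * dt * c**2 * laplacian(u)          # half kick

print("max |u| after 0.5 s:", np.abs(u).max())
```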

  8. Component-wise partitioned explicit finite element method: Benchmark tests for linear wave propagation in solids

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Cho, S.S.; Park, K.C.

    Athens : National Technical University of Athens, 2015 - (Papadrakakis, M.; Papadopoulos, V.). C 620 ISBN 978-960-99994-7-2. [International Conference on Computational Methods in Structural Dynamics and Earthquake Engineering /5./. 25.05.2015-27.05.2015, Crete] R&D Projects: GA ČR(CZ) GAP101/12/2315; GA TA ČR(CZ) TH01010772 Institutional support: RVO:61388998 Keywords : wave propagation * spurious oscillations * finite element method Subject RIV: BI - Acoustics

  9. Component-wise partitioned explicit finite element method: Nonlinear wave propagation and dynamic contact problems

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Cho, S.S.; Park, K.C.

    Athens : National Technical University of Athens, 2015 - (Papadrakakis, M.; Papadopoulos, V.). C 619-619 ISBN 978-960-99994-7-2. [International Conference on Computational Methods in Structural Dynamics and Earthquake Engineering /5./. 25.05.2015-27.05.2015, Crete] R&D Projects: GA ČR(CZ) GAP101/12/2315; GA TA ČR(CZ) TH01010772 Institutional support: RVO:61388998 Keywords : wave propagation * spurious oscillations * finite element method Subject RIV: BI - Acoustics

  10. How to combine correlated data sets -- A Bayesian hyperparameter matrix method

    CERN Document Server

    Ma, Yin-Zhe

    2013-01-01

    We construct a statistical method for performing the joint analyses of multiple correlated astronomical data sets, in which the weights of data sets are determined by their own statistical properties. This method is a generalization of the hyperparameter method constructed by Lahav et al. (2000) and Hobson et al. (2002), which was designed to combine independent data sets. The hyperparameter matrix method we present here includes the relevant weights of multiple data sets and their mutual correlations, and when the hyperparameters are marginalized over, the parameters of interest are recovered. We define a new "element-wise" product, which greatly simplifies the likelihood function with the hyperparameter matrix. We rigorously prove the simplified formula of the joint likelihood and show that it recovers the original hyperparameter method in the limit of no covariance between data sets. We then illustrate the method by applying a classic model of fitting a straight line to two sets of data. We show that the hyperparameter matrix ...
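
    For the independent-data limit that the matrix method generalizes, the hyperparameter weighting can be sketched in a few lines: with Jeffreys priors on the weights, marginalizing them turns the joint chi-squared into a sum of n_i * ln(chi_i^2) terms. The straight-line data below are invented, and the sketch does not implement the paper's element-wise product for correlated sets.

```python
# Sketch of the independent-data hyperparameter method: each data set gets a
# weight alpha_i; marginalising the alphas (Jeffreys prior) replaces the joint
# chi^2 with sum_i n_i * ln(chi_i^2). Data values are illustrative.
import numpy as np
from scipy.optimize import minimize

x1, y1, s1 = np.array([0., 1, 2, 3]), np.array([0.1, 1.2, 1.9, 3.2]), 0.2
x2, y2, s2 = np.array([0., 1, 2, 3]), np.array([0.5, 1.0, 2.8, 3.1]), 0.4

def chi2(theta, x, y, s):
    a, b = theta
    return np.sum(((y - (a * x + b)) / s) ** 2)

def neg_log_like(theta):
    # marginalised hyperparameter likelihood (up to an additive constant)
    return len(x1) * np.log(chi2(theta, x1, y1, s1)) \
         + len(x2) * np.log(chi2(theta, x2, y2, s2))

fit = minimize(neg_log_like, x0=[1.0, 0.0])
print("slope, intercept:", fit.x)
```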

  11. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  12. A Bayesian nonrigid registration method to enhance intraoperative target definition in image-guided prostate procedures through uncertainty characterization

    International Nuclear Information System (INIS)

    Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. Results: The authors observed

  13. Comparing three methods of NEE-flux partitioning from the same grassland ecosystem: the 13C, 18O isotope approach and using simulated Ecosystem respiration

    Science.gov (United States)

    Siegwolf, R.; Bantelmann, E.; Saurer, M.; Eugster, W.; Buchmann, N.

    2007-12-01

    As global climate change proceeds with increasing temperatures, the carbon exchange processes of terrestrial ecosystems will change as well. However, it is difficult to quantify the degree to which ecosystem respiration will change relative to the CO2 uptake by photosynthesis. To estimate the carbon sequestration potential of terrestrial vegetation cover it is essential to know both fluxes: ecosystem respiration and the carbon uptake by the vegetation cover. Therefore the net ecosystem exchange of CO2 (NEE) was measured with the eddy covariance method and separated into assimilation and respiration fluxes. We applied three different approaches: 1) the conventional method, applying the nighttime relationship between soil temperature and NEE to calculate the respiration flux during the day, 2) the use of stable carbon isotopes and 3) the use of stable oxygen isotopes. We compared the results of the three partitioning exercises for a temperate grassland ecosystem in the pre-Alps of Switzerland for four days in June 2004. The assimilation flux derived with the conventional NEE partitioning approach was best represented at low PAR and low temperatures, in the morning between 5 and 9 am. With increasing temperature and PAR, the assimilation for the whole canopy was underestimated. For partitioning NEE via the 18O approach, correlations of temperature and radiation with assimilation and respiration fluxes were significantly higher than for the 13C NEE partitioning. A sensitivity analysis showed the importance of an accurate determination of the equilibrium term θ between CO2 and leaf water δ18O for the NEE partitioning with 18O. When using 13C to partition NEE, the correct magnitude of the 13C fractionation and of the respiration term is essential. The analysis of the data showed that for low light and low morning temperatures the conventional method delivers reasonably good results. When the temperatures exceeded 21°C the isotope approach provided the

  14. Application of a Bayesian/generalised least-squares method to generate correlations between independent neutron fission yield data

    International Nuclear Information System (INIS)

    Fission product yields are fundamental parameters for several nuclear engineering calculations and in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. Then, we focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat. (authors)
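
    The core update behind such covariance generation can be illustrated with a generalised-least-squares step: imposing a single constraint on initially uncorrelated yields produces off-diagonal covariance terms. The yields, uncertainties and constraint below are invented, not JEFF-3.1.2 values.

```python
# Sketch of a Bayesian / generalised-least-squares update that introduces
# correlations between initially independent fission yields by imposing a
# physical constraint (here: the yields summing to a known value).
import numpy as np

y0 = np.array([0.06, 0.03, 0.01])           # prior independent yields
C0 = np.diag((0.1 * y0) ** 2)               # 10 % relative, uncorrelated

S = np.ones((1, 3))                          # constraint: sum of the yields
target = np.array([0.098])                   # "measured" sum (hypothetical)
V = np.array([[1e-6]])                       # its variance

K = C0 @ S.T @ np.linalg.inv(S @ C0 @ S.T + V)   # GLS gain
y1 = y0 + (K @ (target - S @ y0)).ravel()        # updated yields
C1 = C0 - K @ S @ C0                             # updated (correlated) covariance

corr = C1 / np.sqrt(np.outer(np.diag(C1), np.diag(C1)))
print("updated yields:", y1)
print("induced correlation matrix:\n", corr.round(3))
```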

  15. Extraction of Active Regions and Coronal Holes from EUV Images Using the Unsupervised Segmentation Method in the Bayesian Framework

    CERN Document Server

    Arish, Saeid; Safari, Hossein; Amiri, Ali

    2016-01-01

    The solar corona is the origin of very dynamic events that are mostly produced in active regions (AR) and coronal holes (CH). The exact location of these large-scale features can be determined by applying image-processing approaches to extreme-ultraviolet (EUV) data. We here investigate the problem of segmentation of solar EUV images into ARs, CHs, and quiet-Sun (QS) images in a firm Bayesian way. On the basis of Bayes' rule, we need to obtain both prior and likelihood models. To find the prior model of an image, we used a Potts model in non-local mode. To construct the likelihood model, we combined a mixture of a Markov-Gauss model and non-local means. After estimating labels and hyperparameters with the Gibbs estimator, cellular learning automata were employed to determine the label of each pixel. We applied the proposed method to a Solar Dynamics Observatory/ Atmospheric Imaging Assembly (SDO/AIA) dataset recorded during 2011 and found that the mean value of the filling factor of ARs is 0.032 and 0.057 for...

  16. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    CERN Document Server

    Corsaro, Enrico

    2015-01-01

    The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled with the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role in studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, and hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noi...

  17. Bayesian Monitoring.

    OpenAIRE

    Kirstein, Roland

    2005-01-01

    This paper presents a modification of the inspection game: the 'Bayesian Monitoring' model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...

  18. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    In this paper we explain the successful bias field correction properties of N3 by showing that it implicitly uses the same generative models and computational strategies as expectation maximization (EM) based bias field correction methods. We demonstrate experimentally that purely EM-based methods are...

  19. On using enriched cover function in the Partition-of-unity method for singular boundary-value problems

    Science.gov (United States)

    Liu, X.; Lee, C. K.; Fan, S. C.

    Amongst the various `meshless' approaches, the Partition-of-unity concept married with the traditional finite-element method, namely PUFEM, has emerged as competitive in solving boundary-value problems. It inherits most of the advantages of both techniques, except that the beauty of being `meshless' vanishes. This paper presents an alternative approach to solve singular boundary-value problems. It follows the basic PUFEM procedures. The salient feature is to enhance the quality of the influence functions, either over a single nodal cover or over multiple nodal covers. In the vicinity of the singularity, the available asymptotic analytical solution is employed to enrich the influence function. The beauty of the present approach is that it facilitates easy replacement of the influence functions. In other words, it favors the `influence-function refinement' procedure in a bid to search for more accurate solutions. It is analogous to the `p-version refinement' in the traditional finite-element procedures. The present approach can yield very accurate solutions without adopting refined meshes. As a result, the quantities around the singularity can be evaluated directly once the nodal values are solved. No additional post-processing is needed. Firstly, the formulation of the present PUFEM approach is described. Subsequently, illustrative examples show the application to three classical singular benchmark problems having various orders of singularity. Results obtained through mesh refinements, single-nodal-cover refinements or multi-nodal-cover refinements are compared.

  20. Component-wise partitioned finite element method for wave propagation and dynamic contact problems

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Cho, S.S.; Červ, Jan; Park, K.C.

    Plzeň : University of West Bohemia, 2014 - (Adámek, V.). s. 55-56 ISBN 978-80-261-0429-2. [Computational Mechanics 2014 /30./. 03.11.2014-05.11.2014, Špičák] R&D Projects: GA ČR(CZ) GAP101/12/2315; GA ČR(CZ) GAP101/11/0288 Institutional support: RVO:61388998 Keywords : Stress wave propagation * Finite element method * Explicit time integrator Subject RIV: BI - Acoustics

  1. Component-wise partitioned finite element method in linear wave propagation problems: benchmark tests

    Czech Academy of Sciences Publication Activity Database

    Kolman, Radek; Cho, S.S.; Červ, Jan; Park, K.C.

    Praha : Institute of Thermomechanics AS CR, 2014 - (Zolotarev, I.; Pešek, L.), s. 31-36 ISBN 978-80-87012-54-3. [DYMAMESI 2014. Praha (CZ), 25.11.2014-26.11.2014] R&D Projects: GA ČR(CZ) GAP101/11/0288 Institutional support: RVO:61388998 Keywords : stress wave propagation * finite element method * explicit time integrator * spurious oscillations * stress discontinuities Subject RIV: JR - Other Machinery

  2. Application of Loreau & Hector's (2001) partitioning method to complex functional traits

    OpenAIRE

    Grossiord, Charlotte; Granier, André; Gessler, Arthur; Scherer-Lorenzen, Michael; Pollastrini, Martina; Bonal, Damien

    2013-01-01

    In 2001, Loreau and Hector proposed a method to calculate the effect of biodiversity on ecosystem-level properties that distinguished selection effects (SE) from complementarity effects (CE). The approach was designed and has been widely used for the study of yield in mixed-species situations taking into account the relative abundance of each species in ecosystem-level yield. However, complex functional traits commonly used to integrate ecosystem-level properties that cannot be analysed like ...

  3. The overall parameter estimation of academic performance based on the Bayesian method

    Institute of Scientific and Technical Information of China (English)

    祝翠; 刘焕彬

    2014-01-01

    The Bayesian method makes full use of prior information and integrates it with sample information for statistical inference. This paper applies the method to point estimation and interval estimation in order to infer academic performance, and thereby to evaluate learning effects and teaching quality. Comparing the classical method with the Bayesian method leads to the conclusion that, for inference about the overall average grade, the Bayesian method is simple, practical and more persuasive.
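
    A minimal sketch of the idea follows, assuming a conjugate normal-normal model for the class mean (the article's exact prior choices are not reproduced here); the prior parameters and scores are invented.

```python
# Conjugate normal-normal sketch: combine a prior on the class mean grade
# with sample data to get a Bayesian point and interval estimate.
import numpy as np
from scipy import stats

mu0, tau0 = 75.0, 5.0          # prior mean grade and prior std (assumed)
sigma = 10.0                   # known score std (assumed)
scores = np.array([72, 81, 69, 77, 84, 70, 79, 74])

n, xbar = len(scores), scores.mean()
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)            # posterior variance
post_mean = post_var * (mu0 / tau0**2 + n * xbar / sigma**2)

lo, hi = stats.norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))
print(f"posterior mean {post_mean:.1f}, 95% credible interval ({lo:.1f}, {hi:.1f})")
```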

  4. baySeq: Empirical Bayesian methods for identifying differential expression in sequence count data

    Directory of Open Access Journals (Sweden)

    Hardcastle Thomas J

    2010-08-01

    Background: High throughput sequencing has become an important technology for studying expression levels in many types of genomic, and particularly transcriptomic, data. One key way of analysing such data is to look for elements of the data which display particular patterns of differential expression in order to take these forward for further analysis and validation. Results: We propose a framework for defining patterns of differential expression and develop a novel algorithm, baySeq, which uses an empirical Bayes approach to detect these patterns of differential expression within a set of sequencing samples. The method assumes a negative binomial distribution for the data and derives an empirically determined prior distribution from the entire dataset. We examine the performance of the method on real and simulated data. Conclusions: Our method performs at least as well, and often better, than existing methods for analyses of pairwise differential expression in both real and simulated data. When we compare methods for the analysis of data from experimental designs involving multiple sample groups, our method again shows substantial gains in performance. We believe that this approach thus represents an important step forward for the analysis of count data from sequencing experiments.
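
    The flavour of the model comparison can be sketched for a single gene: score the counts under a "no DE" pattern and a "DE" pattern with negative binomial likelihoods and combine them with a prior weight. Unlike baySeq, this toy plugs in point estimates of the means and a fixed dispersion instead of deriving an empirical prior from the whole dataset; all numbers are invented.

```python
# Toy comparison of two expression patterns for one gene, in the spirit of
# baySeq: "no differential expression" (one shared mean) versus "DE" (one
# mean per group), with a negative binomial likelihood and fixed dispersion.
import numpy as np
from scipy.stats import nbinom

def nb_loglik(counts, mu, size=10):
    # scipy's nbinom uses (n, p); convert from mean/dispersion
    p = size / (size + mu)
    return nbinom.logpmf(counts, size, p).sum()

group_a = np.array([110, 95, 120, 100])
group_b = np.array([45, 60, 50, 55])
both = np.concatenate([group_a, group_b])

log_m_nde = nb_loglik(both, both.mean())
log_m_de = nb_loglik(group_a, group_a.mean()) + nb_loglik(group_b, group_b.mean())

prior_de = 0.1                           # prior weight on the DE pattern
log_post = np.array([np.log1p(-prior_de) + log_m_nde, np.log(prior_de) + log_m_de])
post = np.exp(log_post - np.logaddexp(*log_post))   # normalise in log space
print(f"P(DE | data) = {post[1]:.3f}")
```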

  5. A multivariate nonlinear mixed effects method for analyzing energy partitioning in growing pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Danfær, Allan Christian; Chwalibog, André;

    2010-01-01

    Simultaneous equations have become increasingly popular for describing the effects of nutrition on the utilization of ME for protein (PD) and lipid deposition (LD) in animals. The study developed a multivariate nonlinear mixed effects (MNLME) framework and compared it with an alternative method. ... The instantaneous response curve of an animal to varying energy supply followed the law of diminishing returns behavior. The Michaelis-Menten function was adopted to represent a biological relationship in which the affinity constant (k) represented the sensitivity of PD to ME above maintenance. The approach...

  6. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    2010-01-01

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Based on data from the Danish passenger railway operator DSB S-tog A/S, a solution method to the train driver recovery problem (TDRP) is developed. The TDRP is formulated as a set partitioning problem and solved with a ... branching strategy using the depth-first search of the Branch & Bound tree. The LP relaxation of the TDRP possesses strong integer properties. We present test scenarios generated from the historical real-life operations data of DSB S-tog A/S. The numerical results show that all but one of the tested instances...
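
    A toy version of the underlying set-partitioning model is sketched below: choose a cheapest subset of candidate duties covering every task exactly once. Real TDRP instances are solved via the LP relaxation and branching, not the brute-force enumeration used here; the duties and costs are invented.

```python
# Illustrative brute force for a tiny set-partitioning model in the spirit of
# the TDRP: pick a cheapest set of candidate duties (columns) so that every
# train task is covered exactly once.
from itertools import combinations

tasks = {"t1", "t2", "t3", "t4"}
duties = {                        # candidate recovery duties and their costs
    "d1": ({"t1", "t2"}, 3.0),
    "d2": ({"t3"}, 1.0),
    "d3": ({"t4"}, 1.5),
    "d4": ({"t2", "t3", "t4"}, 4.0),
    "d5": ({"t1"}, 2.0),
}

best = None
for r in range(1, len(duties) + 1):
    for combo in combinations(duties, r):
        covered = [t for d in combo for t in duties[d][0]]
        # exact cover: every task covered, no task covered twice
        if len(covered) == len(tasks) and set(covered) == tasks:
            cost = sum(duties[d][1] for d in combo)
            if best is None or cost < best[1]:
                best = (combo, cost)

print("optimal duty selection:", best)
```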

  7. Nonlinear tracking in a diffusion process with a Bayesian filter and the finite element method

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Thygesen, Uffe Høgsbro; Madsen, Henrik

    2011-01-01

    ... become complicated using SMC because Monte Carlo randomness is introduced. The finite element (FE) method solves the Kolmogorov equations of the SDE numerically on a triangular unstructured mesh for which boundary conditions to the state-space are simple to incorporate. The FE approach to nonlinear state estimation is suited for off-line data analysis because the computed smoothed state densities, maximum a posteriori parameter estimates and state sequence are deterministic conditional on the finite element mesh and the observations. The proposed method is conceptually similar to existing point-mass filtering methods, but is computationally more advanced and generally applicable. The performance of the FE estimators in relation to SMC and to the resolution of the spatial discretization is examined empirically through simulation. A real-data case study involving fish tracking is also analysed.

  8. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
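
    A self-contained rejection sampler in the spirit of the article is easy to write: draw candidates from the prior and accept them with probability proportional to the likelihood. The binomial data and uniform prior below are illustrative, not the authors' case-control data.

```python
# Minimal rejection sampler for a posterior: draw from the prior, accept with
# probability likelihood / max-likelihood. Here: binomial data, uniform prior.
import numpy as np

rng = np.random.default_rng(1)
k, n = 36, 234                       # events / trials (illustrative)

def likelihood(p):
    return p**k * (1 - p)**(n - k)

L_max = likelihood(k / n)            # envelope constant at the MLE

samples = []
while len(samples) < 5000:
    p = rng.uniform(0, 1)            # candidate from the prior
    if rng.uniform(0, 1) < likelihood(p) / L_max:
        samples.append(p)            # accepted draw from the posterior

post = np.array(samples)
print(f"posterior mean {post.mean():.3f}, 95% interval "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```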

  9. Bayesian Adaptive Exploration

    Science.gov (United States)

    Loredo, Thomas J.

    2004-04-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data-measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object-show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
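
    When the noise is signal-independent and Gaussian, maximum entropy sampling reduces to observing where the posterior predictive variance of the signal is largest, which can be sketched in a few lines. The sinusoidal toy model and posterior draws below are invented stand-ins for the paper's orbit-fitting example.

```python
# Sketch of "maximum entropy sampling" for adaptive design: pick the next
# observation where the posterior predictive variance is largest.
# Toy model: y = sin(omega * t), with posterior samples of omega.
import numpy as np

rng = np.random.default_rng(2)
omega_post = rng.normal(1.0, 0.05, size=2000)    # posterior draws of omega
t_candidates = np.linspace(0.0, 20.0, 200)       # allowed observing times

# predictive signal for every (posterior sample, candidate time) pair
pred = np.sin(np.outer(omega_post, t_candidates))
var = pred.var(axis=0)

t_next = t_candidates[var.argmax()]
print(f"observe next at t = {t_next:.2f} (predictive sd {var.max()**0.5:.3f})")
```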

  10. A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods

    Science.gov (United States)

    Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan

    2008-01-01

    This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…

  11. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    We present new methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. Our method is "on-line" as compared with alternative approaches to the problem which require "off-line" computations. Since it is needed to sim...

  12. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network and inference is performed by constructing a junction tree from this network. In this paper we propose a method for translating directly from object oriented Bayesian networks to junction trees, avoiding the intermediate translation. We pursue two main purposes: firstly, to maintain the original structure organized in an instance tree...

  13. Efficient Bayesian Phase Estimation

    Science.gov (United States)

    Wiebe, Nathan; Granade, Chris

    2016-07-01

    We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.
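
    The Bayesian update at the heart of phase estimation can be illustrated with a simple grid-based variant (the paper's rejection filtering replaces the grid with approximate sampling, which this sketch does not implement). The outcome model and experiment schedule below are common textbook choices, assumed here for illustration.

```python
# Simplified grid-based Bayesian phase estimation.
# Outcome model assumed: P(0 | phi; theta, M) = (1 + cos(M*(phi - theta))) / 2.
import numpy as np

rng = np.random.default_rng(3)
true_phi = 1.234
grid = np.linspace(-np.pi, np.pi, 2001)
prior = np.ones_like(grid) / grid.size

for step in range(50):
    theta, M = rng.uniform(-np.pi, np.pi), 1 + step // 5  # experiment settings
    p0 = (1 + np.cos(M * (true_phi - theta))) / 2
    outcome0 = rng.uniform() < p0                          # simulate measurement
    like = (1 + np.cos(M * (grid - theta))) / 2
    prior = prior * (like if outcome0 else 1 - like)       # Bayes update
    prior /= prior.sum()

est = grid[prior.argmax()]
print(f"true phase {true_phi:.3f}, MAP estimate {est:.3f}")
```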

  14. On free fermions and plane partitions

    OpenAIRE

    Foda, O.; Wheeler, M.; Zuparic, M.

    2008-01-01

    We use free fermion methods to re-derive a result of Okounkov and Reshetikhin relating charged fermions to random plane partitions, and to extend it to relate neutral fermions to strict plane partitions.

  15. Diagnostic analysis of turbulent boundary layer data by a trivariate Lagrangian partitioning method

    Energy Technology Data Exchange (ETDEWEB)

    Welsh, P.T. [Florida State Univ., Tallahassee, FL (United States)

    1994-12-31

    The rapid scientific and technological advances in meteorological theory and modeling have occurred predominantly at the large (synoptic) scale, characterized by the extratropical cyclone. Turbulent boundary layer flows, in contrast, have been slower to develop both theoretically and in accuracy, for several reasons. Many problems remain in boundary layer models, among them limits on available computational power, the inability to handle countergradient fluxes, poor matching of modeled growth to real boundary layers, and inaccuracy in calculating the diffusion of scalar concentrations. Such transport errors exist within the boundary layer as well as in the free atmosphere above. This research uses a new method which can provide insight into these problems and ultimately improve boundary layer models. There are several potential applications of the insights provided by this approach, among them estimation of cloud contamination of satellite remotely sensed surface parameters, improved flux and vertical transport calculations, and better understanding of the diurnal boundary layer growth process and its hysteresis cycle.

  16. Development of a Bayesian method for the analysis of inertial confinement fusion experiments on the NIF

    CERN Document Server

    Gaffney, Jim A; Sonnad, Vijay; Libby, Stephen B

    2013-01-01

    The complex nature of inertial confinement fusion (ICF) experiments results in a very large number of experimental parameters that are only known with limited reliability. These parameters, combined with the myriad physical models that govern target evolution, make the reliable extraction of physics from experimental campaigns very difficult. We develop an inference method that allows all important experimental parameters, and previous knowledge, to be taken into account when investigating underlying microphysics models. The result is framed as a modified $\\chi^{2}$ analysis which is easy to implement in existing analyses, and quite portable. We present a first application to a recent convergent ablator experiment performed at the NIF, and investigate the effect of variations in all physical dimensions of the target (very difficult to do using other methods). We show that for well characterised targets in which dimensions vary at the 0.5% level there is little effect, but 3% variations change the results of i...

  17. Bayesian methods for meta-analysis of causal relationships estimated using genetic instrumental variables

    DEFF Research Database (Denmark)

    2010-01-01

    Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context of meta-analysis, giving an overall estimate of the causal relationship between the phenotype and the outcome, and an assessment of its heterogeneity across studies. As an example, we estimate the causal relationship of blood concentrations of C-reactive protein on fibrinogen levels using data from 11 studies. These methods provide a flexible framework for efficient estimation of causal relationships derived from multiple studies. Issues discussed include weak instrument bias, analysis of binary outcome data such as disease risk, missing genetic data, and the use of haplotypes.

  18. A Clustering Method of Highly Dimensional Patent Data Using Bayesian Approach

    OpenAIRE

    Sunghae Jun

    2012-01-01

    Patent data contain diverse technological information about any technology field, and many companies manage patent data to shape their R&D policy. Patent analysis is one approach to patent management, and it is also an important tool for technology forecasting. Patent clustering is one of the tasks in patent analysis. In this paper, we propose an efficient clustering method for patent documents. Generally, patent data consist of text documents. The patent documents have...

  19. Bayesian approach to change point detection of unemployment rate via MCMC methods

    Czech Academy of Sciences Publication Activity Database

    Reisnerová, Soňa

    Plzeň : University of West Bohemia in Pilsen, 2006 - (Lukáš, L.), s. 447-452 ISBN 978-80-7043-480-2. [Mathematical Methods in Economics 2006. Plzeň (CZ), 13.09.2006-15.09.2006] R&D Projects: GA AV ČR IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : Change point * unemployment rate * MCMC * Poisson model Subject RIV: BB - Applied Statistics, Operational Research

  20. The index-flood method in a Bayesian perspective : sensitivity to regional heterogeneity

    OpenAIRE

    Viglione, A.; Gaume, Eric; Gall, L.; Bloschl, G.; Szolgay, J.

    2009-01-01

    In regional flood frequency analysis, the commonly used index-flood method assumes that the frequency distribution of flood peaks for different sites in a homogeneous region is the same, provided that the discharge series are rescaled using a site-specific scale factor. This scale factor (index flood) is subsequently related to catchment characteristics to permit estimation of flood quantiles at ungauged sites within the region. In this work, a Markov chain Monte Carlo (MCMC) approach is used...

  1. Development of a Bayesian method for the analysis of inertial confinement fusion experiments on the NIF

    OpenAIRE

    Gaffney, Jim A; Clark, Dan; Sonnad, Vijay; Libby, Stephen B.

    2013-01-01

    The complex nature of inertial confinement fusion (ICF) experiments results in a very large number of experimental parameters that are only known with limited reliability. These parameters, combined with the myriad physical models that govern target evolution, make the reliable extraction of physics from experimental campaigns very difficult. We develop an inference method that allows all important experimental parameters, and previous knowledge, to be taken into account when investigating un...

  2. Bayesian methods to overcome the winner’s curse in genetic studies

    OpenAIRE

    Xu, Lizhen; Craiu, Radu V.; Sun, Lei

    2011-01-01

    Parameter estimates for associated genetic variants, reported in the initial discovery samples, are often grossly inflated compared to the values observed in the follow-up replication samples. This type of bias is a consequence of the sequential procedure in which the estimated effect of an associated genetic marker must first pass a stringent significance threshold. We propose a hierarchical Bayes method in which a spike-and-slab prior is used to account for the possibility that the signifi...

  3. A Hamiltonian Monte–Carlo method for Bayesian inference of supermassive black hole binaries

    International Nuclear Information System (INIS)

    We investigate the use of a Hamiltonian Monte Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov chain Monte Carlo (MCMC) methods, such as Metropolis-Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte Carlo treats the inverse likelihood surface as a ‘gravitational potential’ and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms due to the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10^6 iteration chain from ∼10^9 to ∼10^6. The result is an implementation of the Hamiltonian Monte Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC. (paper)
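
    A minimal HMC step is sketched below: leapfrog-integrate Hamilton's equations on the negative log-posterior and accept or reject on the change in the Hamiltonian. The 2D Gaussian target is illustrative only; in particular it sidesteps the expensive waveform-gradient issue the paper addresses.

```python
# Minimal Hamiltonian Monte Carlo sketch on a 2D Gaussian target.
import numpy as np

rng = np.random.default_rng(4)
inv_cov = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.5]]))

def U(q):       return 0.5 * q @ inv_cov @ q    # potential = -log posterior
def grad_U(q):  return inv_cov @ q

def hmc_step(q, eps=0.15, L=20):
    p = rng.normal(size=q.size)                 # draw fresh momenta
    q_new, p_new = q.copy(), p - 0.5 * eps * grad_U(q)   # initial half kick
    for _ in range(L):                          # leapfrog trajectory
        q_new = q_new + eps * p_new
        p_new = p_new - eps * grad_U(q_new)
    p_new = p_new + 0.5 * eps * grad_U(q_new)   # undo the extra half kick
    dH = (U(q_new) - U(q)) + 0.5 * (p_new @ p_new - p @ p)
    return q_new if rng.uniform() < np.exp(-dH) else q   # Metropolis test

q, chain = np.zeros(2), []
for _ in range(2000):
    q = hmc_step(q)
    chain.append(q)
print("posterior mean estimate:", np.mean(chain, axis=0).round(3))
```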

  4. Bayesian demography 250 years after Bayes.

    Science.gov (United States)

    Bijak, Jakub; Bryant, John

    2016-01-01

    Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms. PMID:26902889

  5. How about a Bayesian M/EEG imaging method correcting for incomplete spatio-temporal priors

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Attias, Hagai T.; Sekihara, Kensuke; Wipf, David; Hansen, Lars Kai; Nagarajan, Srikantan S.

    In contrast to previous spatio-temporal inverse M/EEG models, the proposed model benefits from consisting of two source terms: a spatio-temporal pattern term limiting the source configuration to a spatio-temporal subspace, and a source correcting term to pick up source activity not covered by the spatio-temporal pattern term. ... AUC is 0.985 (sAquavit) and 0.857 (Bolstad et al., 2009). Our results demonstrate that the sAquavit model is capable of balancing spatio-temporal prior guidance and source correction estimation to obtain superior estimates relative to current inverse methods.

  6. A Bayesian multilocus association method: allowing for higher-order interaction in association studies

    DEFF Research Database (Denmark)

    Albrechtsen, Anders; Castella, Sofie; Andersen, Gitte;

    2007-01-01

    For most common diseases with heritable components, not a single or a few single-nucleotide polymorphisms (SNPs) explain most of the variance for these disorders. Instead, much of the variance may be caused by interactions (epistasis) among multiple SNPs or interactions with environmental conditions. We present a new powerful statistical model for analyzing and interpreting genomic data that influence multifactorial phenotypic traits with a complex and likely polygenic inheritance. The new method is based on Markov chain Monte Carlo (MCMC) and allows for identification of sets of SNPs and...

  7. Partition density functional theory

    Science.gov (United States)

    Nafziger, Jonathan

    Partition density functional theory (PDFT) is a method for dividing a molecular electronic structure calculation into fragment calculations. The molecular density and energy corresponding to Kohn-Sham density-functional theory (KS-DFT) may be exactly recovered from these fragments. Each fragment acts as an isolated system except for the influence of a global one-body 'partition' potential which deforms the fragment densities. In this work, the developments of PDFT are put into the context of other fragment-based density functional methods. We developed three numerical implementations of PDFT: One within the NWChem computational chemistry package using basis sets, and the other two developed from scratch using real-space grids. It is shown that all three of these programs can exactly reproduce a KS-DFT calculation via fragment calculations. The first of our in-house codes handles non-interacting electrons in arbitrary one-dimensional potentials with any number of fragments. This code is used to explore how the exact partition potential changes for different partitionings of the same system and also to study features which determine which systems yield non-integer PDFT occupations and which systems are locked into integer PDFT occupations. The second in-house code, CADMium, performs real-space calculations of diatomic molecules. Features of the exact partition potential are studied for a variety of cases and an analytical formula determining singularities in the partition potential is derived. We introduce an approximation for the non-additive kinetic energy and show how this quantity can be computed exactly. Finally a PDFT functional is developed to address the issues of static correlation and delocalization errors in approximations within DFT. The functional is applied to the dissociation of H2+ and H2.

  8. Generating Primes Using Partitions

    OpenAIRE

    Pittu, Ganesh Reddy

    2015-01-01

    This paper presents a new technique for generating large prime numbers from smaller ones by employing Goldbach partitions. Experiments are presented showing how this method produces candidate prime numbers that are subsequently tested using either the Miller-Rabin or AKS primality test.
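
    As a loose illustration only (the paper's exact construction is not reproduced here), the sketch below enumerates Goldbach partitions of an even number and then primality-tests larger candidates assembled from the partition primes; the candidate formula is hypothetical, and sympy.isprime stands in for Miller-Rabin/AKS.

```python
# Illustrative sketch: enumerate Goldbach partitions of an even number
# (pairs of primes p + q = 2m) and primality-test candidates built from them.
from sympy import isprime, primerange

def goldbach_partitions(even_n):
    return [(p, even_n - p) for p in primerange(2, even_n // 2 + 1)
            if isprime(even_n - p)]

parts = goldbach_partitions(100)
print("Goldbach partitions of 100:", parts)

# hypothetical candidate construction: combine partition primes, then test
candidates = sorted({p * q + p + q for p, q in parts})
primes_found = [c for c in candidates if isprime(c)]
print("candidates that happen to be prime:", primes_found[:5])
```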

  9. Monitoring county-level chlamydia incidence in Texas, 2004 – 2005: application of empirical Bayesian smoothing and Exploratory Spatial Data Analysis (ESDA methods

    Directory of Open Access Journals (Sweden)

    Owens Chantelle J

    2009-02-01

    Background: Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, genders and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results: Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly (p < 0.05) higher smoothed rates (more than 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173%, from 139 to 379). The relative change in smoothed chlamydia rates in Newton county was significantly (p < 0.05) ... Conclusion: Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time.

  10. CONTROL BASED ON NUMERICAL METHODS AND RECURSIVE BAYESIAN ESTIMATION IN A CONTINUOUS ALCOHOLIC FERMENTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Olga L. Quintero

    Biotechnological processes represent a challenge in the control field due to their high nonlinearity. In particular, continuous alcoholic fermentation with Zymomonas mobilis (Z. mobilis) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits oscillatory behavior in the process variables due to the influence of inhibition dynamics (the effect of ethanol concentration on biomass, substrate, and product concentrations). In this work a new solution for control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator in Z. mobilis and without the use of complex calculations.

  11. Empirical Bayesian Method for the Estimation of Literacy Rate at Sub-district Level Case Study: Sumenep District of East Java Province

    Directory of Open Access Journals (Sweden)

    A.Tuti Rumiati

    2012-02-01

    This paper discusses a Bayesian method of small area estimation (SAE) based on a binomial response variable. SAE methods are being developed to estimate parameters in small areas where the sample is insufficient. The case study is literacy rate estimation at the sub-district level in Sumenep district, East Java Province. Literacy rate is measured by the proportion of people aged 10 years or more who are able to read and write. In the case study we used Social Economic Survey (Susenas) data collected by BPS. The SAE approach was applied since the Susenas data are not representative enough to estimate parameters at the sub-district level, being designed to estimate parameters at the regional level (a district/city at minimum). In this research, the response variable used was the logit transformation of pi (the parameter of the binomial distribution). We applied direct and indirect approaches to parameter estimation, both using empirical Bayes. For direct estimation we used a Beta prior distribution and a Normal prior distribution for the logit function of pi, and estimated the parameters by a numerical method, i.e., Monte Carlo integration. For the indirect approach, we used auxiliary variables which are combinations of sex and age (divided into five categories). Penalized quasi-likelihood (PQL) was used to obtain parameter estimates of the SAE model and restricted maximum likelihood (REML) for MSE estimation. In addition to the Bayesian approach, we also conducted direct estimation with a classical approach in order to evaluate the quality of the estimators. This research gives the following findings: the Bayesian approach for the SAE model gives the best estimates, having the lowest MSE values compared to the other methods; for direct estimation, the Bayesian approach using Beta and logit-Normal prior distributions gives results very similar to direct estimation with the classical approach, since the weight of ... is too
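
    A stripped-down empirical Bayes variant of the idea can be sketched with a conjugate beta-binomial model: fit a Beta prior to all sub-districts by the method of moments, then shrink each direct proportion toward the overall rate. This replaces the paper's logit-normal/PQL/REML machinery with the simplest conjugate analogue; the counts are invented.

```python
# Empirical Bayes sketch for small-area proportions: literate counts k_i out
# of n_i per sub-district, Beta(a, b) prior fitted from all areas, posterior
# mean shrinking small samples toward the overall rate.
import numpy as np

k = np.array([38, 12, 95, 7, 60])     # literate respondents per sub-district
n = np.array([40, 15, 110, 9, 70])    # sampled respondents

p = k / n
mean, var = p.mean(), p.var()
common = mean * (1 - mean) / var - 1   # method-of-moments Beta fit
a, b = mean * common, (1 - mean) * common

posterior_mean = (a + k) / (a + b + n)  # conjugate beta-binomial update
for i, (direct, eb) in enumerate(zip(p, posterior_mean)):
    print(f"area {i}: direct {direct:.3f} -> EB {eb:.3f}")
```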

  12. Partition expanders

    Czech Academy of Sciences Publication Activity Database

    Gavinsky, Dmitry; Pudlák, Pavel

    Dagstuhl: Schloss Dagstuhl, Leibniz-Zentrum für Informatik, 2014 - (Mayr, E.; Portier, N.), s. 325-336. (Leibniz International Proceedings in Informatics. 25). ISBN 978-3-939897-65-1. ISSN 1868-8969. [International Symposium on Theoretical Aspects of Computer Science (STACS 2014), /31./. Lyon (FR), 05.03.2014-08.03.2014] R&D Projects: GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : partitions * expanders * random graphs Subject RIV: BA - General Mathematics http://drops.dagstuhl.de/opus/volltexte/2014/4468/

  13. Portfolio Allocation for Bayesian Optimization

    OpenAIRE

    Brochu, Eric; Hoffman, Matthew W.; De Freitas, Nando

    2010-01-01

    Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several differen...

  14. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

     Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended...

  15. Development of partitioning method

    International Nuclear Information System (INIS)

    Spent fuels from nuclear power stations contain many useful elements, which can be utilized as heat and irradiation sources, radioisotopes, elemental resources, etc. Their recovery from spent fuel and effective use have the advantage of not only converting radioactive waste into beneficial resources but also promoting rationalization of the management and disposal of radioactive wastes. In the present study, published literature related to the recovery and utilization of useful elements in spent fuel was surveyed, the present state and trends of the relevant research and development were analyzed, and their future prospects were considered. Research and development on the recovery and utilization of useful elements are being continued mainly in the USA, Europe and Japan. A transportable food irradiator with Cs-137 and an electric power source with Sr-90 for a remote weather station are typical examples of major past applications. However, research and development on recovery and utilization are currently not very active, and future efforts are to be expected. The present study was conducted under the auspices of the Science and Technology Agency of Japan. (author)

  16. Matrix partitions of digraphs

    OpenAIRE

    Schell, David George

    2008-01-01

    The matrix partition problem has been of recent interest in graph theory. Matrix partitions generalize the study of graph colourings and homomorphisms. Many well-known graph partition problems can be stated in terms of matrices. For example skew partitions, split partitions, homogeneous sets, clique-cutsets, stable-cutsets and k-colourings can all be modeled as matrix partitions. For each matrix partition problem there is an equivalent trigraph H-colouring problem. We show a ‘dichotomy’ for t...

  17. Bayesian Decision-theoretic Methods for Parameter Ensembles with Application to Epidemiology

    Science.gov (United States)

    Gunterman, Haluna Penelope Frances

    and water-uptake behavior of CLs. Isolated CLs were made in-house and commercially and tested for their PC-S response. CLs have the propensity to be highly hydrophilic and require capillary pressures as low as -80 kPa to eject water. The presence of Pt or surface cracks increases hydrophilicity. These findings suggest that saturation in CLs, especially cracked CLs, may exacerbate poor transport. Lastly, this work includes early-stage development of a limiting-current measurement that can be used to calculate effective transport properties as a function of saturation. Results indicate that the method is valid, and different DM have higher transport depending on the operating condition. The technique is yet in a formative stage, and this work includes advice and recommendations for operation and design improvements.

  18. Bayesian Nonparametric Graph Clustering

    OpenAIRE

    Banerjee, Sayantan; Akbani, Rehan; Baladandayuthapani, Veerabhadran

    2015-01-01

    We present clustering methods for multivariate data exploiting the underlying geometry of the graphical structure between variables. As opposed to standard approaches that assume known graph structures, we first estimate the edge structure of the unknown graph using Bayesian neighborhood selection approaches, wherein we account for the uncertainty of graphical structure learning through model-averaged estimates of the suitable parameters. Subsequently, we develop a nonparametric graph cluster...

  19. Bayesian Neural Word Embedding

    OpenAIRE

    Barkan, Oren

    2016-01-01

    Recently, several works in the domain of natural language processing presented successful methods for word embedding. Among them, the Skip-gram (SG) with negative sampling, known also as Word2Vec, advanced the state-of-the-art of various linguistics tasks. In this paper, we propose a scalable Bayesian neural word embedding algorithm that can be beneficial to general item similarity tasks as well. The algorithm relies on a Variational Bayes solution for the SG objective and a detailed step by ...

  20. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    Energy Technology Data Exchange (ETDEWEB)

    Pask, J E; Sukumar, N; Guney, M; Hu, W

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now ubiquitous VASP, ABINIT, and Qbox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-difference (FD) and finite-element (FE) methods have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave based methods. The method developed here is...

  1. Sticking to (first) principles: quantum molecular dynamics and Bayesian probabilistic methods to simulate aquatic pollutant absorption spectra.

    Science.gov (United States)

    Trerayapiwat, Kasidet; Ricke, Nathan; Cohen, Peter; Poblete, Alex; Rudel, Holly; Eustis, Soren N

    2016-08-10

    This work explores the relationship between theoretically predicted excitation energies and experimental molar absorption spectra as they pertain to environmental aquatic photochemistry. First, an overview of pertinent Quantum Chemical descriptions of sunlight-driven electronic transitions in organic pollutants is presented. Second, a combined molecular dynamics (MD), time-dependent density functional theory (TD-DFT) analysis of the ultraviolet to visible (UV-Vis) absorption spectra of six model organic compounds is presented alongside accurate experimental data. The functional relationship between the experimentally observed molar absorption spectrum and the discrete quantum transitions is examined. A rigorous comparison of the accuracy of the theoretical transition energies (ΔES0→Sn) and oscillator strengths (fS0→Sn) is afforded by the probabilistic convolution and deconvolution procedure described. This method deconvolutes experimental spectra using a Gaussian Mixture Model combined with the Bayesian Information Criterion (BIC) to determine the mean (μ) and standard deviation (σ), as well as the number, of the observed singlet-to-singlet transition energy distributions. This procedure allows a direct comparison of the one-electron (quantum) transitions that are the result of quantum chemical calculations and the ensemble of non-adiabatic quantum states that produce the macroscopic effect of a molar absorption spectrum. Poor agreement between the vertical excitation energies produced from TD-DFT calculations with five different functionals (CAM-B3LYP, PBE0, M06-2X, BP86, and LC-BLYP) suggests a failure of the theory to capture the low-energy, environmentally important, electronic transitions in our model organic pollutants. However, the method of explicit solvation of the organic solute using the quantum Effective Fragment Potential (EFP) in a density functional molecular dynamics trajectory simulation shows promise as a robust model of the hydrated organic...
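    The deconvolution step described here, fitting Gaussian mixtures of increasing size and selecting among them by BIC, can be sketched with scikit-learn. The two-band synthetic "spectrum" below is a stand-in for real absorption data; only the select-by-BIC mechanics are meant to carry over.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      # Synthetic stand-in for an absorption spectrum sampled as transition
      # energies (eV): two overlapping bands.
      rng = np.random.default_rng(1)
      energies = np.concatenate([rng.normal(4.5, 0.15, 2000),
                                 rng.normal(5.2, 0.25, 1000)]).reshape(-1, 1)

      # Fit mixtures with 1..5 components and keep the one with the lowest BIC.
      fits = [GaussianMixture(k, random_state=0).fit(energies) for k in range(1, 6)]
      best = min(fits, key=lambda g: g.bic(energies))
      print("number of bands selected:", best.n_components)
      for mu, var in zip(best.means_.ravel(), best.covariances_.ravel()):
          print(f"  band at {mu:.2f} eV, sigma = {np.sqrt(var):.2f} eV")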

  2. Rigorous Approach in Investigation of Seismic Structure and Source Characteristics in Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    Science.gov (United States)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.

  3. Single-cycle method for partitioning of trivalent actinides using completely incinerable reagents from nitric acid medium

    Energy Technology Data Exchange (ETDEWEB)

    Ravi, Jammu; Venkatesan, K.A.; Antony, M.P.; Srinivasan, T.G.; Rao, P.R. Vasudeva [Indira Gandhi Centre for Atomic Research, Kalpakkam (India). Fuel Chemistry Div.

    2014-10-01

    A new approach, namely 'Single-cycle method for partitioning of Minor Actinides using completely incinerable ReagenTs' (SMART), has been explored for the separation of Am(III) from Eu(III) present in nitric acid medium. The extraction behavior of Am(III) and Eu(III) in a solution of an unsymmetrical diglycolamide, N,N-didodecyl-N',N'-dioctyl-3-oxapentane-1,5-diamide (D3DODGA), and an acidic extractant, N,N-di-2-ethylhexyl diglycolamic acid (HDEHDGA), in n-dodecane was studied. The distribution ratio of both these metal ions in D3DODGA-HDEHDGA/n-dodecane initially decreased with increasing nitric acid concentration, reached a minimum at 0.1 M nitric acid, and then increased. Synergic extraction of Am(III) and Eu(III) was observed at nitric acid concentrations above 0.1 M and antagonism at lower acidities. The contrasting behavior observed at different acidities was probed by slope analysis of the extraction data. The study revealed the involvement of both D3DODGA and HDEHDGA during synergism and increased participation of HDEHDGA during antagonism. The stripping behavior of Am(III) and Eu(III) from the loaded organic phase was studied as a function of nitric acid, DTPA, and citric acid concentrations. The conditions needed for the mutual separation of Am(III) and Eu(III) from the loaded organic phase were optimized. Our studies revealed the possibility of separating trivalent actinides from HLLW using these completely incinerable reagents. (orig.)

  4. Single-cycle method for partitioning of trivalent actinides using completely incinerable reagents from nitric acid medium

    International Nuclear Information System (INIS)

    A new approach, namely 'Single-cycle method for partitioning of Minor Actinides using completely incinerable ReagenTs' (SMART), has been explored for the separation of Am(III) from Eu(III) present in nitric acid medium. The extraction behavior of Am(III) and Eu(III) in a solution of an unsymmetrical diglycolamide, N,N-didodecyl-N',N'-dioctyl-3-oxapentane-1,5-diamide (D3DODGA), and an acidic extractant, N,N-di-2-ethylhexyl diglycolamic acid (HDEHDGA), in n-dodecane was studied. The distribution ratio of both these metal ions in D3DODGA-HDEHDGA/n-dodecane initially decreased with increasing nitric acid concentration, reached a minimum at 0.1 M nitric acid, and then increased. Synergic extraction of Am(III) and Eu(III) was observed at nitric acid concentrations above 0.1 M and antagonism at lower acidities. The contrasting behavior observed at different acidities was probed by slope analysis of the extraction data. The study revealed the involvement of both D3DODGA and HDEHDGA during synergism and increased participation of HDEHDGA during antagonism. The stripping behavior of Am(III) and Eu(III) from the loaded organic phase was studied as a function of nitric acid, DTPA, and citric acid concentrations. The conditions needed for the mutual separation of Am(III) and Eu(III) from the loaded organic phase were optimized. Our studies revealed the possibility of separating trivalent actinides from HLLW using these completely incinerable reagents. (orig.)

  5. Empirical Bayesian Method for the Estimation of Literacy Rate at Sub-district Level Case Study: Sumenep District of East Java Province

    OpenAIRE

    A.Tuti Rumiati; Khairil Anwar Notodiputro; Kusman Sadik; Wayan Mangku, I.

    2012-01-01

    This paper discusses Bayesian Method of Small Area Estimation (SAE) based on Binomial response variable. SAE method being developed to estimate parameter in small area due to insufficiency of sample. The case study is literacy rate estimation at sub-district level in Sumenep district, East Java Province. Literacy rate is measured by proportion of people who are able to read and write, from the population of 10 year-old or more. In the case study we used Social Economic Survey (Susenas) data co...

  6. Bayesian Magic in Asteroseismology

    Science.gov (United States)

    Kallinger, T.

    2015-09-01

    Only a few years ago asteroseismic observations were so rare that scientists had plenty of time to work on individual data sets. They could tune their algorithms in any possible way to squeeze out the last bit of information. Nowadays this is impossible. With missions like MOST, CoRoT, and Kepler we basically drown in new data every day. To handle this in a sufficient way, statistical methods become more and more important. This is why Bayesian techniques have started their triumphal march across asteroseismology. I will go with you on a journey through Bayesian Magic Land that brings us to the sea of granulation background, the forest of peakbagging, and the stony alley of model comparison.

  7. Bayesian Attractor Learning

    Science.gov (United States)

    Wiegerinck, Wim; Schoenaker, Christiaan; Duane, Gregory

    2016-04-01

    Recently, methods for model fusion by dynamically combining model components in an interactive ensemble have been proposed. In these proposals, fusion parameters have to be learned from data. One can view these systems as parametrized dynamical systems. We address the question of learnability of dynamical systems with respect to both short term (vector field) and long term (attractor) behavior. In particular, we are interested in learning in the imperfect model class setting, in which the ground truth has a higher complexity than the models, e.g. due to unresolved scales. We take a Bayesian point of view and we define a joint log-likelihood that consists of two terms: one is the vector field error, and the other is the attractor error, for which we take the L1 distance between the stationary distributions of the model and the assumed ground truth. In the context of linear models (like so-called weighted supermodels), and assuming a Gaussian error model in the vector fields, vector field learning leads to a tractable Gaussian solution. This solution can then be used as a prior for the next step, Bayesian attractor learning, in which the attractor error is used as a log-likelihood term. Bayesian attractor learning is implemented by elliptical slice sampling, a sampling method for systems with a Gaussian prior and a non-Gaussian likelihood. Simulations with a partially observed driven Lorenz 63 system illustrate the approach.
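    Elliptical slice sampling is short enough to sketch in full. Below is a minimal implementation for a zero-mean Gaussian prior, following Murray, Adams and MacKay (2010); the Laplace log-likelihood in the usage lines is an arbitrary stand-in for an attractor-error term.

      import numpy as np

      def elliptical_slice_sample(f, log_lik, chol_sigma, rng):
          nu = chol_sigma @ rng.standard_normal(f.size)   # auxiliary draw from the prior
          log_y = log_lik(f) + np.log(rng.uniform())      # slice level
          theta = rng.uniform(0.0, 2.0 * np.pi)
          lo, hi = theta - 2.0 * np.pi, theta
          while True:
              f_new = f * np.cos(theta) + nu * np.sin(theta)
              if log_lik(f_new) > log_y:                  # on the slice: accept
                  return f_new
              if theta < 0.0:                             # shrink the bracket and retry
                  lo = theta
              else:
                  hi = theta
              theta = rng.uniform(lo, hi)

      rng = np.random.default_rng(0)
      L = np.linalg.cholesky(np.array([[1.0, 0.8], [0.8, 1.0]]))
      log_lik = lambda f: -np.sum(np.abs(f - 1.0))        # non-Gaussian (Laplace) likelihood
      f = np.zeros(2)
      for _ in range(1000):
          f = elliptical_slice_sample(f, log_lik, L, rng)
      print("final sample:", f)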

  8. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational-rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane

    Science.gov (United States)

    Mielke, Steven L.; Truhlar, Donald G.

    2015-01-01

    We present an improved version of our "path-by-path" enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P^-6) to O(P^-12), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational-rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan-Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ~1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300-3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  9. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational–rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane

    Energy Technology Data Exchange (ETDEWEB)

    Mielke, Steven L., E-mail: slmielke@gmail.com, E-mail: truhlar@umn.edu; Truhlar, Donald G., E-mail: slmielke@gmail.com, E-mail: truhlar@umn.edu [Department of Chemistry, Chemical Theory Center, and Supercomputing Institute, University of Minnesota, 207 Pleasant St. S.E., Minneapolis, Minnesota 55455-0431 (United States)

    2015-01-28

    We present an improved version of our “path-by-path” enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P^−6) to O(P^−12), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational–rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan–Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300–3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.
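    Whatever method produces a converged partition function, the downstream thermodynamic bookkeeping is standard. The sketch below computes the canonical partition function over explicit energy levels and derives internal energy, free energy, entropy, and heat capacity from it; the evenly spaced harmonic-oscillator levels are a toy stand-in for the path-integral machinery of the record.

      import numpy as np

      k_B = 3.166811563e-6                        # Boltzmann constant, hartree/K

      def thermo_from_levels(E, T):
          beta = 1.0 / (k_B * T)
          w = np.exp(-beta * (E - E.min()))       # shift by E_min for numerical stability
          Q = w.sum()                             # shifted partition function
          p = w / Q                               # Boltzmann populations
          U = np.sum(p * E)                       # internal energy
          A = E.min() - k_B * T * np.log(Q)       # Helmholtz free energy (shift restored)
          S = (U - A) / T                         # entropy
          Cv = (np.sum(p * E**2) - U**2) / (k_B * T**2)  # heat capacity from energy variance
          return Q, U, A, S, Cv

      E = 0.01 * (np.arange(50) + 0.5)            # toy oscillator, hbar*omega = 0.01 hartree
      for T in (300.0, 1000.0, 3000.0):
          Q, U, A, S, Cv = thermo_from_levels(E, T)
          print(f"T={T:6.0f} K  S={S / k_B:7.3f} k_B  Cv={Cv / k_B:6.3f} k_B")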

  10. A new high-throughput method utilizing porous silica-based nano-composites for the determination of partition coefficients of drug candidates.

    Science.gov (United States)

    Yu, Chih H; Tam, Kin; Tsang, Shik C

    2011-09-01

    We show that highly porous silica-based nanoparticles prepared via micro-emulsion and sol-gel techniques are stable colloids in aqueous solution. By incorporating a magnetic core into the porous silica nano-composite, it is found that the material can be rapidly separated (precipitated) upon exposure to an external magnetic field. Alternatively, the porous silica nanoparticles without magnetic cores can be equally separated from solution by applying high-speed centrifugation. Using these silica-based nanostructures, a new high-throughput method for the determination of partition coefficients in the water/n-octanol system is hereby described. First, a tiny quantity of the n-octanol phase is pre-absorbed in the porous silica nano-composite colloids, which allows the establishment of an interface at the nano-scale between the adsorbed n-octanol and the bulk aqueous phase. Organic compounds added to the mixture can therefore undergo a rapid partition between the two phases. The concentration of drug compound in the supernatant in a small vial can be determined by UV-visible absorption spectroscopy. With the adaptation of a robotic liquid handler, a high-throughput technology based on these nano-separation techniques can be employed for the determination of partition coefficients of drug candidates in industrial drug screening. The experimental results clearly suggest that this new method can provide partition coefficient values of potential drug candidates comparable to the conventional shake-flask method, but requires much shorter analytical time and a lesser quantity of chemicals. PMID:21780284
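    Whatever the measurement route, the reported quantity reduces to a ratio of equilibrium concentrations, log P = log10(C_octanol / C_water). The sketch below backs log P out of aqueous-phase absorbance readings taken before and after equilibration; the volumes, the readings, the Beer-Lambert proportionality, and the assumption that all solute lost from water entered the n-octanol are illustrative, not the record's protocol.

      import numpy as np

      def log_p(abs_before, abs_after, v_water_mL, v_octanol_uL):
          v_oct = v_octanol_uL / 1000.0                   # n-octanol volume in mL
          c_water = abs_after                             # proportional to concentration
          c_octanol = (abs_before - abs_after) * v_water_mL / v_oct
          return np.log10(c_octanol / c_water)

      # Example: 90% of the compound moves from 1 mL of water into a 10 uL octanol film.
      print(f"log P = {log_p(1.00, 0.10, 1.0, 10.0):.2f}")   # about 2.95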

  11. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  12. Bootstrap clustering for graph partitioning

    OpenAIRE

    Gambette, Philippe; Guénoche, Alain

    2011-01-01

    Given a simple undirected weighted or unweighted graph, we try to cluster the vertex set into communities and also to quantify the robustness of these clusters. For that task, we propose a new method, called bootstrap clustering which consists in (i) defining a new clustering algorithm for graphs, (ii) building a set of graphs similar to the initial one, (iii) applying the clustering method to each of them, making a profile (set) of partitions, (iv) computing a consensus partition for this pr...

  13. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  14. The effect of different evapotranspiration methods on portraying soil water dynamics and ET partitioning in a semi-arid environment in Northwest China

    Science.gov (United States)

    Yu, Lianyu; Zeng, Yijian; Su, Zhongbo; Cai, Huanjie; Zheng, Zhen

    2016-03-01

    Different methods for assessing evapotranspiration (ET) can significantly affect the performance of land surface models in portraying soil water dynamics and ET partitioning. An accurate understanding of the impact a method has is crucial to determining the effectiveness of an irrigation scheme. Two ET methods are discussed: one is based on reference crop evapotranspiration (ET0) theory, uses leaf area index (LAI) for partitioning into soil evaporation and transpiration, and is denoted as the ETind method; the other is a one-step calculation of actual soil evaporation and potential transpiration, obtained by incorporating canopy minimum resistance and actual soil resistance into the Penman-Monteith model, and is denoted as the ETdir method. In this study, a soil water model, considering the coupled transfer of water, vapor, and heat in the soil, was used to investigate how different ET methods affect the calculation of soil water dynamics and ET partitioning in a crop field. Results indicate that the model's simulations of soil water content and crop evapotranspiration components varied between the two ET methods, while the simulated soil temperature agreed well with lysimeter observations for both. Considering aerodynamic and surface resistance terms improved the ETdir method in simulating soil evaporation, especially after irrigation. Furthermore, the results of different crop growth scenarios indicate that the uncertainty in LAI played an important role in estimating the relative transpiration and evaporation fractions. The impact of maximum rooting depth and root growth rate on calculating ET components might increase in drying soil. The influence of maximum rooting depth was larger late in the growing season, while the influence of root growth rate dominated early in the growing season.
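    The ET0-based (indirect) route starts from a reference evapotranspiration; a widely used choice is the FAO-56 Penman-Monteith form, sketched below with illustrative meteorological inputs. The study's own formulation and parameter choices may differ.

      import numpy as np

      def et0_fao56(T, Rn, G, u2, ea, P=101.3):
          # T: air temperature (degC); Rn, G: net radiation and soil heat flux
          # (MJ m-2 day-1); u2: wind speed at 2 m (m/s); ea: actual vapour
          # pressure (kPa); P: atmospheric pressure (kPa). Returns mm/day.
          es = 0.6108 * np.exp(17.27 * T / (T + 237.3))   # saturation vapour pressure
          delta = 4098.0 * es / (T + 237.3) ** 2          # slope of the es(T) curve
          gamma = 0.665e-3 * P                            # psychrometric constant
          num = (0.408 * delta * (Rn - G)
                 + gamma * 900.0 / (T + 273.0) * u2 * (es - ea))
          return num / (delta + gamma * (1.0 + 0.34 * u2))

      # Illustrative semi-arid summer day (values assumed, not from the study).
      print(f"ET0 = {et0_fao56(T=25.0, Rn=14.0, G=0.7, u2=2.0, ea=1.0):.2f} mm/day")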

  15. Kullback-Leibler Divergence Approach to Partitioned Update Kalman Filter

    OpenAIRE

    Raitoharju, Matti; García-Fernández, Ángel F.; Piché, Robert

    2016-01-01

    Kalman filtering is a widely used framework for Bayesian estimation. The partitioned update Kalman filter applies a Kalman filter update in parts, so that the most linear parts of measurements are applied first. In this paper, we generalize the partitioned update Kalman filter, which previously required the use of the second order extended Kalman filter, so that it can be used with any Kalman filter extension. To do so, we use a Kullback-Leibler divergence approach to measure the nonlinearity of the measure...
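    The basic ingredient of such a nonlinearity measure is the Kullback-Leibler divergence between two Gaussians, which has a closed form. The sketch below gives that closed form only; how the paper applies it to order the parts of a measurement is not reproduced.

      import numpy as np

      def kl_gaussian(mu0, S0, mu1, S1):
          # KL(N(mu0, S0) || N(mu1, S1)) in nats, for k-dimensional Gaussians.
          k = mu0.size
          S1_inv = np.linalg.inv(S1)
          d = mu1 - mu0
          return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                        + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

      mu0, S0 = np.zeros(2), np.eye(2)
      mu1, S1 = np.array([1.0, 0.0]), np.diag([2.0, 0.5])
      print(f"KL = {kl_gaussian(mu0, S0, mu1, S1):.3f} nats")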

  16. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  17. The state of the art of partitioning technology for long-lived actinides and fission products by solvent extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Ozawa, M.; Koma, Y.; Nomura, K.; Sano, Y. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-04-01

    Japan launched an ambitious long-term program on partitioning and transmutation (P-T), called OMEGA, in 1988. Under the program, PNC has been carrying out its R and D activities. A check and review process based on the progress made was conducted in fall 1998 by STA (Science and Technology Agency). This report was prepared to submit to STA the state of R and D activities on partitioning by solvent extraction in PNC over seven years (1990-1997). The paper describes the progress, results, and future plans on (a) an improved PUREX process for the extraction of Np with Pu by valence control, (b) an improved TRUEX process for the extraction of minor actinides, and (c) other potential solvents for the extraction of other long-lived FPs from spent fuels. (H. Itami)

  18. A SAS Interface for Bayesian Analysis with WinBUGS

    Science.gov (United States)

    Zhang, Zhiyong; McArdle, John J.; Wang, Lijuan; Hamagami, Fumiaki

    2008-01-01

    Bayesian methods are becoming very popular despite some practical difficulties in implementation. To assist in the practical application of Bayesian methods, we show how to implement Bayesian analysis with WinBUGS as part of a standard set of SAS routines. This implementation procedure is first illustrated by fitting a multiple regression model…

  19. Belief propagation for graph partitioning

    International Nuclear Information System (INIS)

    We study the belief-propagation algorithm for the graph bi-partitioning problem, i.e. the ground state of the ferromagnetic Ising model at a fixed magnetization. Application of a message passing scheme to a model with a fixed global parameter is not trivial, and we show that the magnetization can in fact be fixed in a local way within the belief-propagation equations. Our method provides the full phase diagram of the bi-partitioning problem on random graphs, as well as an efficient heuristic solver that we anticipate to be useful in a wide range of applications of the partitioning problem.

  20. Present status of partitioning developments

    International Nuclear Information System (INIS)

    Evolution and development of the concept of partitioning of high-level liquid wastes (HLLW) in nuclear fuel reprocessing are reviewed historically from the early phase of separating useful radioisotopes from HLLW to the recent phase of eliminating hazardous nuclides such as transuranium elements for safe waste disposal. Since the criteria in determining the nuclides for elimination and the respective decontamination factors are important in the strategy of partitioning, current views on the criteria are summarized. As elimination of the transuranium is most significant in the partitioning, various methods available of separating them from fission products are evaluated. (auth.)

  1. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: a new chapter on Bayesian network classifiers and a new section on object-oriente...

  2. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes and... largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  3. The Impact of Variable Degrees of Freedom and Scale Parameters in Bayesian Methods for Genomic Prediction in Chinese Simmental Beef Cattle.

    Science.gov (United States)

    Zhu, Bo; Zhu, Miao; Jiang, Jicai; Niu, Hong; Wang, Yanhui; Wu, Yang; Xu, Lingyang; Chen, Yan; Zhang, Lupei; Gao, Xue; Gao, Huijiang; Liu, Jianfeng; Li, Junya

    2016-01-01

    Three conventional Bayesian approaches (BayesA, BayesB and BayesCπ) have been demonstrated to be powerful in predicting genomic merit for complex traits in livestock. A priori, these Bayesian models assume that the non-zero SNP effects (marginally) follow a t-distribution depending on two fixed hyperparameters, degrees of freedom and scale parameters. In this study, we performed genomic prediction in Chinese Simmental beef cattle and treated degrees of freedom and scale parameters as unknown with inappropriate priors. Furthermore, we compared the modified methods (BayesFA, BayesFB and BayesFCπ) with their corresponding counterparts using simulation datasets. We found that the modified methods, which assign distributions to the two hyperparameters, were beneficial for improving the predictive accuracy. Our results showed that the predictive accuracies of the modified methods were slightly higher than those of their counterparts, especially for traits with low heritability and a small number of QTLs. Moreover, cross-validation analysis for three traits, namely carcass weight, live weight and tenderloin weight, in 1136 Simmental beef cattle suggested that the predictive accuracy of BayesFCπ noticeably outperformed BayesCπ, with the highest increase (3.8%) for live weight using the cohort masking cross-validation. PMID:27139889

  4. The Impact of Variable Degrees of Freedom and Scale Parameters in Bayesian Methods for Genomic Prediction in Chinese Simmental Beef Cattle.

    Directory of Open Access Journals (Sweden)

    Bo Zhu

    Full Text Available Three conventional Bayesian approaches (BayesA, BayesB and BayesCπ) have been demonstrated to be powerful in predicting genomic merit for complex traits in livestock. A priori, these Bayesian models assume that the non-zero SNP effects (marginally) follow a t-distribution depending on two fixed hyperparameters, degrees of freedom and scale parameters. In this study, we performed genomic prediction in Chinese Simmental beef cattle and treated degrees of freedom and scale parameters as unknown with inappropriate priors. Furthermore, we compared the modified methods (BayesFA, BayesFB and BayesFCπ) with their corresponding counterparts using simulation datasets. We found that the modified methods, which assign distributions to the two hyperparameters, were beneficial for improving the predictive accuracy. Our results showed that the predictive accuracies of the modified methods were slightly higher than those of their counterparts, especially for traits with low heritability and a small number of QTLs. Moreover, cross-validation analysis for three traits, namely carcass weight, live weight and tenderloin weight, in 1136 Simmental beef cattle suggested that the predictive accuracy of BayesFCπ noticeably outperformed BayesCπ, with the highest increase (3.8%) for live weight using the cohort masking cross-validation.

  5. A Bayesian-based two-stage inexact optimization method for supporting stream water quality management in the Three Gorges Reservoir region.

    Science.gov (United States)

    Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W

    2016-05-01

    In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management, through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in the water quality model, as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emission is obtained, which would help managers adjust production patterns of regional industry and local policies considering the interactions of water quality requirements, economic benefit, and industry structure. PMID:26832875

  6. New video smoke detection method using Bayesian decision

    Institute of Scientific and Technical Information of China (English)

    谢振平; 王涛; 刘渊

    2014-01-01

    The Bayesian decision method is studied to further improve the performance of video smoke detection using an Adaptive Neuro-Fuzzy Inference System (ANFIS). Smoke features are extracted from video sequences. Subtractive clustering and hybrid learning rules are used to train the ANFIS. Detection outputs are determined by applying the proposed Bayesian decision rules to the outputs of the ANFIS. Experimental results show that the detection performance of the ANFIS is better than that of other smoke detection algorithms, and the introduction of minimum-risk Bayesian decision rules further increases the detection rate and decreases the false alarm rate, which is more valuable for practical applications.
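    The minimum-risk rule itself is a one-liner once posteriors and a loss matrix are fixed: choose the action that minimizes posterior expected loss. In the sketch below the loss matrix is invented (a missed detection costs ten times a false alarm), which is exactly why the alarm threshold drops well below 0.5.

      import numpy as np

      # Rows: true class (0 = no smoke, 1 = smoke); columns: decision.
      loss = np.array([[0.0, 1.0],     # false alarm costs 1
                       [10.0, 0.0]])   # missed detection costs 10 (assumed)

      def bayes_decision(p_smoke, loss):
          posterior = np.array([1.0 - p_smoke, p_smoke])
          risk = posterior @ loss      # expected loss of each decision
          return int(np.argmin(risk)), risk

      for p in (0.05, 0.10, 0.50):     # posterior P(smoke) from the classifier
          d, r = bayes_decision(p, loss)
          print(f"P(smoke)={p:.2f} -> {'alarm' if d else 'no alarm'}, risks={np.round(r, 2)}")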

  7. Bayesian Statistics for Biological Data: Pedigree Analysis

    Science.gov (United States)

    Stanfield, William D.; Carlton, Matthew A.

    2004-01-01

    Bayes' formula is applied to the biological problem of pedigree analysis to show that Bayes' formula and non-Bayesian or "classical" methods of probability calculation give different answers. First-year college students of biology can be introduced to Bayesian statistics.
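    A classic classroom pedigree calculation shows why the Bayesian and naive answers differ: each unaffected son is evidence against a woman carrying an X-linked recessive allele, since a carrier's son is unaffected with probability 1/2. A sketch, with an assumed prior of 1/2:

      from fractions import Fraction

      def carrier_posterior(prior, n_unaffected_sons):
          like_carrier = Fraction(1, 2) ** n_unaffected_sons  # each son unaffected w.p. 1/2
          like_noncarrier = Fraction(1)                       # a non-carrier's sons are unaffected
          num = prior * like_carrier
          return num / (num + (1 - prior) * like_noncarrier)

      for n in range(4):
          print(n, "unaffected sons ->", carrier_posterior(Fraction(1, 2), n))
      # prints 1/2, 1/3, 1/5, 1/9: the carrier hypothesis fades with each son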

  8. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  9. Metal-silicate Partitioning at High Pressure and Temperature: Experimental Methods and a Protocol to Suppress Highly Siderophile Element Inclusions.

    Science.gov (United States)

    Bennett, Neil R; Brenan, James M; Fei, Yingwei

    2015-01-01

    Estimates of the primitive upper mantle (PUM) composition reveal a depletion in many of the siderophile (iron-loving) elements, thought to result from their extraction to the core during terrestrial accretion. Experiments to investigate the partitioning of these elements between metal and silicate melts suggest that the PUM composition is best matched if metal-silicate equilibrium occurred at high pressures and temperatures, in a deep magma ocean environment. The behavior of the most highly siderophile elements (HSEs) during this process, however, has remained enigmatic. Silicate run-products from HSE solubility experiments are commonly contaminated by dispersed metal inclusions that hinder the measurement of element concentrations in the melt. The resulting uncertainty over the true solubility and metal-silicate partitioning of these elements has made it difficult to predict their expected depletion in PUM. Recently, several studies have employed changes to the experimental design used for high pressure and temperature solubility experiments in order to suppress the formation of metal inclusions. The addition of Au (Re, Os, Ir, Ru experiments) or elemental Si (Pt experiments) to the sample acts to alter either the geometry or rate of sample reduction, respectively, in order to avoid transient metal oversaturation of the silicate melt. This contribution outlines procedures for using the piston-cylinder and multi-anvil apparatus to conduct solubility and metal-silicate partitioning experiments, respectively. A protocol is also described for the synthesis of uncontaminated run-products from HSE solubility experiments in which the oxygen fugacity is similar to that during terrestrial core-formation. Time-resolved LA-ICP-MS spectra are presented as evidence for the absence of metal inclusions in run-products from earlier studies, and also confirm that the technique may be extended to investigate Ru. Examples are also given of how these data may be applied. PMID:26132380

  10. Modeling the Accuracy of Three Detection Methods of Grapevine leafroll-associated virus 3 During the Dormant Period Using a Bayesian Approach.

    Science.gov (United States)

    Olmos, Antonio; Bertolini, Edson; Ruiz-García, Ana B; Martínez, Carmen; Peiró, Rosa; Vidal, Eduardo

    2016-05-01

    Grapevine leafroll-associated virus 3 (GLRaV-3) has a worldwide distribution and is the most economically important virus that causes grapevine leafroll disease. Reliable, sensitive, and specific methods are required for the detection of the pathogen in order to assure the production of healthy plant material and control of the disease. Although different serological and nucleic acid-based methods have been developed for the detection of GLRaV-3, diagnostic parameters have not been established, and there is no gold standard method. Therefore, the main aim of this work was to determine the sensitivity, specificity, and likelihood ratios of three commonly used methods, including one serological test (double-antibody sandwich enzyme-linked immunosorbent assay [DAS-ELISA]) and two nucleic acid-based techniques (spot and conventional real-time reverse transcription-polymerase chain reaction [RT-PCR]). Latent class models using a Bayesian approach have been applied to determine diagnostic test parameters and to facilitate decision-making regarding diagnostic test selection. Statistical analysis has been based on the results of a total of 281 samples, which were collected during the dormant period from three different populations. The best-fit model out of the 49 implemented models revealed that DAS-ELISA was the most specific method (value = 0.99) and provided the highest degree of confidence in positive results. Conversely, conventional real-time RT-PCR was the most sensitive method (value = 0.98) and produced the highest degree of confidence in negative results. Furthermore, the estimation of likelihood ratios showed that in populations with low GLRaV-3 prevalence the most appropriate method could be DAS-ELISA, while conventional real-time RT-PCR could be the most appropriate method in medium or high prevalence populations. Combining both techniques significantly increases detection accuracy. The flexibility and power of Bayesian latent class models open new
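    The likelihood-ratio arithmetic behind that recommendation is simple: a positive result multiplies the pre-test odds by LR+ = Se/(1 - Sp), a negative result by LR- = (1 - Se)/Sp. The sketch below reuses the two headline point estimates quoted above; the prevalences and the remaining test parameters are assumed for illustration only.

      def post_test_probability(prevalence, sensitivity, specificity, positive=True):
          lr = (sensitivity / (1.0 - specificity) if positive
                else (1.0 - sensitivity) / specificity)
          odds = prevalence / (1.0 - prevalence) * lr      # pre-test odds times LR
          return odds / (1.0 + odds)

      # DAS-ELISA (Sp = 0.99 from the study; Se = 0.90 assumed), low prevalence:
      print(f"{post_test_probability(0.05, 0.90, 0.99, positive=True):.2f}")
      # Real-time RT-PCR (Se = 0.98 from the study; Sp = 0.90 assumed), ruling
      # out infection in a high-prevalence population:
      print(f"{post_test_probability(0.50, 0.98, 0.90, positive=False):.3f}")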

  11. Wavelet Space Partitioning for Symbolic Time Series Analysis

    Institute of Scientific and Technical Information of China (English)

    Venkatesh Rajagopalan; Asok Ray

    2006-01-01

    A crucial step in symbolic time series analysis (STSA) of observed data is symbol sequence generation, which relies on partitioning the phase space of the underlying dynamical system. We present a novel partitioning method, called wavelet-space (WS) partitioning, as an alternative to symbolic false nearest neighbour (SFNN) partitioning. While the WS and SFNN partitioning methods have been demonstrated to yield comparable performance for anomaly detection on laboratory apparatuses, computation of WS partitioning is several orders of magnitude faster than that of SFNN partitioning.

  12. Acoustic wave propagation simulation in a poroelastic medium saturated by two immiscible fluids using a staggered finite-difference with a time partition method

    Institute of Scientific and Technical Information of China (English)

    ZHAO HaiBo; WANG XiuMing

    2008-01-01

    Based on the three-phase theory proposed by Santos, acoustic wave propagation in a poroelastic medium saturated by two immiscible fluids was simulated using a staggered high-order finite-difference algorithm with a time partition method, which is firstly applied to such a three-phase medium. The partition method was used to solve the stiffness problem of the differential equations in the three-phase theory. Considering the effects of capillary pressure, reference pressure and coupling drag of two fluids in pores, three compressional waves and one shear wave predicted by Santos have been correctly simulated. Influences of the parameters, porosity, permeability and gas saturation on the velocities and amplitude of three compressional waves were discussed in detail. Also, a perfectly matched layer (PML) absorbing boundary condition was firstly implemented in the three-phase equations with a staggered-grid high-order finite-difference. Comparisons between the proposed PML method and a commonly used damping method were made to validate the efficiency of the proposed boundary absorption scheme. It was shown that the PML works more efficiently than the damping method in this complex medium. Additionally, the three-phase theory is reduced to the Biot's theory when there is only one fluid left in the pores, which is shown in Appendix. This reduction makes clear that three-phase equation systems are identical to the typical Biot's equations if the fluid saturation for either of the two fluids in the pores approaches to zero.

  13. Acoustic wave propagation simulation in a poroelastic medium saturated by two immiscible fluids using a staggered finite-difference with a time partition method

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Based on the three-phase theory proposed by Santos, acoustic wave propagation in a poroelastic medium saturated by two immiscible fluids was simulated using a staggered high-order finite-difference algorithm with a time partition method, which is firstly applied to such a three-phase medium. The partition method was used to solve the stiffness problem of the differential equations in the three-phase theory. Considering the effects of capillary pressure, reference pressure and coupling drag of two fluids in pores, three compressional waves and one shear wave predicted by Santos have been correctly simulated. Influences of the parameters, porosity, permeability and gas saturation on the velocities and amplitude of three compressional waves were discussed in detail. Also, a perfectly matched layer (PML) absorbing boundary condition was firstly implemented in the three-phase equations with a staggered-grid high-order finite-difference. Comparisons between the proposed PML method and a commonly used damping method were made to validate the efficiency of the proposed boundary absorption scheme. It was shown that the PML works more efficiently than the damping method in this complex medium. Additionally, the three-phase theory is reduced to the Biot’s theory when there is only one fluid left in the pores, which is shown in Appendix. This reduction makes clear that three-phase equation systems are identical to the typical Biot’s equations if the fluid saturation for either of the two fluids in the pores approaches to zero.

  14. Partitions and their lattices

    OpenAIRE

    Kunz, Milan

    2006-01-01

    Ferrers graphs and tables of partitions are treated as vectors. Matrix operations are used for simple proofs of identities concerning partitions. Interpreting partitions as vectors gives a possibility to generalize partitions to negative numbers. Partitions are then tabulated into lattices and some properties of these lattices are studied. There appears a new identity counting Ferrers graphs packed consecutively into isosceles form. The lattices form the base for tabulating combinatorial ident...
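    For readers who want to experiment with these objects, a short generator of all partitions of n, as non-increasing tuples (the row lengths of a Ferrers graph), is sketched below.

      def partitions(n, max_part=None):
          # Yield partitions of n with parts bounded above by max_part.
          if max_part is None or max_part > n:
              max_part = n
          if n == 0:
              yield ()
              return
          for first in range(max_part, 0, -1):
              for rest in partitions(n - first, first):
                  yield (first,) + rest

      print(list(partitions(5)))              # 7 partitions, matching p(5) = 7
      print(sum(1 for _ in partitions(10)))   # p(10) = 42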

  15. A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research

    NARCIS (Netherlands)

    Van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, t

  16. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  17. Bayesian data analysis

    CERN Document Server

    Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B

    2013-01-01

    FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear...

  18. Bayesian Mediation Analysis

    OpenAIRE

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...

  19. Revisiting k-means: New Algorithms via Bayesian Nonparametrics

    OpenAIRE

    Kulis, Brian; Jordan, Michael I.

    2011-01-01

    Bayesian models offer great flexibility for clustering applications---Bayesian nonparametrics can be used for modeling infinite mixtures, and hierarchical Bayesian models can be utilized for sharing clusters across multiple data sets. For the most part, such flexibility is lacking in classical clustering methods such as k-means. In this paper, we revisit the k-means clustering algorithm from a Bayesian nonparametric viewpoint. Inspired by the asymptotic connection between k-means and mixtures...
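    The asymptotic connection referred to above yields the DP-means algorithm: a k-means-like loop with a penalty lambda that opens a new cluster whenever a point lies farther than sqrt(lambda) from every existing centroid. A minimal sketch (the penalty value and the toy data are assumptions):

      import numpy as np

      def dp_means(X, lam, n_iter=50):
          centroids = [X.mean(axis=0)]                    # start with one global cluster
          labels = np.zeros(len(X), dtype=int)
          for _ in range(n_iter):
              for i, x in enumerate(X):                   # assignment step
                  d2 = np.array([np.sum((x - c) ** 2) for c in centroids])
                  if d2.min() > lam:                      # too far: open a new cluster
                      centroids.append(x.copy())
                      labels[i] = len(centroids) - 1
                  else:
                      labels[i] = int(np.argmin(d2))
              keep = [j for j in range(len(centroids)) if np.any(labels == j)]
              centroids = [X[labels == j].mean(axis=0) for j in keep]  # update step
              labels = np.array([keep.index(l) for l in labels])
          return np.array(centroids), labels

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
      centroids, labels = dp_means(X, lam=4.0)
      print("clusters found:", len(centroids))            # expect 2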

  20. Computationally efficient Bayesian tracking

    Science.gov (United States)

    Aughenbaugh, Jason; La Cour, Brian

    2012-06-01

    In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
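    A stripped-down, one-dimensional version of such a tracker shows the two-step cycle: a time update by convolution with a target-motion kernel, then a measurement update by pointwise multiplication with the sensor likelihood. The polynomial cell representation and sonar models of the actual system are not reproduced, and all numbers below are illustrative.

      import numpy as np

      def grid_bayes_step(belief, motion_kernel, likelihood):
          predicted = np.convolve(belief, motion_kernel, mode="same")  # time update
          posterior = predicted * likelihood                           # measurement update
          return posterior / posterior.sum()

      cells = np.linspace(0.0, 10.0, 101)             # 1-D state grid (range, km)
      belief = np.full(cells.size, 1.0 / cells.size)  # uninformative prior
      kernel = np.array([0.25, 0.5, 0.25])            # small random target motion
      for z in (4.0, 4.3, 4.7):                       # toy range measurements
          likelihood = np.exp(-0.5 * ((cells - z) / 0.5) ** 2)
          belief = grid_bayes_step(belief, kernel, likelihood)
      print(f"MAP range estimate: {cells[np.argmax(belief)]:.1f} km")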

  1. Bayesian Games with Intentions

    OpenAIRE

    Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael

    2016-01-01

    We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

  2. Partitioning Uncertain Workflows

    CERN Document Server

    Huberman, Bernardo A

    2015-01-01

    It is common practice to partition complex workflows into separate channels in order to speed up their completion times. When this is done within a distributed environment, unavoidable fluctuations make individual realizations depart from the expected average gains. We present a method for breaking any complex workflow into several workloads in such a way that once their outputs are joined, their full completion takes less time and exhibit smaller variance than when running in only one channel. We demonstrate the effectiveness of this method in two different scenarios; the optimization of a convex function and the transmission of a large computer file over the Internet.
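
    A small Monte Carlo experiment illustrates the claim that joining several partitioned workloads can reduce both the expected completion time and its variance relative to a single channel. The gamma fluctuation model and all parameter values below are assumptions chosen for illustration, not the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    work, n_channels, n_runs = 100.0, 4, 10_000

    # single channel: completion time fluctuates around the total work
    single = rng.gamma(shape=work, scale=1.0, size=n_runs)

    # partitioned: each channel gets work/n; the join completes at the max
    parts = rng.gamma(shape=work / n_channels, scale=1.0,
                      size=(n_runs, n_channels))
    joined = parts.max(axis=1)

    print("single channel  mean %.1f  var %.1f" % (single.mean(), single.var()))
    print("partitioned     mean %.1f  var %.1f" % (joined.mean(), joined.var()))
    ```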

  3. Relict snakes of North America and their relationships within Caenophidia, using likelihood-based Bayesian methods on mitochondrial sequences.

    Science.gov (United States)

    Pinou, Theodora; Vicario, Saverio; Marschner, Monique; Caccone, Adalgisa

    2004-08-01

    This paper focuses on the phylogenetic relationships of eight North American caenophidian snake species (Carphophis amoena, Contia tenuis, Diadophis punctatus, Farancia abacura, Farancia erytrogramma, Heterodon nasicus, Heterodon platyrhinos, and Heterodon simus) whose phylogenetic relationships remain controversial. Past studies have referred to these "relict" North American snakes either as colubrid, or as Neotropical dipsadids and/or xenodontids. Based on mitochondrial DNA ribosomal gene sequences and a likelihood-based Bayesian analysis, our study suggests that these North American snakes are not monophyletic and are nested within a group (Dipsadoidea) that contains the Dipsadidae, Xenodontidae, and Natricidae. In addition, we use the relationships proposed here to highlight putative examples of parallel evolution of hemipenial morphology among snake clades. PMID:15223038

  4. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre C. R. Martins

    2006-11-01

    In this article, I show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.

  5. Strong ion-exchange centrifugal partition chromatography as an efficient method for the large-scale purification of glucosinolates.

    Science.gov (United States)

    Toribio, Alix; Nuzillard, Jean-Marc; Renault, Jean-Hugues

    2007-11-01

    The glucosinolates sinalbin and glucoraphanin were purified by strong ion-exchange displacement centrifugal partition chromatography (SIXCPC). The optimized conditions involved the biphasic solvent system ethyl acetate/n-butanol/water (3:2:5, v/v), the lipophilic anion-exchanger Aliquat 336 (trioctylmethylammonium chloride, 160 and 408 mM) and a sodium iodide solution (80 and 272 mM) as displacer. Amounts as high as 2.4 g of sinalbin and 2.6 g of glucoraphanin were obtained in one step in 2.5 and 3.5 h respectively, starting from 12 and 25 g of mustard and broccoli seed aqueous extracts, using a laboratory scale CPC column (200 mL inner volume). PMID:17904564

  6. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
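
    The core of the BUS reinterpretation is that Bayesian updating becomes the estimation of a rare "acceptance" event: a prior sample θ is accepted when an auxiliary uniform variable u satisfies ln u ≤ ln L(θ) − ln c, with c an upper bound on the likelihood. The toy rejection sampler below illustrates that event definition on a conjugate Gaussian example; in practice the event probability would be estimated with FORM, IS, or Subset Simulation rather than by brute force, and all data and constants here are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def log_likelihood(theta, data, sigma=1.0):
        # Gaussian measurement model around the unknown mean theta
        return -0.5 * np.sum((data - theta) ** 2) / sigma**2

    data = np.array([1.8, 2.3, 2.1])
    log_c = log_likelihood(data.mean(), data)   # likelihood upper bound

    posterior = []
    for _ in range(100_000):
        theta = rng.normal(0.0, 3.0)            # sample from the prior
        u = rng.uniform()
        # BUS acceptance event: ln u <= ln L(theta) - ln c
        if np.log(u) <= log_likelihood(theta, data) - log_c:
            posterior.append(theta)

    print("posterior mean ~ %.2f" % np.mean(posterior))
    ```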

  7. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference, from use in proteomics to clinical trials. As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  8. Carbon partitioning as validation methods for crop yields and CO2 sequestration monitoring in Asia using a photosynthetic-sterility model

    Science.gov (United States)

    Kaneko, Daijiro; Yang, Peng; Kumakura, Toshiro

    2010-10-01

    Sustainability of world crop production and food security has become uncertain. The authors have developed an environmental research system called Remote Sensing Environmental Monitor (RSEM) for treating carbon sequestration by vegetation, grain production, desertification of Eurasian grassland, and CDM afforestation/reforestation against a background of climate change and economic growth in rising Asian nations. The RSEM system involves vegetation photosynthesis and crop yield models for grains, including land-use classification, stomatal evaluation by surface energy fluxes, and daily monitoring for early warning. This paper presents a validation method for RSEM based on carbon partitioning in plants, focusing in particular on the effects of the area sizes used in crop production statistics on carbon fixation and on sterility-based corrections to accumulated carbon sequestration values simulated using the RSEM photosynthesis model. The carbohydrate in grains has the same chemical formula as cellulose in grain plants. The proposed method, based on partitioning the fixed carbon into harvested grains, was used to investigate estimates of the amounts of carbon fixed using the satellite-based RSEM model.

  9. A Review of Bayesian Methods and Their Application in Uncertainty Analysis of Water Environmental Systems

    Institute of Scientific and Technical Information of China (English)

    Huang Kai; Zhang Xiaoling

    2012-01-01

    Bayesian methods provide new ideas for solving uncertainty problems in water environmental systems. Several Bayesian methods, namely the Bayesian formula, Bayesian statistical inference and Bayesian networks, are reviewed with respect to their application to water quality evaluation, parameter identification of water environment models, water environment management and risk decision making. The Bayesian formula can handle the uncertain information contained in monitoring data, water quality grades and water quality standards in water quality evaluation. Bayesian statistical inference coupled with a water environmental model provides a new approach for model parameter identification; the computation of the Bayesian posterior distribution is the main difficulty of its application. The discrete Bayesian algorithm and the traditional and improved MCMC algorithms for the posterior distribution are introduced. The application of Bayesian networks to water quality assessment, model prediction, water environment management and risk decision making can take the combined effect of multiple variables into account simultaneously and yields the uncertainty information of the factors influencing management decisions, providing a scientific basis for water environment management decision making.

  10. An Entropy Search Portfolio for Bayesian Optimization

    OpenAIRE

    Shahriari, Bobak; Wang, Ziyu; Hoffman, Matthew W.; Bouchard-Côté, Alexandre; De Freitas, Nando

    2014-01-01

    Bayesian optimization is a sample-efficient method for black-box global optimization. However, the performance of a Bayesian optimization method very much depends on its exploration strategy, i.e. the choice of acquisition function, and it is not clear a priori which choice will result in superior performance. While portfolio methods provide an effective, principled way of combining a collection of acquisition functions, they are often based on measures of past performance which can be misl...

  11. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  12. U.S. Department Of Energy's nuclear engineering education research: highlights of recent and current research-III. 6. Bayesian Methods for Radiation Dosimetry

    International Nuclear Information System (INIS)

    Bayesian methods have the advantage that uncertainty about quantities of interest is described with the help of probability densities. This fact makes it possible to use probability calculus to concatenate probability statements and derive probabilities for related quantities of interest 'coherently'. Here, we discuss the application of Bayesian methods to calibration for external dosimetry and to compartmental analysis for internal radiation dosimetry. We derived the calibration density for chromosome aberration data with imprecise radiation doses to describe the uncertainty of the neutron dose received by an individual with observed dicentric chromosome aberrations. For compartmental analysis, we derived the posterior densities, expectations, and standard deviations of the models' transfer rates and time-dependent expectations and standard deviations for the 45Ca activity in a bone surface compartment. We graphed and compared the calibration density for 143 observed dicentric aberrations in 128 cells for precise and imprecise doses and found greater variance of the unknown dose for the case of imprecise calibration doses, as expected. In internal dosimetry, we derived the posterior densities, expectations, and standard deviations for the parameters of the compartmental model and predicted the time-dependent expectations and standard deviations for bone surface activities. Three-dimensional numerical integration yielded the following estimates for the transfer rates: ρ = 0.025 ± 0.021, σ = 0.021 ± 0.017, and τ = 0.006 ± 0.004. The expected bone surface activities (± standard deviation) at times 11, 60, 240, and 510 min after intake are 3.50 ± 1.82, 7.70 ± 1.48, 6.41 ± 0.68, and 3.73 ± 0.72. These values are in good agreement with the observed activity values

  13. Adaptive Dynamic Bayesian Networks

    Energy Technology Data Exchange (ETDEWEB)

    Ng, B M

    2007-10-26

    A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.

  14. Bayesian analysis toolkit - BAT

    International Nuclear Information System (INIS)

    Statistical treatment of data is an essential part of any data analysis and interpretation. Different statistical methods and approaches can be used; however, the implementation of these approaches is complicated and at times inefficient. The Bayesian analysis toolkit (BAT) is a software package developed in a C++ framework that facilitates the statistical analysis of data using Bayes' theorem. The tool evaluates the posterior probability distributions for models and their parameters using Markov Chain Monte Carlo, which in turn provides straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as simulated annealing, allow extraction of the global mode of the posterior. BAT provides a well-tested environment for flexible model definition and also includes a set of predefined models for standard statistical problems. The package is interfaced to other software packages commonly used in high energy physics, such as ROOT, Minuit, RooStats and CUBA. We present a general overview of BAT and its algorithms. A few physics examples are shown to introduce the spectrum of its applications. In addition, new developments and features are summarized.
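
    BAT's central computation is Markov Chain Monte Carlo exploration of a user-defined posterior. As a language-neutral sketch of that idea (BAT itself is a C++ package; this is not its API), here is a minimal random-walk Metropolis sampler for an arbitrary log-posterior. The bimodal target is an illustrative stand-in.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def log_posterior(theta):
        # stand-in target: an unnormalised bimodal density
        return np.logaddexp(-0.5 * (theta - 2) ** 2, -0.5 * (theta + 2) ** 2)

    def metropolis(log_post, n_steps=50_000, step=1.0):
        chain = np.empty(n_steps)
        theta, lp = 0.0, log_post(0.0)
        for i in range(n_steps):
            prop = theta + step * rng.normal()       # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp: # accept/reject
                theta, lp = prop, lp_prop
            chain[i] = theta
        return chain

    chain = metropolis(log_posterior)
    print("posterior mean %.3f, sd %.3f" % (chain.mean(), chain.std()))
    ```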

  15. A Bayesian method for characterizing distributed micro-releases: II. inference under model uncertainty with short time-series data.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef; Fast P. (Lawrence Livermore National Laboratory, Livermore, CA); Kraus, M. (Peterson AFB, CO); Ray, J. P.

    2006-01-01

    Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error, i.e., situations for which the model used in the inverse problem is only a partially accurate representation of the outbreak; here, the model predictions and the observations differ by more than a random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.

  16. New PDE-based methods for image enhancement using SOM and Bayesian inference in various discretization schemes

    International Nuclear Information System (INIS)

    A novel approach is presented in this paper for improving anisotropic diffusion PDE models, based on the Perona–Malik equation. A solution is proposed from an engineering perspective to adaptively estimate the parameters of the regularizing function in this equation. The goal of such a new adaptive diffusion scheme is to better preserve edges when the anisotropic diffusion PDE models are applied to image enhancement tasks. The proposed adaptive parameter estimation in the anisotropic diffusion PDE model involves self-organizing maps and Bayesian inference to define edge probabilities accurately. The proposed modifications attempt to capture not only simple edges but also difficult textural edges and incorporate their probability in the anisotropic diffusion model. In the context of the application of PDE models to image processing such adaptive schemes are closely related to the discrete image representation problem and the investigation of more suitable discretization algorithms using constraints derived from image processing theory. The proposed adaptive anisotropic diffusion model illustrates these concepts when it is numerically approximated by various discretization schemes in a database of magnetic resonance images (MRI), where it is shown to be efficient in image filtering and restoration applications

  17. An optimized Method to Identify RR Lyrae stars in the SDSS X Pan-STARRS1 Overlapping Area Using a Bayesian Generative Technique

    CERN Document Server

    Abbas, M A; Martin, N F; Kaiser, N; Burgett, W S; Huber, M E; Waters, C

    2014-01-01

    We present a method for selecting RR Lyrae (RRL) stars (or other types of variable stars) in the absence of a large number of multi-epoch data and light curve analyses. Our method uses color and variability selection cuts that are defined by applying a Gaussian Mixture Bayesian Generative Method (GMM) on 636 pre-identified RRL stars instead of applying the commonly used rectangular cuts. Specifically, our method selects 8,115 RRL candidates (heliocentric distances < 70 kpc) using GMM color cuts from the Sloan Digital Sky Survey (SDSS) and GMM variability cuts from the Panoramic Survey Telescope and Rapid Response System 1 3π survey (PS1). Comparing our method with the Stripe 82 catalog of RRL stars shows that the efficiency and completeness levels of our method are ~77% and ~52%, respectively. Most contaminants are either non-variable main-sequence stars or stars in eclipsing systems. The method described here efficiently recovers known stellar halo substructures. It is expected that the current completene...

  18. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Hölder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  19. Bayesian Analysis of Multivariate Probit Models

    OpenAIRE

    Siddhartha Chib; Edward Greenberg

    1996-01-01

    This paper provides a unified simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods, and maximum likelihood estimates are obtained by a Markov chain Monte Carlo version of the E-M algorithm. Computation of Bayes factors from the simulation output is also considered. The methods are applied to a bivariate data set, to a 534-subject, four-year longitudinal dat...

  20. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  1. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  2. Stability criteria for T-S fuzzy systems with interval time-varying delays and nonlinear perturbations based on geometric progression delay partitioning method.

    Science.gov (United States)

    Chen, Hao; Zhong, Shouming; Li, Min; Liu, Xingwen; Adu-Gyamfi, Fehrs

    2016-07-01

    In this paper, a novel delay partitioning method is proposed by introducing the theory of geometric progression for the stability analysis of T-S fuzzy systems with interval time-varying delays and nonlinear perturbations. Based on the common ratio α, the delay interval is unequally separated into multiple subintervals. A newly modified Lyapunov-Krasovskii functional (LKF) is established which includes triple-integral terms and augmented factors with respect to the length of every related proportional subintervals. In addition, a recently developed free-matrix-based integral inequality is employed to avoid the overabundance of the enlargement when dealing with the derivative of the LKF. This innovative development can dramatically enhance the efficiency of obtaining the maximum upper bound of the time delay. Finally, much less conservative stability criteria are presented. Numerical examples are conducted to demonstrate the significant improvements of this proposed approach. PMID:27138648
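
    The key construction in this record is the unequal separation of the delay interval into subintervals whose lengths follow a geometric progression with common ratio α. A small sketch of one natural way to compute such boundaries is given below; the exact placement used in the paper may differ, so treat this as illustrative only.

    ```python
    def geometric_partition(h1, h2, alpha, n):
        """Split the delay interval [h1, h2] into n subintervals whose
        lengths form a geometric progression with common ratio alpha."""
        total = sum(alpha ** i for i in range(n))
        first = (h2 - h1) / total        # length of the first subinterval
        bounds = [h1]
        for i in range(n):
            bounds.append(bounds[-1] + first * alpha ** i)
        return bounds

    # e.g. a delay in [0.1, 1.0] split into 4 unequal subintervals, ratio 2
    print(geometric_partition(0.1, 1.0, alpha=2.0, n=4))
    ```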

  3. Analytical applications of partitioning in aqueous two-phase systems: Exploring protein structural changes and protein-partner interactions in vitro and in vivo by solvent interaction analysis method.

    Science.gov (United States)

    Zaslavsky, Boris Y; Uversky, Vladimir N; Chait, Arnon

    2016-05-01

    This review covers the fundamentals of protein partitioning in aqueous two-phase systems (ATPS). Included is a review of advancements in the analytical application of solute partitioning in ATPS over the last two decades, with multiple examples of experimental data providing evidence that phase-forming polymers do not interact with solutes partitioned in ATPS. The partitioning of solutes is governed by the differences in solute interactions with the aqueous media in the two phases. Solvent properties of the aqueous media in these two phases may be characterized and manipulated. The solvent interaction analysis (SIA) method, based on solute partitioning in ATPS, may be used for characterization and analysis of individual proteins and their interactions with different partners. The current state of clinical proteomics regarding the discovery and monitoring of new protein biomarkers is discussed, and it is argued that the protein expression level in a biological fluid may not be the optimal focus of clinical proteomic research. Multiple examples of application of the SIA method for discovery of changes in protein structure and protein-partner interactions in biological fluids are described. The SIA method reveals new opportunities for discovery and monitoring of structure-based protein biomarkers. PMID:26923390

  4. Gentile statistics and restricted partitions

    Indian Academy of Sciences (India)

    C S Srivatsan; M V N Murthy; R K Bhaduri

    2006-03-01

    In a recent paper (Tran et al, Ann. Phys. 311, 204 (2004)), some asymptotic number-theoretical results on the partitioning of an integer were derived exploiting its connection to the quantum density of states of a many-particle system. We generalise these results to obtain an asymptotic formula for the restricted or coloured partitions $p_{k}^{s}(n)$, which is the number of partitions of an integer $n$ into summands that are $s$-th powers of integers, such that each power of a given integer may occur at most $k$ times. While the method is not rigorous, it reproduces the well-known asymptotic results for $s = 1$, apart from yielding more general results for arbitrary values of $s$ and $k$.
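
    For small arguments, the restricted partition numbers can be checked directly from the generating function ∏_m (1 + x^{m^s} + … + x^{k m^s}) by dynamic programming, which makes the definition above concrete. The sketch below is an illustration of the counting problem only and is unrelated to the paper's asymptotic method.

    ```python
    def restricted_partitions(n, s, k):
        """Count p_k^s(n): partitions of n into s-th powers m**s,
        with each part m**s used at most k times."""
        counts = [1] + [0] * n          # coefficients of the partial product
        m = 1
        while m ** s <= n:
            part = m ** s
            new = [0] * (n + 1)
            for total in range(n + 1):
                for copies in range(k + 1):
                    if total - copies * part < 0:
                        break
                    new[total] += counts[total - copies * part]
            counts = new
            m += 1
        return counts[n]

    # s = 1, k = 1 gives partitions into distinct integers: q(10) = 10
    print(restricted_partitions(10, s=1, k=1))
    ```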

  5. European emissions of HCFC-22 based on eleven years of high frequency atmospheric measurements and a Bayesian inversion method

    Science.gov (United States)

    Graziosi, F.; Arduini, J.; Furlani, F.; Giostra, U.; Kuijpers, L. J. M.; Montzka, S. A.; Miller, B. R.; O'Doherty, S. J.; Stohl, A.; Bonasoni, P.; Maione, M.

    2015-07-01

    HCFC-22 (CHClF2), a stratospheric ozone depleting substance and a powerful greenhouse gas, is the third most abundant anthropogenic halocarbon in the atmosphere. Primarily used in refrigeration and air conditioning systems, its global production and consumption have increased during the last 60 years, with the global increases in the last decade mainly attributable to developing countries. In 2007, an adjustment to the Montreal Protocol for Substances that Deplete the Ozone Layer called for an accelerated phase-out of HCFCs, implying a 75% reduction (base year 1989) of HCFC production and consumption by 2010 in developed countries instead of the previous 65% reduction. In Europe HCFC-22 is continuously monitored at the two sites Mace Head (Ireland) and Monte Cimone (Italy). Combining atmospheric observations with a Bayesian inversion technique, we estimated fluxes of HCFC-22 from Europe and from eight macro-areas within it over an 11-year period from January 2002 to December 2012, during which the accelerated restrictions on HCFC production and consumption entered into force. According to our study, the maximum emission over the entire domain was in 2003 (38.2 ± 4.7 Gg yr-1) and the minimum in 2012 (12.1 ± 2.0 Gg yr-1); emissions continuously decreased between these years, except for secondary maxima in 2008 and 2010. Despite this decrease in regional emissions, background values of HCFC-22 measured at the two European stations over 2002-2012 are still increasing as a consequence of global emissions, in part from developing countries, with an average trend of ca. 7.0 ppt yr-1. However, the observations at the two European stations also show that a decrease in the global growth rate has occurred since 2008. In general, our European emission estimates are in good agreement with those reported by previous studies that used different techniques. Since the currently dominant emission source of HCFC-22 is from banks, we assess the banks' size and their

  6. A Study on the Quantitative Assessment Method of Software Requirement Documents Using Software Engineering Measures and Bayesian Belief Networks

    International Nuclear Information System (INIS)

    One of the major challenges in using the digital systems in a NPP is the reliability estimation of safety critical software embedded in the digital safety systems. Precise quantitative assessment of the reliability of safety critical software is nearly impossible, since many of the aspects to be considered are of qualitative nature and not directly measurable, but they have to be estimated for a practical use. Therefore an expert's judgment plays an important role in estimating the reliability of the software embedded in safety-critical systems in practice, because they can deal with all the diverse evidence relevant to the reliability and can perform an inference based on the evidence. But, in general, the experts' way of combining the diverse evidence and performing an inference is usually informal and qualitative, which is hard to discuss and will eventually lead to a debate about the conclusion. We have been carrying out research on a quantitative assessment of the reliability of safety critical software using Bayesian Belief Networks (BBN). BBN has been proven to be a useful modeling formalism because a user can represent a complex set of events and relationships in a fashion that can easily be interpreted by others. In the previous works we have assessed a software requirement specification of a reactor protection system by using our BBN-based assessment model. The BBN model mainly employed an expert's subjective probabilities as inputs. In the process of assessing the software requirement documents we found out that the BBN model was excessively dependent on experts' subjective judgments in a large part. Therefore, to overcome the weakness of our methodology we employed conventional software engineering measures into the BBN model as shown in this paper. The quantitative relationship between the conventional software measures and the reliability of software were not identified well in the past. Then recently there appeared a few researches on a ranking of

  7. A Study on the Quantitative Assessment Method of Software Requirement Documents Using Software Engineering Measures and Bayesian Belief Networks

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kang, Hyun Gook; Park, Ki Hong; Kwon, Kee Choon; Chang, Seung Cheol [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    One of the major challenges in using the digital systems in a NPP is the reliability estimation of safety critical software embedded in the digital safety systems. Precise quantitative assessment of the reliability of safety critical software is nearly impossible, since many of the aspects to be considered are of qualitative nature and not directly measurable, but they have to be estimated for a practical use. Therefore an expert's judgment plays an important role in estimating the reliability of the software embedded in safety-critical systems in practice, because they can deal with all the diverse evidence relevant to the reliability and can perform an inference based on the evidence. But, in general, the experts' way of combining the diverse evidence and performing an inference is usually informal and qualitative, which is hard to discuss and will eventually lead to a debate about the conclusion. We have been carrying out research on a quantitative assessment of the reliability of safety critical software using Bayesian Belief Networks (BBN). BBN has been proven to be a useful modeling formalism because a user can represent a complex set of events and relationships in a fashion that can easily be interpreted by others. In the previous works we have assessed a software requirement specification of a reactor protection system by using our BBN-based assessment model. The BBN model mainly employed an expert's subjective probabilities as inputs. In the process of assessing the software requirement documents we found out that the BBN model was excessively dependent on experts' subjective judgments in a large part. Therefore, to overcome the weakness of our methodology we employed conventional software engineering measures into the BBN model as shown in this paper. The quantitative relationship between the conventional software measures and the reliability of software were not identified well in the past. Then recently there appeared a few

  8. An imprecise Dirichlet model for Bayesian analysis of failure data including right-censored observations

    International Nuclear Information System (INIS)

    This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, that can also be considered as a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach basically nonparametric. New results for the model are presented, related to right-censored observations, where estimation based on this model is closely related to the product-limit estimator, which is an important statistical method to deal with reliability or survival data including right-censored observations. As for the product-limit estimator, the model considered in this paper aims at not using any information other than that provided by observed data, but our model fits into the robust Bayesian context which has the advantage that all inferences can be based on probabilities or expectations, or bounds for probabilities or expectations. The model uses a finite partition of the time-axis, and as such it is also related to life-tables
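
    Since the record positions the imprecise Dirichlet model against the product-limit (Kaplan-Meier) estimator for right-censored data, a minimal sketch of that estimator may help fix ideas. The sample data are invented for illustration.

    ```python
    import numpy as np

    def product_limit(times, observed):
        """Kaplan-Meier product-limit estimate of the survival function.

        times    : event or censoring times
        observed : 1 if the failure was observed, 0 if right-censored
        """
        times = np.asarray(times, dtype=float)
        observed = np.asarray(observed)
        surv, s = {}, 1.0
        for t in np.unique(times[observed == 1]):
            at_risk = np.sum(times >= t)          # still under observation
            deaths = np.sum((times == t) & (observed == 1))
            s *= 1.0 - deaths / at_risk
            surv[t] = s
        return surv

    times = [2, 3, 3, 5, 7, 8, 8, 9]
    observed = [1, 1, 0, 1, 0, 1, 1, 0]   # 0 marks right-censored records
    print(product_limit(times, observed))
    ```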

  9. An approximate inversion method of geoelectrical sounding data using linear and bayesian statistical approaches. Examples of Tritrivakely volcanic lake and Mahitsy area (central part of Madagascar)

    International Nuclear Information System (INIS)

    This paper is concerned with resistivity sounding measurements performed at a single site (vertical sounding) or at several sites (profiles) within a bounded area. The objective is to present accurate information about the study area and to estimate the likelihood of the produced quantitative models. The achievement of this objective obviously requires quite relevant data and processing methods. It also requires interpretation methods which should take into account the probable effect of a heterogeneous structure. In the face of such difficulties, the interpretation of resistivity sounding data inevitably involves the use of inversion methods. We suggest starting the interpretation in a simple situation (1-D approximation), and using the rough but correct model obtained as an a-priori model for any more refined interpretation. Related to this point of view, special attention should be paid to the inverse problem applied to resistivity sounding data. This inverse problem is nonlinear, whereas linearity is inherent in the functional response used to describe the physical experiment. Two different approaches are used to build an approximate but higher dimensional inversion of geoelectrical data: the linear approach and the Bayesian statistical approach. Some illustrations of their application to resistivity sounding data acquired at Tritrivakely volcanic lake (single site) and in the Mahitsy area (several sites) are given. (author). 28 refs, 7 figs

  10. An optimized method to identify RR Lyrae stars in the SDSS×Pan-STARRS1 overlapping area using a bayesian generative technique

    International Nuclear Information System (INIS)

    We present a method for selecting RR Lyrae (RRL) stars (or other types of variable stars) in the absence of a large number of multi-epoch data and light curve analyses. Our method uses color and variability selection cuts that are defined by applying a Gaussian Mixture Bayesian Generative Method (GMM) on 636 pre-identified RRL stars instead of applying the commonly used rectangular cuts. Specifically, our method selects 8115 RRL candidates (heliocentric distances < 70 kpc) using GMM color cuts from the Sloan Digital Sky Survey (SDSS) and GMM variability cuts from the Panoramic Survey Telescope and Rapid Response System 1 3π survey (PS1). Comparing our method with the Stripe 82 catalog of RRL stars shows that the efficiency and completeness levels of our method are ∼77% and ∼52%, respectively. Most contaminants are either non-variable main-sequence stars or stars in eclipsing systems. The method described here efficiently recovers known stellar halo substructures. It is expected that the current completeness and efficiency levels will further improve with the additional PS1 epochs (∼3 epochs per filter) that will be observed before the conclusion of the survey. A comparison between our efficiency and completeness levels using the GMM method to the efficiency and completeness levels using rectangular cuts that are commonly used yielded a significant increase in the efficiency level from ∼13% to ∼77% and an insignificant change in the completeness levels. Hence, we favor using the GMM technique in future studies. Although we develop it over the SDSS×PS1 footprint, the technique presented here would work well on any multi-band, multi-epoch survey for which the number of epochs is limited.
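
    The selection step described here, fitting a Gaussian mixture to pre-identified RRL stars and then cutting on how likely new objects are under that generative model, can be sketched as follows. The synthetic colours, the component count, and the 95% training-retention threshold are illustrative assumptions, not the paper's actual cuts; scikit-learn's GaussianMixture is used as a stand-in implementation.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)

    # stand-ins for colours of pre-identified RRL stars (training set)
    # and of all survey objects (candidates); two colour dimensions
    known_rrl = rng.normal([0.2, 0.1], 0.05, size=(636, 2))
    survey = rng.normal(0.0, 0.5, size=(100_000, 2))

    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(known_rrl)

    # keep objects whose colours are likely under the generative model;
    # the cut is set so that 95% of the training stars would pass it
    threshold = np.quantile(gmm.score_samples(known_rrl), 0.05)
    candidates = survey[gmm.score_samples(survey) >= threshold]
    print(len(candidates), "colour-selected candidates")
    ```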

  11. On Fuzzy Bayesian Inference

    OpenAIRE

    Frühwirth-Schnatter, Sylvia

    1990-01-01

    In the paper at hand, fuzzy set theory is applied to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we will discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)

  12. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  13. Computation of the eigenvalues of the Schroedinger equation by symplectic and trigonometrically fitted symplectic partitioned Runge-Kutta methods

    International Nuclear Information System (INIS)

    In this Letter we present an explicit symplectic method for the numerical solution of the Schroedinger equation. We also develop a modified symplectic integrator with the trigonometrically fitted property based on this method. Our new methods are tested on the computation of the eigenvalues of the one-dimensional harmonic oscillator, the doubly anharmonic oscillator and the Morse potential
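
    As a reminder of what a symplectic partitioned step looks like (the paper's explicit, trigonometrically fitted method for the Schroedinger eigenproblem is more elaborate), here is the standard second-order leapfrog/Stormer-Verlet step for a separable Hamiltonian H = p^2/2 + V(q), applied to the harmonic oscillator. Step size and duration are illustrative.

    ```python
    def symplectic_step(q, p, dt, dV):
        """One step of a 2nd-order symplectic partitioned Runge-Kutta
        (leapfrog/Stormer-Verlet) scheme for H = p^2/2 + V(q)."""
        p = p - 0.5 * dt * dV(q)   # half kick
        q = q + dt * p             # drift
        p = p - 0.5 * dt * dV(q)   # half kick
        return q, p

    # harmonic oscillator V(q) = q^2/2: the energy should stay bounded
    dV = lambda q: q
    q, p, dt = 1.0, 0.0, 0.1
    for _ in range(1000):
        q, p = symplectic_step(q, p, dt, dV)
    print("energy after 1000 steps:", 0.5 * p**2 + 0.5 * q**2)
    ```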

  14. Tag-Based Mobile Application Partition Method in a Cloud Environment

    Institute of Scientific and Technical Information of China (English)

    Fan Xin; Gao Shu

    2015-01-01

    In order to solve the resource-constrained problem of mobile devices and to save energy, a tag-based mobile application partition method is proposed. The mobile application is partitioned in advance according to its functional structure, and the transferable application modules are tagged. The abundant resources and strong information-processing ability of cloud computing are then employed: combined with a transfer energy-consumption model, it is decided whether a tagged module of the mobile application is transferred to the cloud for remote execution. Finally, the cloud execution results are returned through the wireless network, so as to extend the device's resources and reduce its energy consumption.

  15. Bayesian Variable Selection in Spatial Autoregressive Models

    OpenAIRE

    Jesus Crespo Cuaresma; Philipp Piribauer

    2015-01-01

    This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. We present two alternative approaches which can be implemented using Gibbs sampling methods in a straightforward way and allow us to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. In a simulation study we show that the variable selection approaches tend to outperform existing Bayesian model averaging tech...

  16. Nomograms for Visualization of Naive Bayesian Classifier

    OpenAIRE

    Možina, Martin; Demšar, Janez; Michael W Kattan; Zupan, Blaz

    2004-01-01

    Besides good predictive performance, the naive Bayesian classifier can also offer a valuable insight into the structure of the training data and effects of the attributes on the class probabilities. This structure may be effectively revealed through visualization of the classifier. We propose a new way to visualize the naive Bayesian model in the form of a nomogram. The advantages of the proposed method are simplicity of presentation, clear display of the effects of individual attribute value...
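
    The nomogram idea rests on the fact that, for a naive Bayesian classifier, each attribute value contributes an additive log likelihood-ratio to the class log-odds, so contributions can be read off one axis per attribute and summed. A tiny sketch with invented probabilities:

    ```python
    import numpy as np

    # toy conditional probabilities P(attribute value | class) for a
    # binary class; the attribute, values, and numbers are invented
    p_val_given_class = {
        "smoker=yes": (0.30, 0.60),   # (P(.|class 0), P(.|class 1))
        "smoker=no":  (0.70, 0.40),
    }

    # a nomogram assigns each attribute value its log likelihood ratio,
    # so individual contributions to the class log-odds simply add up
    for value, (p0, p1) in p_val_given_class.items():
        points = np.log(p1 / p0)
        print(f"{value:12s} -> {points:+.2f} points")
    ```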

  17. Subjective Bayesian Analysis: Principles and Practice

    OpenAIRE

    Goldstein, Michael

    2006-01-01

    We address the position of subjectivism within Bayesian statistics. We argue, first, that the subjectivist Bayes approach is the only feasible method for tackling many important practical problems. Second, we describe the essential role of the subjectivist approach in scientific analysis. Third, we consider possible modifications to the Bayesian approach from a subjectivist viewpoint. Finally, we address the issue of pragmatism in implementing the subjectivist approach.

  18. Bayesian Classification in Medicine: The Transferability Question *

    OpenAIRE

    Zagoria, Ronald J.; Reggia, James A.; Price, Thomas R.; Banko, Maryann

    1981-01-01

    Using probabilities derived from a geographically distant patient population, we applied Bayesian classification to categorize stroke patients by etiology. Performance was assessed both by error rate and with a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by two neurologists and to classification by the Bayesian method using “low cost” local and subjective probabilities. We conclude that for some...

  19. Fuzzy Functional Dependencies and Bayesian Networks

    Institute of Scientific and Technical Information of China (English)

    LIU WeiYi(刘惟一); SONG Ning(宋宁)

    2003-01-01

    Bayesian networks have become a popular technique for representing and reasoning with probabilistic information. The fuzzy functional dependency is an important kind of data dependency in relational databases with fuzzy values. The purpose of this paper is to set up a connection between these data dependencies and Bayesian networks. The connection is made through a set of methods that enable people to obtain the conditional-independence information implied by fuzzy functional dependencies.

  20. Bayesian target tracking based on particle filter

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    To be able to deal with nonlinear or non-Gaussian problems, particle filters have been studied by many researchers. Based on the particle filter, the extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking. Novel techniques such as the Markov chain Monte Carlo (MCMC) method and the resampling step are also introduced into Bayesian target tracking, and the simulation results confirm that the improved particle filter with these techniques outperforms the basic one.
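
    A minimal bootstrap particle filter with likelihood weighting and multinomial resampling illustrates the basic machinery this record builds on; the paper's EKF proposal function is replaced here by the plain transition prior, and all models and noise levels are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_particles, n_steps = 1000, 30

    x_true = 0.0                                  # random-walk ground truth
    particles = rng.normal(0.0, 1.0, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)

    for _ in range(n_steps):
        x_true += rng.normal(0.0, 0.5)            # state transition
        z = x_true + rng.normal(0.0, 1.0)         # noisy measurement

        particles += rng.normal(0.0, 0.5, n_particles)   # propagate
        weights *= np.exp(-0.5 * (z - particles) ** 2)   # Gaussian likelihood
        weights /= weights.sum()

        # multinomial resampling when the effective sample size drops
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            weights[:] = 1.0 / n_particles

    print("true state %.2f, filter mean %.2f" % (x_true, particles @ weights))
    ```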

  1. Bayesian Models of Brain and Behaviour

    OpenAIRE

    Penny, William

    2012-01-01

    This paper presents a review of Bayesian models of brain and behaviour. We first review the basic principles of Bayesian inference. This is followed by descriptions of sampling and variational methods for approximate inference, and forward and backward recursions in time for inference in dynamical models. The review of behavioural models covers work in visual processing, sensory integration, sensorimotor integration, and collective decision making. The review of brain models covers a range of...

  2. Bayesian approach to rough set

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes an approach to training rough set models within a Bayesian framework using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.

  3. Bayesian Inference for Radio Observations

    CERN Document Server

    Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin

    2015-01-01

    (Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...

  4. Regenerative partition structures

    OpenAIRE

    Gnedin, Alexander; Pitman, Jim

    2004-01-01

    We consider Kingman's partition structures which are regenerative with respect to a general operation of random deletion of some part. Prototypes of this class are the Ewens partition structures which Kingman characterised by regeneration after deletion of a part chosen by size-biased sampling. We associate each regenerative partition structure with a corresponding regenerative composition structure, which (as we showed in a previous paper) can be associated in turn with a regenerative random...

  5. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    Science.gov (United States)

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters beside the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Monte Carlo Markov Chain (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters so that, using Monte Carlo integration, all parameters and their uncertainties are estimated simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and apply it also to the MSL time series from the Revised Local Reference data base of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE; for example, without further computations it provides the spectral index uncertainty, is computationally stable and detects multimodality.

  6. Bayesian segmentation of hyperspectral images

    CERN Document Server

    Mohammadpour, Adel; Mohammad-Djafari, Ali

    2007-01-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  7. Bayesian segmentation of hyperspectral images

    Science.gov (United States)

    Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali

    2004-11-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  8. Skill Rating by Bayesian Inference

    OpenAIRE

    Di Fatta, Giuseppe; Haworth, Guy McCrossan; Regan, Kenneth W.

    2009-01-01

    Systems Engineering often involves computer modelling the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game-domain of Chess. Bayesian methods are used to infer the distribution of players’ skill levels from the moves they play rather than from their competitive results. The approach is used on large sets of ...

  9. Cover Tree Bayesian Reinforcement Learning

    OpenAIRE

    Tziortziotis, Nikolaos; Dimitrakakis, Christos; Blekas, Konstantinos

    2013-01-01

    This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration po...

  10. Predicting volume of distribution with decision tree-based regression methods using predicted tissue:plasma partition coefficients

    OpenAIRE

    Freitas, Alex. A.; Limbu, Kriti; Ghafourian, Taravat

    2015-01-01

    Background Volume of distribution is an important pharmacokinetic property that indicates the extent of a drug’s distribution in the body tissues. This paper addresses the problem of how to estimate the apparent volume of distribution at steady state (Vss) of chemical compounds in the human body using decision tree-based regression methods from the area of data mining (or machine learning). Hence, the pros and cons of several different types of decision tree-based regression methods have been...

  11. Doing bayesian data analysis a tutorial with R and BUGS

    CERN Document Server

    Kruschke, John K

    2011-01-01

    There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis obtainable to a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all

  12. Bayesian methods to restore and rebuild images: application to gammagraphy and to photofission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Stawinski, G

    1998-10-26

    Bayesian algorithms are developed to solve inverse problems in gamma imaging and photofission tomography. The first part of this work is devoted to the modeling of our measurement systems. Two models have been found for both applications: the first one is a simple conventional model and the second one is a cascaded point-process model. EM and MCMC Bayesian algorithms for image restoration and image reconstruction have been developed for these models and compared. The cascaded point-process model does not significantly improve the results previously obtained with the classical model. Two original approaches have been proposed which improve on these results. The first approach uses an inhomogeneous Markov random field as a prior law and makes the regularization parameter vary spatially; however, the problem of the estimation of hyper-parameters has not been solved. In the case of the deconvolution of point sources, a second approach has been proposed, which introduces a high-level prior model: the picture is modeled as a list of objects whose parameters and number are unknown. The results obtained with this method are more accurate than those obtained with the conventional Markov random field prior model and require less computational cost. (author)

  13. Calibration of P/S amplitude ratios for seismic events in Xinjiang and its adjacent areas based on a Bayesian Kriging method

    Institute of Scientific and Technical Information of China (English)

    PAN Chang-zhou; JIN Ping; XIAO Wei-guo

    2007-01-01

    Correction maps of P/S amplitude ratios for seismic events distributed in Xinjiang, China and its adjacent areas were established using a Bayesian Kriging method for the two seismic stations WMQ and MAK. The relationship between the correction maps and variations of along-path features was analyzed, and the validity of applying the correction maps to improve the performance of P/S discriminants for seismic discrimination was investigated. Results show that the obtained correction maps can generally reflect event-station path effects upon the corresponding P/S discriminants; correcting for these effects can further reduce the scatter of distance-corrected P/S measurements within earthquake and explosion populations, as well as improve their discriminating performance, when path effects are a significant source of such scatter. For example, when the corresponding Kriging correction map was applied, the misidentification rate of earthquakes by Pn(2~4 Hz)/Lg(2~4 Hz) at MAK was reduced from 16.3% to 5.2%.
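
    A simplified stand-in for the Bayesian Kriging step (assuming scikit-learn; the coordinates and residuals below are invented): Gaussian-process regression of log(P/S) residuals on event coordinates yields a smooth correction surface plus a pointwise uncertainty, which is the role the correction maps play in the record.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
# Hypothetical event locations (lon, lat) and log(P/S) residuals at one station.
X = rng.uniform([75, 35], [95, 50], size=(80, 2))
resid = 0.2 * np.sin(X[:, 0] / 5.0) + rng.normal(scale=0.05, size=80)

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=3.0) + WhiteKernel(0.01),
                              normalize_y=True)
gp.fit(X, resid)

# The posterior mean acts as the correction map; its std quantifies map uncertainty.
grid = np.array([[80.0, 40.0], [90.0, 45.0]])
mean, std = gp.predict(grid, return_std=True)
print("correction:", mean.round(3), "uncertainty:", std.round(3))
```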

  14. Reliability Evaluation of DC Method Based on Bayesian Estimation

    Institute of Scientific and Technical Information of China (English)

    李丰军; 翁克瑞; 龚承柱

    2012-01-01

    By studying the results of direct-current (DC) electrical detection, this paper proposes a probability-theoretic method for evaluating the reliability of the DC method based on Bayesian estimation. To properly account for the various sources of information in the evaluation, a mixed Beta prior distribution is used; the prior distribution parameters and an inheritance factor are identified from historical samples, and the posterior distribution is then used to assess the reliability of the DC detection results and to support decision-making. Finally, the reliability of different mining areas is calculated for the No. 5 Coal Mine of Pingmei. The results show that the method can serve as a basis for coal mining enterprises to deploy DC electrical detection for water-disaster prevention and control, improving the return on that investment.
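
    A minimal numerical sketch of the mixed-Beta machinery described above, assuming detection outcomes are Binomial; the component weights, Beta parameters and counts are illustrative, not taken from the paper.

```python
from math import comb, exp
from scipy.special import betaln

# Hypothetical history: 17 of 20 past DC-method detections were confirmed.
n, k = 20, 17

# Two-component mixed Beta prior (weights and parameters are illustrative).
prior = [(0.6, 8.0, 2.0),   # component favouring high reliability
         (0.4, 2.0, 2.0)]   # vaguer component

# The posterior is again a Beta mixture; each weight updates in proportion to
# the Beta-Binomial marginal likelihood C(n,k) * B(a+k, b+n-k) / B(a,b).
post = []
for w, a, b in prior:
    marginal = comb(n, k) * exp(betaln(a + k, b + n - k) - betaln(a, b))
    post.append((w * marginal, a + k, b + n - k))
z = sum(w for w, _, _ in post)
post = [(w / z, a, b) for w, a, b in post]

reliability = sum(w * a / (a + b) for w, a, b in post)   # posterior mean
print("posterior mixture:", [(round(w, 3), a, b) for w, a, b in post])
print("estimated reliability: %.3f" % reliability)
```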

  15. Bayesian inference on proportional elections.

    Science.gov (United States)

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the 2010 Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
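
    The record's computations were done in R; the hedged sketch below illustrates the same Monte Carlo idea with a Dirichlet posterior over vote shares pushed through a seat-allocation rule. A D'Hondt-style highest-averages rule is used purely for illustration and is not claimed to match the Brazilian quotient rules; the poll counts are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
poll = np.array([420, 310, 180, 90])   # hypothetical poll counts for 4 parties
seats = 10

def dhondt(votes, seats):
    # Highest-averages allocation: repeatedly give a seat to the party with
    # the largest quotient votes / (seats already won + 1).
    alloc = np.zeros(len(votes), dtype=int)
    for _ in range(seats):
        alloc[np.argmax(votes / (alloc + 1))] += 1
    return alloc

# Dirichlet posterior over vote shares (uniform prior), propagated through
# the seat-allocation rule by simulation.
reps = 10000
wins = np.zeros(len(poll))
for _ in range(reps):
    shares = rng.dirichlet(poll + 1)
    wins += dhondt(shares, seats) > 0

print("P(at least one seat):", (wins / reps).round(3))
```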

  16. Bayesian kinematic earthquake source models

    Science.gov (United States)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.

    2009-12-01

    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
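
    A toy, serial sketch of the tempered-MCMC idea the record describes (reweight, resample, then Metropolis moves on the tempered posterior); the bimodal likelihood and all tuning constants are invented, and the authors' parallel, high-dimensional implementation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_like(theta):                   # toy bimodal stand-in for a data misfit
    return -0.5 * np.sum((np.abs(theta) - 2.0) ** 2, axis=-1) / 0.1

def log_prior(theta):                  # broad Gaussian prior
    return -0.5 * np.sum(theta ** 2, axis=-1) / 25.0

N = 2000
theta = rng.normal(scale=5.0, size=(N, 2))      # particles drawn from the prior
beta = 0.0
while beta < 1.0:
    d_beta = min(0.1, 1.0 - beta)               # step up the tempering exponent
    logw = d_beta * log_like(theta)             # reweight particles toward the data
    w = np.exp(logw - logw.max()); w /= w.sum()
    theta = theta[rng.choice(N, size=N, p=w)]   # resample: poor fits die out
    beta += d_beta
    # One Metropolis step per particle on the tempered target prior * like^beta.
    prop = theta + rng.normal(scale=0.3, size=theta.shape)
    log_acc = (beta * (log_like(prop) - log_like(theta))
               + log_prior(prop) - log_prior(theta))
    accept = np.log(rng.uniform(size=N)) < log_acc
    theta[accept] = prop[accept]

print("mean |theta| per coordinate:", np.abs(theta).mean(axis=0).round(2))
```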

  17. A parametric study on the buckling of functionally graded material plates with internal discontinuities using the partition of unity method

    OpenAIRE

    S Natarajan; Chakraborty, S.; M. Ganapathi; Subramaniam, M

    2013-01-01

    In this paper, the effect of local defects, viz., cracks and cutouts on the buckling behaviour of functionally graded material plates subjected to mechanical and thermal load is numerically studied. The internal discontinuities, viz., cracks and cutouts are represented independent of the mesh within the framework of the extended finite element method and an enriched shear flexible 4-noded quadrilateral element is used for the spatial discretization. The properties are assumed to vary only in ...

  18. The metabolic network of Clostridium acetobutylicum: Comparison of the approximate Bayesian computation via sequential Monte Carlo (ABC-SMC) and profile likelihood estimation (PLE) methods for determinability analysis.

    Science.gov (United States)

    Thorn, Graeme J; King, John R

    2016-01-01

    The Gram-positive bacterium Clostridium acetobutylicum is an anaerobic endospore-forming species which produces acetone, butanol and ethanol via the acetone-butanol (AB) fermentation process, leading to biofuels including butanol. In previous work we looked to estimate the parameters in an ordinary differential equation model of the glucose metabolism network using data from pH-controlled continuous culture experiments. Here we combine two approaches, namely the approximate Bayesian computation via an existing sequential Monte Carlo (ABC-SMC) method (to compute credible intervals for the parameters), and the profile likelihood estimation (PLE) (to improve the calculation of confidence intervals for the same parameters), the parameters in both cases being derived from experimental data from forward shift experiments. We also apply the ABC-SMC method to investigate which of the models introduced previously (one non-sporulation and four sporulation models) have the greatest strength of evidence. We find that the joint approximate posterior distribution of the parameters determines the same parameters as previously, including all of the basal and increased enzyme production rates and enzyme reaction activity parameters, as well as the Michaelis-Menten kinetic parameters for glucose ingestion, while other parameters are not as well-determined, particularly those connected with the internal metabolites acetyl-CoA, acetoacetyl-CoA and butyryl-CoA. We also find that the approximate posterior is strongly non-Gaussian, indicating that our previous assumption of elliptical contours of the distribution is not valid, which has the effect of reducing the numbers of pairs of parameters that are (linearly) correlated with each other. Calculations of confidence intervals using the PLE method back this up. Finally, we find that all five of our models are equally likely, given the data available at present. PMID:26561777
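
    As a hedged illustration of the ABC idea that ABC-SMC refines, here is plain rejection ABC with an invented exponential simulator standing in for the paper's ODE metabolism model:

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate(rate, n=50):
    # Stand-in simulator; the paper's metabolic-network ODEs would go here.
    return rng.exponential(1.0 / rate, size=n)

obs = simulate(2.0)                        # pretend these are the data
s_obs = obs.mean()                         # summary statistic

# ABC rejection: keep prior draws whose simulated summary lands close to s_obs.
draws = rng.uniform(0.1, 5.0, size=20000)  # prior over the rate parameter
kept = [r for r in draws if abs(simulate(r).mean() - s_obs) < 0.05]

post = np.array(kept)
print("approx. posterior mean %.2f, 95%% CI (%.2f, %.2f)"
      % (post.mean(), np.quantile(post, 0.025), np.quantile(post, 0.975)))
```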

  19. Intrusion Intention Recognition Method Based on Dynamic Bayesian Networks

    Institute of Scientific and Technical Information of China (English)

    吴庆涛; 王琦璟; 郑瑞娟

    2011-01-01

    When constructing high-level attack scenarios and handling sophisticated attacks, intrusion detection techniques have difficulty discerning an intruder's intention, identifying the semantics of attacks, and predicting the next attack. Addressing the uncertainty inherent in complex network attack processes, this paper presents an intrusion intention recognition method based on Dynamic Bayesian Networks (DBN). A dynamic Bayesian directed acyclic graph is used to express, in real time, the connections between attack behaviors, intentions, and attack goals, and probabilistic reasoning is applied to predict the intruder's next attack. Experimental results reflect how an intruder's intention evolves during the intrusion process and verify the feasibility and validity of the method.
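
    A minimal sketch of the DBN filtering step implied by the record: a discrete hidden intent updated by a predict/update (forward-algorithm) recursion over observed alerts. The states, transition and emission probabilities below are invented for illustration.

```python
import numpy as np

# Hypothetical 3-state DBN: hidden intent in {recon, escalate, exfiltrate},
# observed alert types in {scan, exploit, transfer}.
states = ["recon", "escalate", "exfiltrate"]
T = np.array([[0.6, 0.35, 0.05],     # P(intent_t | intent_{t-1})
              [0.0, 0.6, 0.4],
              [0.0, 0.0, 1.0]])
E = np.array([[0.8, 0.15, 0.05],     # P(alert | intent)
              [0.2, 0.7, 0.1],
              [0.05, 0.15, 0.8]])

belief = np.array([1.0, 0.0, 0.0])   # attacks assumed to start with recon
for alert in [0, 1, 1, 2]:           # observed sequence: scan, exploit, exploit, transfer
    belief = belief @ T              # predict (DBN transition)
    belief *= E[:, alert]            # update with the new alert
    belief /= belief.sum()
    print({s: round(b, 2) for s, b in zip(states, belief)})
```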

  20. Statistical assignment of DNA sequences using Bayesian phylogenetics

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Boomsma, Wouter Krogh; Huelsenbeck, John P

    2008-01-01

    We provide a new automated statistical method for DNA barcoding based on a Bayesian phylogenetic analysis. The method is based on automated database sequence retrieval, alignment, and phylogenetic analysis using a custom-built program for Bayesian phylogenetic analysis. We show on real data that...

  1. Improved adaptive splitting and selection: the hybrid training method of a classifier based on a feature space partitioning.

    Science.gov (United States)

    Jackowski, Konrad; Krawczyk, Bartosz; Woźniak, Michał

    2014-05-01

    Currently, methods of combined classification are the focus of intense research. A properly designed group of combined classifiers exploiting knowledge gathered in a pool of elementary classifiers can successfully outperform a single classifier. There are two essential issues to consider when creating combined classifiers: how to establish the most comprehensive pool and how to design a fusion model that allows for taking full advantage of the collected knowledge. In this work, we address these issues and propose AdaSS+, a training algorithm dedicated to compound classifier systems that effectively exploits the local specialization of the elementary classifiers. The training procedure consists of two phases. The first phase detects the classifier competencies and adjusts the respective fusion parameters. The second phase boosts classification accuracy by elevating the degree of local specialization. The quality of the proposed algorithm is evaluated on the basis of a wide range of computer experiments that show that AdaSS+ can outperform the original method and several reference classifiers. PMID:24552506
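
    The following hedged sketch shows only the basic "split-and-select" idea behind feature-space-partitioning ensembles (cluster the feature space, train an elementary classifier per region, route test points by region); it is not the AdaSS+ algorithm, which additionally trains the partition and fusion parameters and boosts local specialization.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Partition the feature space, then train one elementary classifier per region.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Xtr)
experts = {c: DecisionTreeClassifier(max_depth=4, random_state=0)
              .fit(Xtr[km.labels_ == c], ytr[km.labels_ == c])
           for c in range(4)}

# At test time, each point is routed to the classifier of its region.
regions = km.predict(Xte)
pred = np.array([experts[c].predict(x[None, :])[0]
                 for c, x in zip(regions, Xte)])
print("accuracy: %.3f" % (pred == yte).mean())
```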

  2. Bayesian exploratory factor analysis

    OpenAIRE

    Gabriella Conti; Sylvia Frühwirth-Schnatter; James Heckman; Rémi Piatek

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identifi cation criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study c...

  3. Bayesian Exploratory Factor Analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...

  4. Bayesian Exploratory Factor Analysis

    OpenAIRE

    Gabriella Conti; Sylvia Fruehwirth-Schnatter; Heckman, James J.; Remi Piatek

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo s...

  5. Bayesian exploratory factor analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo st...

  6. Bayesian exploratory factor analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...

  7. Nonparametric Bayesian Logic

    OpenAIRE

    Carbonetto, Peter; Kisynski, Jacek; De Freitas, Nando; Poole, David L

    2012-01-01

    The Bayesian Logic (BLOG) language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends BLOG by adopting generative processes over function spaces - known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By expl...

  8. Bayesian default probability models

    OpenAIRE

    Andrlíková, Petra

    2014-01-01

    This paper proposes a methodology for default probability estimation for low default portfolios, where the statistical inference may become troublesome. The author suggests using logistic regression models with the Bayesian estimation of parameters. The piecewise logistic regression model and Box-Cox transformation of credit risk score is used to derive the estimates of probability of default, which extends the work by Neagu et al. (2009). The paper shows that the Bayesian models are more acc...
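
    A small sketch of Bayesian-flavoured logistic regression for a low-default portfolio: maximum a posteriori estimation with a Gaussian prior on the coefficients. The piecewise structure and Box-Cox score transformation from the paper are omitted, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
# Hypothetical low-default portfolio: 500 obligors, ~3% defaults, 3 risk drivers.
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -0.5, 0.0])
y = rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-(X @ w_true - 3.5)))

def neg_log_post(wb, tau=1.0):
    w, b = wb[:-1], wb[-1]
    z = X @ w + b
    # Bernoulli log-likelihood plus a Gaussian N(0, tau^2) prior on the weights,
    # which stabilises estimates when defaults are scarce.
    ll = np.sum(y * z - np.logaddexp(0, z))
    return -ll + 0.5 * np.sum(w ** 2) / tau ** 2

res = minimize(neg_log_post, np.zeros(4), method="BFGS")
print("MAP coefficients (last entry is the intercept):", res.x.round(2))
```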

  9. Data Clustering via Principal Direction Gap Partitioning

    OpenAIRE

    Abbey, Ralph; Diepenbrock, Jeremy; Langville, Amy; Meyer, Carl; Race, Shaina; Zhou, Dexin

    2012-01-01

    We explore the geometrical interpretation of the PCA based clustering algorithm Principal Direction Divisive Partitioning (PDDP). We give several examples where this algorithm breaks down, and suggest a new method, gap partitioning, which takes into account natural gaps in the data between clusters. Geometric features of the PCA space are derived and illustrated and experimental results are given which show our method is comparable on the datasets used in the original paper on PDDP.
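
    A compact sketch of the two steps the abstract describes: the PDDP projection onto the first principal direction, followed by a split at the largest gap in the projections rather than at zero. Data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
# Two synthetic clusters in 5 dimensions.
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (120, 5))])

# PDDP step: project the centered data onto the first principal direction.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[0]

# Gap partitioning: split at the largest gap between sorted projections.
order = np.sort(proj)
gaps = np.diff(order)
cut = (order[gaps.argmax()] + order[gaps.argmax() + 1]) / 2
labels = (proj > cut).astype(int)
print("cluster sizes:", np.bincount(labels))
```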

  10. Uncertainty of mass discharge estimates from contaminated sites using a fully Bayesian framework

    DEFF Research Database (Denmark)

    Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John; Bjerg, Poul Løgstrup; Helmig, Rainer

    plane. The method accounts for: (1) conceptual model uncertainty through Bayesian model averaging, (2) heterogeneity through Bayesian geostatistics with an uncertain geostatistical model, and (3) measurement uncertainty. An ensemble of unconditional steady-state plume realizations is generated through...

  11. A Bayesian Method for Detecting and Characterizing Allelic Heterogeneity and Boosting Signals in Genome-Wide Association Studies

    OpenAIRE

    Su, Zhan; Cardin, Niall; the Wellcome Trust Case Control Consortium; Donnelly, Peter; Marchini, Jonathan

    2009-01-01

    The standard paradigm for the analysis of genome-wide association studies involves carrying out association tests at both typed and imputed SNPs. These methods will not be optimal for detecting the signal of association at SNPs that are not currently known or in regions where allelic heterogeneity occurs. We propose a novel association test, complementary to the SNP-based approaches, that attempts to extract further signals of association by explicitly modeling and estimating both unknown SNP...

  12. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation of inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which has challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.

  13. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Kuang-You Cheng

    2010-01-01

    This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation of inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which has challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.

  14. Inside-sediment partitioning of PAH, PCB and organochlorine compounds and inferences on sampling and normalization methods.

    Science.gov (United States)

    Opel, Oliver; Palm, Wolf-Ulrich; Steffen, Dieter; Ruck, Wolfgang K L

    2011-04-01

    Comparability of sediment analyses for semivolatile organic substances is still low. Neither screening of the sediments nor organic-carbon based normalization is sufficient to obtain comparable results. We show the interdependency of grain-size effects with the inside-sediment organic-matter distribution for PAH, PCB and organochlorine compounds. Surface sediment samples collected by Van-Veen grab were sieved and analyzed for 16 PAHs, 6 PCBs and 18 organochlorine pesticides (OCP) as well as organic-matter content. Since bulk concentrations are influenced by grain-size effects themselves, we used a novel normalization method based on the sum of concentrations in the separate grain-size fractions of the sediments. By calculating relative normalized concentrations, it was possible to clearly show the underlying mechanisms throughout a heterogeneous set of samples. Furthermore, we were able to show that, for comparability, screening at < 125 μm is best suited and can be further improved by additional organic-carbon normalization. PMID:21237542
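
    The normalization itself is simple arithmetic; a sketch with invented concentrations follows (the fraction boundaries echo the abstract's < 125 μm screening theme, but the numbers are illustrative):

```python
import numpy as np

# Hypothetical PAH concentrations (ng/g) measured in three grain-size
# fractions of one sediment sample: <20 um, 20-125 um, 125-2000 um.
conc = np.array([480.0, 260.0, 40.0])

# Relative normalized concentration: each fraction's share of the summed
# fraction concentrations, which removes bulk grain-size effects.
rel = conc / conc.sum()
print(rel.round(3))   # here the finest fraction carries ~62% of the burden
```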

  15. Genomic Predictions of Obesity Related Phenotypes in a Pig model using GBLUP and Bayesian Approaches

    DEFF Research Database (Denmark)

    Pant, Sameer Dinkar; Do, Duy Ngoc; Janss, Luc

    Whole genome prediction (WGP) methods based on GBLUP and Bayesian models (e.g. A, B, C and R) are routinely used in animal breeding but have not been well tested in a genetic mapping population that segregates for QTLs. In our pig model experiment, purebred Duroc and Yorkshire sows were crossed with … to partition genomic variances attributable to different SNP groups based on their biological and functional role via systems genetics / biology approaches. We apply different methods to group SNPs: (i) functional relevance of SNPs for obesity based on data mining, (ii) groups based on positions in … the genome, and significance based on a previous genome-wide association study in the same data set. Preliminary results from our studies in production pigs indicate that BPL models have higher accuracy but more bias than the GBLUP method, although using different power parameters has no effect on …
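
    As a hedged baseline for the WGP models mentioned, here is SNP-BLUP, the ridge-regression formulation equivalent to GBLUP; the Bayesian variants (BayesA/B/C, BPL) replace its single Gaussian prior with mixture or heavy-tailed priors. Genotypes and effect sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 300, 1000                       # animals, SNP markers
Z = rng.binomial(2, 0.3, size=(n, m)).astype(float)   # genotype counts 0/1/2
Z -= Z.mean(axis=0)                    # center, as in VanRaden's G matrix
beta = np.zeros(m)
beta[:20] = rng.normal(scale=0.2, size=20)            # 20 causal SNPs
y = Z @ beta + rng.normal(scale=1.0, size=n)

# SNP-BLUP: ridge solution with common shrinkage lambda = sigma_e^2 / sigma_b^2.
lam = 25.0
beta_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
gebv = Z @ beta_hat                    # genomic estimated breeding values
print("accuracy (corr with true genetic value): %.2f"
      % np.corrcoef(gebv, Z @ beta)[0, 1])
```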

  16. Development of an Origin Trace Method based on Bayesian Inference and Artificial Neural Network for Missing or Stolen Nuclear Materials

    Energy Technology Data Exchange (ETDEWEB)

    Bin, Yim Ho; Min, Lee Seung; Min, Kim Kyung; Jeong, Hong Yoon; Kim, Jae Kwang [Nuclear Security Div., Daejeon (Korea, Republic of)

    2014-05-15

    Thus, 'to put nuclear materials under control' is an important issue for the prosperity of mankind. Unfortunately, the number of illicit trafficking incidents involving nuclear materials has increased over recent decades. Consequently, the security of nuclear materials has recently come into the spotlight. After the 2nd Nuclear Security Summit in Seoul in 2012, the president of Korea showed his commitment to nuclear security. One of Korea's main responses to nuclear security concerns was to develop a national nuclear forensic support system. The International Atomic Energy Agency (IAEA) published the document Nuclear Security Series No. 2, 'Nuclear Forensics Support', in 2006 to encourage international cooperation among all IAEA member states in tracking nuclear attribution. The document poses two main questions for nuclear forensics to answer. The first question is 'what type of material is it?', and the second is 'where did the material come from?' The Korea Nuclear Forensic Library (K-NFL) and mathematical methods to trace the origins of missing or stolen nuclear materials (MSNMs) are being developed by the Korea Institute of Nuclear Non-proliferation and Control (KINAC) to answer those questions. Although the K-NFL has been designed to perform many functions, it is being developed to effectively trace the origin of MSNMs and is tested to validate the suitability of the trace methods. New fuels and spent fuels each need their own trace method because of the different nature of data acquisition. Inductive logic was found to be appropriate for new fuels, whose attributes have known values as well as a bistable property. On the other hand, machine learning was suitable for spent fuels, whose attributes cannot be measured directly and thus require simulation.

  17. Development of an Origin Trace Method based on Bayesian Inference and Artificial Neural Network for Missing or Stolen Nuclear Materials

    International Nuclear Information System (INIS)

    Thus, 'to put nuclear materials under control' is an important issue for the prosperity of mankind. Unfortunately, the number of illicit trafficking incidents involving nuclear materials has increased over recent decades. Consequently, the security of nuclear materials has recently come into the spotlight. After the 2nd Nuclear Security Summit in Seoul in 2012, the president of Korea showed his commitment to nuclear security. One of Korea's main responses to nuclear security concerns was to develop a national nuclear forensic support system. The International Atomic Energy Agency (IAEA) published the document Nuclear Security Series No. 2, 'Nuclear Forensics Support', in 2006 to encourage international cooperation among all IAEA member states in tracking nuclear attribution. The document poses two main questions for nuclear forensics to answer. The first question is 'what type of material is it?', and the second is 'where did the material come from?' The Korea Nuclear Forensic Library (K-NFL) and mathematical methods to trace the origins of missing or stolen nuclear materials (MSNMs) are being developed by the Korea Institute of Nuclear Non-proliferation and Control (KINAC) to answer those questions. Although the K-NFL has been designed to perform many functions, it is being developed to effectively trace the origin of MSNMs and is tested to validate the suitability of the trace methods. New fuels and spent fuels each need their own trace method because of the different nature of data acquisition. Inductive logic was found to be appropriate for new fuels, whose attributes have known values as well as a bistable property. On the other hand, machine learning was suitable for spent fuels, whose attributes cannot be measured directly and thus require simulation.

  18. Inverse problems in the Bayesian framework

    International Nuclear Information System (INIS)

    The history of Bayesian methods dates back to the original works of Reverend Thomas Bayes and Pierre-Simon Laplace: the former laid down some of the basic principles on inverse probability in his classic article ‘An essay towards solving a problem in the doctrine of chances’ that was read posthumously in the Royal Society in 1763. Laplace, on the other hand, in his ‘Memoirs on inverse probability’ of 1774 developed the idea of updating beliefs and wrote down the celebrated Bayes’ formula in the form we know today. Although not identified yet as a framework for investigating inverse problems, Laplace used the formalism very much in the spirit it is used today in the context of inverse problems, e.g., in his study of the distribution of comets. With the evolution of computational tools, Bayesian methods have become increasingly popular in all fields of human knowledge in which conclusions need to be drawn based on incomplete and noisy data. Needless to say, inverse problems, almost by definition, fall into this category. Systematic work for developing a Bayesian inverse problem framework can arguably be traced back to the 1980s (the original first edition being published by Elsevier in 1987), although articles on Bayesian methodology applied to inverse problems, in particular in geophysics, had appeared much earlier. Today, as testified by the articles in this special issue, the Bayesian methodology as a framework for considering inverse problems has gained a lot of popularity, and it has integrated very successfully with many traditional inverse problems ideas and techniques, providing novel ways to interpret and implement traditional procedures in numerical analysis, computational statistics, signal analysis and data assimilation. The range of applications where the Bayesian framework has been fundamental goes from geophysics, engineering and imaging to astronomy, life sciences and economy, and continues to grow. There is no question that Bayesian
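
    The celebrated formula the record refers to, written in the notation now standard for Bayesian inverse problems (this rendering is a standard textbook form, not a quotation from the article):

```latex
% Posterior over the unknown x given noisy data y:
% likelihood (forward model + noise) times prior, normalized by the evidence.
\[
  \pi(x \mid y) \;=\; \frac{\pi(y \mid x)\,\pi_{\mathrm{pr}}(x)}{\pi(y)}
  \;\propto\; \pi(y \mid x)\,\pi_{\mathrm{pr}}(x).
\]
```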

  19. Thinning Invariant Partition Structures

    CERN Document Server

    Starr, Shannon

    2011-01-01

    A partition structure is a random point process on $[0,1]$ whose points sum to 1, almost surely. In the case that there are infinitely many points to begin with, we consider a thinning action by: first, removing points independently, such that each point survives with probability $p>0$; and, secondly, rescaling the remaining points by an overall factor to normalize the sum again to 1. We prove that the partition structures which are "thinning divisible" for a sequence of $p$'s converging to 0 are mixtures of the Poisson-Kingman partition structures. We also consider the property of being "thinning invariant" for all $p \in (0,1)$.
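
    A quick simulation of the thinning action defined above, using a stick-breaking (GEM / Poisson-Dirichlet) sample as the initial partition structure; the concentration parameter is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)

def gem(alpha, n=2000):
    # Stick-breaking sample from the GEM / Poisson-Dirichlet distribution,
    # a canonical partition structure (points summing to ~1, truncated at n).
    v = rng.beta(1, alpha, size=n)
    return v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))

def thin(points, p):
    # Remove each point independently (survival probability p), then rescale
    # the survivors by an overall factor so they sum to 1 again.
    kept = points[rng.uniform(size=points.size) < p]
    return kept / kept.sum()

x = gem(alpha=5.0)
for p in (0.5, 0.25, 0.1):
    print(p, np.sort(thin(x, p))[::-1][:3].round(3))   # three largest points
```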

  20. Convex Regression with Interpretable Sharp Partitions

    Science.gov (United States)

    Petersen, Ashley; Simon, Noah; Witten, Daniela

    2016-01-01

    We consider the problem of predicting an outcome variable on the basis of a small number of covariates, using an interpretable yet non-additive model. We propose convex regression with interpretable sharp partitions (CRISP) for this task. CRISP partitions the covariate space into blocks in a data-adaptive way, and fits a mean model within each block. Unlike other partitioning methods, CRISP is fit using a non-greedy approach by solving a convex optimization problem, resulting in low-variance fits. We explore the properties of CRISP, and evaluate its performance in a simulation study and on a housing price data set.
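
    CRISP itself solves a convex problem over a two-covariate grid; the hedged 1-D analogue below (assuming the cvxpy package) shows how a convex total-variation penalty yields sharp, piecewise-constant partitions without greedy splitting. The data and penalty weight are invented.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(11)
x = np.sort(rng.uniform(0, 10, 100))
y = np.where(x < 4, 1.0, np.where(x < 7, 3.0, 0.5)) + rng.normal(0, 0.3, 100)

# Convex piecewise-constant fit: squared error plus an L1 penalty on adjacent
# differences, whose solution is blockwise constant (sharp partitions).
theta = cp.Variable(100)
lam = 5.0
obj = cp.Minimize(cp.sum_squares(y - theta) + lam * cp.norm1(cp.diff(theta)))
cp.Problem(obj).solve()

fit = theta.value
print("estimated block levels:", np.unique(fit.round(1)))
```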