Zobitz, J. M.; Burns, S. P.; Ogée, J.; Reichstein, M.; Bowling, D. R.
2007-09-01
Separation of the net ecosystem exchange of CO2 (F) into its component fluxes of net photosynthesis (FA) and nonfoliar respiration (FR) is important in understanding the physical and environmental controls on these fluxes, and how these fluxes may respond to environmental change. In this paper, we evaluate a partitioning method based on a combination of stable isotopes of CO2 and Bayesian optimization in the context of partitioning methods based on regressions with environmental variables. We combined high-resolution measurements of stable carbon isotopes of CO2, ecosystem fluxes, and meteorological variables with a Bayesian parameter optimization approach to estimate FA and FR in a subalpine forest in Colorado, United States, over the course of 104 days during summer 2003. Results were generally in agreement with the independent environmental regression methods of Reichstein et al. (2005a) and Yi et al. (2004). Half-hourly posterior parameter estimates of FA and FR derived from the Bayesian/isotopic method showed a strong diurnal pattern in both fluxes, consistent with established gross photosynthesis (GEE) and total ecosystem respiration (TER) relationships. Isotope-derived FA was functionally dependent on light, but FR exhibited the expected temperature dependence only when the prior estimates for FR were temperature-based. Examination of the posterior correlation matrix revealed that the available data were insufficient to independently resolve all the Bayesian-estimated parameters in our model. This could be due to a small isotopic disequilibrium (D) between FA and FR, or to poor characterization of whole-canopy photosynthetic discrimination or of the isotopic flux (isoflux, analogous to net ecosystem exchange of 13CO2). The positive sign of D indicates that FA was more enriched in 13C than FR. Possible reasons for this are discussed in the context of recent literature.
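As an aside on why a small disequilibrium makes these parameters hard to resolve, here is a minimal sketch (in Python, with hypothetical values) of the two-equation mass balance that underlies isotopic flux partitioning; the study itself estimates FA and FR by Bayesian optimization rather than by direct solution:

```python
import numpy as np

def partition_nee(F, F_iso, delta_A, delta_R):
    """Solve F = FA + FR and isoflux = delta_A*FA + delta_R*FR for FA, FR.
    The 2x2 system becomes ill-conditioned as the isotopic disequilibrium
    D = delta_A - delta_R shrinks toward zero."""
    A = np.array([[1.0, 1.0], [delta_A, delta_R]])
    return np.linalg.solve(A, np.array([F, F_iso]))

# Hypothetical half-hour: net uptake F = -5 umol m-2 s-1, with end-member
# signatures delta_A = -27 permil and delta_R = -26 permil.
FA, FR = partition_nee(F=-5.0, F_iso=138.0, delta_A=-27.0, delta_R=-26.0)
print(FA, FR)  # FA = -8 (photosynthetic uptake), FR = 3 (respiration)
```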
Development of partitioning method
International Nuclear Information System (INIS)
A partitioning method has been developed under the concepts of separating nuclides in high-level nuclear fuel reprocessing liquid waste according to their half-lives and radioactive toxicity, and of disposing of them by suitable methods. In the partitioning process developed at JAERI, adoption of a solvent extraction process with DIDPA (di-isodecyl phosphoric acid) has been studied for actinide separation. The present paper mainly describes studies on the back-extraction behavior of Np(IV), Pu(IV) and U(VI) in DIDPA. Most experiments were carried out according to the following procedure: these actinides were extracted from 0.5 M nitric acid with DIDPA, the nitric acid concentration in HLW being expected to be adjusted to this value prior to actinide extraction in the partitioning process, and then back-extracted with various reagents such as oxalic acid. The experimental results show that the distribution ratios of Np(IV) and Pu(IV) can be reduced to less than unity with 1 M oxalic acid, and those of U(VI) and Np(IV) with 5 M phosphoric acid. From the results of these studies and previous research on Am and Cm, the following possibilities were confirmed: U, Pu, Np, Am and Cm, which are the major actinides in HLW, can be extracted simultaneously with DIDPA, and they can be removed from DIDPA with various reagents (nitric acid for Am and Cm, oxalic acid for Np and Pu, and phosphoric acid for U, respectively). (author)
Development of partitioning method
International Nuclear Information System (INIS)
A partitioning method has been developed under the concepts of separating radioactive nuclides from high-level waste according to their half-lives and radioactive toxicity, and of disposing of the waste safely. A partitioning test using about 18 liters (about 220 Ci) of the fuel reprocessing waste prepared at PNC was started in October 1982. In this test the behavior of the radioactive nuclides was clarified. The present paper describes the chemical behavior of the non-radioactive elements contained in the high-level liquid waste during extraction with di-isodecyl phosphoric acid (DIDPA). The distribution ratios of most metal ions for DIDPA were less than 0.05, except that those of Mo, Zr and Fe were higher than 7. Ferric ion could not be back-extracted with 4 M HNO3, but could with 0.5 M (COOH)2. In the extraction with DIDPA, the third phase, which clogs the settling banks or the flow paths in a mixer-settler, was formed when the ferric ion concentration was over 0.02 M. This unfavorable phenomenon, however, was found to be suppressed by diluting the ferric ion concentration to below 0.01 M or by reducing ferric ion to ferrous ion. (author)
Directory of Open Access Journals (Sweden)
Ling Wang
BACKGROUND: Mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. Thus, there is a great deal of interest in developing clinical drugs based on mTOR. In this paper, in silico models based on multiple scaffolds were developed to predict mTOR inhibitors or non-inhibitors. METHODS: First, 1,264 diverse compounds were collected and categorized as mTOR inhibitors and non-inhibitors. Two methods, recursive partitioning (RP) and naïve Bayesian (NB), were used to build combinatorial classification models of mTOR inhibitors versus non-inhibitors using physicochemical descriptors, fingerprints, and atom center fragments (ACFs). RESULTS: A total of 253 models were constructed, and the overall predictive accuracies of the best models were more than 90% for both the training set of 964 and the external test set of 300 diverse compounds. The scaffold hopping abilities of the best models were successfully evaluated by predicting 37 newly published mTOR inhibitors. Compared with the best RP and Bayesian models, the classifier based on ACFs and the Bayesian method shows comparable or slightly better performance and scaffold hopping ability. A web server was developed based on the ACFs and Bayesian method (http://rcdd.sysu.edu.cn/mtor/). This web server can be used to predict whether a compound is an mTOR inhibitor or non-inhibitor online. CONCLUSION: In silico models were constructed to predict mTOR inhibitors using recursive partitioning and naïve Bayesian methods, and a web server (mTOR Predictor) was also developed based on the best model results. Compound prediction or virtual screening can be carried out through our web server. Moreover, the favorable and unfavorable fragments for mTOR inhibitors obtained from the Bayesian classifiers will be helpful for lead optimization or the design of new mTOR inhibitors.
A Bayesian Approach to the Partitioning of Workflows
Chua, Freddy C
2015-01-01
When partitioning workflows in realistic scenarios, knowledge of the processing units is often vague or unknown. A naive approach to addressing this issue is to perform many controlled experiments for different workloads, each consisting of multiple trials, in order to estimate the mean and variance of each specific workload. Since this controlled experimental approach can be quite costly in terms of time and resources, we propose a variant of the Gibbs sampling algorithm that uses a sequence of Bayesian inference updates to estimate the processing characteristics of the processing units. Using the inferred characteristics of the processing units, we are able to determine the best way to split a workflow for processing it in parallel with the lowest expected completion time and least variance.
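A toy illustration of the inference step, under conjugate assumptions chosen here for brevity (Gamma priors on processing rates with exponential task times; the paper itself uses a Gibbs-sampling variant, not this closed-form shortcut):

```python
import numpy as np

# Gamma(a, b) prior on each unit's processing rate (tasks per second);
# with exponential task durations the posterior update is conjugate.
priors = {"unit_A": (2.0, 1.0), "unit_B": (2.0, 1.0)}

def update(prior, times):
    a, b = prior
    return (a + len(times), b + sum(times))  # Gamma posterior

# A few observed completion times replace costly controlled experiments.
priors["unit_A"] = update(priors["unit_A"], [0.9, 1.1, 1.0])
priors["unit_B"] = update(priors["unit_B"], [0.4, 0.6, 0.5])

# Posterior mean rates drive a proportional split of an N-task workflow.
rates = {u: a / b for u, (a, b) in priors.items()}
N = 100
n_A = round(N * rates["unit_A"] / (rates["unit_A"] + rates["unit_B"]))
print({"unit_A": n_A, "unit_B": N - n_A})
```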
Bayesian Methods and Universal Darwinism
Campbell, John
2010-01-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...
Development of partitioning method
International Nuclear Information System (INIS)
The present paper examines the possibility of improving the denitration and extraction processes by adding oxalic acid in the partitioning process, which has been developed for the purpose of separating high-level liquid waste (HLW) into a few groups of elements. First, the effect of oxalic acid in the denitration of HLW was examined with the aim of reducing the amount of precipitate formed during denitration. It was found possible to reduce the precipitation of molybdenum, zirconium and tellurium; however, some elements precipitated at every oxalic acid concentration tested. The addition of oxalic acid increased the amounts of precipitates of neodymium, taken as representative of the transuranic elements, and of strontium, a troublesome element because of its heat generation. In the extraction process with DIDPA (diisodecyl phosphoric acid), oxalic acid was expected to prevent the third-phase formation caused by iron by forming a complex with it. The results showed, however, that oxalic acid did not suppress the extraction of iron, and its addition had no effect on preventing third-phase formation. The influence of the presence of iron on the oxalate precipitation of rare earths was also examined in the present study. (author)
Development of partitioning method
International Nuclear Information System (INIS)
A literature survey was carried out on the amounts of natural resources, the behavior in the reprocessing process, and the separation and recovery methods of the platinum group elements and technetium contained in spent fuel. The essential results are described below. (1) The platinum group elements contained in spent fuel are limited in quantity compared with the total demand for them in Japan, and the estimated separation and recovery cost is rather high. Nevertheless, development of these techniques is considered very important because Japan relies almost entirely on foreign resources for these elements. (2) For recovery of these elements, studies of recovery from undissolved residue and from high-level liquid waste (HLLW) also appear to be required. (3) As separation and recovery methods, the following techniques are considered effective: lead extraction, liquid metal extraction, solvent extraction, ion exchange, adsorption, precipitation, distillation, electrolysis, or combinations of these. (4) Each of these methods, however, has both advantages and disadvantages, so the development of such processes largely depends on future work. (author) 94 refs
Bayesian methods for proteomic biomarker development
Directory of Open Access Journals (Sweden)
Belinda Hernández
2015-12-01
In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.
Zhang, Zhen; Lim, Chae Young; Maiti, Tapabrata; Kato, Seiji
2016-01-01
In climate change study, the infrared spectral signatures of climate change have recently been conceptually adopted, and widely applied to identifying and attributing atmospheric composition change. We propose a Bayesian hierarchical model for spatial clustering of the high-dimensional functional data based on the effects of functional covariates and local features. We couple the functional mixed-effects model with a generalized spatial partitioning method for: (1) producing spatially contigu...
Bayesian methods for measures of agreement
Broemeling, Lyle D
2009-01-01
Using WinBUGS to implement Bayesian inference for estimation and hypothesis testing, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process. The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software. Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
Bayesian Methods and Universal Darwinism
Campbell, John
2009-12-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints in the form of adaptations imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.
DEFF Research Database (Denmark)
Do, Duy Ngoc; Janss, L. L. G.; Strathe, Anders Bjerring;
Improvement of feed efficiency is essential in pig breeding and selection for reduced residual feed intake (RFI) is an option. The study applied Bayesian Power LASSO (BPL) models with different power parameter to investigate genetic architecture, to predict genomic breeding values, and to partition...... genomic variance for RFI and daily feed intake (DFI). A total of 1272 Duroc pigs had both genotypic and phenotypic records for these traits. Significant SNPs were detected on chromosome 1 (SSC 1) and SSC 14 for RFI and on SSC 1 for DFI. BPL models had similar accuracy and bias as GBLUP method but use of...
Approximate path integral methods for partition functions
International Nuclear Information System (INIS)
We review several approximate methods for evaluating quantum mechanical partition functions with the goal of obtaining a method that is easy to implement for multidimensional systems but accurately incorporates quantum mechanical corrections to classical partition functions. A particularly promising method is one based upon an approximation to the path integral expression of the partition function. In this method, the partition-function expression has the ease of evaluation of a classical partition function, and quantum mechanical effects are included by a weight function. Anharmonicity is included exactly in the classical Boltzmann average and local quadratic expansions around the centroid of the quantum paths yield a simple analytic form for the quantum weight function. We discuss the relationship between this expression and previous approximate methods and present numerical comparisons for model one-dimensional potentials and for accurate three-dimensional vibrational force fields for H2O and SO2
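To make the structure concrete, here is a sketch of a closely related quantum-corrected classical integral, the Feynman-Hibbs quadratic correction, rather than the paper's centroid-based weight function; units and the test potential are illustrative:

```python
import numpy as np

hbar = m = 1.0  # reduced units

def partition_fh(V, d2V, beta, x):
    """Classical-looking integral with a Feynman-Hibbs quantum weight:
    Q = (1/lambda) * integral exp(-beta * [V + beta*hbar^2/(24m) * V'']) dx."""
    lam = np.sqrt(2.0 * np.pi * beta * hbar**2 / m)  # thermal wavelength
    Veff = V(x) + beta * hbar**2 / (24.0 * m) * d2V(x)
    return np.trapz(np.exp(-beta * Veff), x) / lam

# Harmonic test, V = x^2/2: exact quantum Q = 1 / (2 sinh(beta/2)).
x = np.linspace(-10.0, 10.0, 4001)
beta = 1.0
print(partition_fh(lambda x: 0.5 * x**2, lambda x: np.ones_like(x), beta, x))
print(1.0 / (2.0 * np.sinh(beta / 2.0)))  # the two values agree closely
```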
Spatially Partitioned Embedded Runge--Kutta Methods
Ketcheson, David I.
2013-10-30
We study spatially partitioned embedded Runge--Kutta (SPERK) schemes for partial differential equations (PDEs), in which each of the component schemes is applied over a different part of the spatial domain. Such methods may be convenient for problems in which the smoothness of the solution or the magnitudes of the PDE coefficients vary strongly in space. We focus on embedded partitioned methods as they offer greater efficiency and avoid the order reduction that may occur in nonembedded schemes. We demonstrate that the lack of conservation in partitioned schemes can lead to nonphysical effects and propose conservative additive schemes based on partitioning the fluxes rather than the ordinary differential equations. A variety of SPERK schemes are presented, including an embedded pair suitable for the time evolution of fifth-order weighted nonoscillatory spatial discretizations. Numerical experiments are provided to support the theory.
Bayesian Methods for Medical Test Accuracy
Directory of Open Access Journals (Sweden)
Lyle D. Broemeling
2011-05-01
Bayesian methods for medical test accuracy are presented, beginning with the basic measures for tests with binary scores: true positive fraction, false positive fraction, positive predictive value, and negative predictive value. The Bayesian approach is taken because of its efficient use of prior information, and the analysis is executed with the Bayesian software package WinBUGS®. The ROC (receiver operating characteristic) curve gives the intrinsic accuracy of medical tests that have ordinal or continuous scores, and the Bayesian approach is illustrated with many examples from cancer and other diseases. Medical tests include X-ray, mammography, ultrasound, computed tomography, magnetic resonance imaging, nuclear medicine, and tests based on biomarkers, such as blood glucose values for diabetes. The presentation continues with more specialized methods suitable for measuring the accuracy of clinical studies that have verification bias, and of medical tests without a gold standard. Lastly, the review concludes with Bayesian methods for measuring the accuracy of combinations of two or more tests.
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development of...... Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation of...... inference algorithms based on the proposed prior representation for sparse channel estimation in orthogonal frequency-division multiplexing receivers. The inference algorithms, which are mainly obtained from variational Bayesian methods, exploit the underlying sparse structure of wireless channel responses......
Self-complementary plane partitions by Proctor's minuscule method
Kuperberg, Greg
1994-01-01
A method of Proctor [European J. Combin. 5 (1984), no. 4, 331-350] realizes the set of arbitrary plane partitions in a box and the set of symmetric plane partitions as bases of linear representations of Lie groups. We extend this method by realizing transposition and complementation of plane partitions as natural linear transformations of the representations, thereby enumerating symmetric plane partitions, self-complementary plane partitions, and transpose-complement plane partitions in a new...
Advanced Bayesian Method for Planetary Surface Navigation
Center, Julian
2015-01-01
Autonomous Exploration, Inc., has developed an advanced Bayesian statistical inference method that leverages current computing technology to produce a highly accurate surface navigation system. The method combines dense stereo vision and high-speed optical flow to implement visual odometry (VO) to track faster rover movements. The Bayesian VO technique improves performance by using all image information rather than corner features only. The method determines what can be learned from each image pixel and weighs the information accordingly. This capability improves performance in shadowed areas that yield only low-contrast images. The error characteristics of the visual processing are complementary to those of a low-cost inertial measurement unit (IMU), so the combination of the two capabilities provides highly accurate navigation. The method increases NASA mission productivity by enabling faster rover speed and accuracy. On Earth, the technology will permit operation of robots and autonomous vehicles in areas where the Global Positioning System (GPS) is degraded or unavailable.
Bayesian Methods for Radiation Detection and Dosimetry
International Nuclear Information System (INIS)
We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high- and low-activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities; graphs of the densities show the uncertainty in pictorial form. We applied stochastic processes in a method to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual from radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities for compartmental activities for a two-compartment catenary model at different times. We also calculated the average activities and their standard deviations for a simple two-compartment model
Indian Academy of Sciences (India)
Tao Wei; Xiao Xiao Jin; Tian Jun Xu
2013-08-01
To understand the phylogenetic position of Bostrychus sinensis in Eleotridae and the phylogenetic relationships of the family, we determined the nucleotide sequence of the mitochondrial (mt) genome of Bostrychus sinensis. It is the first complete mitochondrial genome sequence of the genus Bostrychus. The entire mtDNA sequence was 16508 bp in length with a standard set of 13 protein-coding genes, 22 transfer RNA genes (tRNAs), two ribosomal RNA genes (rRNAs) and a noncoding control region. The mitochondrial genome of B. sinensis shares common features with those of other bony fishes with respect to gene arrangement, base composition, and tRNA structures. Phylogenetic hypotheses within the Eleotridae have been controversial at the genus level. We used the mitochondrial cytochrome b (cytb) gene sequence to examine the phylogenetic relationships of the Eleotridae using a partitioned Bayesian method. When specific models and parameter estimates were assumed for the partitions of the total data, the harmonic mean -lnL was improved. The phylogenetic analysis supported the monophyly of Hypseleotris and Gobiomorphs. In addition, Bostrychus was most closely related to Ophiocara, and Philypnodon is the sister of Microphlypnus, based on the current datasets. Further, extensive taxonomic sampling and more molecular information are needed to confirm the phylogenetic relationships within the Eleotridae.
Bayesian methods in risk Assessment
International Nuclear Information System (INIS)
The need for a consistent framework for the analysis of the safety of large nuclear power plants, not provided by conventional methods of statistics and reliability theory, prompted the present article. The quantification of uncertainties depends crucially on the particular way the assessor views probability. Two principal schools of thought are the subjectivist approach advocated by de Finetti and the frequentist school advocated by von Mises. The point of view of the author is the subjective one. The foundations of the proposed approach and a discussion of several topics relevant to risk assessment follow, with applications to the specialization of generic data for site-specific risk studies, the assessment of the frequency of fires in nuclear plant compartments, and the use of expert opinion in risk assessments
parallelMCMCcombine: an R package for bayesian methods for big data and analytics.
Directory of Open Access Journals (Sweden)
Alexey Miroshnikov
Recent advances in big data and analytics research have provided a wealth of large data sets that are too big to be analyzed in their entirety, due to restrictions on computer memory or storage size. New Bayesian methods have been developed for data sets that are large only due to large sample sizes. These methods partition big data sets into subsets and perform independent Bayesian Markov chain Monte Carlo analyses on the subsets. The methods then combine the independent subset posterior samples to estimate a posterior density given the full data set. These approaches were shown to be effective for Bayesian models including logistic regression models, Gaussian mixture models and hierarchical models. Here, we introduce the R package parallelMCMCcombine which carries out four of these techniques for combining independent subset posterior samples. We illustrate each of the methods using a Bayesian logistic regression model for simulation data and a Bayesian Gamma model for real data; we also demonstrate features and capabilities of the R package. The package assumes the user has carried out the Bayesian analysis and has produced the independent subposterior samples outside of the package. The methods are primarily suited to models with unknown parameters of fixed dimension that exist in continuous parameter spaces. We envision this tool will allow researchers to explore the various methods for their specific applications and will assist future progress in this rapidly developing field.
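The package's own R function names are not reproduced here; as a language-neutral sketch of one combination rule of this general type (consensus-style inverse-variance weighting of subposterior draws), consider:

```python
import numpy as np

def consensus_combine(subposteriors):
    """Precision-weighted average of M subset posterior sample arrays.

    subposteriors: list of 1-D arrays of equal length, one per data subset.
    Returns combined draws approximating the full-data posterior
    (one of several possible combination rules)."""
    S = np.column_stack(subposteriors)        # shape (n_samples, M)
    w = 1.0 / np.var(S, axis=0)               # inverse-variance weights
    return (S * w).sum(axis=1) / w.sum()

# Toy check: normal subposteriors from two halves of a data set.
rng = np.random.default_rng(1)
sub1 = rng.normal(0.9, 0.2, 5000)
sub2 = rng.normal(1.1, 0.2, 5000)
print(consensus_combine([sub1, sub2]).mean())  # close to 1.0
```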
Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods
Zhu, Weixuan
2016-01-01
The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of two-parameter Poisson-Dirichlet processes. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...
Nested partitions method, theory and applications
Shi, Leyuan
2009-01-01
There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...
New parallel SOR method by domain partitioning
Energy Technology Data Exchange (ETDEWEB)
Xie, Dexuan [Courant Inst. of Mathematical Sciences New York Univ., NY (United States)
1996-12-31
In this paper, we propose and analyze a new parallel SOR method, the PSOR method, formulated by using domain partitioning together with an interprocessor data-communication technique. For the 5-point approximation to the Poisson equation on a square, we show that the ordering of the PSOR based on the strip partition leads to a consistently ordered matrix, and hence the PSOR and the SOR using the row-wise ordering have the same convergence rate. However, in general, the ordering used in PSOR may not be 'consistently ordered'. So, there is a need to analyze the convergence of PSOR directly. In this paper, we present a PSOR theory, and show that the PSOR method can have the same asymptotic rate of convergence as the corresponding sequential SOR method for a wide class of linear systems in which the matrix is 'consistently ordered'. Finally, we demonstrate the parallel performance of the PSOR method on four different message-passing multiprocessors (a KSR1, the Intel Delta, an Intel Paragon and an IBM SP2), along with a comparison with the point Red-Black and four-color SOR methods.
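A sequential sketch of a strip-partitioned SOR sweep for the 5-point Poisson stencil (in real PSOR each strip belongs to one processor and boundary rows are exchanged between sweeps; here the strips are simply visited in turn):

```python
import numpy as np

def psor_poisson(f, h, omega=1.8, strips=4, sweeps=200):
    """Strip-partitioned SOR for -Laplace(u) = f on the unit square with
    zero Dirichlet boundary, 5-point stencil. Each strip is swept in its
    own local natural order, mimicking one processor per strip."""
    n = f.shape[0]
    u = np.zeros_like(f)
    bounds = np.linspace(1, n - 1, strips + 1, dtype=int)
    for _ in range(sweeps):
        for s in range(strips):                     # one "processor" each
            for i in range(bounds[s], bounds[s + 1]):
                for j in range(1, n - 1):
                    gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1]
                                 + u[i, j+1] + h * h * f[i, j])
                    u[i, j] += omega * (gs - u[i, j])  # SOR update
    return u

n = 33
h = 1.0 / (n - 1)
f = np.ones((n, n))
print(psor_poisson(f, h)[n // 2, n // 2])  # center value, about 0.074
```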
Bayesian individualization via sampling-based methods.
Wakefield, J
1996-02-01
We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
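A minimal sketch of the Monte Carlo expected-loss step with a quadratic loss and a hypothetical one-compartment steady-state model (parameter values and the target are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Posterior draws of the patient's clearance (L/h), as would be obtained
# from sampling given sparse on-line concentration measurements.
cl_samples = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=2000)

target = 10.0                       # target steady-state conc. (mg/L)
doses = np.linspace(10, 120, 111)   # candidate infusion rates (mg/h)

def expected_loss(dose):
    css = dose / cl_samples         # steady state: Css = R / CL
    return np.mean((css - target) ** 2)  # Monte Carlo quadratic loss

best = min(doses, key=expected_loss)
print(best)                         # dose minimizing the expected loss
```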
Bayesian non- and semi-parametric methods and applications
Rossi, Peter
2014-01-01
This book reviews and develops Bayesian non-parametric and semi-parametric methods for applications in microeconometrics and quantitative marketing. Most econometric models used in microeconomics and marketing applications involve arbitrary distributional assumptions. As more data becomes available, a natural desire to provide methods that relax these assumptions arises. Peter Rossi advocates a Bayesian approach in which specific distributional assumptions are replaced with more flexible distributions based on mixtures of normals. The Bayesian approach can use either a large but fixed number
Bayesian clustering of DNA sequences using Markov chains and a stochastic partition model.
Jääskinen, Väinö; Parkkinen, Ville; Cheng, Lu; Corander, Jukka
2014-02-01
In many biological applications it is necessary to cluster DNA sequences into groups that represent underlying organismal units, such as named species or genera. In metagenomics this grouping needs typically to be achieved on the basis of relatively short sequences which contain different types of errors, making the use of a statistical modeling approach desirable. Here we introduce a novel method for this purpose by developing a stochastic partition model that clusters Markov chains of a given order. The model is based on a Dirichlet process prior and we use conjugate priors for the Markov chain parameters which enables an analytical expression for comparing the marginal likelihoods of any two partitions. To find a good candidate for the posterior mode in the partition space, we use a hybrid computational approach which combines the EM-algorithm with a greedy search. This is demonstrated to be faster and yield highly accurate results compared to earlier suggested clustering methods for the metagenomics application. Our model is fairly generic and could also be used for clustering of other types of sequence data for which Markov chains provide a reasonable way to compress information, as illustrated by experiments on shotgun sequence type data from an Escherichia coli strain. PMID:24246289
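Because the Dirichlet priors are conjugate to the transition counts, the marginal likelihood of any clustering is available in closed form; a sketch of the row-wise computation for order-1 chains, with hypothetical counts:

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, alpha=1.0):
    """Log marginal likelihood of a transition-count matrix under
    independent Dirichlet(alpha) priors on each row of the transition
    matrix; conjugacy integrates the parameters out analytically."""
    counts = np.asarray(counts, dtype=float)
    k = counts.shape[1]
    lm = 0.0
    for row in counts:
        lm += gammaln(k * alpha) - gammaln(k * alpha + row.sum())
        lm += np.sum(gammaln(alpha + row) - gammaln(alpha))
    return lm

# Compare two partitions of sequences A and B: one cluster vs two.
A = np.array([[8, 2], [1, 9]])
B = np.array([[7, 3], [2, 8]])
print(log_marginal(A + B), log_marginal(A) + log_marginal(B))
```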
Bayesian method for system reliability assessment of overlapping pass/fail data
Institute of Scientific and Technical Information of China (English)
Zhipeng Hao; Shengkui Zeng; Jianbin Guo
2015-01-01
For high-reliability and long-life systems, system pass/fail data are often rare. Integrating lower-level data, such as data drawn from subsystem or component pass/fail testing, Bayesian analysis can improve the precision of the system reliability assessment. If the multi-level pass/fail data are overlapping, one challenging problem for the Bayesian analysis is to develop a likelihood function. Since the computational burden of the existing methods makes them infeasible for multi-component systems, this paper proposes an improved Bayesian approach for system reliability assessment in light of overlapping data. This approach includes three steps: firstly, searching for feasible paths based on the binary decision diagram; then, screening feasible points based on space partition and constraint decomposition; and finally, simplifying the likelihood function. An example of a satellite rolling control system demonstrates the feasibility and the efficiency of the proposed approach.
Computational methods for Bayesian model choice
Robert, Christian P.; Wraith, Darren
2009-01-01
In this note, we shortly survey some recent approaches on the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.
Directory of Open Access Journals (Sweden)
Alexander Tilley
The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of 15N and 13C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinus perezi on Glovers Reef Atoll, Belize. Bayesian stable isotope mixing models were used to investigate prey choice as well as viable diet-tissue discrimination factors for use with stingrays. Stingray δ15N values showed the greatest variation and a positive relationship with size, with an isotopic niche width approximately twice that of the sympatric species. Shark species exhibited comparatively restricted δ15N values and greater δ13C variation, with very little overlap of stingray niche space. Mixing models suggest bivalves and annelids are proportionally more important prey in the stingray diet than crustaceans and teleosts at Glovers Reef, in contrast to all but one published diet study using stomach contents from other locations. Incorporating gut contents information from the literature, we suggest diet-tissue discrimination factor values of Δ15N ≈ 2.7‰ and Δ13C ≈ 0.9‰ for stingrays in the absence of validation experiments. The wide trophic niche and lower trophic level exhibited by stingrays compared to sympatric sharks supports their putative role as important base stabilisers in benthic systems, with the potential to absorb trophic perturbations through numerous opportunistic prey interactions.
Tilley, Alexander; López-Angarita, Juliana; Turner, John R
2013-01-01
The trophic ecology of epibenthic mesopredators is not well understood in terms of prey partitioning with sympatric elasmobranchs or their effects on prey communities, yet the importance of omnivores in community trophic dynamics is being increasingly realised. This study used stable isotope analysis of (15)N and (13)C to model diet composition of wild southern stingrays Dasyatis americana and compare trophic niche space to nurse sharks Ginglymostoma cirratum and Caribbean reef sharks Carcharhinus perezi on Glovers Reef Atoll, Belize. Bayesian stable isotope mixing models were used to investigate prey choice as well as viable diet-tissue discrimination factors for use with stingrays. Stingray δ(15)N values showed the greatest variation and a positive relationship with size, with an isotopic niche width approximately twice that of sympatric species. Shark species exhibited comparatively restricted δ(15)N values and greater δ(13)C variation, with very little overlap of stingray niche space. Mixing models suggest bivalves and annelids are proportionally more important prey in the stingray diet than crustaceans and teleosts at Glovers Reef, in contrast to all but one published diet study using stomach contents from other locations. Incorporating gut contents information from the literature, we suggest diet-tissue discrimination factor values of Δ(15)N ≈ 2.7‰ and Δ(13)C ≈ 0.9‰ for stingrays in the absence of validation experiments. The wide trophic niche and lower trophic level exhibited by stingrays compared to sympatric sharks supports their putative role as important base stabilisers in benthic systems, with the potential to absorb trophic perturbations through numerous opportunistic prey interactions. PMID:24236144
Bayesian regularisation methods in a hybrid MLP-HMM system.
Renals, Steve; MacKay, David
1993-01-01
We have applied Bayesian regularisation methods to multi-layer perceptron (MLP) training in the context of a hybrid MLP-HMM (hidden Markov model) continuous speech recognition system. The Bayesian framework adopted here allows an objective setting of the regularisation parameters, according to the training data. Experiments have been carried out on the ARPA Resource Management database.
A Bayesian method for microseismic source inversion
Pugh, D. J.; White, R. S.; Christie, P. A. F.
2016-08-01
Earthquake source inversion is highly dependent on location determination and velocity models. Uncertainties in both the model parameters and the observations need to be rigorously incorporated into an inversion approach. Here, we show a probabilistic Bayesian method that allows formal inclusion of the uncertainties in the moment tensor inversion. This method allows the combination of different sets of far-field observations, such as P-wave and S-wave polarities and amplitude ratios, into one inversion. Additional observations can be included by deriving a suitable likelihood function from the uncertainties. This inversion produces samples from the source posterior probability distribution, including a best-fitting solution for the source mechanism and associated probability. The inversion can be constrained to the double-couple space or allowed to explore the gamut of moment tensor solutions, allowing volumetric and other non-double-couple components. The posterior probability of the double-couple and full moment tensor source models can be evaluated from the Bayesian evidence, using samples from the likelihood distributions for the two source models, producing an estimate of whether or not a source is double-couple. Such an approach is ideally suited to microseismic studies where there are many sources of uncertainty and it is often difficult to produce reliability estimates of the source mechanism, although this can be true of many other cases. Using full-waveform synthetic seismograms, we also show the effects of noise, location, network distribution and velocity model uncertainty on the source probability density function. The noise has the largest effect on the results, especially as it can affect other parts of the event processing. This uncertainty can lead to erroneous non-double-couple source probability distributions, even when no other uncertainties exist. Although including amplitude ratios can improve the constraint on the source probability
Bayesian data analysis in population ecology: motivations, methods, and benefits
Dorazio, Robert
2016-01-01
During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.
Approximation methods for efficient learning of Bayesian networks
Riggelsen, C
2008-01-01
This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.
An Efficient Bayesian Iterative Method for Solving Linear Systems
Institute of Scientific and Technical Information of China (English)
Deng DING; Kin Sio FONG; Ka Hou CHAN
2012-01-01
This paper is concerned with statistical methods for solving general linear systems. After a brief review of the Bayesian perspective on inverse problems, a new and efficient iterative method for general linear systems is proposed from a Bayesian perspective. The convergence of this iterative method is proved, and the corresponding error analysis is studied. Finally, numerical experiments are given to support the efficiency of this iterative method, and some conclusions are drawn.
HEURISTIC DISCRETIZATION METHOD FOR BAYESIAN NETWORKS
Directory of Open Access Journals (Sweden)
Mariana D.C. Lima
2014-01-01
Bayesian Network (BN) is a classification technique widely used in Artificial Intelligence. Its structure is a Directed Acyclic Graph (DAG) used to model the association of categorical variables. However, in cases where the variables are numerical, a prior discretization is necessary. Discretization methods are usually based on a statistical approach using the data distribution, such as division by quartiles. In this article we present a discretization using a heuristic that identifies events called peaks and valleys. A genetic algorithm was used to identify these events, with the objective function being the minimization of the error between the BN estimate and the actual value of the numerical output variable. The BN was modeled from a database of bit rate of penetration in the Brazilian pre-salt layer, with five numerical variables and one categorical variable, using both the proposed discretization and the division of the data by quartiles. The results show that the proposed heuristic discretization has higher accuracy than the quartile discretization.
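A rough sketch of the peak-and-valley idea (the article identifies these events with a genetic algorithm; here, for illustration only, the valleys are read directly off a histogram of synthetic data):

```python
import numpy as np

def valley_cutpoints(x, bins=20):
    """Place discretization cut points at 'valleys' (local minima)
    between 'peaks' of the empirical distribution."""
    hist, edges = np.histogram(x, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return [centers[i] for i in range(1, bins - 1)
            if hist[i] < hist[i - 1] and hist[i] <= hist[i + 1]]

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 1, 500)])
print(valley_cutpoints(x))  # a cut point near the valley around x = 3
```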
Application of Bayesian Network Learning Methods to Land Resource Evaluation
Institute of Scientific and Technical Information of China (English)
HUANG Jiejun; HE Xiaorong; WAN Youchuan
2006-01-01
A Bayesian network has a powerful ability for reasoning and semantic representation; it combines qualitative and quantitative analysis, and prior knowledge with observed data, and provides an effective way to deal with prediction, classification and clustering. This paper first presents an overview of Bayesian networks and their characteristics, discusses how to learn a Bayesian network structure from given data, and then constructs a Bayesian network model for land resource evaluation using expert knowledge and the dataset. On the test dataset, the evaluation accuracy is 87.5% and the Kappa index is 0.826. These results show that the method is feasible and efficient, and indicate that the Bayesian network is a promising approach for land resource evaluation.
A novel multimode process monitoring method integrating LDRSKM with Bayesian inference
Institute of Scientific and Technical Information of China (English)
Shi-jin REN; Yin LIANG; Xiang-jun ZHAO; Mao-yun YANG
2015-01-01
A local discriminant regularized soft k-means (LDRSKM) method with Bayesian inference is proposed for multimode process monitoring. LDRSKM extends the regularized soft k-means algorithm by exploiting the local and non-local geometric information of the data and generalized linear discriminant analysis to provide a better and more meaningful data partition. LDRSKM can perform clustering and subspace selection simultaneously, enhancing the separability of data residing in different clusters. With the data partition obtained, kernel support vector data description (KSVDD) is used to establish the monitoring statistics and control limits. Two Bayesian inference based global fault detection indicators are then developed using the local monitoring results associated with the principal and residual subspaces. Based on clustering analysis, Bayesian inference and manifold learning methods, the within- and cross-mode correlations and local geometric information can be exploited to enhance monitoring performance for nonlinear and non-Gaussian processes. The effectiveness and efficiency of the proposed method are evaluated using the Tennessee Eastman benchmark process.
Advanced Bayesian Methods for Lunar Surface Navigation Project
National Aeronautics and Space Administration — The key innovation of this project is the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with an...
Advanced Bayesian Methods for Lunar Surface Navigation Project
National Aeronautics and Space Administration — The key innovation of this project will be the application of advanced Bayesian methods to integrate real-time dense stereo vision and high-speed optical flow with...
A survey of current Bayesian gene mapping methods
Molitor John; Marjoram Paul; Conti David; Thomas Duncan
2004-01-01
Recently, there has been much interest in the use of Bayesian statistical methods for performing genetic analyses. Many of the computational difficulties previously associated with Bayesian analysis, such as multidimensional integration, can now be easily overcome using modern high-speed computers and Markov chain Monte Carlo (MCMC) methods. Much of this new technology has been used to perform gene mapping, especially through the use of multi-locus linkage disequilibrium techniques. ...
Proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology
Hortúa, Héctor J
2014-01-01
These are the proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology, held in Bogotá D.C., Colombia, June 9-13, 2014. The first astrostatistics school was the first event in Colombia where statisticians and cosmologists from several universities in Bogotá met to discuss the statistical methods applied to cosmology, especially the use of Bayesian statistics in the study of the Cosmic Microwave Background (CMB), Baryonic Acoustic Oscillations (BAO), Large Scale Structure (LSS) and weak lensing.
A new method for counting trees with vertex partition
Institute of Scientific and Technical Information of China (English)
2008-01-01
A direct and elementary method is provided in this paper for counting trees with vertex partition, instead of the recursion, generating-function, functional-equation, Lagrange-inversion, and matrix methods used before.
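As one classical instance of counting trees by a prescribed vertex degree sequence, the Prüfer correspondence gives (n-2)!/∏(d_i-1)! labelled trees; a short check (added for illustration, not taken from the paper):

```python
from math import factorial

def trees_with_degrees(degrees):
    """Labelled trees on n vertices with degree sequence d_1..d_n:
    (n-2)! / prod((d_i - 1)!), provided sum(d_i) = 2(n-1) and d_i >= 1."""
    n = len(degrees)
    assert sum(degrees) == 2 * (n - 1) and min(degrees) >= 1
    count = factorial(n - 2)
    for d in degrees:
        count //= factorial(d - 1)
    return count

print(trees_with_degrees([3, 1, 1, 1]))  # 1: the star centred at vertex 1
print(trees_with_degrees([2, 2, 1, 1]))  # 2: the two paths with ends 3, 4
```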
Algebraic methods for evaluating integrals In Bayesian statistics
Lin, Shaowei
2011-01-01
The accurate evaluation of marginal likelihood integrals is a difficult fundamental problem in Bayesian inference that has important applications in machine learning and computational biology. Following the recent success of algebraic statistics in frequentist inference and inspired by Watanabe's foundational approach to singular learning theory, the goal of this dissertation is to study algebraic, geometric and combinatorial methods for computing Bayesian integrals effectively, and to explor...
The partition problem: case studies in Bayesian screening for time-varying model structure
Liu, Zesong; Windle, Jesse; Scott, James G.
2011-01-01
This paper presents two case studies of data sets where the main inferential goal is to characterize time-varying patterns in model structure. Both of these examples are seen to be general cases of the so-called "partition problem," where auxiliary information (in this case, time) defines a partition over sample space, and where different models hold for each element of the partition. In the first case study, we identify time-varying graphical structure in the covariance matrix of asset retur...
The bootstrap and Bayesian bootstrap method in assessing bioequivalence
International Nuclear Information System (INIS)
Parametric methods for assessing individual bioequivalence (IBE) may rest on the hypothesis that the PK responses are normally distributed. A nonparametric method for evaluating IBE is the bootstrap. In 2001, the United States Food and Drug Administration (FDA) proposed a draft guidance. The purpose of this article is to evaluate the IBE between test drug and reference drug by the bootstrap and Bayesian bootstrap methods. We study the power of bootstrap test procedures and of the parametric test procedures in FDA (2001). We find that the Bayesian bootstrap method performs best.
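A sketch of the Bayesian bootstrap mechanic on hypothetical log-AUC data (Dirichlet-weighted means instead of resampling; the actual IBE criterion in the FDA draft guidance involves additional variance terms):

```python
import numpy as np

rng = np.random.default_rng(4)

def bayesian_bootstrap_mean_diff(test, ref, draws=10000):
    """Bayesian bootstrap: weight observations with Dirichlet(1,...,1)
    draws instead of resampling, giving posterior draws of the mean
    difference between test and reference responses."""
    t, r = np.asarray(test), np.asarray(ref)
    wt = rng.dirichlet(np.ones(len(t)), size=draws)
    wr = rng.dirichlet(np.ones(len(r)), size=draws)
    return wt @ t - wr @ r

# Illustrative log-AUC values for test and reference formulations.
test = rng.normal(4.00, 0.25, 20)
ref = rng.normal(4.05, 0.25, 20)
diff = bayesian_bootstrap_mean_diff(test, ref)
print(np.percentile(diff, [2.5, 97.5]))  # posterior interval for the difference
```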
Baltic sea algae analysis using Bayesian spatial statistics methods
Directory of Open Access Journals (Sweden)
Eglė Baltmiškytė
2013-03-01
Spatial statistics is the field of statistics dealing with the analysis of spatially spread data. Recently, Bayesian methods have often been applied to such statistical analysis. A spatial data model for predicting algae quantity in the Baltic Sea is built and described in this article. Black Carrageen is the dependent variable, and depth, sand, pebble, and boulders are the independent variables in the described model. Two models with different covariance functions (Gaussian and exponential) are built to determine which fits best for algae quantity prediction. Unknown model parameters are estimated and the Bayesian kriging predictive posterior distribution is computed in the OpenBUGS modeling environment using Bayesian spatial statistics methods.
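A one-dimensional sketch contrasting the two covariance families compared in the article (simple zero-mean kriging with fixed, hypothetical parameters; the article estimates the parameters and computes the Bayesian kriging posterior in OpenBUGS):

```python
import numpy as np

def kriging_predict(X, y, Xs, kind="exponential",
                    sigma2=1.0, ell=1.0, nugget=1e-6):
    """Simple kriging (zero-mean GP) with an exponential or Gaussian
    covariance function; all hyperparameters fixed for illustration."""
    def k(a, b):
        d = np.abs(a[:, None] - b[None, :])
        if kind == "exponential":
            return sigma2 * np.exp(-d / ell)
        return sigma2 * np.exp(-0.5 * (d / ell) ** 2)   # Gaussian
    K = k(X, X) + nugget * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

X = np.array([0.0, 1.0, 2.5, 4.0])     # e.g. depths at sampled sites
y = np.array([0.2, 0.8, 0.5, 0.1])     # algae quantity (illustrative)
Xs = np.linspace(0, 4, 9)
print(kriging_predict(X, y, Xs, kind="exponential"))
print(kriging_predict(X, y, Xs, kind="gaussian"))
```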
Constructing Bayesian formulations of sparse kernel learning methods.
Cawley, Gavin C; Talbot, Nicola L C
2005-01-01
We present here a simple technique that simplifies the construction of Bayesian treatments of a variety of sparse kernel learning algorithms. An incomplete Cholesky factorisation is employed to modify the dual parameter space such that the Gaussian prior over the dual model parameters is whitened. The regularisation term then corresponds to the usual weight-decay regulariser, allowing the Bayesian analysis to proceed via the evidence framework of MacKay. There is, in addition, a useful by-product of the incomplete Cholesky factorisation algorithm: it also identifies a subset of the training data forming an approximate basis for the entire dataset in the kernel-induced feature space, resulting in a sparse model. Bayesian treatments of the kernel ridge regression (KRR) algorithm, with both constant and heteroscedastic (input-dependent) variance structures, and of kernel logistic regression (KLR) are provided as illustrative examples of the proposed method, which we hope will be more widely applicable. PMID:16085387
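The following Python sketch illustrates the key ingredient under simplified assumptions (an RBF kernel and a toy regression task; this is not the authors' implementation): a pivoted incomplete Cholesky factorisation K ≈ GGᵀ whose pivots index an approximate basis subset of the training data, after which the whitened dual problem reduces to ridge regression with a weight-decay penalty:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def incomplete_cholesky(K, tol=1e-6):
    """Pivoted incomplete Cholesky: K ~= G @ G.T. The pivot indices form
    an approximate basis for the data in the kernel-induced feature space."""
    n = K.shape[0]
    G = np.zeros((n, n))
    d = np.diag(K).copy()
    pivots = []
    for j in range(n):
        i = int(np.argmax(d))
        if d[i] < tol:
            break
        pivots.append(i)
        G[:, j] = (K[:, i] - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
        d -= G[:, j] ** 2
    return G[:, :len(pivots)], pivots

# Toy regression problem: y = sin(x) + noise.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

G, basis = incomplete_cholesky(rbf(X, X))
# In the whitened dual space the Gaussian prior is isotropic, so the MAP
# solution is ridge regression (weight decay) on the features G.
lam = 0.1
w = np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ y)
rmse = np.sqrt(np.mean((G @ w - y) ** 2))
print(f"basis of {len(basis)} of {len(X)} points; train RMSE {rmse:.3f}")
```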
Modelling LGD for unsecured retail loans using Bayesian methods
Katarzyna Bijak; Thomas, Lyn C
2015-01-01
Loss Given Default (LGD) is the loss borne by the bank when a customer defaults on a loan. LGD for unsecured retail loans is often found difficult to model. In the frequentist (non-Bayesian) two-step approach, two separate regression models are estimated independently, which is potentially problematic when they are combined to make predictions about LGD. The result is a point estimate of LGD for each loan. Alternatively, LGD can be modelled using Bayesian methods. In the B...
Approximation methods for the partition functions of anharmonic systems
International Nuclear Information System (INIS)
The analytical approximations for the classical, quantum mechanical and reduced partition functions of a diatomic molecule oscillating internally under the influence of the Morse potential have been derived, and their convergence has been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on 2-D and 3-D model systems agree well with exact numerical calculations.
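As a numerical illustration of the bound-state summation involved (a sketch in reduced units with ħ = k_B = 1 and invented Morse parameters; this is not the paper's perturbative treatment), the quantum partition function of a Morse oscillator can be summed directly over its finite ladder of bound levels and compared with the harmonic result:

```python
import numpy as np

omega, De, T = 1.0, 20.0, 2.0   # hypothetical well frequency, depth, temperature
beta = 1.0 / T

# Morse bound-state energies measured from the well bottom:
#   E_n = omega*(n + 1/2) - [omega*(n + 1/2)]^2 / (4*De),  n = 0 .. n_max,
# where n_max = floor(2*De/omega - 1/2) is the last bound level.
n = np.arange(int(2 * De / omega - 0.5) + 1)
E = omega * (n + 0.5) - (omega * (n + 0.5)) ** 2 / (4 * De)
Z_morse = np.exp(-beta * E).sum()

# Harmonic-oscillator partition function (geometric series) for comparison.
Z_harm = np.exp(-beta * omega / 2) / (1 - np.exp(-beta * omega))
print(f"{n.size} bound states; Z_Morse = {Z_morse:.4f}, Z_harmonic = {Z_harm:.4f}")
```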
Symplectic trigonometrically fitted partitioned Runge-Kutta methods
International Nuclear Information System (INIS)
The numerical integration of Hamiltonian systems is considered in this Letter. Trigonometrically fitted symplectic partitioned Runge-Kutta methods of second, third and fourth order are constructed. The methods are tested on the numerical integration of the harmonic oscillator, the two-body problem and an orbital problem studied by Stiefel and Bettis.
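For readers unfamiliar with partitioned Runge-Kutta integrators, the sketch below (plain Störmer-Verlet, a second-order symplectic partitioned RK scheme, not the trigonometrically fitted methods of the Letter) shows the characteristic bounded energy error on the harmonic oscillator test problem:

```python
import numpy as np

def verlet(q, p, h, steps, dVdq):
    """Stoermer-Verlet: a 2nd-order symplectic partitioned Runge-Kutta
    method for separable Hamiltonians H(q, p) = p**2/2 + V(q)."""
    traj = np.empty((steps + 1, 2))
    traj[0] = q, p
    for k in range(steps):
        p_half = p - 0.5 * h * dVdq(q)   # half kick on the momentum
        q = q + h * p_half               # drift on the position
        p = p_half - 0.5 * h * dVdq(q)   # second half kick
        traj[k + 1] = q, p
    return traj

# Harmonic oscillator V(q) = q^2/2: the energy error stays bounded
# instead of drifting, the hallmark of symplectic integration.
traj = verlet(q=1.0, p=0.0, h=0.1, steps=10000, dVdq=lambda q: q)
E = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
print(f"energy variation over 10000 steps: {E.max() - E.min():.2e}")
```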
Effective classification of 3D image data using partitioning methods
Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran
2002-03-01
We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.
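A minimal Python sketch of the first method's recursion follows (synthetic data and thresholds are invented, and a plain two-sample t-test stands in for whatever statistical tests the authors used): a hyper-rectangle is kept as an attribute when its ROI voxel count separates the classes, and is split into octants when it does not but is still large enough:

```python
import numpy as np
from scipy import stats

def discriminative_boxes(volumes, labels, box, alpha=0.01, min_side=4):
    """Recursively partition a 3-D volume into hyper-rectangles. A box is
    kept as an attribute if its ROI voxel counts separate the two classes
    (judged here by a two-sample t-test), and split into octants if it is
    not discriminative but still large enough."""
    (x0, x1), (y0, y1), (z0, z1) = box
    counts = np.array([v[x0:x1, y0:y1, z0:z1].sum() for v in volumes])
    _, p = stats.ttest_ind(counts[labels == 0], counts[labels == 1])
    if p < alpha:                                  # discriminative: keep
        return [box]
    if min(x1 - x0, y1 - y0, z1 - z0) < 2 * min_side:
        return []                                  # too small to split
    xm, ym, zm = (x0 + x1) // 2, (y0 + y1) // 2, (z0 + z1) // 2
    boxes = []
    for bx in ((x0, xm), (xm, x1)):
        for by in ((y0, ym), (ym, y1)):
            for bz in ((z0, zm), (zm, z1)):
                boxes += discriminative_boxes(volumes, labels,
                                              (bx, by, bz), alpha, min_side)
    return boxes

# Synthetic data: class 1 volumes carry an extra ROI blob in one corner.
rng = np.random.default_rng(2)
volumes, labels = [], []
for lab in [0] * 20 + [1] * 20:
    v = (rng.random((16, 16, 16)) < 0.02).astype(float)
    if lab:
        v[2:6, 2:6, 2:6] += rng.random((4, 4, 4)) < 0.5
    volumes.append(v); labels.append(lab)
found = discriminative_boxes(volumes, np.array(labels),
                             ((0, 16), (0, 16), (0, 16)))
print(f"{len(found)} discriminative hyper-rectangles found")
```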
Complexity analysis of accelerated MCMC methods for Bayesian inversion
Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M.
2013-08-01
The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
Application of an efficient Bayesian discretization method to biomedical data
Directory of Open Access Journals (Sweden)
Gopalakrishnan Vanathi
2011-07-01
Background Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD) method for optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components, namely, a Bayesian score to evaluate discretizations and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI) discretization method, which is commonly used for discretization. Results On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performances of the C4.5 classifier and the naïve Bayes classifier were statistically significantly better when the predictor variables were discretized using EBD over FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI, and produced slightly more complex discretizations than FI. Conclusions On a range of biomedical datasets, a Bayesian discretization method (EBD) yielded better classification performance and stability but was less robust than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.
Methods for Bayesian power spectrum inference with galaxy surveys
Jasche, Jens
2013-01-01
We derive and implement a full Bayesian large scale structure inference method aiming at precision recovery of the cosmological power spectrum from galaxy redshift surveys. Our approach improves over previous Bayesian methods by performing a joint inference of the three dimensional density field, the cosmological power spectrum, luminosity dependent galaxy biases and corresponding normalizations. We account for all joint and correlated uncertainties between all inferred quantities. Classes of galaxies with different biases are treated as separate sub samples. The method therefore also allows the combined analysis of more than one galaxy survey. In particular, it solves the problem of inferring the power spectrum from galaxy surveys with non-trivial survey geometries by exploring the joint posterior distribution with efficient implementations of multiple block Markov chain and Hybrid Monte Carlo methods. Our Markov sampler achieves high statistical efficiency in low signal to noise regimes by using a determini...
Involving Stakeholders in Building Integrated Fisheries Models Using Bayesian Methods
Haapasaari, Päivi; Mäntyniemi, Samu; Kuikka, Sakari
2013-06-01
A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem of the herring fishery and elucidate what kind of causalities the different views involve. The paper combines these two tasks to assess the suitability of the methodological choices for participatory modeling in terms of both a modeling tool and a participation mode. The paper also assesses the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective on knowledge, which is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology provides a flexible tool that can be adapted to different kinds of needs and challenges of participatory modeling. The ability of the approach to deal with small data sets makes it cost-effective in participatory contexts. However, the BMA methodology used in modeling the biological uncertainties is so complex that it needs further development before it can be introduced to wider use in participatory contexts.
Gas/Aerosol partitioning: a simplified method for global modeling
Metzger, S.M.
2001-01-01
The main focus of this thesis is the development of a simplified method to routinely calculate gas/aerosol partitioning of multicomponent aerosols and aerosol associated water within global atmospheric chemistry and climate models. Atmospheric aerosols are usually multicomponent mixtures, partl
Bayesian Biclustering on Discrete Data: Variable Selection Methods
Guo, Lei
2013-01-01
Biclustering is a technique for clustering rows and columns of a data matrix simultaneously. Over the past few years, we have seen its applications in biology-related fields, as well as in many data mining projects. As opposed to classical clustering methods, biclustering groups objects that are similar only on a subset of variables. Many biclustering algorithms on continuous data have emerged over the last decade. In this dissertation, we will focus on two Bayesian biclustering algorithms we...
A nonparametric Bayesian method for estimating a response function
Brown, Scott; Meeden, Glen
2012-01-01
Consider the problem of estimating a response function which depends upon a non-stochastic independent variable under our control. The data are independent Bernoulli random variables where the probabilities of success are given by the response function at the chosen values of the independent variable. Here we present a nonparametric Bayesian method for estimating the response function. The only prior information assumed is that the response function can be well approximated by a mixture of st...
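As a simplified stand-in for the setting sketched in the abstract (the paper's mixture prior is replaced here by independent conjugate Beta(1, 1) priors at each design point, and all data are simulated), the posterior over the response probability at each controlled value of the independent variable can be summarized as follows:

```python
import numpy as np

rng = np.random.default_rng(3)

# Design points under our control and an unknown true response function.
x = np.linspace(0, 1, 10)
true_p = 1 / (1 + np.exp(-8 * (x - 0.5)))
n_trials = 25
successes = rng.binomial(n_trials, true_p)   # Bernoulli data, aggregated

# Conjugate update: with a Beta(1, 1) prior at each design point the
# posterior is Beta(1 + s, 1 + n - s).
a = 1 + successes
b = 1 + n_trials - successes
post_mean = a / (a + b)
for xi, ai, bi, m in zip(x, a, b, post_mean):
    lo, hi = np.quantile(rng.beta(ai, bi, 4000), [0.05, 0.95])
    print(f"x={xi:.2f}  posterior mean={m:.2f}  90% CI=[{lo:.2f}, {hi:.2f}]")
```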
Optimisation-Based Solution Methods for Set Partitioning Models
DEFF Research Database (Denmark)
Rasmussen, Matias Sevel
...efficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties of the set partitioning model are shown, and exact and optimisation-based heuristic solution methods for the model are described. All these methods are centered around the well-known column generation technique. Different practical applications of crew scheduling are presented, and some of these applications are considered in detail in four included...
Approach to the Correlation Discovery of Chinese Linguistic Parameters Based on Bayesian Method
Institute of Scientific and Technical Information of China (English)
WANG Wei(王玮); CAI LianHong(蔡莲红)
2003-01-01
The Bayesian approach is an important method in statistics. The Bayesian belief network is a powerful knowledge representation and reasoning tool under conditions of uncertainty. It is a graphical model that encodes probabilistic relationships among variables of interest. In this paper, an approach to Bayesian network construction is given for discovering relationships among Chinese linguistic parameters in a corpus.
Methods for Bayesian Power Spectrum Inference with Galaxy Surveys
Jasche, Jens; Wandelt, Benjamin D.
2013-12-01
We derive and implement a full Bayesian large scale structure inference method aiming at precision recovery of the cosmological power spectrum from galaxy redshift surveys. Our approach improves upon previous Bayesian methods by performing a joint inference of the three-dimensional density field, the cosmological power spectrum, luminosity dependent galaxy biases, and corresponding normalizations. We account for all joint and correlated uncertainties between all inferred quantities. Classes of galaxies with different biases are treated as separate subsamples. This method therefore also allows the combined analysis of more than one galaxy survey. In particular, it solves the problem of inferring the power spectrum from galaxy surveys with non-trivial survey geometries by exploring the joint posterior distribution with efficient implementations of multiple block Markov chain and Hybrid Monte Carlo methods. Our Markov sampler achieves high statistical efficiency in low signal-to-noise regimes by using a deterministic reversible jump algorithm. This approach reduces the correlation length of the sampler by several orders of magnitude, turning the otherwise numerically unfeasible problem of joint parameter exploration into a numerically manageable task. We test our method on an artificial mock galaxy survey, emulating characteristic features of the Sloan Digital Sky Survey data release 7, such as its survey geometry and luminosity-dependent biases. These tests demonstrate the numerical feasibility of our large scale Bayesian inference frame work when the parameter space has millions of dimensions. This method reveals and correctly treats the anti-correlation between bias amplitudes and power spectrum, which are not taken into account in current approaches to power spectrum estimation, a 20% effect across large ranges in k space. In addition, this method results in constrained realizations of density fields obtained without assuming the power spectrum or bias parameters
PARALLEL COMPOUND METHODS FOR SOLVING PARTITIONED STIFF SYSTEMS
Institute of Scientific and Technical Information of China (English)
Li-rong Chen; De-gui Liu
2001-01-01
This paper deals with the solution of partitioned systems of nonlinear stiff differential equations. Given a differential system, the user may specify some equations to be stiff and others to be nonstiff. For the numerical solution of such a system, Parallel Compound Methods (PCMs) are studied. Nonstiff equations are integrated by a parallel explicit RK method, while a parallel Rosenbrock method is used for the stiff part of the system. Their order conditions, convergence and numerical stability are discussed, and numerical tests are conducted on a personal computer and a parallel computer.
Distance and extinction determination for APOGEE stars with Bayesian method
Wang, Jianling; Shi, Jianrong; Pan, Kaike; Chen, Bingqiu; Zhao, Yongheng; Wicker, James
2016-08-01
Using a Bayesian technique, we derived distances and extinctions for over 100 000 red giant stars observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey by taking into account spectroscopic constraints from the APOGEE stellar parameters and photometric constraints from the Two Micron All-Sky Survey, as well as prior knowledge of the Milky Way. Derived distances are compared with those from four other independent methods: the Hipparcos parallaxes, star clusters, APOGEE red clump stars, and asteroseismic distances from the APOKASC and Strömgren survey for Asteroseismology and Galactic Archaeology catalogues. These comparisons cover four orders of magnitude in the distance scale, from 0.02 to 20 kpc. The results show that our distances agree very well with those from other methods: the mean relative difference between our Bayesian distances and those derived from other methods ranges from -4.2 per cent to +3.6 per cent, and the dispersion ranges from 15 per cent to 25 per cent. The extinctions towards all stars are also derived and compared with those from several other independent methods: the Rayleigh-Jeans Colour Excess (RJCE) method, Gonzalez's 2D extinction map, as well as 3D extinction maps and models. The comparisons reveal that, overall, estimated extinctions agree very well, but RJCE tends to overestimate extinctions for cool stars and objects with low log g.
International Nuclear Information System (INIS)
Variance-based sensitivity analysis has been widely studied and asserted itself among practitioners. Monte Carlo simulation methods are well developed in the calculation of variance-based sensitivity indices but they do not make full use of each model run. Recently, several works mentioned a scatter-plot partitioning method to estimate the variance-based sensitivity indices from given data, where a single bunch of samples is sufficient to estimate all the sensitivity indices. This paper focuses on the space-partition method in the estimation of variance-based sensitivity indices, and its convergence and other performances are investigated. Since the method heavily depends on the partition scheme, the influence of the partition scheme is discussed and the optimal partition scheme is proposed based on the minimized estimator's variance. A decomposition and integration procedure is proposed to improve the estimation quality for higher order sensitivity indices. The proposed space-partition method is compared with the more traditional method and test cases show that it outperforms the traditional one
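A small Python sketch of the scatter-plot partitioning idea (equiprobable binning on an Ishigami-type test function; the bin count and model are illustrative choices, not the paper's optimal scheme): the first-order index S_i = Var(E[Y|X_i]) / Var(Y) is estimated from a single bunch of samples by partitioning the X_i axis:

```python
import numpy as np

def first_order_index(x, y, n_bins=20):
    """Estimate the first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y)
    from given data by partitioning the X_i axis into equiprobable bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    means = np.array([y[idx == b].mean() for b in range(n_bins)])
    weights = np.array([(idx == b).mean() for b in range(n_bins)])
    var_cond_mean = np.sum(weights * (means - y.mean()) ** 2)
    return var_cond_mean / y.var()

# Ishigami-type test model, whose first-order indices are known analytically.
rng = np.random.default_rng(4)
X = rng.uniform(-np.pi, np.pi, (100000, 3))
Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

for i in range(3):
    print(f"S_{i + 1} ~ {first_order_index(X[:, i], Y):.3f}")
```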
Dynamic model based on Bayesian method for energy security assessment
International Nuclear Information System (INIS)
Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involved using the Bayesian method. - Abstract: The methodology for dynamic indicator model construction and forecasting of indicators for the assessment of the energy security level is presented in this article. An indicator is a special index which provides numerical values for factors important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicators' variation, taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of changes of the new system are not exactly known, information about their influence on indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model of the influence of new factors on indicators using the Bayesian method.
Distance and extinction determination for APOGEE stars with Bayesian method
Wang, Jianling; Pan, Kaike; Chen, Bingqiu; Zhao, Yongheng; Wicker, James
2016-01-01
Using a Bayesian technique we derived distances and extinctions for over 100,000 red giant stars observed by the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey by taking into account spectroscopic constraints from the APOGEE stellar parameters and photometric constraints from 2MASS, as well as prior knowledge of the Milky Way. Derived distances are compared with those from four other independent methods: the Hipparcos parallaxes, star clusters, APOGEE red clump stars, and asteroseismic distances from the APOKASC (Rodrigues et al. 2014) and SAGA (Casagrande et al. 2014) catalogues. These comparisons cover four orders of magnitude in the distance scale from 0.02 kpc to 20 kpc. The results show that our distances agree very well with those from other methods: the mean relative difference between our Bayesian distances and those derived from other methods ranges from -4.2% to +3.6%, and the dispersion ranges from 15% to 25%. The extinctions toward all stars are also derived and compared wi...
Internal dosimetry of uranium isotopes using bayesian inference methods
International Nuclear Information System (INIS)
A group of personnel at Los Alamos National Laboratory is routinely monitored for the presence of uranium isotopes by urine bioassay. Samples are analysed by alpha spectroscopy, and the results are examined for evidence of an intake of uranium. Because the measurement uncertainties are often comparable to the quantities of material we wish to detect, statistical considerations are crucial for the proper interpretation of the data. The problem is further complicated by the significant, but highly non-uniform, presence of uranium in local drinking water and, in some cases, the food supply. Software originally developed for internal dosimetry of plutonium has been adapted to the problem of uranium dosimetry. The software uses an unfolding algorithm to calculate an approximate Bayesian solution to the problem of characterising any intakes which may have occurred, given the history of urine bioassay results for each individual in the monitored population. The program uses biokinetic models from ICRP Publications 68 and later, and a prior probability distribution derived empirically from the body of uranium bioassay data collected at Los Alamos over the operating history of the Laboratory. For each individual, the software creates a posterior probability distribution of intake quantity and solubility type as a function of time. From this distribution, estimates are made of the committed effective dose equivalent (CEDE) to each individual. Results of the method are compared with those obtained using an earlier classical (non-Bayesian) algorithm for uranium dosimetry. We also discuss the problem of distinguishing occupational intakes from intake of environmental uranium, within a Bayesian framework. (author)
Directory of Open Access Journals (Sweden)
Velimir Gayevskiy
Bayesian inference methods are extensively used to detect the presence of population structure given genetic data. The primary output of software implementing these methods is a set of ancestry profiles for the sampled individuals. While these profiles robustly partition the data into subgroups, currently there is no objective method to determine whether the fixed factor of interest (e.g. geographic origin) correlates with inferred subgroups or not, and if so, which populations are driving this correlation. We present ObStruct, a novel tool to objectively analyse the nature of structure revealed in Bayesian ancestry profiles using established statistical methods. ObStruct evaluates the extent of structural similarity between sampled and inferred populations, tests the significance of population differentiation, provides information on the contribution of sampled and inferred populations to the observed structure and, crucially, determines whether the predetermined factor of interest correlates with inferred population structure. Analyses of simulated and experimental data highlight ObStruct's ability to objectively assess the nature of structure in populations. We show the method is capable of capturing an increase in the level of structure with increasing time since divergence between simulated populations. Further, we applied the method to a highly structured dataset of 1,484 humans from seven continents and a less structured dataset of 179 Saccharomyces cerevisiae from three regions in New Zealand. Our results show that ObStruct provides an objective metric to classify the degree, drivers and significance of inferred structure, as well as providing novel insights into the relationships between sampled populations, and adds a final step to the pipeline for population structure analyses.
Computational Methods for Domain Partitioning of Protein Structures
Veretnik, Stella; Shindyalov, Ilya
Analysis of protein structures typically begins with decomposition of the structure into more basic units, called "structural domains". The underlying goal is to reduce a complex protein structure to a set of simpler yet structurally meaningful units, each of which can be analyzed independently. Structural semi-independence of domains is their hallmark: domains often have compact structure and can fold or function independently. Domains can undergo so-called "domain shuffling" when they reappear in different combinations in different proteins, thus implementing different biological functions (Doolittle, 1995). Proteins can then be conceived as being built of such basic blocks: some, especially small proteins, consist usually of just one domain, while other proteins possess a more complex architecture containing multiple domains. Therefore, the methods for partitioning a structure into domains are of critical importance: their outcome defines the set of basic units upon which structural classifications are built and evolutionary analysis is performed. This is especially true nowadays in the era of structural genomics. Today there are many methods that decompose the structure into domains: some of them are manual (i.e., based on human judgment), others are semiautomatic, and still others are completely automatic (based on algorithms implemented as software). Overall there is a high level of consistency and robustness in the process of partitioning a structure into domains (for ~80% of proteins), at least for structures where domain location is obvious. The picture is less bright when we consider proteins with more complex architectures: neither human experts nor computational methods can reach consistent partitioning in many such cases. This is a rather accurate reflection of biological phenomena in general, since domains are formed by different mechanisms; hence it is nearly impossible to come up with a set of well-defined rules that captures all of the observed cases.
Bayesian Monte Carlo method for nuclear data evaluation
International Nuclear Information System (INIS)
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight. (orig.)
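The weighted Monte Carlo step can be illustrated with a deliberately tiny stand-in for TALYS/EXFOR (a two-parameter toy cross-section model with invented "experimental" points; none of this is the actual evaluation machinery): prior samples of the model parameters are reweighted by their likelihood against the data, yielding a data-weighted covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stand-in for a nuclear model: cross section ~ a * exp(-b * E).
def model(E, a, b):
    return a * np.exp(-b * E)

# Hypothetical "experimental" points with uncertainties (EXFOR-like role).
E_exp = np.array([0.5, 1.0, 2.0, 4.0])
y_exp = model(E_exp, 3.0, 0.4) * (1 + 0.05 * rng.standard_normal(4))
sigma = 0.05 * y_exp

# Prior space of model solutions: sample parameters, then weight each
# sample by its likelihood against the experimental data (chi-square).
a_s = rng.uniform(1.0, 5.0, 20000)
b_s = rng.uniform(0.1, 1.0, 20000)
chi2 = ((model(E_exp[:, None], a_s, b_s) - y_exp[:, None]) ** 2
        / sigma[:, None] ** 2).sum(axis=0)
w = np.exp(-0.5 * (chi2 - chi2.min()))
w /= w.sum()

# Weighted posterior summaries, including the parameter covariance matrix.
theta = np.vstack([a_s, b_s])
mean = theta @ w
cov = (theta - mean[:, None]) * w @ (theta - mean[:, None]).T
print("posterior mean:", mean, "\nposterior covariance:\n", cov)
```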
ANALYSIS OF CLIQUE BY MATRIX FACTORIZATION AND PARTITION METHODS
Directory of Open Access Journals (Sweden)
Raghunath Kar
2011-10-01
In real life, clustering of high-dimensional data is a big problem, and finding dense regions across increasing dimensions is one aspect of it. Clustering techniques for low-dimensional data sets, such as k-means, k-medoid, BIRCH, CLARANS, CURE, DBSCAN and PAM, are well studied. If a region is dense, it contains a number of data points meeting a minimum support given by the input parameter ø; otherwise it cannot be taken into clustering. In this approach we have implemented CLIQUE to find clusters in multidimensional data sets. In dimension-growth subspace clustering, the clustering process starts at single-dimensional subspaces and grows upward to higher-dimensional ones. It is a partition method where each dimension is divided like a grid structure. In this paper, the elimination of redundant objects from the regions by matrix factorization and a partition method is implemented. Comparisons between CLIQUE and these two methods are studied, as is the question of which region a redundant data point belongs to when forming a cluster.
Bayesian statistic methods and theri application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
Common before-after accident study on a road site: a low-informative Bayesian method
Brenac, Thierry
2009-01-01
This note aims at providing a Bayesian methodological basis for routine before-after accident studies, often applied to a single road site, and in conditions of limited resources in terms of time and expertise. Methods: A low-informative Bayesian method is proposed for before-after accident studies using a comparison site or group of sites. As compared to conventional statistics, the Bayesian approach is less subject to misuse and misinterpretation by practitioners. The low-informative framew...
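A minimal sketch of what a low-informative Bayesian before-after computation with a comparison site can look like (conjugate Gamma-Poisson updates with Jeffreys-style Gamma(0.5) priors and equal observation periods; the counts and the prior choice are invented for illustration, not taken from the note):

```python
import numpy as np

rng = np.random.default_rng(6)

# Accident counts: treated site and comparison site, before and after.
treat_before, treat_after = 24, 11
comp_before, comp_after = 30, 27

# Low-informative priors on the Poisson means: the posteriors are
# Gamma(count + 0.5, period), with equal periods taken here as 1.
n = 100000
lam_tb = rng.gamma(treat_before + 0.5, 1.0, n)
lam_ta = rng.gamma(treat_after + 0.5, 1.0, n)
lam_cb = rng.gamma(comp_before + 0.5, 1.0, n)
lam_ca = rng.gamma(comp_after + 0.5, 1.0, n)

# Effect index: ratio of the after/before change at the treated site to
# the same change at the comparison site (an odds-ratio-like measure).
theta = (lam_ta / lam_tb) / (lam_ca / lam_cb)
lo, med, hi = np.percentile(theta, [2.5, 50, 97.5])
print(f"effect estimate {med:.2f}, 95% credible interval [{lo:.2f}, {hi:.2f}]")
print(f"P(effect < 1, i.e. safety gain) = {(theta < 1).mean():.3f}")
```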
Metainference: A Bayesian Inference Method for Heterogeneous Systems
Bonomi, Massimiliano; Cavalli, Andrea; Vendruscolo, Michele
2015-01-01
Modelling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model, and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system populates simultaneously an ensemble of different states and experimental data are measured as averages over such states. To address this problem we present a method, called metainference, that combines Bayesian inference, which is a powerful strategy to deal with errors in experimental measurements, with the maximum entropy principle, which represents a rigorous approach to deal with experimental measurements averaged over multiple states. To illustrate the method we present its application to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to model complex systems with...
A variational Bayesian method to inverse problems with impulsive noise
Jin, Bangti
2012-01-01
We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type is developed by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm.
Bayesian methods in the search for MH370
Davey, Sam; Holland, Ian; Rutten, Mark; Williams, Jason
2016-01-01
This book demonstrates how nonlinear/non-Gaussian Bayesian time series estimation methods were used to produce a probability distribution of potential MH370 flight paths. It provides details of how the probabilistic models of aircraft flight dynamics, satellite communication system measurements, environmental effects and radar data were constructed and calibrated. The probability distribution was used to define the search zone in the southern Indian Ocean. The book describes particle-filter based numerical calculation of the aircraft flight-path probability distribution and validates the method using data from several of the involved aircraft’s previous flights. Finally it is shown how the Reunion Island flaperon debris find affects the search probability distribution.
Chain ladder method: Bayesian bootstrap versus classical bootstrap
Peters, Gareth W.; Mario V. W\\"uthrich; Shevchenko, Pavel V.
2010-01-01
The intention of this paper is to estimate a Bayesian distribution-free chain ladder (DFCL) model using approximate Bayesian computation (ABC) methodology. We demonstrate how to estimate quantities of interest in claims reserving and compare the estimates to those obtained from classical and credibility approaches. In this context, a novel numerical procedure utilising Markov chain Monte Carlo (MCMC), ABC and a Bayesian bootstrap procedure was developed in a truly distribution-free setting. T...
Emulation: A fast stochastic Bayesian method to eliminate model space
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been a goal of geophysicists ever since such datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes, and more recently developed Bayesian search methods such as MCMC (Markov chain Monte Carlo). However, both kinds of scheme have proved prohibitively expensive, in both computing power and time, due to the normally very large model space which needs to be searched using forward model simulators that take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as astronomy, where the evolution of the universe has been modelled using this technique, and the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute, uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much
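The screening logic can be sketched compactly (a 1-D toy "simulator", a polynomial surrogate and a standard implausibility cutoff of 3; real applications use far richer emulators, so treat this purely as the shape of the workflow):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulator(m):
    """Stand-in for an expensive forward model (e.g. a seismic simulator)."""
    return np.sin(3 * m) + 0.5 * m ** 2

# Train a cheap polynomial emulator on a handful of simulator runs.
m_train = np.linspace(-2, 2, 12)
coef = np.polyfit(m_train, simulator(m_train), deg=4)

# Calibrate the emulator error on held-out runs.
m_val = rng.uniform(-2, 2, 50)
emu_err = np.std(np.polyval(coef, m_val) - simulator(m_val))

# Screen model space: keep only models whose emulated output is plausibly
# close to the observed datum, given emulator and observation uncertainty.
z_obs, sigma_obs = 1.2, 0.05
m_space = rng.uniform(-2, 2, 100000)
implausibility = np.abs(np.polyval(coef, m_space) - z_obs) / np.sqrt(
    emu_err ** 2 + sigma_obs ** 2)
kept = m_space[implausibility < 3.0]          # standard I(m) < 3 cutoff
print(f"{kept.size / m_space.size:.1%} of model space survives screening")
```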
A high-resolution direction-of-arrival estimation based on Bayesian method
Institute of Scientific and Technical Information of China (English)
HUANG Jianguo; SUN Yi; XU Pu; LU Ying; LIU Kewei
2004-01-01
A Bayesian high-resolution direction-of-arrival (DOA) estimator is proposed based on the maximum a posteriori principle. The statistical performance of the Bayesian high-resolution DOA estimator is also investigated. Comparison with MUSIC and the maximum likelihood estimator (MLE) shows that the Bayesian method has higher resolution and more accurate estimates for either incoherent or coherent sources. It is also more robust in the case of low SNR.
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided. PMID:26019004
Bayesian Analysis of Multiple Populations I: Statistical and Computational Methods
Stenning, D C; Robinson, E; van Dyk, D A; von Hippel, T; Sarajedini, A; Stein, N
2016-01-01
We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations (van Dyk et al. 2009, Stein et al. 2013). Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties (age, metallicity, helium abundance, distance, absorption, and initial mass) are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and al...
Partition wall structure in spent fuel storage pool and construction method for the partition wall
International Nuclear Information System (INIS)
A partition wall for forming cask pits as radiation-shielding regions by partitioning the inside of a spent fuel storage pool is prepared by covering both surfaces of a concrete body with shielding metal plates. The metal plate assembly comprises opposed plate units integrated by welding while sandwiching a metal frame as a reinforcing material for the concrete body; the lower end of the units is connected to the pool floor by fastening members, and concrete is cast using the metal plates of the units as a form to produce the concrete body. The shielding metal plate has a double-walled structure formed by welding a lining plate disposed on the outer surface of the partition wall and a shield plate disposed on the inner side. The construction period can thereby be shortened, and the capacity for storing spent fuels increased. (N.H.)
Developments from Programming the Partition Method for a Power Series Expansion
Kowalenko, Victor
2012-01-01
Recently, a novel method based on coding partitions [1]-[4] has been used to derive power series expansions to previously intractable problems. In this method the coefficients at $k$ are determined by summing the contributions made by each partition whose elements sum to $k$. These contributions are found by assigning values to each element and multiplying by an appropriate multinomial factor. This work presents a theoretical framework for the partition method for a power series expansion. To overcome the complexity due to the contributions, a programming methodology is created allowing more general problems to be studied than envisaged originally. The methodology uses the bi-variate recursive central partition (BRCP) algorithm, which is based on a tree-diagram approach to scanning partitions. Its main advantage is that partitions are generated in the multiplicity representation. During the development of the theoretical framework, scanning over partitions was seen as a discrete operation with an operator $L_...
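A small Python sketch of the core bookkeeping follows (a recursive generator for partitions in the multiplicity representation plus the multinomial contribution formula; this illustrates the general idea, not the BRCP algorithm itself). Applied to f(x) = e^x it recovers the power-series coefficients of 1/f(x) = e^{-x}:

```python
from math import factorial

def partitions(k, max_part=None):
    """Yield the partitions of k in the multiplicity representation,
    i.e. as dicts {part: multiplicity}."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield {}
        return
    for part in range(min(k, max_part), 0, -1):
        for rest in partitions(k - part, part):
            out = dict(rest)
            out[part] = out.get(part, 0) + 1
            yield out

def contribution(p, a):
    """One partition's contribution to [x^k] 1/f(x): a sign, the
    multinomial factor, and the product of assigned coefficient values."""
    n_parts = sum(p.values())
    mult = factorial(n_parts)
    val = 1.0
    for part, m in p.items():
        mult //= factorial(m)
        val *= a.get(part, 0.0) ** m
    return (-1) ** n_parts * mult * val

# Worked example: f(x) = 1 + sum_j a_j x^j with a_j = 1/j!, i.e. f = e^x.
# Summing contributions over the partitions whose elements sum to k gives
# [x^k] e^{-x}: +1, -1, +1/2, -1/6, +1/24, ...
a = {j: 1.0 / factorial(j) for j in range(1, 7)}
for k in range(5):
    c = sum(contribution(p, a) for p in partitions(k))
    print(f"[x^{k}] 1/f(x) = {c:+.5f}")
```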
Dirichlet Methods for Bayesian Source Detection in Radio Astronomy Images
Friedlander, A. M.
2014-02-01
The sheer volume of data to be produced by the next generation of radio telescopes - exabytes of data on hundreds of millions of objects - makes automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are low surface brightness objects, which are not well found by current automated methods. This thesis explores Bayesian methods for source detection that use Dirichlet or multinomial models for pixel intensity distributions in discretised radio astronomy images. A novel image discretisation method that incorporates uncertainty about how the image should be discretised is developed. Latent Dirichlet allocation - a method originally developed for inferring latent topics in document collections - is used to estimate source and background distributions in radio astronomy images. A new Dirichlet-multinomial ratio, indicating how well a region conforms to a well-specified model of background versus a loosely-specified model of foreground, is derived. Finally, latent Dirichlet allocation and the Dirichlet-multinomial ratio are combined for source detection in astronomical images. The methods developed in this thesis perform source detection well in comparison to two widely-used source detection packages and, importantly, find dim sources not well found by other algorithms.
Domain decomposition by the advancing-partition method for parallel unstructured grid generation
Pirzadeh, Shahyar Z. (Inventor); Banihashemi, legal representative, Soheila (Inventor)
2012-01-01
In a method for domain decomposition for generating unstructured grids, a surface mesh is generated for a spatial domain. A location of a partition plane dividing the domain into two sections is determined. Triangular faces on the surface mesh that intersect the partition plane are identified. A partition grid of tetrahedral cells, dividing the domain into two sub-domains, is generated using a marching process in which a front comprises only faces of new cells which intersect the partition plane. The partition grid is generated until no active faces remain on the front. Triangular faces on each side of the partition plane are collected into two separate subsets. Each subset of triangular faces is renumbered locally and a local/global mapping is created for each sub-domain. A volume grid is generated for each sub-domain. The partition grid and volume grids are then merged using the local-global mapping.
A Non-Parametric Bayesian Method for Inferring Hidden Causes
Wood, Frank; Griffiths, Thomas; Ghahramani, Zoubin
2012-01-01
We present a non-parametric Bayesian approach to structure learning with hidden causes. Previous Bayesian treatments of this problem define a prior over the number of hidden causes and use algorithms such as reversible jump Markov chain Monte Carlo to move between solutions. In contrast, we assume that the number of hidden causes is unbounded, but only a finite number influence observable variables. This makes it possible to use a Gibbs sampler to approximate the distribution over causal stru...
Metainference: A Bayesian inference method for heterogeneous systems.
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300
ESTIMATE OF THE HYPSOMETRIC RELATIONSHIP WITH NONLINEAR MODELS FITTED BY EMPIRICAL BAYESIAN METHODS
Directory of Open Access Journals (Sweden)
Monica Fabiana Bento Moreira
2015-09-01
In this paper we propose a Bayesian approach to solve the inference problem with restrictions on parameters for nonlinear models used to represent the hypsometric relationship in clones of Eucalyptus sp. The Bayesian estimates are calculated using the Markov chain Monte Carlo (MCMC) method. The proposed method was applied to different groups of actual data, of which two were selected to show the results. These results were compared to those obtained by the least squares method, highlighting the superiority of the Bayesian approach, since this approach always generates biologically consistent results for the hypsometric relationship.
CEO emotional bias and dividend policy: Bayesian network method
Directory of Open Access Journals (Sweden)
Azouzi Mohamed Ali
2012-10-01
This paper assumes that managers, investors, or both behave irrationally. Although scholars have investigated behavioral irrationality from three angles (investor sentiment, investor biases and managerial biases), we focus on the relationship between one of the managerial biases, overconfidence, and dividend policy. Previous research investigating the relationship between overconfidence and financial decisions has studied investment, financing decisions and firm values; there are only a few exceptions examining how managerial emotional biases (optimism, loss aversion and overconfidence) affect dividend policies. This stream of research contends that whether or not to distribute dividends depends on how managers perceive the company's future. We use the Bayesian network method to examine this relation. Emotional bias has been measured by means of a questionnaire comprising several items. The selected sample is composed of some 100 Tunisian executives. Our results reveal that a leader affected by behavioral biases (optimism, loss aversion and overconfidence) adjusts dividend policy choices based on the ability to assess alternatives (optimism and overconfidence) and on risk perception (loss aversion), in order to create shareholder value and secure their place at the head of the management team.
CEO emotional bias and investment decision, Bayesian network method
Directory of Open Access Journals (Sweden)
Jarboui Anis
2012-08-01
This research examines the determinants of firms' investment, introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: investment decisions are influenced not only by fundamentals but also by other factors. One such factor is the bias of the CEO toward the investment; this bias depends on cognition and emotions, because some leaders use them as heuristics for the investment decision instead of fundamentals. This paper shows how CEO emotional biases (optimism, loss aversion and overconfidence) affect investment decisions. The proposed model uses the Bayesian network method to examine this relationship. Emotional bias has been measured by means of a questionnaire comprising several items. The selected sample is composed of some 100 Tunisian executives. Our results reveal that the behavioral analysis of investment decisions implies that a leader affected by behavioral biases (optimism, loss aversion and overconfidence) adjusts investment choices based on the ability to assess alternatives (optimism and overconfidence) and on risk perception (loss aversion), in order to create shareholder value and secure their place at the head of the management team.
Energy Technology Data Exchange (ETDEWEB)
Kim, E.; Rhee, S.; Park, J. [Seoul National Univ. (Korea, Republic of). Dept. of Civil and Environmental Engineering
2009-07-01
A partitioning tracer method for characterizing petroleum contamination in heterogeneous media was discussed. The average saturation level of nonaqueous phase liquids (NAPLs) was calculated by comparing the transport of the partitioning tracers to that of a conservative tracer. The NAPL saturation level represented a continuous value throughout the contaminated site. Experiments were conducted in a 2-D sandbox divided into 4 parts using different-sized sands. Soils were contaminated with a mixture of kerosene and diesel. Partitioning tracer tests were conducted both before and after contamination. A partitioning batch test was conducted to determine the partition coefficient (K) of the tracer between the NAPL and water. Breakthrough curves were obtained, and a retardation factor (R) was calculated. Results of the study showed that the calculated NAPL saturation was in good agreement with the determined values. It was concluded that the partitioning tracer test is an accurate method of locating and quantifying NAPLs.
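The saturation calculation described above reduces to inverting the standard retardation relation R = 1 + K·S_n/(1 − S_n), giving S_n = (R − 1)/(R − 1 + K); a tiny sketch with invented values of K and R:

```python
def napl_saturation(R, K):
    """Invert the retardation relation R = 1 + K * Sn / (1 - Sn) to get
    the average NAPL saturation Sn from a partitioning tracer test."""
    return (R - 1.0) / (R - 1.0 + K)

# Hypothetical values: a partition coefficient from the batch test and
# retardation factors from breakthrough-curve moments.
K = 120.0
for R in (1.5, 3.0, 6.0):
    print(f"R = {R:.1f}  ->  Sn = {napl_saturation(R, K):.4f}")
```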
A generic method for estimating system reliability using Bayesian networks
International Nuclear Information System (INIS)
This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BNs for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies that eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect, K2, a data mining algorithm, is used for finding associations between system components and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and an evaluation of the approach with literature case examples.
R. Ahmedi; T. Lanez
2015-01-01
In this work we present a theoretical approach for the determination of the octanol/water partition coefficient of selected ferrocenes bearing different substituents; the calculation is based on an adaptation of the Rekker method. The predicted theoretical logP values for all studied substituted ferrocenes were confirmed by comparison with known experimental values obtained mainly from the literature. The results obtained show that calculated partition coefficien...
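The Rekker-type calculation is additive in fragment constants, logP = Σ_i n_i·f_i (plus correction terms omitted here); the sketch below uses placeholder fragment values, not Rekker's published constants or the paper's adapted ones:

```python
# A minimal sketch of a Rekker-type fragmental calculation: logP is the
# sum of fragment constants f_i times their occurrence counts n_i.
# The fragment values below are illustrative placeholders only.
FRAGMENT_F = {
    "C6H5 (phenyl)": 1.90,
    "CH3": 0.70,
    "CH2": 0.53,
    "OH": -1.44,
    "COOH": -0.95,
}

def rekker_logp(fragments):
    return sum(FRAGMENT_F[name] * count for name, count in fragments.items())

# Example: a hypothetical substituted molecule broken into fragments.
molecule = {"C6H5 (phenyl)": 1, "CH2": 2, "OH": 1}
print(f"estimated logP = {rekker_logp(molecule):+.2f}")
```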
Crandell, Jamie L.; Voils, Corrine I.; Chang, YunKyung; Sandelowski, Margarete
2011-01-01
The possible utility of Bayesian methods for the synthesis of qualitative and quantitative research has been repeatedly suggested but insufficiently investigated. In this project, we developed and used a Bayesian method for synthesis, with the goal of identifying factors that influence adherence to HIV medication regimens. We investigated the effect of 10 factors on adherence. Recognizing that not all factors were examined in all studies, we considered standard methods for dealing with missin...
Errata: A survey of Bayesian predictive methods for model assessment, selection and comparison
Directory of Open Access Journals (Sweden)
Aki Vehtari
2014-03-01
Errata for “A survey of Bayesian predictive methods for model assessment, selection and comparison” by A. Vehtari and J. Ojanen, Statistics Surveys, 6 (2012), 142–228. doi:10.1214/12-SS102.
A Bayesian Assignment Method for Ambiguous Bisulfite Short Reads.
Directory of Open Access Journals (Sweden)
Hong Tran
Full Text Available DNA methylation is an epigenetic modification critical for normal development and diseases. The determination of genome-wide DNA methylation at single-nucleotide resolution is made possible by sequencing bisulfite treated DNA with next generation high-throughput sequencing. However, aligning bisulfite short reads to a reference genome remains challenging as only a limited proportion of them (around 50-70% can be aligned uniquely; a significant proportion, known as multireads, are mapped to multiple locations and thus discarded from downstream analyses, causing financial waste and biased methylation inference. To address this issue, we develop a Bayesian model that assigns multireads to their most likely locations based on the posterior probability derived from information hidden in uniquely aligned reads. Analyses of both simulated data and real hairpin bisulfite sequencing data show that our method can effectively assign approximately 70% of the multireads to their best locations with up to 90% accuracy, leading to a significant increase in the overall mapping efficiency. Moreover, the assignment model shows robust performance with low coverage depth, making it particularly attractive considering the prohibitive cost of bisulfite sequencing. Additionally, results show that longer reads help improve the performance of the assignment model. The assignment model is also robust to varying degrees of methylation and varying sequencing error rates. Finally, incorporating prior knowledge on mutation rate and context specific methylation level into the assignment model increases inference accuracy. The assignment model is implemented in the BAM-ABS package and freely available at https://github.com/zhanglabvt/BAM_ABS.
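A toy numerical sketch of the assignment idea described above (not the BAM-ABS code): methylation levels estimated from uniquely aligned reads give a likelihood for a multiread's observed methylation states at each candidate locus, and with a uniform prior the posterior assignment probability is proportional to that likelihood. All values are hypothetical.

```python
import numpy as np

# Hypothetical methylation levels at the CpG sites covered by two candidate loci,
# estimated from uniquely aligned reads.
meth_locus = {
    "locusA": np.array([0.9, 0.8, 0.95]),
    "locusB": np.array([0.1, 0.2, 0.05]),
}
read_states = np.array([1, 1, 1])  # 1 = methylated (C retained), 0 = converted (T)

def loglik(p, states):
    """Log-likelihood of the read's C/T states given site methylation levels p."""
    return np.sum(states * np.log(p) + (1 - states) * np.log(1 - p))

logls = {loc: loglik(p, read_states) for loc, p in meth_locus.items()}
# Uniform prior over candidate loci -> posterior proportional to likelihood
norm = np.logaddexp.reduce(list(logls.values()))
post = {loc: np.exp(l - norm) for loc, l in logls.items()}
print(post)  # the read is assigned to locusA with high posterior probability
```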
An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers
Chen, S.; Hanzo, L.
2000-01-01
An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.
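A minimal importance-sampling sketch in the same spirit (a generic rare-event bit-error estimate, not the Bayesian-DFE simulator): the noise is drawn from a mean-shifted density so errors occur often, and each sample is reweighted by the likelihood ratio. The signal model and bias value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n_samples = 0.25, 10_000           # true BER = Q(4) ~ 3.2e-5
bias = -1.0                               # shift the sampling mean toward errors

n = rng.normal(bias, sigma, n_samples)    # draws from the biased density g
# IS weight w = f(n)/g(n) for f ~ N(0, sigma^2), g ~ N(bias, sigma^2)
w = np.exp((-n**2 + (n - bias) ** 2) / (2 * sigma**2))
ber = np.mean((1.0 + n < 0) * w)          # error event: s + n < 0 with s = 1
print(ber)  # close to 3.17e-5 with far fewer samples than plain Monte Carlo
```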
Symplectic Partitioned Runge-Kutta Methods with Minimum Phase Lag - Case of 5 Stages
International Nuclear Information System (INIS)
In this work we consider explicit Symplectic Partitioned Runge-Kutta methods (SPRK) with five stages for problems with a separable Hamiltonian. We construct a new method with constant coefficients, third algebraic order, and eighth phase-lag order.
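For orientation, the simplest member of this class is the two-stage, second-order Störmer-Verlet scheme for a separable Hamiltonian H(q, p) = T(p) + V(q); a runnable sketch, not the five-stage method constructed in the paper:

```python
# Stormer-Verlet (leapfrog): the basic explicit symplectic partitioned RK scheme
# for H(q, p) = p^2/2 + V(q).

def verlet(q, p, dV, h, steps):
    """Integrate Hamilton's equations for H = p^2/2 + V(q)."""
    for _ in range(steps):
        p = p - 0.5 * h * dV(q)   # half kick
        q = q + h * p             # drift
        p = p - 0.5 * h * dV(q)   # half kick
    return q, p

# Harmonic oscillator V(q) = q^2/2: energy error stays bounded,
# a hallmark of symplectic methods.
q, p = verlet(1.0, 0.0, lambda q: q, h=0.1, steps=1000)
print(q, p, 0.5 * p**2 + 0.5 * q**2)  # energy remains near 0.5
```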
Application of Bayesian methods to Dark Matter searches with XENON100
International Nuclear Information System (INIS)
The XENON100 experiment, located in the LNGS Underground Lab in Italy, aims at the direct detection of WIMP dark matter (DM). It is currently the most sensitive detector for spin-independent WIMP-nucleus interactions. The DM analysis of XENON100 data is currently performed with a profile likelihood method after several cuts and data selection methods have been applied. A different model for the statistical analysis of data is the Bayesian interpretation. In the Bayesian approach to probability, a prior probability (state of knowledge) is defined and updated for new sets of data to reject or accept a hypothesis. As an alternative approach, a framework is being developed to implement Bayesian reasoning in the analysis. For this task the Bayesian Analysis Toolkit (BAT) will be used. Different models have to be implemented to identify background and (if there is a discovery) signal. We report on the current status of this work.
Updating reliability data using feedback analysis: feasibility of a Bayesian subjective method
International Nuclear Information System (INIS)
For years, EDF has used Probabilistic Safety Assessment to evaluate a global indicator of the safety of its nuclear power plants and to optimize performance while ensuring a certain safety level. Therefore, the robustness and relevancy of PSA are very important. That is the reason why EDF wants to improve the relevancy of the reliability parameters used in these models. This article proposes a Bayesian approach to build PSA parameters when feedback data are not large enough to use the frequentist method. Our method is called subjective because its purpose is to give engineers pragmatic criteria to apply Bayesian methods in a controlled and consistent way. Using Bayesian methods is quite common, for example, in the United States, because the nuclear power plants are less standardized. Bayesian updating is often used with generic data as the prior, so we have to adapt the general methodology to the EDF context. (authors)
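The textbook version of this kind of update is the conjugate gamma-Poisson model for a failure rate; a hedged sketch with illustrative numbers, not EDF's actual criteria or data:

```python
# Conjugate Bayesian update commonly used for PSA failure rates:
# prior lambda ~ Gamma(a, b); with n failures observed in T hours of
# operating feedback, the posterior is Gamma(a + n, b + T).
a, b = 0.5, 1.0e4          # prior, e.g. from generic industry data (illustrative)
n, T = 2, 5.0e4            # plant-specific feedback: 2 failures in 5e4 hours

a_post, b_post = a + n, b + T
print("posterior mean failure rate:", a_post / b_post, "per hour")  # ~4.2e-5
```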
International Nuclear Information System (INIS)
Classical methods of assessing the uncertainty associated with radiation doses estimated using cytogenetic techniques are now extremely well defined. However, several authors have suggested that a Bayesian approach to uncertainty estimation may be more suitable for cytogenetic data, which are inherently stochastic in nature. The Bayesian analysis framework focuses on identification of probability distributions (for yield of aberrations or estimated dose), which also means that uncertainty is an intrinsic part of the analysis, rather than an 'afterthought'. In this paper Bayesian, as well as some more advanced classical, data analysis methods for radiation cytogenetics are reviewed that have been proposed in the literature. A practical overview of Bayesian cytogenetic dose estimation is also presented, with worked examples from the literature. (authors)
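As a worked illustration of the posterior-distribution view described above (the yield coefficients and counts are hypothetical, not from the reviewed papers), a dose posterior can be computed on a grid from a Poisson likelihood with a linear-quadratic yield curve Y(D) = c + alpha*D + beta*D^2:

```python
import numpy as np

c, alpha, beta = 0.001, 0.03, 0.06   # aberrations per cell (illustrative)
cells, dicentrics = 500, 45          # scored cells and observed dicentrics

dose = np.linspace(0.0, 5.0, 501)    # dose grid (Gy) with a flat prior
mu = cells * (c + alpha * dose + beta * dose**2)   # expected dicentric count
log_post = dicentrics * np.log(mu) - mu            # Poisson log-likelihood
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, dose)                       # normalized posterior density

print("posterior mean dose ~", round(float(np.trapz(dose * post, dose)), 2), "Gy")
```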
Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.
Reyes Santos, Joost; Haimes, Yacov Y
2004-06-01
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1990 Nobel Prize in Economics. A typical approach to measuring a portfolio's expected return is based on the historical returns of the assets included in the portfolio. On the other hand, portfolio risk is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility is a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that of October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001, which led to a four-day suspension of trading on the New York Stock Exchange (NYSE), are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of an extreme portfolio risk, denoted by f(4), is defined as the conditional expectation for a lower-tail region of the distribution of possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model.
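A minimal numerical sketch of the tail measure described above (the tail probability and return series are illustrative): f(4) is simply the mean return conditional on falling below a lower-tail partition point.

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, 10_000)   # hypothetical daily returns

beta = 0.05                                   # lower-tail probability (illustrative)
q = np.quantile(returns, beta)                # partition point
f4 = returns[returns <= q].mean()             # conditional lower-tail expectation
print("expected return:", returns.mean(), " f4:", f4)
```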
Bayesian Methods for Neural Networks and Related Models
Titterington, D.M.
2004-01-01
Models such as feed-forward neural networks and certain other structures investigated in the computer science literature are not amenable to closed-form Bayesian analysis. The paper reviews the various approaches taken to overcome this difficulty, involving the use of Gaussian approximations, Markov chain Monte Carlo simulation routines and a class of non-Gaussian but “deterministic” approximations called variational approximations.
Construction of symplectic (partitioned) Runge-Kutta methods with continuous stage
Tang, Wensheng; Lang, Guangming; Luo, Xuqiong
2015-01-01
Hamiltonian systems are one of the most important classes of dynamical systems, possessing a geometric structure called symplecticity, and numerical algorithms that can preserve such geometric structure are of interest. In this article we study the construction of symplectic (partitioned) Runge-Kutta methods with continuous stage, which provides a new and simple way to construct symplectic (partitioned) Runge-Kutta methods in the classical sense. This line of construction of symplectic methods relies ...
Bayesian network modeling method based on case reasoning for emergency decision-making
Directory of Open Access Journals (Sweden)
XU Lei
2013-06-01
Full Text Available Bayesian networks have the abilities of probability expression, uncertainty management and multi-information fusion. They can support emergency decision-making, which can improve the efficiency of decision-making. Emergency decision-making is highly time sensitive, which requires shortening the Bayesian network modeling time as far as possible. Traditional Bayesian network modeling methods are clearly unable to meet that requirement. Thus, a Bayesian network modeling method based on case reasoning for emergency decision-making is proposed. The method obtains candidate cases through case matching using similarity-degree and deviation-degree functions. Then, a new Bayesian network can be built through case adjustment by case merging and pruning. An example is presented to illustrate and test the proposed method. The result shows that the method does not have a huge search space or need sample data; the only requirement is the collection of expert knowledge and historical case models. Compared with traditional methods, the proposed method can reuse historical case models, which reduces the modeling time and improves efficiency.
A survey of Bayesian predictive methods for model assessment, selection and comparison
Directory of Open Access Journals (Sweden)
Aki Vehtari
2012-01-01
Full Text Available To date, several methods exist in the statistical literature for model assessment, which purport themselves specifically as Bayesian predictive methods. The decision theoretic assumptions on which these methods are based are not always clearly stated in the original articles, however. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data.
Pan, Zhongliang; Li, Wei; Shao, Qingyi; Chen, Ling
2011-12-01
In the design of a system on chip (SoC), hardware-software co-design techniques are needed owing to the great complexity of SoCs. One of the main steps in hardware-software co-design is the partitioning of a system into hardware and software components. Efficient approaches to hardware-software partitioning can achieve system performance superior to that of techniques that use software only or hardware only. In this paper, a method based on neural networks is presented for the hardware-software partitioning of a system on chip. A discrete Hopfield neural network corresponding to the hardware-software partitioning problem is built, in which the states of the neurons represent whether the required components or functionalities are to be implemented in hardware or software. An algorithm based on the principle of simulated annealing is designed, which can be used to compute the minimal energy states of the neural network; the optimal partitioning schemes are thereby obtained. The experimental results show that the hardware-software partitioning method proposed in this paper can obtain near-optimal partitionings for a large set of example circuits.
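A stripped-down sketch of the annealing step (a plain simulated-annealing search over assignments, not the paper's Hopfield-network energy formulation); the cost model and weights are hypothetical:

```python
import math
import random

random.seed(0)
n = 12
sw_time = [random.uniform(1, 5) for _ in range(n)]   # software execution times
hw_area = [random.uniform(1, 3) for _ in range(n)]   # hardware area costs

def cost(x, w_area=0.5):
    """Total software time plus weighted hardware area for assignment x."""
    time = sum(t for t, b in zip(sw_time, x) if b == 0)
    area = sum(a for a, b in zip(hw_area, x) if b == 1)
    return time + w_area * area

x = [0] * n                                        # start with everything in software
temp = 10.0
while temp > 1e-3:
    y = x[:]
    y[random.randrange(n)] ^= 1                    # flip one assignment
    delta = cost(y) - cost(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = y                                      # accept the move
    temp *= 0.995                                  # geometric cooling schedule
print("partition:", x, "cost:", round(cost(x), 2))
```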
Analyzing bioassay data using Bayesian methods-A primer
International Nuclear Information System (INIS)
The classical statistics approach used in health physics for the interpretation of measurements is deficient in that it does not allow for the consideration of needle-in-a-haystack effects, where events that are rare in a population are being detected. In fact, this is often the case in health physics measurements, and the false positive fraction is often very large using the prescriptions of classical statistics. Bayesian statistics provides an objective methodology to ensure acceptably small false positive fractions. The authors present the basic methodology and a heuristic discussion. Examples are given using numerically generated and real bioassay data (tritium). Various analytical models are used to fit the prior probability distribution, in order to test the sensitivity to the choice of model. Parametric studies show that the normalized Bayesian decision level kα = Lc/σ0, where σ0 is the measurement uncertainty for zero true amount, is usually in the range from 3 to 5 depending on the true positive rate. Four times σ0, rather than approximately two times σ0 as in classical statistics, would often seem a better choice for the decision level.
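The needle-in-a-haystack point can be made concrete with Bayes' theorem (all numbers below are illustrative): when true positives are rare, the classical decision level of roughly 1.645σ0 makes most reported detections false, while a level of 3σ0 to 5σ0 keeps the false fraction small.

```python
from scipy.stats import norm

sigma0 = 1.0                 # measurement uncertainty at zero true amount
prevalence = 0.01            # fraction of the population truly positive (hypothetical)
true_amount = 4.0 * sigma0   # typical positive measurement, illustrative

for k in (1.645, 3.0, 5.0):  # decision level Lc = k * sigma0
    fpf = norm.sf(k)                          # P(measurement > Lc | zero amount)
    tpr = norm.sf(k - true_amount / sigma0)   # P(measurement > Lc | positive)
    frac_false = (1 - prevalence) * fpf / ((1 - prevalence) * fpf + prevalence * tpr)
    print(f"k={k}: fraction of detections that are false = {frac_false:.2%}")
# k=1.645 -> ~83% false; k=3 -> ~14%; k=5 -> ~0.02%
```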
Bayesian methods for the conformational classification of eight-membered rings
DEFF Research Database (Denmark)
Pérez, J.; Nolsøe, Kim; Kessler, M.;
2005-01-01
Two methods for the classification of eight-membered rings based on a Bayesian analysis are presented. The two methods share the same probabilistic model for the measurement of torsion angles, but while the first method uses the canonical forms of cyclooctane and, given an empirical sequence of e...... Structural Database (CSD)....
Recursive method for the Nekrasov partition function for classical Lie groups
International Nuclear Information System (INIS)
The Nekrasov partition function for supersymmetric gauge theories with general Lie groups is, so far, not known in a closed form, while there is a definition in terms of the integral. In this paper, as an intermediate step to derive the closed form, we give a recursion formula among partition functions, which can be derived from the integral. We apply the method to a toy model that reflects the basic structure of partition functions for BCD-type Lie groups and obtain a closed expression for the factor associated with the generalized Young diagram
Bayesian Belief Network Method for Predicting Asphaltene Precipitation in Light Oil Reservoirs
Directory of Open Access Journals (Sweden)
Jeffrey O. Oseh (M.Sc.)
2015-04-01
Full Text Available Asphaltene precipitation is caused by a number of factors including changes in pressure, temperature, and composition. The two most prevalent causes of asphaltene precipitation in light oil reservoirs are decreasing pressure and mixing oil with injected solvent in improved oil recovery processes. This study focused on predicting the amount of asphaltene precipitation with increasing gas-oil ratio in a light oil reservoir using the Bayesian Belief Network method. The Bayesian Belief Network models employed were validated and tested with unseen data to determine their accuracy and trend stability, and were also compared with the findings obtained from scaling equations. The obtained Bayesian Belief Network results indicated improved performance in predicting the amount of asphaltene precipitated in light oil reservoirs, thus reducing the number of experiments required.
CEO Emotional Intelligence and Firms’ Financial Policies. Bayesian Network Method
Directory of Open Access Journals (Sweden)
Mohamed Ali Azouzi
2014-03-01
Full Text Available The aim of this paper is to explore the determinants of firms’ financial policies according to the manager’s psychological characteristics. More specifically, it examines the links between emotional intelligence, decision biases and the effectiveness of firms’ financial policies. The article finds that the main cause of an organization’s problems is the CEO’s emotional intelligence level. We introduce an approach based on Bayesian network techniques with a series of semi-directive interviews. The research paper represents an original approach because it characterizes behavioral corporate policy choices in emerging markets. To the best of our knowledge, this is the first study in the Tunisian context to explore this area of research. Our results show that Tunisian leaders adjust their decisions (on investments and distributions to minimize the risk of loss of compensation or reputation. They opt for decisions that minimize agency costs, transaction costs, and cognitive costs.
Comparison between standard unfolding and Bayesian methods in Bonner spheres neutron spectrometry
Energy Technology Data Exchange (ETDEWEB)
Medkour Ishak-Boushaki, G., E-mail: gmedkour@yahoo.com [Laboratoire SNIRM-Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumediene, BP 32 El-Alia BabEzzouar, Algiers (Algeria); Allab, M. [Laboratoire SNIRM-Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumediene, BP 32 El-Alia BabEzzouar, Algiers (Algeria)
2012-10-11
This paper compares the use of both standard unfolding and Bayesian methods to analyze data extracted from neutron spectrometric measurements with a view to deriving some integral quantities characterizing a neutron field. We consider, as an example, the determination of the total neutron fluence and dose in the vicinity of an Am-Be source from Bonner spheres measurements. It is shown that the Bayesian analysis provides a rigorous estimation of these quantities and their correlated uncertainties and overcomes difficulties encountered in the standard unfolding methods.
Comparison between standard unfolding and Bayesian methods in Bonner spheres neutron spectrometry
International Nuclear Information System (INIS)
This paper compares the use of both standard unfolding and Bayesian methods to analyze data extracted from neutron spectrometric measurements with a view to deriving some integral quantities characterizing a neutron field. We consider, as an example, the determination of the total neutron fluence and dose in the vicinity of an Am–Be source from Bonner spheres measurements. It is shown that the Bayesian analysis provides a rigorous estimation of these quantities and their correlated uncertainties and overcomes difficulties encountered in the standard unfolding methods.
An overview of component qualification using Bayesian statistics and energy methods.
Energy Technology Data Exchange (ETDEWEB)
Dohner, Jeffrey Lynn
2011-09-01
The overview below is designed to give the reader a limited understanding of Bayesian and maximum likelihood (MLE) estimation; a basic understanding of some of the mathematical tools to evaluate the quality of an estimation; an introduction to energy methods; and a limited discussion of damage potential. The discussion then goes on to present how energy methods and Bayesian estimation are used together to qualify components. Example problems with solutions have been supplied as a learning aid. Bold letters are used to represent random variables; un-bolded letters represent deterministic values. A concluding section presents a discussion of attributes and concerns.
A novel Bayesian imaging method for probabilistic delamination detection of composite materials
International Nuclear Information System (INIS)
A probabilistic framework for location and size determination for delamination in carbon–carbon composites is proposed in this paper. A probability image of delaminated area using Lamb wave-based damage detection features is constructed with the Bayesian updating technique. First, the algorithm for the probabilistic delamination detection framework using the proposed Bayesian imaging method (BIM) is presented. Next, a fatigue testing setup for carbon–carbon composite coupons is described. The Lamb wave-based diagnostic signal is then interpreted and processed. Next, the obtained signal features are incorporated in the Bayesian imaging method for delamination size and location detection, as well as the corresponding uncertainty bounds prediction. The damage detection results using the proposed methodology are compared with x-ray images for verification and validation. Finally, some conclusions are drawn and suggestions made for future works based on the study presented in this paper. (paper)
Nagesh, Jayashree; Brumer, Paul; Izmaylov, Artur F
2016-01-01
We extend the localized operator partitioning method (LOPM) [J. Nagesh, A.F. Izmaylov, and P. Brumer, J. Chem. Phys. 142, 084114 (2015)] to the time-dependent density functional theory (TD-DFT) framework to partition molecular electronic energies of excited states in a rigorous manner. A molecular fragment is defined as a collection of atoms using Stratmann-Scuseria-Frisch atomic partitioning. A numerically efficient scheme for evaluating the fragment excitation energy is derived employing a resolution of the identity to preserve standard one- and two-electron integrals in the final expressions. The utility of this partitioning approach is demonstrated by examining several excited states of two bichromophoric compounds: 9-((1-naphthyl)-methyl)-anthracene and 4-((2-naphthyl)-methyl)-benzaldehyde. The LOPM is found to provide nontrivial insights into the nature of electronic energy localization that are not accessible using simple density difference analysis.
Directory of Open Access Journals (Sweden)
R. Ahmedi
2015-07-01
Full Text Available In this work we present a theoretical approach for the determination of the octanol/water partition coefficient of selected ferrocenes bearing different substituents; the calculation is based on an adaptation of the Rekker method. The theoretical logP values obtained for all studied substituted ferrocenes were confirmed by comparison with known experimental values taken mainly from the literature. The results obtained show that the calculated partition coefficients are in good agreement with the experimental values. For the estimation of the octanol/water partition coefficients of the selected compounds, the average absolute error of logP is 0.13, and the correlation coefficient is R² = 0.966.
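In the Rekker fragmental approach, logP is estimated as a sum of fragment contributions, logP = Σ aᵢfᵢ. A minimal sketch; the fragment constants and the example molecule below are illustrative placeholders, not values from the paper:

```python
# Rekker-style fragmental additivity: logP = sum of (count * fragment constant).
fragments = {"ferrocenyl": 2.6, "CH2": 0.53, "OH": -1.49}  # hypothetical f values

def rekker_logp(counts):
    """logP from fragment counts via simple additivity."""
    return sum(n * fragments[frag] for frag, n in counts.items())

# e.g., a hydroxymethyl-substituted ferrocene sketch: Fc-CH2-OH
print(rekker_logp({"ferrocenyl": 1, "CH2": 1, "OH": 1}))  # ~1.64
```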
A method for partitioning cadmium bioaccumulated in small aquatic organisms
Energy Technology Data Exchange (ETDEWEB)
Siriwardena, S.N.; Rana, K.J.; Baird, D.J. [Univ. of Stirling (United Kingdom). Institute of Aquaculture
1995-09-01
A series of laboratory experiments was conducted to evaluate bioaccumulation and surface adsorption of aqueous cadmium (Cd) by sac-fry of the African tilapia Oreochromis niloticus. In the first experiment, the design consisted of two cadmium treatments: 15 µg Cd·L⁻¹ in dilution water and a Cd-ethylenediaminetetraacetic acid (Cd-EDTA) complex at 15 µg·L⁻¹, and a water-only control. There were five replicates per treatment and 40 fish per replicate. It was found that EDTA significantly reduced the bioaccumulation of cadmium by tilapia sac-fry by 34%. Based on the results, a second experiment was conducted to evaluate four procedures: a no-rinse control; rinsing in EDTA; rinsing in distilled water; and rinsing in 5% nitric acid, for removing surface-bound Cd from exposed sac-fry. In this experiment, 30 fish in each of five replicates were exposed to 15 µg Cd·L⁻¹ for 72 h, processed through the rinse procedures, and analyzed for total Cd. The EDTA rinse treatment significantly reduced (p<0.05) Cd concentrations of the exposed fish relative to those receiving no rinse. It was concluded that the EDTA rinse technique may be useful in studies evaluating the partitioning of surface-bound and accumulated cadmium in small aquatic organisms.
Overview of Bounded Support Distributions and Methods for Bayesian Treatment of Industrial Data
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil; Ettler, P.
Portugal: INSTICC – Institute for Systems and Technologies of Information, Control and Communication, 2013 - (Ferrier, Gusikhin, Madani, Sasiadek), s. 380-387 ISBN 978-989-8565-70-9. [10th international conference on informatics in control, automation and robotics (ICINCO 2013). Reykjavík (IS), 29.07.2013-31.07.2013] R&D Projects: GA MŠk 7D12004 Institutional support: RVO:67985556 Keywords: statistical analysis * Bayesian analysis * truncated distributions Subject RIV: IN - Informatics, Computer Science http://library.utia.cas.cz/separaty/2013/AS/dedecius-overview of bounded support distributions and methods for bayesian treatment of industrial data.pdf
Finding the Most Distant Quasars Using Bayesian Selection Methods
Mortlock, Daniel
2014-01-01
Quasars, the brightly glowing disks of material that can form around the super-massive black holes at the centres of large galaxies, are amongst the most luminous astronomical objects known and so can be seen at great distances. The most distant known quasars are seen as they were when the Universe was less than a billion years old (i.e., ~7% of its current age). Such distant quasars are, however, very rare, and so are difficult to distinguish from the billions of other comparably bright sources in the night sky. In searching for the most distant quasars in a recent astronomical sky survey (the UKIRT Infrared Deep Sky Survey, UKIDSS), there were ~10^3 apparently plausible candidates for each expected quasar, far too many to reobserve with other telescopes. The solution to this problem was to apply Bayesian model comparison, making models of the quasar population and the dominant contaminating population (Galactic stars) to utilise the information content in the survey measurements. The result wa...
The Relevance Voxel Machine (RVoxM): A Bayesian Method for Image-Based Prediction
DEFF Research Database (Denmark)
Sabuncu, Mert R.; Van Leemput, Koen
2011-01-01
This paper presents the Relevance Voxel Machine (RVoxM), a Bayesian multivariate pattern analysis (MVPA) algorithm that is specifically designed for making predictions based on image data. In contrast to generic MVPA algorithms that have often been used for this purpose, the method is designed to...
A General and Flexible Approach to Estimating the Social Relations Model Using Bayesian Methods
Ludtke, Oliver; Robitzsch, Alexander; Kenny, David A.; Trautwein, Ulrich
2013-01-01
The social relations model (SRM) is a conceptual, methodological, and analytical approach that is widely used to examine dyadic behaviors and interpersonal perception within groups. This article introduces a general and flexible approach to estimating the parameters of the SRM that is based on Bayesian methods using Markov chain Monte Carlo…
A novel Bayesian learning method for information aggregation in modular neural networks
DEFF Research Database (Denmark)
Wang, Pan; Xu, Lida; Zhou, Shang-Ming; Fan, Zhun; Li, Youfeng; Feng, Shan
2010-01-01
Modular neural network is a popular neural network model which has many successful applications. In this paper, a sequential Bayesian learning (SBL) is proposed for modular neural networks aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight ...... benchmark problems have demonstrated that the proposed method can perform information aggregation efficiently in data modeling....
Nagesh, Jayashree; Brumer, Paul
2014-01-01
The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning, we reformulate the method in a computationally efficient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)-methyl)-anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH2-naphthyl groups as subsystems, and examine their electronic energies and populations for several excited states using the Configuration Interaction Singles method. The implemented approach shows a wide variety o...
Landslide hazards mapping using uncertain Naïve Bayesian classification method
Institute of Scientific and Technical Information of China (English)
毛伊敏; 张茂省; 王根龙; 孙萍萍
2015-01-01
Landslide hazard mapping is a fundamental tool for disaster management activities in Loess terrains. Aiming at major issues with these landslide hazard assessment methods based on Naïve Bayesian classification technique, which is difficult in quantifying those uncertain triggering factors, the main purpose of this work is to evaluate the predictive power of landslide spatial models based on uncertain Naïve Bayesian classification method in Baota district of Yan’an city in Shaanxi province, China. Firstly, thematic maps representing various factors that are related to landslide activity were generated. Secondly, by using field data and GIS techniques, a landslide hazard map was performed. To improve the accuracy of the resulting landslide hazard map, the strategies were designed, which quantified the uncertain triggering factor to design landslide spatial models based on uncertain Naïve Bayesian classification method named NBU algorithm. The accuracies of the area under relative operating characteristics curves (AUC) in NBU and Naïve Bayesian algorithm are 87.29%and 82.47%respectively. Thus, NBU algorithm can be used efficiently for landslide hazard analysis and might be widely used for the prediction of various spatial events based on uncertain classification technique.
Mueller, Julie M.; Loomis, John B.
2010-01-01
The choice of weights is a non-nested problem in most applied spatial econometric models. Despite numerous recent advances in spatial econometrics, the choice of spatial weights remains exogenously determined by the researcher in empirical applications. Bayesian techniques provide statistical evidence regarding the simultaneous choice of model specification and spatial weights matrices by using posterior probabilities. This paper demonstrates the Bayesian estimation approach in a spatial hedo...
OPTIMAL ERROR ESTIMATES OF THE PARTITION OF UNITY METHOD WITH LOCAL POLYNOMIAL APPROXIMATION SPACES
Institute of Scientific and Technical Information of China (English)
Yun-qing Huang; Wei Li; Fang Su
2006-01-01
In this paper, we provide a theoretical analysis of the partition of unity finite element method (PUFEM), which belongs to the family of meshfree methods. The usual error analysis only shows the order of the error estimate to be the same as that of the local approximations [12]. Using standard linear finite element base functions as the partition of unity and polynomials as the local approximation space, in the 1-D case we derive optimal order error estimates for PUFEM interpolants. Our analysis shows that the error estimate is of one order higher than the local approximations. The interpolation error estimates yield optimal error estimates for PUFEM solutions of elliptic boundary value problems.
Lesaffre, Emmanuel
2012-01-01
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.
Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J
2015-07-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564
Comparison of prediction and measurement methods for sound insulation of lightweight partitions
Directory of Open Access Journals (Sweden)
Praščević Momir
2012-01-01
Full Text Available It is important to know the sound insulation of partitions in order to be able to compare different constructions, calculate acoustic comfort in apartments or noise levels from outdoor sources such as road traffic, and find optimum engineering solutions to noise problems. The use of lightweight partitions as party walls between dwellings has become common because sound insulation requirements can be achieved with low overall surface weights. However, they need greater skill to design and construct, because the overall design is much more complex. It is also more difficult to predict and measure the sound transmission loss of lightweight partitions. There are various methods for predicting and measuring the sound insulation of partitions, and some of them will be described in this paper. Also, this paper presents a comparison of experimental results for the sound insulation of lightweight partitions with results obtained using different theoretical models for single homogeneous panels and double panels with and without acoustic absorption in the cavity between the panels. [Project of the Ministry of Science of the Republic of Serbia, No. TR-37020: Development of methodology and means for noise protection in urban areas, and No. III-43014: Improvement of the monitoring system and the assessment of a long-term population exposure to pollutant substances in the environment using neural networks]
Xu, Yue; Shi, Yong; Zheng, Xingyu; Long, Yi
2016-06-01
The fingerprint positioning method is generally the first choice in indoor navigation systems due to its high accuracy and low cost. Its accuracy depends on the partition density of the indoor space: accuracy is higher with higher grid resolution, but high grid resolution significantly increases the work of fingerprint data collection, processing and maintenance, and can decrease the performance, portability and robustness of the navigation system. Meanwhile, traditional fingerprint positioning methods use an equal-interval grid to partition the indoor space; when used for pedestrian navigation, a person can sometimes be located in an area that he or she cannot access. This paper studies these two issues and proposes a new indoor space partition method that considers pedestrian accessibility, which can increase the accuracy of pedestrian positioning and decrease the volume of the fingerprint data. Based on this partition method, an optimized algorithm for fingerprint positioning was also designed, using a cross-linker structure for fingerprint point indexing and matching. Experiments based on the proposed method and algorithm showed that the workload of fingerprint collection and maintenance was effectively decreased, and positioning efficiency and accuracy were effectively increased.
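A toy sketch of the accessibility idea (not the paper's algorithm): nearest-neighbor matching of a received signal strength (RSS) vector against grid-cell fingerprints, with inaccessible cells masked out. All fingerprint values and the mask are hypothetical.

```python
import numpy as np

# rows: grid cells; columns: RSS from 3 access points (dBm), illustrative
fingerprints = np.array([[-40, -70, -60],
                         [-45, -65, -62],
                         [-70, -42, -55],
                         [-72, -40, -58]], dtype=float)
accessible = np.array([True, True, False, True])   # cell 2 is, e.g., inside a wall

def locate(rss):
    """Return the index of the closest accessible fingerprint cell."""
    d = np.linalg.norm(fingerprints - rss, axis=1)
    d[~accessible] = np.inf                # never match an inaccessible cell
    return int(np.argmin(d))

print(locate(np.array([-69.0, -41.0, -56.0])))  # snaps to cell 3, not cell 2
```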
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
Energy Technology Data Exchange (ETDEWEB)
Xu, Jin; Yu, Yaming [Department of Statistics, University of California, Irvine, Irvine, CA 92697-1250 (United States); Van Dyk, David A. [Statistics Section, Imperial College London, Huxley Building, South Kensington Campus, London SW7 2AZ (United Kingdom); Kashyap, Vinay L.; Siemiginowska, Aneta; Drake, Jeremy; Ratzlaff, Pete [Smithsonian Astrophysical Observatory, 60 Garden Street, Cambridge, MA 02138 (United States); Connors, Alanna; Meng, Xiao-Li, E-mail: jinx@uci.edu, E-mail: yamingy@ics.uci.edu, E-mail: dvandyk@imperial.ac.uk, E-mail: vkashyap@cfa.harvard.edu, E-mail: asiemiginowska@cfa.harvard.edu, E-mail: jdrake@cfa.harvard.edu, E-mail: pratzlaff@cfa.harvard.edu, E-mail: meng@stat.harvard.edu [Department of Statistics, Harvard University, 1 Oxford Street, Cambridge, MA 02138 (United States)
2014-10-20
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
International Nuclear Information System (INIS)
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
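A numerical sketch of the calibration-uncertainty representation described above: a library of plausible effective-area curves is compressed by principal component analysis, A(E) ≈ A₀(E) + Σⱼ cⱼvⱼ(E), and new curves are drawn from the compressed model. The synthetic curves below stand in for a real calibration library.

```python
import numpy as np

rng = np.random.default_rng(3)
energies = np.linspace(0.3, 8.0, 200)                      # keV grid (illustrative)
base = 500 * np.exp(-0.5 * ((energies - 1.5) / 1.2) ** 2)  # nominal area (cm^2)
# Synthetic replicate curves: nominal area plus smooth random perturbations
library = base + rng.normal(0, 5, (100, 200)).cumsum(axis=1) * 0.1

mean = library.mean(axis=0)
U, s, Vt = np.linalg.svd(library - mean, full_matrices=False)
k = 5                                                      # leading components
explained = (s[:k] ** 2).sum() / (s**2).sum()
print(f"{k} components capture {explained:.1%} of the calibration variance")

# A random draw from the compressed uncertainty model:
coeffs = rng.normal(size=k) * s[:k] / np.sqrt(len(library))
sample_area = mean + coeffs @ Vt[:k]
```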
N D
2009-01-01
There has been a lot of recent work on Bayesian methods for reinforcement learning exhibiting near-optimal online performance. The main obstacle facing such methods is that in most problems of interest, the optimal solution involves planning in an infinitely large tree. However, it is possible to obtain stochastic lower and upper bounds on the value of each tree node. This enables us to use stochastic branch and bound algorithms to search the tree efficiently. This paper proposes two such alg...
Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas
2015-12-01
The monotone homogeneity model (MHM, also known as the unidimensional monotone latent variable model) is a nonparametric IRT formulation that provides the underpinning for partitioning a collection of dichotomous items to form scales. Ellis (Psychometrika 79:303-316, 2014, doi: 10.1007/s11336-013-9341-5) has recently derived inequalities that are implied by the MHM, yet require only the bivariate (inter-item) correlations. In this paper, we incorporate these inequalities within a mathematical programming formulation for partitioning a set of dichotomous scale items. The objective criterion of the partitioning model is to produce clusters of maximum cardinality. The formulation is a binary integer linear program that can be solved exactly using commercial mathematical programming software. However, we have also developed a standalone branch-and-bound algorithm that produces globally optimal solutions. Simulation results and a numerical example are provided to demonstrate the proposed method. PMID:25850618
A Family of Trigonometrically-fitted Partitioned Runge-Kutta Symplectic Methods
International Nuclear Information System (INIS)
We present a family of trigonometrically fitted partitioned Runge-Kutta symplectic methods of fourth order with six stages. The solution of the one-dimensional time-independent Schroedinger equation is considered using trigonometrically fitted symplectic integrators. The Schroedinger equation is first transformed into a Hamiltonian canonical equation. Numerical results are obtained for the one-dimensional harmonic oscillator and the exponential potential.
Surveillance system and method having an operating mode partitioned fault classification model
Bickford, Randall L. (Inventor)
2005-01-01
A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.
Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.
2011-01-01
Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness to detect genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. ?? 2011 by the authors; licensee MDPI, Basel, Switzerland.
Method for chemical amplification based on fluid partitioning in an immiscible liquid
Energy Technology Data Exchange (ETDEWEB)
Anderson, Brian L.; Colston, Bill W.; Elkin, Christopher J.
2015-06-02
A system for nucleic acid amplification of a sample comprises partitioning the sample into partitioned sections and performing PCR on the partitioned sections of the sample. Another embodiment of the invention provides a system for nucleic acid amplification and detection of a sample comprising partitioning the sample into partitioned sections, performing PCR on the partitioned sections of the sample, and detecting and analyzing the partitioned sections of the sample.
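A related worked example (digital PCR quantification, not a claim of the patent): template molecules distribute approximately Poisson across partitions, so concentration can be recovered from the fraction of negative partitions as λ = −ln(n_neg/n_total) copies per partition. All counts and volumes below are hypothetical.

```python
import math

n_total, n_negative = 20000, 14500        # hypothetical partition (droplet) counts
lam = -math.log(n_negative / n_total)     # mean template copies per partition
volume_nl = 0.85                          # hypothetical partition volume (nL)
print("copies per partition:", round(lam, 3))          # ~0.322
print("copies per microliter:", round(lam / (volume_nl * 1e-3), 1))
```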
Using hierarchical Bayesian methods to examine the tools of decision-making
Directory of Open Access Journals (Sweden)
Michael D. Lee
2011-12-01
Full Text Available Hierarchical Bayesian methods offer a principled and comprehensive way to relate psychological models to data. Here we use them to model the patterns of information search, stopping and deciding in a simulated binary comparison judgment task. The simulation involves 20 subjects making 100 forced-choice comparisons about the relative magnitudes of two objects (which of two German cities has more inhabitants). Two worked examples show how hierarchical models can be developed to account for and explain the diversity of both search and stopping rules seen across the simulated individuals. We discuss how the results provide insight into current debates in the literature on heuristic decision making and argue that they demonstrate the power and flexibility of hierarchical Bayesian methods in modeling human decision-making.
International Nuclear Information System (INIS)
Current reliability assessments of safety-critical software embedded in the digital systems of nuclear power plants are based on rule-based qualitative assessment methods. But practical needs require quantitative features of software reliability for Probabilistic Safety Assessment (PSA), one of the important methods used in assessing the overall safety of a nuclear power plant. This paper discusses a Bayesian Belief Net (BBN) based quantification method that models the current qualitative software assessment in a formal way and produces the quantitative results required for PSA. The Commercial Off-The-Shelf (COTS) software dedication process was applied to the discussed BBN-based method to evaluate the plausibility of the method for use in PSA.
A Bayesian hybrid method for context-sensitive spelling correction
Golding, A R
1996-01-01
Two classes of methods have been shown to be useful for resolving lexical ambiguity. The first relies on the presence of particular words within some distance of the ambiguous target word; the second uses the pattern of words and part-of-speech tags around the target word. These methods have complementary coverage: the former captures the lexical "atmosphere" (discourse topic, tense, etc.), while the latter captures local syntax. Yarowsky has exploited this complementarity by combining the two methods using decision lists. The idea is to pool the evidence provided by the component methods, and to then solve a target problem by applying the single strongest piece of evidence, whatever type it happens to be. This paper takes Yarowsky's work as a starting point, applying decision lists to the problem of context-sensitive spelling correction. Decision lists are found, by and large, to outperform either component method. However, it is found that further improvements can be obtained by taking into account not ju...
An Improved Approximate-Bayesian Model-choice Method for Estimating Shared Evolutionary History
Oaks, Jamie R.
2014-01-01
Background: To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa...
Discovering Emergent Behaviors from Tracks Using Hierarchical Non-parametric Bayesian Methods
Chiron, Guillaume; Gomez-Krämer, Petra; Ménard, Michel
2014-01-01
International audience In video-surveillance, non-parametric Bayesian approaches based on a Hierarchical Dirichlet Process (HDP) have recently shown their efficiency for modeling crowded scene activities. This paper follows this track by proposing a method for detecting and clustering emergent behaviors across different captures made of numerous unconstrained trajectories. Most HDP applications for crowded scenes (e.g. traffic, pedestrians) are based on flow motion features. In contrast, we ...
Salima TAKTAK; AZOUZI Mohamed Ali; Triki, Mohamed
2013-01-01
This article discusses the effect of the entrepreneur's profile on the financing of his creative project. It analyzes the impact of overconfidence on perceptions of the project's financing capacity. To analyze this relationship, we used Bayesian networks as the data analysis method. Our sample is composed of 200 entrepreneurs. Our results show that a high level of entrepreneurial overconfidence positively affects the evaluation of the project's financing capacity.
Bayesian Belief Network Method for Predicting Asphaltene Precipitation in Light Oil Reservoirs
Jeffrey O. Oseh (M.Sc.); Olugbenga A. Falode (Ph.D)
2015-01-01
Asphaltene precipitation is caused by a number of factors including changes in pressure, temperature, and composition. The two most prevalent causes of asphaltene precipitation in light oil reservoirs are decreasing pressure and mixing oil with injected solvent in improved oil recovery processes. This study focused on predicting the amount of asphaltene precipitation with increasing Gas-Oil Ratio in a light oil reservoir using Bayesian Belief Network Method. These Artificial Intelligence-Baye...
A New Method for E-Government Procurement Using Collaborative Filtering and Bayesian Approach
Shuai Zhang; Chengyu Xi; Yan Wang; Wenyu Zhang; Yanhong Chen
2013-01-01
Nowadays, as the Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and Bayesian approach are used to evaluate and select the candidate services to get the top-M re...
Impact of Frequentist and Bayesian Methods on Survey Sampling Practice: A Selective Appraisal
Rao, J. N. K.
2011-01-01
According to Hansen, Madow and Tepping [J. Amer. Statist. Assoc. 78 (1983) 776--793], "Probability sampling designs and randomization inference are widely accepted as the standard approach in sample surveys." In this article, reasons are advanced for the wide use of this design-based approach, particularly by federal agencies and other survey organizations conducting complex large scale surveys on topics related to public policy. Impact of Bayesian methods in survey sampling is also discussed...
Beelen P van; Verbruggen EMJ; Peijnenburg WJGM; ECO
2002-01-01
The equilibrium partitioning method (EqP-method) can be used to derive environmental quality standards (like the Maximum Permissible Concentration or the intervention value) for soil or sediment, from aquatic toxicity data and a soil/water or sediment/water partitioning coefficient. The validity of
Baltic sea algae analysis using Bayesian spatial statistics methods
Eglė Baltmiškytė; Kęstutis Dučinskas
2013-01-01
Spatial statistics is one of the fields of statistics dealing with the analysis of spatially spread data. Recently, Bayesian methods have often been applied for statistical data analysis. A spatial data model for predicting algae quantity in the Baltic Sea is built and described in this article. Black carrageen is the dependent variable, and depth, sand, pebble and boulders are independent variables in the described model. Two models with different covariance functions (Gaussian and exponential) are built to estima...
Directory of Open Access Journals (Sweden)
Youhua Chen
2015-06-01
Full Text Available Variance partitioning methods, which are built upon multivariate statistics, have been widely applied to different taxa and habitats in community ecology. Here, I performed a literature review on the development and application of the methods, and then discussed the limitations of available methods and the difficulties involved in sampling schemes. The central goal of the work is then to propose some potential practical methods that might help to overcome different issues of traditional least-square-based regression modeling. A variety of regression models has been considered for comparison. In initial simulations, I identified that the generalized additive model (GAM) has the highest accuracy in predicting variation components. Therefore, I argued that other advanced regression techniques, including the GAM and related models, could be utilized in variation partitioning for better quantifying the aggregation scenarios of species distribution.
Computationally intensive methods in Bayesian model-structure identification
Czech Academy of Sciences Publication Activity Database
Tesař, Ludvík
Adelaide: Advanced Knowledge International, 2004 - (Andrýsek, J.; Kárný, M.; Kracík, J.), s. 75-79. (International Series on Advanced Intelligence; 9). ISBN 0-9751004-5-9. [Workshop on Computer-Intensive Methods in Control and Data Processing 2004. Prague (CZ), 12.05.2004-14.05.2004] R&D Projects: GA ČR GA102/03/0049; GA AV ČR IBS1075351 Institutional research plan: CEZ:AV0Z1075907 Keywords: structure identification * system identification * structure estimation Subject RIV: BD - Theory of Information
Bickel, David R
2011-01-01
The following zero-sum game between nature and a statistician blends Bayesian methods with frequentist methods such as p-values and confidence intervals. Nature chooses a posterior distribution consistent with a set of possible priors. At the same time, the statistician selects a parameter distribution for inference with the goal of maximizing the minimum Kullback-Leibler information gained over a confidence distribution or other benchmark distribution. An application to testing a simple null hypothesis leads the statistician to report a posterior probability of the hypothesis that is informed by both Bayesian and frequentist methodology, each weighted according how well the prior is known. Since neither the Bayesian approach nor the frequentist approach is entirely satisfactory in situations involving partial knowledge of the prior distribution, the proposed procedure reduces to a Bayesian method given complete knowledge of the prior, to a frequentist method given complete ignorance about the prior, and to a...
Liver segmentation in MRI: a fully automatic method based on stochastic partitions
López-Mir, Fernando; Naranjo Ornedo, Valeriana; Angulo, J.; Alcañiz Raya, Mariano Luis; Luna, L.
2014-01-01
There are few fully automated methods for liver segmentation in magnetic resonance images (MRI) despite the benefits of this type of acquisition in comparison to other radiology techniques such as computed tomography (CT). Motivated by medical requirements, liver segmentation in MRI has been carried out. For this purpose, we present a new method for liver segmentation based on the watershed transform and stochastic partitions. The classical watershed over-segmentation is reduced using a marke...
Dorn, C; Khan, A; Heng, K; Alibert, Y; Helled, R; Rivoldini, A; Benz, W
2016-01-01
We aim to present a generalized Bayesian inference method for constraining interiors of super Earths and sub-Neptunes. Our methodology succeeds in quantifying the degeneracy and correlation of structural parameters for high dimensional parameter spaces. Specifically, we identify what constraints can be placed on composition and thickness of core, mantle, ice, ocean, and atmospheric layers given observations of mass, radius, and bulk refractory abundance constraints (Fe, Mg, Si) from observations of the host star's photospheric composition. We employed a full probabilistic Bayesian inference analysis that formally accounts for observational and model uncertainties. Using a Markov chain Monte Carlo technique, we computed joint and marginal posterior probability distributions for all structural parameters of interest. We included state-of-the-art structural models based on self-consistent thermodynamics of core, mantle, high-pressure ice, and liquid water. Furthermore, we tested and compared two different atmosp...
Bayesian Blocks, A New Method to Analyze Structure in Photon Counting Data
Scargle, J D
1997-01-01
I describe a new time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes, and generally characterizing intensity variations. The input is raw counting data, in any of three forms: time-tagged photon events (TTE), binned counts, or time-to-spill (TTS) data. The output is the most likely segmentation of the observation into time intervals during which the photon arrival rate is perceptibly constant -- i.e. has a fixed intensity without statistically significant variations. Since the analysis is based on Bayesian statistics, I call the resulting structures Bayesian Blocks. Unlike most, this method does not stipulate time bins -- instead the data themselves determine a piecewise constant representation. Therefore the analysis procedure itself does not impose a lower limit to the time scale on which variability can be detected. Locations, amplitudes, and rise and decay times of pulses within a time series can be estimated, independent of any pulse-shape model -- but only if they d...
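As a concrete illustration of the idea, the sketch below runs the refined version of this algorithm that ships in astropy (`astropy.stats.bayesian_blocks`) on simulated time-tagged events; the injected burst and the false-alarm probability `p0` are illustrative assumptions, not values from the paper.

```python
# A minimal sketch: Bayesian Blocks on simulated time-tagged photon events.
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
# Constant background over [0, 100] s with a burst injected at 40-45 s.
background = rng.uniform(0.0, 100.0, size=500)
burst = rng.uniform(40.0, 45.0, size=200)
t = np.sort(np.concatenate([background, burst]))

# fitness='events' is the case for time-tagged events (TTE); p0 is the
# false-alarm probability for adding a change point.
edges = bayesian_blocks(t, fitness='events', p0=0.01)
print("block edges:", edges)  # the data, not preset bins, set the segmentation
```

Note that no bin width is chosen anywhere: the change points fall out of the piecewise-constant model itself, which is the property the abstract emphasizes.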
Bayesian approach to color-difference models based on threshold and constant-stimuli methods.
Brusola, Fernando; Tortajada, Ignacio; Lengua, Ismael; Jordá, Begoña; Peris, Guillermo
2015-06-15
An alternative approach based on statistical Bayesian inference is presented to deal with the development of color-difference models and the precision of parameter estimation. The approach was applied to simulated data and real data, the latter published by selected authors involved with the development of color-difference formulae using traditional methods. Our results show very good agreement between the Bayesian and classical approaches. Among other benefits, our proposed methodology allows one to determine the marginal posterior distribution of each random individual parameter of the color-difference model. In this manner, it is possible to analyze the effect of individual parameters on the statistical significance calculation of a color-difference equation. PMID:26193510
A New Method for E-Government Procurement Using Collaborative Filtering and Bayesian Approach
Directory of Open Access Journals (Sweden)
Shuai Zhang
2013-01-01
Full Text Available Nowadays, as Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face the challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations, such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain and static values but can easily be represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.
A new method for E-government procurement using collaborative filtering and Bayesian approach.
Zhang, Shuai; Xi, Chengyu; Wang, Yan; Zhang, Wenyu; Chen, Yanhong
2013-01-01
Nowadays, as Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face the challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations, such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain and static values but can easily be represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach. PMID:24385869
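To make the fuzzy-similarity step concrete, here is a minimal sketch of one common trapezoidal-fuzzy-number similarity (mean vertex distance); the formula and the attribute values are assumptions for illustration, not necessarily the measure used in the paper.

```python
# Trapezoidal fuzzy numbers and a simple similarity usable by item-based CF.
from dataclasses import dataclass

@dataclass
class TrapezoidalFuzzy:
    a: float  # left foot
    b: float  # left shoulder
    c: float  # right shoulder
    d: float  # right foot (a <= b <= c <= d)

def similarity(x: TrapezoidalFuzzy, y: TrapezoidalFuzzy) -> float:
    """Similarity in [0, 1] for vertices in [0, 1]: 1 minus mean vertex distance."""
    dist = (abs(x.a - y.a) + abs(x.b - y.b) + abs(x.c - y.c) + abs(x.d - y.d)) / 4.0
    return 1.0 - dist

# Fuzzy scores of one attribute (e.g. "delivery reliability") for two services:
s1 = TrapezoidalFuzzy(0.6, 0.7, 0.8, 0.9)
s2 = TrapezoidalFuzzy(0.5, 0.6, 0.7, 0.8)
print(similarity(s1, s2))  # would feed the item-item similarity matrix
```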
International Nuclear Information System (INIS)
Estimating uncertainties on doses from bioassay data is of interest in epidemiology studies that estimate cancer risk from occupational exposures to radionuclides. Bayesian methods provide a logical framework to calculate these uncertainties. However, occupational exposures often consist of many intakes, and this can make the Bayesian calculation computationally intractable. This paper describes a novel strategy for increasing the computational speed of the calculation by simplifying the intake pattern to a single composite intake, termed the complex intake regime (CIR). In order to assess whether this approximation is accurate and fast enough for practical purposes, the method is implemented by the Weighted Likelihood Monte Carlo Sampling (WeLMoS) method and evaluated by comparing its performance with a Markov Chain Monte Carlo (MCMC) method. The MCMC method gives the full solution (all intakes are independent), but is very computationally intensive to apply routinely. Posterior distributions of model parameter values, intakes and doses are calculated for a representative sample of plutonium workers from the United Kingdom Atomic Energy cohort using the WeLMoS method with the CIR and the MCMC method. The distributions are in good agreement: posterior means and Q0.025 and Q0.975 quantiles are typically within 20%. Furthermore, the WeLMoS method using the CIR converges quickly: a typical case history takes around 10-20 min on a fast workstation, whereas the MCMC method took around 12 hours. The advantages and disadvantages of the method are discussed. (authors)
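A minimal sketch of the weighted-likelihood sampling idea, assuming a toy biokinetic model and a lognormal measurement error; all numbers are illustrative, not cohort values:

```python
# Weighted-likelihood (importance) sampling: draw from the prior, weight by
# the likelihood of the bioassay data, read posterior summaries off the sample.
import numpy as np

rng = np.random.default_rng(1)
data = np.array([1.2, 0.8, 1.1])        # measured bioassay quantities (invented)
sigma_g = 1.5                           # geometric std. dev. of the assay error
coeffs = np.array([0.10, 0.07, 0.05])   # toy biokinetic retention fractions

intakes = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)  # prior draws
pred = intakes[:, None] * coeffs[None, :]   # predicted bioassay values, (N, 3)

# Lognormal measurement likelihood, accumulated over the three measurements.
log_w = -0.5 * np.sum(((np.log(data) - np.log(pred)) / np.log(sigma_g)) ** 2, axis=1)
w = np.exp(log_w - log_w.max())
w /= w.sum()

order = np.argsort(intakes)
cdf = np.cumsum(w[order])
post_mean = np.sum(w * intakes)
q025, q975 = np.interp([0.025, 0.975], cdf, intakes[order])
print(post_mean, q025, q975)            # posterior mean and quantiles
```

The CIR simplification in the paper amounts to collapsing many intake parameters into one composite intake before this weighting step, which is what makes the calculation tractable.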
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central problem is the solution of an inverse problem, especially estimation of the unknown model parameters needed to model the underlying dynamics of a physical system precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to gain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the model parameters best fitted to the observable data should be found.
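For orientation, a minimal ABC-rejection sketch for source-term estimation; the toy dispersion model, priors and tolerance are assumptions, and the sequential variant would shrink the tolerance over successive generations of reweighted particles rather than using a single pass as here:

```python
# ABC rejection for a contaminant source (location x0, y0 and release rate q).
import numpy as np

rng = np.random.default_rng(2)

def dispersion_model(x0, y0, q, sensors):
    # Toy forward model: concentration decays with squared distance from source.
    d2 = (sensors[:, 0] - x0) ** 2 + (sensors[:, 1] - y0) ** 2
    return q / (1.0 + d2)

sensors = rng.uniform(0, 10, size=(20, 2))
true_obs = dispersion_model(3.0, 7.0, 50.0, sensors)   # stands in for field data

accepted = []
eps = 5.0                                # tolerance; S-ABC shrinks this stepwise
for _ in range(50_000):
    x0, y0 = rng.uniform(0, 10, size=2)  # prior on source location
    q = rng.uniform(0, 100)              # prior on release rate
    sim = dispersion_model(x0, y0, q, sensors)
    if np.linalg.norm(sim - true_obs) < eps:
        accepted.append((x0, y0, q))

post = np.array(accepted)
print(post.mean(axis=0))                 # approximate posterior mean (x0, y0, q)
```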
Directory of Open Access Journals (Sweden)
Vrushali M. Kulkarni
2015-06-01
Full Text Available This work reports a novel approach where three phase partitioning (TPP) was combined with microwave irradiation for the extraction of mangiferin from leaves of Mangifera indica. Soxhlet extraction was used as the reference method, which yielded 57 mg/g in 5 h. Under optimal conditions such as microwave irradiation time 5 min, ammonium sulphate concentration 40% w/v, power 272 W, solute to solvent ratio 1:20, slurry to t-butanol ratio 1:1, soaking time 5 min and duty cycle 50%, the mangiferin yield obtained was 54 mg/g by microwave-assisted three phase partitioning extraction (MTPP). Thus, the extraction method developed resulted in a higher extraction yield in a shorter time, making it an interesting alternative prior to downstream processing.
Arratia, Richard; DeSalvo, Stephen
2011-01-01
We propose a new method, probabilistic divide-and-conquer, for improving the success probability in rejection sampling. For the example of integer partitions, there is an ideal recursive scheme which improves the rejection cost from asymptotically order $n^{3/4}$ to a constant. We show other examples for which a non-recursive, one-time application of probabilistic divide-and-conquer removes a substantial fraction of the rejection sampling cost. We also present a variation of probabilistic d...
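For context, the baseline being improved looks like the following sketch (Fristedt-style rejection sampling of integer partitions, with independent geometric part multiplicities); the acceptance step below is the order-$n^{3/4}$ cost that probabilistic divide-and-conquer removes by splitting the target distribution into conditioned pieces:

```python
# Rejection sampler for a uniformly random partition of n (baseline method).
import math
import random

def random_partition(n, max_tries=2_000_000):
    x = math.exp(-math.pi / math.sqrt(6 * n))   # tilt making E[sum] close to n
    for _ in range(max_tries):
        mult, total = {}, 0
        for i in range(1, n + 1):
            # Z_i ~ Geometric: P(Z_i = k) = (1 - x**i) * x**(i*k)
            z = 0
            while random.random() < x ** i:
                z += 1
            if z:
                mult[i] = z
                total += i * z
            if total > n:
                break
        if total == n:                          # the costly rejection step
            return mult                         # {part size: multiplicity}
    raise RuntimeError("no sample accepted")

random.seed(7)
print(random_partition(30))
```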
Energy Technology Data Exchange (ETDEWEB)
Nagesh, Jayashree; Brumer, Paul [Chemical Physics Theory Group, Department of Chemistry, University of Toronto, Toronto, Ontario M5S 3H6 (Canada); Izmaylov, Artur F. [Chemical Physics Theory Group, Department of Chemistry, University of Toronto, Toronto, Ontario M5S 3H6 (Canada); Department of Physical and Environmental Sciences, University of Toronto, Scarborough, Toronto, Ontario M1C 1A4 (Canada)
2015-02-28
The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning, we reformulate the method in a computationally efficient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)methyl)anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH{sub 2}-naphthyl groups as subsystems, and examine their electronic energies and populations for several excited states using the configuration interaction singles method. The implemented approach shows a wide variety of different behaviors amongst the excited electronic states.
Bayesian inference for data assimilation using Least-Squares Finite Element methods
International Nuclear Information System (INIS)
It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow. The approach was shown to be effective without regularization terms, and can handle substantial noise in the experimental data without filtering. Of great practical importance is that - unlike other data assimilation techniques - it is not significantly more expensive than a single physical simulation. However the method as presented so far in the literature is not set in the context of an inverse problem framework, so that for example the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise, and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code does not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. Also, the Bayesian interpretation better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.
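In the spirit of that interpretation, a minimal sketch of the MAP reading (notation assumed here, not taken from the paper: R(u) the discretized governing-equation residual, H the observation operator, d the data, λ and σ scale parameters):

```latex
% Sketch only: the LS-FEM functional read as a negative log-posterior.
\[
  J(u) \;=\; \underbrace{\frac{1}{\lambda^{2}}\,\lVert R(u)\rVert^{2}}_{\text{prior: norm of the governing equations}}
  \;+\; \underbrace{\frac{1}{\sigma^{2}}\,\lVert H u - d\rVert^{2}}_{\text{Gaussian observational noise}},
  \qquad
  u_{\mathrm{MAP}} \;=\; \arg\min_{u} J(u).
\]
```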
The method of evaluating quantum partition function for the Hubbard model
International Nuclear Information System (INIS)
A method of evaluating the quantum partition function (QPF) in some four-fermion models is proposed. The calculations are carried out by the path integral method. The integral is evaluated by introducing additional fields (called the Hubbard-Stratonovich transformation in some models), integrating over the fermionic variables, and considering the finite-dimensional approximation of the remaining integral over bosonic fields in the infinite limit. The result can be represented as a sum of functional derivatives, with respect to an arbitrary bosonic field, of the quantum partition function of the free fermionic theory in an external bosonic field. This expression can be treated in a mean field approximation in closed form (the determinants corresponding to the arbitrary external field are substituted by their mean values corresponding to the mean value of the external fields). An integral representation of the quantum partition function is thus obtained. The approximation for the QPF of the free theory is considered, and the corresponding answer for the QPF is studied. A convenient perturbation expansion for ln Z is developed. (author). 6 refs, 1 fig
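A minimal sketch of the auxiliary-field step the abstract refers to, written for a generic quartic fermion interaction; the notation is generic and normalization constants are suppressed, so this is an illustration rather than the paper's exact construction:

```latex
% Hubbard-Stratonovich: trade a quartic fermion term for a Gaussian integral
% over an auxiliary bosonic field sigma (normalization suppressed); fermions
% then appear quadratically and integrate out to a determinant.
\[
  e^{\,g\,(\bar\psi\psi)^{2}}
  \;\propto\;
  \int \mathcal{D}\sigma\,
  \exp\!\Big({-\tfrac{\sigma^{2}}{4g}} + \sigma\,\bar\psi\psi\Big),
  \qquad
  Z \;\propto\; \int \mathcal{D}\sigma\; e^{-\sigma^{2}/4g}\,
  \det\!\big(\gamma^{\mu}\partial_{\mu} + \sigma\big).
\]
```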
Comparison of Two Partitioning Methods in a Fuzzy Time Series Model for Composite Index Forecasting
Directory of Open Access Journals (Sweden)
Lazim Abdullah,
2011-04-01
Full Text Available The study of fuzzy time series has increasingly attracted much attention due to its salient capability of tackling vague and incomplete data. A variety of forecasting models have been devoted to improving forecasting accuracy. Recently, fuzzy time series based on the Fibonacci sequence has been proposed as a new fuzzy time series model which incorporates the concept of the Fibonacci sequence, the framework of the basic fuzzy time series model, and the weighted method. However, the issue of interval lengths has not been investigated by this highly acclaimed model, despite it being established that the length of intervals can affect forecasting results. Therefore, the purpose of this paper is to propose two methods of defining interval lengths for the fuzzy time series model based on the Fibonacci sequence and to compare their performances. Frequency density-based partitioning and randomly chosen interval lengths were tested in the fuzzy time series model based on the Fibonacci sequence using stock index data, and their performances were compared. Two years of weekly Kuala Lumpur Composite Index data were employed as the experimental data set. The results show that frequency density-based partitioning outperforms randomly chosen interval lengths. This result reaffirms the importance of defining appropriate interval lengths for fuzzy time series forecasting performance.
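A minimal sketch of what frequency density-based partitioning amounts to: interval edges placed so each interval holds a roughly equal share of the observations (equal-frequency cut points). The index values below are invented for illustration and the exact procedure in the paper may differ:

```python
# Equal-frequency partitioning of the universe of discourse for fuzzification.
import numpy as np

def frequency_density_partition(series, n_intervals):
    """Return interval edges holding approximately equal observation counts."""
    qs = np.linspace(0.0, 1.0, n_intervals + 1)
    return np.quantile(series, qs)

weekly_index = np.array([890, 902, 915, 899, 930, 960, 955, 980, 1002, 995,
                         1010, 1025, 1030, 1018, 1040])  # illustrative values
edges = frequency_density_partition(weekly_index, n_intervals=5)
labels = np.digitize(weekly_index, edges[1:-1])  # interval index per week
print(edges)
print(labels)
```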
Reginatto, Marcel; Zimbal, Andreas
2008-02-01
In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements.
The study of partition and solidification with Super High Temperature Method
International Nuclear Information System (INIS)
In the study of fission product (FP) management, cold experiments on Cs partition, noble metal recovery and solidification of residual FPs were carried out with the Super High Temperature Method. FPs are melted or sintered without many additives in this method. Cs is separated by vaporization at 1000degC. Noble metals are reduced to metal at 1800degC and recovered. Other FPs form very small ceramics containing alkali earth elements, zirconium (Zr) and actinides (U, Pu, TRU). This process does not produce new types of waste such as extraction solvent wastes. The rationalization of FP management may be achieved by this method. (author)
A Bayesian calibration model for combining different pre-processing methods in Affymetrix chips
Directory of Open Access Journals (Sweden)
Richardson Sylvia
2008-12-01
Full Text Available Abstract Background In gene expression studies a key role is played by the so called "pre-processing", a series of steps designed to extract the signal and account for the sources of variability due to the technology used rather than to biological differences between the RNA samples. At the moment there is no commonly agreed gold standard pre-processing method and each researcher has the responsibility to choose one method, incurring the risk of false positive and false negative features arising from the particular method chosen. Results We propose a Bayesian calibration model that makes use of the information provided by several pre-processing methods and we show that this model gives a better assessment of the 'true' unknown differential expression between two conditions. We demonstrate how to estimate the posterior distribution of the differential expression values of interest from the combined information. Conclusion On simulated data and on the spike-in Latin Square dataset from Affymetrix the Bayesian calibration model proves to have more power than each pre-processing method. Its biological interest is demonstrated through an experimental example on publicly available data.
An objective method for partitioning the entire flood season into multiple sub-seasons
Chen, Lu; Singh, Vijay P.; Guo, Shenglian; Zhou, Jianzhong; Zhang, Junhong; Liu, Pan
2015-09-01
Information on flood seasonality is required in many practical applications, such as seasonal frequency analysis and reservoir operation. Several statistical methods for identifying flood seasonality have been widely used, such as directional method (DS) and relative frequency (RF) method. However, using these methods, flood seasons are identified subjectively by visually assessing the temporal distribution of flood occurrences. In this study, a new method is proposed to identify flood seasonality and partition the entire flood season into multiple sub-seasons objectively. A statistical experiment was carried out to evaluate the performance of the proposed method. Results demonstrated that the proposed method performed satisfactorily. Then the proposed approach was applied to the Geheyan and Baishan Reservoirs, China, having different flood regimes. It is shown that the proposed method performs extremely well for the observed data, and is more objective than the traditional methods.
International Nuclear Information System (INIS)
We present a hierarchical Bayesian method for estimating the density and size distribution of subclad-flaws in French Pressurized Water Reactor (PWR) vessels. This model takes into account in-service inspection (ISI) data, a flaw size-dependent probability of detection (different functions are considered) with a threshold of detection, and a flaw sizing error distribution (different distributions are considered). The resulting model is identified through a Markov Chain Monte Carlo (MCMC) algorithm. The article includes discussion for choosing the prior distribution parameters and an illustrative application is presented highlighting the model's ability to provide good parameter estimates even when a small number of flaws are observed
International Nuclear Information System (INIS)
Single well tracer test (SWTT) has been widely used and accepted as a standard method for residual oil saturation (SOR) measurement in the field. The test involves injecting partitioning tracers into the reservoir, producing them back, and matching their profiles using a suitable simulation program. Most simulation programs, first developed for sandstone reservoirs using a single-porosity model, cannot be applied to highly heterogeneous reservoirs such as fractured basement and carbonate reservoirs. Therefore, a simulation code with a double-porosity model is needed to simulate tracer flow in our fractured basement reservoirs. In this project, a finite-difference simulation code has been developed following the Tang mathematical model to simulate the partitioning tracers in a double-porosity medium. The code was matched with several field tracer data sets and compared with results of the University of Texas chemical simulator, showing an acceptable agreement between our program and the well-known UTChem simulator. Besides, several experiments were conducted to measure residual oil saturation in a 1D column and a 2D sandpad model. Results of the experiments show that the partitioning tracers can measure residual oil saturation in glass bead models with relatively high accuracy when the flow velocity of the tracer is sufficiently low. (author)
The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method
DEFF Research Database (Denmark)
Rezanova, Natalia Jurjevna; Ryan, David
The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Using data from the train driver schedule of the Danish passenger railway operator DSB S-tog A/S, a solution method to the Train Driver Recovery Problem (TDRP) is developed. The TDRP is formulated as a set partitioning problem. The LP relaxation of the set partitioning formulation of the TDRP possesses strong integer properties. The proposed model is therefore solved via the LP relaxation and Branch & Price. Starting with a small set of drivers and train tasks assigned to the drivers within a certain time period, the LP relaxation of the set partitioning model is solved with column generation. If a feasible solution is not found, further drivers are gradually added to the problem or the optimization time period is increased. Fractions are resolved with a constraint branching strategy using the depth-first search of the Branch & Bound tree. Preliminary results are encouraging, showing that nearly all tested real-life instances produce integer solutions to the LP relaxation and solutions are found within a few seconds.
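A minimal sketch of the LP relaxation at the core of this formulation, with invented duties and costs (columns covering train tasks) solved via scipy's `linprog`; in the real method the columns would be priced in by column generation rather than enumerated up front:

```python
# Set partitioning LP relaxation: each train task covered by exactly one duty.
import numpy as np
from scipy.optimize import linprog

tasks = ["T1", "T2", "T3", "T4"]
# Each column is a candidate driver duty: (tasks covered, cost) - all invented.
duties = {
    "d1": (["T1", "T2"], 4.0),
    "d2": (["T3", "T4"], 5.0),
    "d3": (["T1", "T3"], 4.5),
    "d4": (["T2", "T4"], 4.5),
    "d5": (["T1", "T2", "T3", "T4"], 10.0),
}
A = np.array([[1 if t in cover else 0 for cover, _ in duties.values()]
              for t in tasks])
c = np.array([cost for _, cost in duties.values()])

# Partitioning constraints Ax = 1, with x in {0,1} relaxed to 0 <= x <= 1.
res = linprog(c, A_eq=A, b_eq=np.ones(len(tasks)), bounds=(0, 1), method="highs")
print(res.x, res.fun)   # often already integral, as the abstract reports
```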
Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin
2016-05-01
The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation-based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method achieves a substantial performance improvement over existing algorithms, including the conventional SBL method.
A Study of New Method for Weapon System Effectiveness Evaluation Based on Bayesian Network
Institute of Scientific and Technical Information of China (English)
YAN Dai-wei; GU Liang-xian; PAN Lei
2008-01-01
As weapon system effectiveness is affected by many factors, its evaluation is essentially a multi-criterion decision making problem owing to its complexity. The evaluation model of the effectiveness is established on the basis of the metrics architecture of the effectiveness. The Bayesian network, which is used to evaluate the effectiveness, is established based on the metrics architecture and the evaluation models. To get the weights of the metrics by the Bayesian network, subjective initial values of the weights are given, a gradient ascent algorithm is adopted, and reasonable values of the weights are achieved. The effectiveness of every weapon system project is then obtained. The weapon system whose effectiveness is the relative maximum is the optimal system. The research results show that this method solves the problem of the AHP method, whose evaluation results are not always consistent with practical results, and overcomes the shortcomings of neural networks in multilayer, multi-criterion decision making. The method offers a new approach for evaluating the effectiveness.
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal, and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, the good SPs that have high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
An Advanced Bayesian Method for Short-Term Probabilistic Forecasting of the Generation of Wind Power
Directory of Open Access Journals (Sweden)
Antonio Bracale
2015-09-01
Full Text Available Currently, among renewable distributed generation systems, wind generators are receiving a great deal of interest due to the great economic, technological, and environmental incentives they involve. However, the uncertainties due to the intermittent nature of wind energy make it difficult to operate electrical power systems optimally and make decisions that satisfy the needs of all the stakeholders of the electricity energy market. Thus, there is increasing interest in determining how to forecast wind power production accurately. Most of the methods that have been published in the relevant literature provide deterministic forecasts, even though great interest has been focused recently on probabilistic forecast methods. In this paper, an advanced probabilistic method is proposed for short-term forecasting of wind power production. A mixture of two Weibull distributions was used as a probability function to model the uncertainties associated with wind speed. Then, a Bayesian inference approach with a particularly effective autoregressive integrated moving-average model was used to determine the parameters of the mixture Weibull distribution. Numerical applications are also presented to provide evidence of the forecasting performance of the Bayesian-based approach.
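A minimal sketch of the two-component Weibull mixture as a wind-speed model, propagated through a toy turbine power curve to give an expected power; all parameters below are illustrative, whereas in the paper they come from the Bayesian/ARIMA inference step:

```python
# Mixture-of-two-Weibulls wind speed model and a probabilistic power forecast.
import numpy as np

def weibull_mix_pdf(v, w, k1, lam1, k2, lam2):
    """PDF of a mixture of two Weibull distributions, weight w on component 1."""
    def wb(v, k, lam):
        return (k / lam) * (v / lam) ** (k - 1) * np.exp(-((v / lam) ** k))
    return w * wb(v, k1, lam1) + (1 - w) * wb(v, k2, lam2)

v = np.linspace(0.01, 30, 400)                       # wind speed grid (m/s)
pdf = weibull_mix_pdf(v, w=0.6, k1=2.2, lam1=7.0, k2=3.0, lam2=12.0)

# Toy power curve: cut-in 3 m/s, rated 12 m/s at 2 MW, cut-out 25 m/s.
power = np.clip((v - 3) / (12 - 3), 0, 1) * 2.0 * (v < 25)
expected_power = np.trapz(power * pdf, v)
print(f"expected power: {expected_power:.2f} MW")
```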
Quantile pyramids for Bayesian nonparametrics
2009-01-01
P\\'{o}lya trees fix partitions and use random probabilities in order to construct random probability measures. With quantile pyramids we instead fix probabilities and use random partitions. For nonparametric Bayesian inference we use a prior which supports piecewise linear quantile functions, based on the need to work with a finite set of partitions, yet we show that the limiting version of the prior exists. We also discuss and investigate an alternative model based on the so-called substitut...
Cache-efficient block-matrix solvers for the Partition of Unity Method
Gründer, Patrick
2012-01-01
The Partition of Unity Method is used in meshfree discretization schemes for solving elliptic partial differential equations. The linear systems arising from the discretization have a block structure that can be solved asymptotically optimally with the Multilevel Partition of Unity Method. Preconditioned Krylov subspace methods are an alternative approach for solving these systems. In this work, an approach based on the ILU decomp...
Balancing a U-Shaped Assembly Line by Applying Nested Partitions Method
Energy Technology Data Exchange (ETDEWEB)
Nikhil V. Bhagwat
2005-12-17
In this study, we applied the Nested Partitions method to a U-line balancing problem and conducted experiments to evaluate the application. From the results, it is quite evident that the Nested Partitions method provided near optimal solutions (optimal in some cases). Besides, the execution time is quite short as compared to the Branch and Bound algorithm. However, for larger data sets, the algorithm took significantly longer times for execution. One of the reasons could be the way in which the random samples are generated. In the present study, a random sample is a solution in itself, which requires assignment of tasks to various stations. The time taken to assign tasks to stations is directly proportional to the number of tasks. Thus, if the number of tasks increases, the time taken to generate random samples for the different regions also increases. The performance index for the Nested Partitions method in the present study was the number of stations in the random solutions (samples) generated. The total idle time for the samples can be used as another performance index. The ULINO method is known to have used a combination of bounds to come up with good solutions. This approach of combining different performance indices can be used to evaluate the random samples and obtain even better solutions. Here, we used deterministic time values for the tasks. In industries where the majority of tasks are performed manually, the stochastic version of the problem could be of vital importance. Experimenting with different objective functions (the number of stations was used in this study) could be of significance to industries wherein the cost associated with the creation of a new station is not the same. For such industries, the results obtained by using the present approach will not be of much value. Labor costs, task incompletion costs or a combination of those can be effectively used as alternate objective functions.
Bayesian risk-based decision method for model validation under uncertainty
International Nuclear Information System (INIS)
This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment
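A minimal sketch of the decision rule described: compute the likelihood ratio of the observation under "model valid" versus "model invalid" and compare it with a threshold assembled from decision costs and priors. The distributions, costs and priors below are assumptions for illustration:

```python
# Likelihood-ratio model validation with a Bayes-risk threshold (sketch).
from scipy.stats import norm

prediction, sigma_model = 100.0, 5.0   # model prediction, spread if model valid
sigma_alt = 15.0                       # broader spread if model invalid
observation = 104.0                    # one system response measurement

lr = norm.pdf(observation, prediction, sigma_model) / \
     norm.pdf(observation, prediction, sigma_alt)

# Bayes risk minimization: accept H0 (model valid) when
# p(x|H0)/p(x|H1) > (C01 - C11) * P1 / ((C10 - C00) * P0).
C00, C01, C10, C11 = 0.0, 10.0, 2.0, 0.0   # costs of (decision, truth) pairs
p0, p1 = 0.5, 0.5                          # prior probabilities of H0, H1
threshold = ((C01 - C11) * p1) / ((C10 - C00) * p0)

print("accept model" if lr > threshold else "reject / acquire more data")
```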
International Nuclear Information System (INIS)
In the past decades, engineering systems have become more and more complex, and generally work in different operational modes. Since an incipient fault can lead to dangerous accidents, it is crucial to develop strategies for online operational safety assessment. However, the existing online assessment methods for multi-mode engineering systems commonly assume that samples are independent, which does not hold in practical cases. This paper proposes a probabilistic framework for online operational safety assessment of multi-mode engineering systems with sample dependency. To begin with, a Gaussian mixture model (GMM) is used to characterize multiple operating modes. Then, based on the definition of the safety index (SI), the SI for one single mode is calculated. At last, the Bayesian method is presented to calculate the posterior probabilities belonging to each operating mode with sample dependency. The proposed assessment strategy is applied in two examples: one is an aircraft gas turbine, the other is an industrial dryer. Both examples illustrate the efficiency of the proposed method.
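A minimal sketch of the mode-probability machinery, with a "sticky" transition matrix standing in for the paper's sample-dependency treatment (our assumption for illustration) and a 1-D monitoring signal:

```python
# GMM operating modes + recursive Bayesian mode probabilities across samples.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Training data from two operating modes (illustrative 1-D signal).
X = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1.5, 300)]).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

trans = np.array([[0.95, 0.05], [0.05, 0.95]])   # sticky transition (assumption)
post = np.full(2, 0.5)                           # initial mode probabilities
for x in [0.2, 0.5, 4.8, 6.1, 5.9]:              # online samples
    like = np.array([norm.pdf(x, gmm.means_[k, 0],
                              np.sqrt(gmm.covariances_[k, 0, 0]))
                     for k in range(2)])
    post = like * (trans.T @ post)               # predict, then Bayes-update
    post /= post.sum()
    print(np.round(post, 3))   # the SI would be weighted by these probabilities
```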
Some bounds on quantum partition functions by path-integral methods
International Nuclear Information System (INIS)
Equilibrium statistical mechanics requires the computation of the partition function. The density matrix, and hence the quantum partition function, may be expressed as an integral with an integrand which can be given explicitly, namely as a (Wiener) path integral. Techniques especially designed for path integrals provide inequalities for density matrices, partition functions and spectral densities. Some of these inequalities related to density matrices and partition functions are reviewed in this paper. 39 refs
A Bayesian Method For Finding Galaxies That Cause Quasar Absorption Lines
Shoemaker, Emileigh Suzanne; Laubner, David Andrew; Scott, Jennifer E.
2016-01-01
We present a study of candidate absorber-galaxy pairs for 39 low redshift quasar sightlines (z > 0.06), with galaxies drawn from the Sloan Digital Sky Survey (SDSS). We downloaded the COS linelists for these quasar spectra from MAST and queried the SDSS DR12 database for photometric data on all galaxies within 1 Mpc of each of these quasar lines of sight. We calculated photometric redshifts for all the SDSS galaxies using the Bayesian Photometric Redshift code. We used all these absorber and galaxy data as input into an absorber-galaxy matching code which also employs a Bayesian scheme, along with known statistics of the intergalactic medium and circumgalactic media of galaxies, for finding the most probable galaxy match for each absorber. We compare our candidate absorber-galaxy matches to existing studies in the literature and explore trends in the absorber and galaxy properties among the matched and non-matched populations. This method of matching absorbers and galaxies can be used to find targets for follow-up spectroscopic studies.
Afreen, Nazia; Naqvi, Irshad H; Broor, Shobha; Ahmed, Anwar; Kazim, Syed Naqui; Dohare, Ravins; Kumar, Manoj; Parveen, Shama
2016-03-01
Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported predominance of dengue virus serotype 2 during 2011-2014 in Delhi. In the present study, we report molecular characterization and evolutionary analysis of dengue serotype 2 viruses which were detected in 2011-2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages; Lineage I, II and III. Lineage III replaced lineage I during dengue fever outbreak of 2013. Further, a novel mutation Thr404Ile was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. Nucleotide substitution rate and time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in effective population size of Indian DENV-2 viruses was investigated through Bayesian skyline plot. The study will be a vital road map for investigation of epidemiology and evolutionary pattern of dengue viruses in India. PMID:26977703
Directory of Open Access Journals (Sweden)
Nazia Afreen
2016-03-01
Full Text Available Dengue fever is the most important arboviral disease in the tropical and sub-tropical countries of the world. Delhi, the metropolitan capital state of India, has reported many dengue outbreaks, with the last outbreak occurring in 2013. We have recently reported predominance of dengue virus serotype 2 during 2011-2014 in Delhi. In the present study, we report molecular characterization and evolutionary analysis of dengue serotype 2 viruses which were detected in 2011-2014 in Delhi. Envelope genes of 42 DENV-2 strains were sequenced in the study. All DENV-2 strains grouped within the Cosmopolitan genotype and further clustered into three lineages; Lineage I, II and III. Lineage III replaced lineage I during dengue fever outbreak of 2013. Further, a novel mutation Thr404Ile was detected in the stem region of the envelope protein of a single DENV-2 strain in 2014. Nucleotide substitution rate and time to the most recent common ancestor were determined by molecular clock analysis using Bayesian methods. A change in effective population size of Indian DENV-2 viruses was investigated through Bayesian skyline plot. The study will be a vital road map for investigation of epidemiology and evolutionary pattern of dengue viruses in India.
Fu, Zhiqiang; Chen, Jingwen; Li, Xuehua; Wang, Ya'nan; Yu, Haiying
2016-04-01
The octanol-air partition coefficient (KOA) is needed for assessing the multimedia transport and bioaccumulability of organic chemicals in the environment. As experimental determination of KOA for various chemicals is costly and laborious, development of KOA estimation methods is necessary. We investigated three methods for KOA prediction: conventional quantitative structure-activity relationship (QSAR) models based on molecular structural descriptors, group contribution models based on atom-centered fragments, and a novel model that predicts KOA via the solvation free energy from the air to the octanol phase (ΔGO(0)), with a collection of 939 experimental KOA values for 379 compounds at different temperatures (263.15-323.15 K) as validation or training sets. The developed models were evaluated with the OECD guidelines on QSAR model validation and applicability domain (AD) description. Results showed that although the ΔGO(0) model is theoretically sound and has a broad AD, the prediction accuracy of the model is the poorest. The QSAR models perform better than the group contribution models, and have similar predictability and accuracy to the conventional method that estimates KOA from the octanol-water partition coefficient and Henry's law constant. One QSAR model, which can predict KOA at different temperatures, was recommended for application in assessing the long-range transport potential of chemicals. PMID:26802270
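The thermodynamic link exploited by the ΔGO(0) model is the standard relation K_OA = exp(-ΔG/(RT)); a minimal sketch (the example ΔG value is invented):

```python
# log10 K_OA from the air-to-octanol solvation free energy.
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def log10_koa(delta_g_kj_mol: float, temp_k: float = 298.15) -> float:
    """log10 K_OA from the solvation free energy of transfer (kJ/mol)."""
    return -delta_g_kj_mol * 1000.0 / (R * temp_k * math.log(10))

print(log10_koa(-45.0))  # illustrative ΔG; more negative ΔG -> larger K_OA
```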
Partition method for impact dynamics of flexible multibody systems based on contact constraint
Institute of Scientific and Technical Information of China (English)
段玥晨; 章定国; 洪嘉振
2013-01-01
The impact dynamics of a flexible multibody system is investigated. By using a partition method, the system is divided into two parts, the local impact region and the region away from the impact. The two parts are connected by specific boundary conditions, and the system after partition is equivalent to the original system. According to the rigid-flexible coupling dynamic theory of multibody systems, the system's rigid-flexible coupling dynamic equations without impact are derived. A local impulse method for establishing the initial impact conditions is proposed. It satisfies the compatibility conditions for contact constraints and the actual physical situation of the impact process of flexible bodies. Based on the contact constraint method, the system's impact dynamic equations are derived in a differential-algebraic form. The contact/separation criterion and the algorithm are given. An impact dynamic simulation is given. The results show that the system's dynamic behaviors, including the energy, the deformations, the displacements, and the impact force during the impact process, change dramatically. The impact has great effects on the global dynamics of the system during and after the impact.
An urban flood risk assessment method using the Bayesian Network approach
DEFF Research Database (Denmark)
Åström, Helena Lisa Alexandra
Flooding is one of the most damaging natural hazards to human societies. Recent decades have shown that flooding constitutes major threats worldwide, and due to anticipated climate change the occurrence of damaging flood events is expected to increase. Urban areas are especially vulnerable to flooding. Flood risk management (FRM) comprises flood risk scoping, flood risk assessment (FRA), and adaptation implementation, and involves an ongoing process of assessment, reassessment, and response. This thesis mainly focuses on the FRA phase of FRM. FRA includes hazard analysis and impact assessment (combined called a risk analysis) and adaptation assessment. A Bayesian Network (BN) approach is developed, and the method is exemplified in an urban catchment. BNs have become an increasingly popular method for describing complex systems and aiding decision-making under uncertainty. In environmental management, BNs have mainly been utilized in ecological assessments...
Hill, T; Minier, V; Burton, M G; Cunningham, M R
2008-01-01
Concatenating data from the millimetre regime to the infrared, we have performed spectral energy distribution modelling for 227 of the 405 millimetre continuum sources of Hill et al. (2005) which are thought to contain young massive stars in the earliest stages of their formation. Three main parameters are extracted from the fits: temperature, mass and luminosity. The method employed was Bayesian inference, which allows a statistically probable range of suitable values for each parameter to be drawn for each individual protostellar candidate. This is the first application of this method to massive star formation. The cumulative distribution plots of the SED modelled parameters in this work indicate that collectively, the sources without methanol maser and/or radio continuum associations (MM-only cores) display similar characteristics to those of high mass star formation regions. Attributing significance to the marginal distinctions between the MM-only cores and the high-mass star formation sample we draw hypo...
Rubin, Donald B.
1981-01-01
The Bayesian bootstrap is the Bayesian analogue of the bootstrap. Instead of simulating the sampling distribution of a statistic estimating a parameter, the Bayesian bootstrap simulates the posterior distribution of the parameter; operationally and inferentially the methods are quite similar. Because both methods of drawing inferences are based on somewhat peculiar model assumptions and the resulting inferences are generally sensitive to these assumptions, neither method should be applied wit...
Development of partitioning method. Back-extraction of uranium from DIDPA solvent
International Nuclear Information System (INIS)
A partitioning method has been developed under the concepts of separation of elements in high level liquid waste generated from nuclear fuel reprocessing according to their half lives and radiological toxicity, and of disposal of them by suitable methods. In the partitioning process developed in JAERI, solvent extraction with DIDPA (di-isodecyl phosphoric acid) was adopted for actinide separation. The present paper describes the results of a study on back-extraction of hexavalent uranium from DIDPA. Most experiments were carried out to select a suitable reagent for back-extraction of U(VI) extracted from 0.5M nitric acid with DIDPA. The experimental results show that the distribution ratio of U(VI) is less than 0.1 in back-extractions with 1.5M sodium carbonate-15 vol% alcohol or 20wt% hydrazine carbonate-10 vol% alcohol. Uranium in the sodium carbonate solution was recovered by anion-exchange with strong-base resins and eluted by NH4NO3 and other reagents. The results of the present study confirm the validity of the DIDPA extraction process; U, Pu, Np, Am and Cm in HLW are extracted simultaneously with DIDPA, and they are recovered from DIDPA with various reagents: nitric acid for Am and Cm, oxalic acid for Np and Pu, and sodium carbonate or hydrazine carbonate for U. (author)
Development of partitioning method. Back-extraction of uranium from DIDPA solvent
Energy Technology Data Exchange (ETDEWEB)
Tatsugae, Ryozo; Kubota, Masumitsu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Shirahashi, Koichi
1995-03-01
A partitioning method has been developed under the concepts of separation of elements in high level liquid waste generated from nuclear fuel reprocessing according to their half lives and radiological toxicity, and of disposal of them by suitable methods. In the partitioning process developed in JAERI, solvent extraction with DIDPA (di-isodecyl phosphoric acid) was adopted for actinide separation. The present paper describes the results of a study on back-extraction of hexavalent uranium from DIDPA. Most experiments were carried out to select a suitable reagent for back-extraction of U(VI) extracted from 0.5M nitric acid with DIDPA. The experimental results show that the distribution ratio of U(VI) is less than 0.1 in back-extractions with 1.5M sodium carbonate-15 vol% alcohol or 20wt% hydrazine carbonate-10 vol% alcohol. Uranium in the sodium carbonate solution was recovered by anion-exchange with strong-base resins and eluted by NH{sub 4}NO{sub 3} and other reagents. The results of the present study confirm the validity of the DIDPA extraction process; U, Pu, Np, Am and Cm in HLW are extracted simultaneously with DIDPA, and they are recovered from DIDPA with various reagents: nitric acid for Am and Cm, oxalic acid for Np and Pu, and sodium carbonate or hydrazine carbonate for U. (author).
Zhang, Xianliang; Yan, Xiaodong
2015-11-01
A new statistical downscaling method was developed and applied to downscale monthly total precipitation at 583 stations in China. Generally, there are two steps involved in statistical downscaling: first, the predictors (large-scale variables) are selected and transformed; and second, a model between the predictors and the predictand (in this case, precipitation) is established. In the first step, a selection process for the predictor domain, called the optimum correlation method (OCM), was developed to transform the predictors. The transformed series obtained by the OCM showed much better correlation with the predictand than those obtained by the traditional transform method for the same predictor. Moreover, the method combining OCM and linear regression obtained better downscaling results than the traditional linear regression method, suggesting that the OCM could be used to improve the results of statistical downscaling. In the second step, Bayesian model averaging (BMA) was adopted as an alternative to linear regression. The method combining the OCM and BMA showed much better performance than the method combining the OCM and linear regression. Thus, BMA could be used as an alternative to linear regression in the second step of statistical downscaling. In conclusion, the downscaling method combining OCM and BMA produces more accurate results than the multiple linear regression method when used to statistically downscale large-scale variables.
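A minimal sketch of one simple way to do the BMA step, weighting candidate regressions by BIC-approximated posterior model probabilities; this is a common approximation and not necessarily the paper's exact implementation:

```python
# Bayesian model averaging over candidate linear regressions via BIC weights.
import numpy as np

rng = np.random.default_rng(4)
n = 120
X = rng.normal(size=(n, 3))                      # OCM-transformed predictors
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.7, size=n)

models = [[0], [0, 1], [0, 1, 2]]                # candidate predictor subsets
bics, preds = [], []
for cols in models:
    A = np.column_stack([np.ones(n), X[:, cols]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    bics.append(-2 * loglik + A.shape[1] * np.log(n))
    preds.append(A @ beta)

w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()                                     # posterior model weights
y_bma = sum(wi * p for wi, p in zip(w, preds))   # BMA-combined prediction
print(np.round(w, 3))
```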
Suspected pulmonary embolism and lung scan interpretation: Trial of a Bayesian reporting method
International Nuclear Information System (INIS)
The objective of this research is to determine whether a Bayesian method of lung scan (LS) reporting could influence the management of patients with suspected pulmonary embolism (PE). The study is performed by the following: (1) A descriptive study of the diagnostic process for suspected PE using the new reporting method; (2) a non-experimental evaluation of the reporting method comparing prospective patients and historical controls; and (3) a survey of physicians' reactions to the reporting innovation. Of 148 consecutive patients enrolled at the time of LS, 129 were completely evaluated; 75 patients scanned the previous year served as controls. The LS results of patients with suspected PE were reported as posttest probabilities of PE calculated from physician-provided pretest probabilities and the likelihood ratios for PE of LS interpretations. Despite the Bayesian intervention, the confirmation or exclusion of PE was often based on inconclusive evidence. PE was considered by the clinician to be ruled out in 98% of patients with posttest probabilities less than 25% and ruled in for 95% of patients with posttest probabilities greater than 75%. Prospective patients and historical controls were similar in terms of tests ordered after the LS (e.g., pulmonary angiography). Patients with intermediate or indeterminate lung scan results had the highest proportion of subsequent testing. Most physicians (80%) found the reporting innovation to be helpful, either because it confirmed clinical judgement (94 cases) or because it led to additional testing (7 cases). Despite the probabilistic guidance provided by the study, the diagnosis of PE was often neither clearly established nor excluded. While physicians appreciated the innovation and were not confused by the terminology, their clinical decision making was not clearly enhanced
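The arithmetic behind the reporting method is the standard odds form of Bayes' theorem; a minimal sketch (the likelihood ratio value is illustrative, not one used in the study):

```python
# Posttest probability from a pretest probability and a likelihood ratio.
def posttest_probability(pretest: float, likelihood_ratio: float) -> float:
    pre_odds = pretest / (1.0 - pretest)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# e.g. intermediate clinical suspicion, high-probability scan reading (LR ~ 15)
print(posttest_probability(0.30, 15.0))  # ~0.87 -> "ruled in" by study criteria
```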
Linkov, Igor; Massey, Olivia; Keisler, Jeff; Rusyn, Ivan; Hartung, Thomas
2015-01-01
"Weighing" available evidence in the process of decision-making is unavoidable, yet it is one step that routinely raises suspicions: what evidence should be used, how much does it weigh, and whose thumb may be tipping the scales? This commentary aims to evaluate the current state and future roles of various types of evidence for hazard assessment as it applies to environmental health. In its recent evaluation of the US Environmental Protection Agency's Integrated Risk Information System assessment process, the National Research Council committee singled out the term "weight of evidence" (WoE) for critique, deeming the process too vague and detractive to the practice of evaluating human health risks of chemicals. Moving the methodology away from qualitative, vague and controversial methods towards generalizable, quantitative and transparent methods for appropriately managing diverse lines of evidence is paramount for both regulatory and public acceptance of the hazard assessments. The choice of terminology notwithstanding, a number of recent Bayesian WoE-based methods, the emergence of multi criteria decision analysis for WoE applications, as well as the general principles behind the foundational concepts of WoE, show promise in how to move forward and regain trust in the data integration step of the assessments. We offer our thoughts on the current state of WoE as a whole and while we acknowledge that many WoE applications have been largely qualitative and subjective in nature, we see this as an opportunity to turn WoE towards a quantitative direction that includes Bayesian and multi criteria decision analysis. PMID:25592482
Fernández de Gatta, M M; Tamayo, M; García, M J; Amador, D; Rey, F; Gutiérrez, J R; Domínguez-Gil Hurlé, A
1989-11-01
The aim of the present study was to characterize the kinetic behavior of imipramine (IMI) and desipramine in enuretic children and to evaluate the performance of different methods for dosage prediction based on individual and/or population data. The study was carried out in 135 enuretic children (93 boys) ranging in age between 5 and 13 years undergoing treatment with IMI in variable single doses (25-75 mg/day) administered at night. Sampling time was one-half the dosage interval at steady state. The number of data available for each patient varied (1-4) and was essentially limited by clinical criteria. Pharmacokinetic calculations were performed using a simple proportional relationship (method 1) and a multiple nonlinear regression program (MULTI 2 BAYES) with two different options: the ordinary least-squares method (method 2) and the least-squares method based on the Bayesian algorithm (method 3). The results obtained point to a coefficient of variation for the level/dose ratio of the drug (58%) that is significantly lower than that of the metabolite (101.4%). The forecasting capacity of method 1 is deficient in both accuracy [mean prediction error (MPE) = -5.48 +/- 69.15] and precision (root mean squared error = 46.42 +/- 51.39). The standard deviation of the MPE (69) makes the method unacceptable from the clinical point of view. The more information that is available concerning the serum levels, the greater the accuracy and precision of methods 2 and 3. With the Bayesian method, less information on drug serum levels is needed to achieve clinically acceptable predictions. PMID:2595743
DEFF Research Database (Denmark)
Burgess, Stephen; Thompson, Simon G; Andrews, G;
2010-01-01
Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context of multiple genetic markers measured in multiple studies, based on the analysis of individual participant data. First, for a single genetic marker in one study, we show that the usual ratio of coefficients approach can be reformulated as a regression with heterogeneous error in the explanatory variable. This can be implemented using a Bayesian approach, which is next extended to include multiple genetic markers. We then propose a hierarchical model for undertaking a meta-analysis of multiple studies, in which it is not necessary that the same genetic markers are measured in each study. This provides an...
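For reference, the usual ratio-of-coefficients estimator that the paper reformulates, on simulated data with an unobserved confounder (all values invented):

```python
# Mendelian randomization ratio estimate: beta_GY / beta_GX for one marker G.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
g = rng.binomial(2, 0.3, n)              # marker genotype (0/1/2 allele count)
u = rng.normal(size=n)                   # unobserved confounder
x = 0.5 * g + u + rng.normal(size=n)     # phenotype, influenced by G and U
y = 0.8 * x + u + rng.normal(size=n)     # outcome; true causal effect is 0.8

beta_gx = np.polyfit(g, x, 1)[0]         # G -> X regression slope
beta_gy = np.polyfit(g, y, 1)[0]         # G -> Y regression slope
print(beta_gy / beta_gx)                 # ~0.8 despite the confounding by U
```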
DEFF Research Database (Denmark)
Oh, Geok Lian
This PhD study examines the use of seismic technology for the problem of detecting underground facilities, whereby a seismic source such as a sledgehammer is used to generate seismic waves through the ground, sensed by an array of seismic sensors on the ground surface, and recorded by a digital device. The concept is similar to the techniques used in exploration seismology, in which explosions (that occur at or below the surface) or vibration wave-fronts generated at the surface reflect and refract off structures at depth, so as to generate the ground profile. The inversion involves estimating the elastic material and density values of the discretized ground medium, which leads to time-consuming computations and unstable behaviour of the inversion process. In addition, the geophysics inverse problem is generally ill-posed due to a non-exact forward model that introduces errors. The Bayesian inversion method through...
Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods
Ferraioli, Luigi; Plagnol, Eric
2012-01-01
We present a parameter estimation procedure based on a Bayesian framework, applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space-based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment to ensure effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm in an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher information matrix. The algorithm proposing jumps in the eigen-space of the Fisher information matrix demonstrates a higher acceptance rate and slightly better convergence towards the equilibrium parameter distributions in the application to…
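A minimal sketch of the second variant described above: Metropolis-Hastings proposing jumps along the eigen-directions of a Fisher information matrix, scaled by the inverse square roots of the eigenvalues. The toy Gaussian target and the Fisher matrix are illustrative assumptions, not the LISA Pathfinder model:

    import numpy as np

    rng = np.random.default_rng(0)

    def log_post(theta):
        # Toy 2-D Gaussian log-posterior, standing in for the detector model.
        cov = np.array([[1.0, 0.8], [0.8, 1.0]])
        return -0.5 * theta @ np.linalg.solve(cov, theta)

    # Assumed Fisher information matrix; in practice derived from the model.
    fisher = np.array([[2.0, -1.2], [-1.2, 2.0]])
    eigval, eigvec = np.linalg.eigh(fisher)

    def mh_fisher(n_steps, theta0, scale=1.0):
        theta, lp = theta0, log_post(theta0)
        chain, accepted = [theta0], 0
        for _ in range(n_steps):
            # Jump along Fisher eigen-directions; step length ~ 1/sqrt(eigenvalue).
            z = rng.standard_normal(theta.size) / np.sqrt(eigval)
            proposal = theta + scale * (eigvec @ z)
            lp_prop = log_post(proposal)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = proposal, lp_prop
                accepted += 1
            chain.append(theta)
        return np.array(chain), accepted / n_steps

    chain, rate = mh_fisher(5000, np.zeros(2))
    print("acceptance rate:", rate)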
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing built on the representation of a radionuclide as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test then determines one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
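The decision logic in the last sentence is that of a sequential (Wald-type) likelihood ratio test. A generic sketch, with placeholder exponential interarrival-time models standing in for the physics-based radionuclide likelihoods:

    import numpy as np

    def sprt(events, logpdf_target, logpdf_other, alpha=0.01, beta=0.01):
        """Accumulate log-likelihood ratios event by event until a threshold is crossed."""
        upper = np.log((1 - beta) / alpha)   # declare: target radionuclide
        lower = np.log(beta / (1 - alpha))   # declare: not the target
        llr = 0.0
        for k, event in enumerate(events, 1):
            llr += logpdf_target(event) - logpdf_other(event)
            if llr >= upper:
                return "target", k
            if llr <= lower:
                return "not target", k
        return "undecided", len(events)

    # Placeholder models: exponential interarrival times with different rates.
    rng = np.random.default_rng(1)
    interarrivals = rng.exponential(1 / 3.0, 200)       # simulated photon events
    expo_logpdf = lambda lam: (lambda t: np.log(lam) - lam * t)
    print(sprt(interarrivals, expo_logpdf(3.0), expo_logpdf(1.0)))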
Adaptive Methods within a Sequential Bayesian Approach for Structural Health Monitoring
Huff, Daniel W.
Structural integrity is an important characteristic of performance for critical components used in applications such as aeronautics, materials, construction and transportation. When appraising the structural integrity of these components, evaluation methods must be accurate. In addition to possessing the capability to perform damage detection, the ability to monitor the level of damage over time can provide extremely useful information in assessing the operational worthiness of a structure and in determining whether the structure should be repaired or removed from service. In this work, a sequential Bayesian approach with active sensing is employed for monitoring crack growth within fatigue-loaded materials. The monitoring approach is based on predicting crack damage state dynamics and modeling crack length observations. Since the fatigue loading of a structural component can change while in service, an interacting multiple model technique is employed to estimate the probabilities of different loading modes and incorporate this information into the crack length estimation problem. For the observation model, features are obtained from regions of high signal energy in the time-frequency plane and modeled for each crack length damage condition. Although this observation model approach exhibits high classification accuracy, the resolution characteristics can change depending upon the extent of the damage. Therefore, several different transmission waveforms and receiver sensors are considered to create multiple modes for making observations of crack damage. Resolution characteristics of the different observation modes are assessed using a predicted mean squared error criterion, and observations are obtained using the predicted optimal observation modes based on these characteristics. Calculation of the predicted mean squared error metric can be computationally intensive, especially if performed in real time, so an approximation method is proposed. With this approach, the real-time…
Jiang, Ruifen; Lin, Wei; Wen, Sijia; Zhu, Fang; Luan, Tiangang; Ouyang, Gangfeng
2015-08-01
A fully automated solid-phase microextraction (SPME) depletion method was developed to study the partition coefficients of organic compounds between a complex matrix and a water sample. The SPME depletion process was conducted by pre-loading the fiber with a specific amount of organic compounds from a proposed standard gas generation vial, and then desorbing the fiber into the targeted samples. Based on the proposed method, the partition coefficients (Kmatrix) of 4 polyaromatic hydrocarbons (PAHs) between humic acid (HA)/hydroxypropyl-β-cyclodextrin (β-HPCD) and aqueous sample were determined. The results showed that the logKmatrix values of the 4 PAHs with HA and β-HPCD ranged from 3.19 to 4.08 and from 2.45 to 3.15, respectively. In addition, the logKmatrix values decreased by about 0.12-0.27 log units, depending on the PAH, for every 10°C increase in temperature. The effect of temperature on the partition coefficient followed the van't Hoff relationship, so the partition coefficient at any temperature can be predicted from the plot. Furthermore, the proposed method was applied to real biological fluid analysis. The partition coefficients of 6 PAHs between the complex matrices in fetal bovine serum and water were determined and compared to those obtained from the SPME extraction method. The results demonstrated that the proposed method can be applied to determine the sorption coefficients of hydrophobic compounds between complex matrices and water in a variety of samples. PMID:26118804
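Because the abstract reports van't Hoff behavior, log K is linear in 1/T, so a coefficient measured at two or more temperatures can be interpolated to any other temperature. A short sketch with hypothetical values consistent with the reported magnitudes (roughly 0.2 log units per 10°C):

    import numpy as np

    # Hypothetical measurements: logK of one PAH with humic acid at two temperatures.
    T = np.array([288.15, 308.15])    # kelvin (15 and 35 degrees C)
    logK = np.array([3.60, 3.20])     # ~0.2 log-unit drop per 10 degrees C

    # van't Hoff form: logK = a/T + b, linear in 1/T.
    a, b = np.polyfit(1.0 / T, logK, 1)

    def predict_logK(T_kelvin):
        return a / T_kelvin + b

    print(round(predict_logK(298.15), 3))  # interpolated logK at 25 degrees C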
Shoemaker, Christine; Espinet, Antoine; Pang, Min
2015-04-01
Models of complex environmental systems can be computationally expensive because they must describe the dynamic interactions of many components over a sizeable time period. Diagnostics of these systems can include forward simulations of calibrated models under uncertainty and analysis of alternatives for systems management. This discussion will focus on applications of new surrogate optimization and uncertainty analysis methods to environmental models that can enhance our ability to extract information and understanding. For complex models, optimization and especially uncertainty analysis can require a large number of model simulations, which is not feasible for computationally expensive models. Surrogate response surfaces can be used in global optimization and uncertainty methods to obtain accurate answers with far fewer model evaluations, making these methods practical for computationally expensive models for which conventional methods are not feasible. In this paper we discuss the application of the SOARS surrogate method for estimating Bayesian posterior density functions of model parameters for a TOUGH2 model of geologic carbon sequestration. We also briefly discuss a new parallel surrogate global optimization algorithm, implemented on a supercomputer with up to 64 processors, that was applied to two groundwater remediation sites. The applications will illustrate the use of these methods to predict the impact of monitoring and management on subsurface contaminants.
A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty.
Directory of Open Access Journals (Sweden)
Kai Zhang
In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid) method, to history matching in reservoir models. History matching is the process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, with an objective function formulated on a Bayesian basis for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application combining the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model's parameters and dynamic data through the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively 'important' elements of the gradient by comparing the magnitudes of the components of the stochastic gradient, substitute them with values from the finite difference method to form a new gradient, and then iterate with the new gradient. Through the application of the Hybrid method, we efficiently and accurately optimize the objective function. We present a number of numerical simulations in this paper showing that the method is accurate and computationally efficient.
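A schematic of the hybrid gradient step described above: form a cheap stochastic (simultaneous-perturbation) gradient, then replace its largest-magnitude components with finite-difference values. The toy quadratic objective stands in for the expensive reservoir-simulation misfit:

    import numpy as np

    rng = np.random.default_rng(2)

    def misfit(x):
        # Toy objective standing in for the history-matching misfit.
        return float(np.sum((x - 1.0) ** 2))

    def spsa_gradient(f, x, c=1e-2):
        # Cheap stochastic gradient from one simultaneous +/- perturbation.
        delta = rng.choice([-1.0, 1.0], size=x.size)
        return (f(x + c * delta) - f(x - c * delta)) / (2 * c) * delta

    def fd_component(f, x, i, h=1e-4):
        e = np.zeros_like(x)
        e[i] = h
        return (f(x + e) - f(x - e)) / (2 * h)

    def hybrid_gradient(f, x, n_important=2):
        g = spsa_gradient(f, x)
        # Refine the largest-magnitude ('important') components by finite differences.
        for i in np.argsort(-np.abs(g))[:n_important]:
            g[i] = fd_component(f, x, i)
        return g

    x = np.zeros(5)
    for _ in range(200):
        x -= 0.1 * hybrid_gradient(misfit, x)
    print(np.round(x, 3))  # approaches the optimum at all ones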
Locating disease genes using Bayesian variable selection with the Haseman-Elston method
Directory of Open Access Journals (Sweden)
He Qimei
2003-12-01
Background: We applied stochastic search variable selection (SSVS), a Bayesian model selection method, to the simulated data of Genetic Analysis Workshop 13. We used SSVS with the revisited Haseman-Elston method to find the markers linked to the loci determining change in cholesterol over time. To study gene-gene interaction (epistasis) and gene-environment interaction, we adopted prior structures that incorporate the relationships among the predictors. This allows SSVS to search the model space more efficiently and avoid the less likely models. Results: In applying SSVS, instead of looking at the posterior distribution of each of the candidate models, which is sensitive to the setting of the prior, we ranked the candidate variables (markers) according to their marginal posterior probability, which was shown to be more robust to the prior. Compared with traditional methods that consider one marker at a time, our method considers all markers simultaneously and obtains more favorable results. Conclusions: We showed that SSVS is a powerful method for identifying linked markers using the Haseman-Elston method, even for weak effects. SSVS is very effective because it performs a smart search over the entire model space.
A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification
Directory of Open Access Journals (Sweden)
Yongjun Piao
2015-01-01
Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods, such as bagging, boosting, and random forest, have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and this trend poses various challenges because these methods are not suited to direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of the redundant features. In our method, the redundancy of features is used to divide the original feature space. Each generated feature subset is then trained by a support vector machine, and the results of the classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the others.
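A compact sketch of this ensemble skeleton using scikit-learn: partition the features into disjoint subsets, train one SVM per subset, and combine predictions by majority vote. The contiguous-block partition below is a stand-in for the paper's redundancy-based partitioning:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=60, n_informative=20,
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # Stand-in partition: contiguous feature blocks (the paper partitions by
    # feature redundancy; any disjoint grouping fits the same skeleton).
    subsets = np.array_split(np.arange(X.shape[1]), 5)

    classifiers = [SVC().fit(Xtr[:, s], ytr) for s in subsets]
    votes = np.stack([c.predict(Xte[:, s]) for c, s in zip(classifiers, subsets)])
    majority = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote, binary labels
    print("ensemble accuracy:", (majority == yte).mean())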
Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach
Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.
2015-04-01
Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors and have not evaluated the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach to the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events. The probability distributions of the data, deduced from the performance of this classification, are used as input. As output, the method provides the probability distribution of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
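In its common linearized form (power-law exponent equal to 2), the FFM predicts that the inverse precursor rate 1/R decreases linearly with time and reaches zero at the failure time. A least-squares sketch on a synthetic accelerating sequence (illustrative only; the paper's Bayesian formulation additionally propagates classification uncertainty):

    import numpy as np

    # Synthetic accelerating precursor: rate R(t) = k / (t_f - t), with t_f = 100.
    t_f_true, k = 100.0, 50.0
    t = np.linspace(0.0, 80.0, 40)          # observations stop well before the event
    rate = k / (t_f_true - t)

    # Linearized FFM: 1/R decreases linearly in t and hits zero at t_f.
    slope, intercept = np.polyfit(t, 1.0 / rate, 1)
    t_f_est = -intercept / slope
    print("forecast failure time:", round(t_f_est, 2))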
Keyvanpour, Mohammadreza
2011-01-01
Due to advances in hardware technology and the increase in production of multimedia data in many applications, multimedia databases have become increasingly important during the last decades. Content-based multimedia retrieval is an important research area in the field of multimedia databases, and extensive research in this field has led to the proposal of different kinds of index structures to support fast and efficient similarity search for retrieving multimedia data from these databases. Given the variety and number of proposed index structures, we suggest a systematic framework, based on the partitioning method used in these structures, for classifying multimedia index structures, and we then evaluate these structures against important functional measures. We hope this proposed framework will lead to empirical and technical comparisons of multimedia index structures and to the development of more efficient structures in the future.
Surveillance system and method having parameter estimation and operating mode partitioning
Bickford, Randall L. (Inventor)
2005-01-01
A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.
Breast histopathology image segmentation using spatio-colour-texture based graph partition method.
Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N
2016-06-01
This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for the segmentation of nuclear arrangements in tubules with a lumen, or in solid islands without a lumen, from digitized Hematoxylin-Eosin stained breast histology images, in order to automate histology breast image analysis and assist pathologists. We propose a new similarity-based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted-distance-based similarity measure is then used for the generation of a graph, and the final segmentation is obtained using the normalized cuts method. Extensive experiments show that the proposed algorithm can segment nuclear arrangements in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. It shows that the proposed method outperforms other methods. PMID:26708167
Self-Organizing Genetic Algorithm Based Method for Constructing Bayesian Networks from Databases
Institute of Scientific and Technical Information of China (English)
郑建军; 刘玉树; 陈立潮
2003-01-01
The typical characteristic of the topology of Bayesian networks (BNs) is the interdependence among different nodes (variables), which makes it impossible to optimize one variable independently of the others; consequently, the learning of BN structures by general genetic algorithms is liable to converge to a local extremum. To resolve this problem efficiently, a self-organizing genetic algorithm (SGA) based method for constructing BNs from databases is presented. This method makes use of a self-organizing mechanism to develop a genetic algorithm that extends the crossover operator from one to two, provides mutual competition between them, and even adjusts the number of parents in recombination (crossover/recomposition) schemes. Together with the K2 algorithm, this method also optimizes the genetic operators and adequately utilizes domain knowledge. As a result, this method is able to find a global optimum of the BN topology, avoiding premature convergence to a local extremum. Experimental results are presented, and the convergence of the SGA is discussed.
Effective updating process of seismic fragilities using Bayesian method and information entropy
International Nuclear Information System (INIS)
Seismic probabilistic safety assessment (SPSA) is an effective method for evaluating the overall seismic safety performance of a plant. Seismic fragilities are estimated to quantify the seismically induced accident sequences. It is a great concern that the SPSA results involve uncertainties, a part of which comes from the uncertainty in the seismic fragility of equipment and systems. A straightforward approach to reduce this uncertainty is to perform a seismic qualification test and to reflect the results in the seismic fragility estimate. In this paper, we propose a figure-of-merit for finding the most cost-effective conditions for seismic qualification tests in terms of the acceleration level and the number of components tested. A mathematical method to reflect the test results in the fragility update is then developed. A Bayesian method is used for the fragility update procedure. Since the lognormal distribution used for the fragility model does not have a conjugate prior, a parameterization method is proposed so that the posterior distribution expresses the characteristics of the fragility. The information entropy is used as the figure-of-merit to express the importance of the obtained evidence. It is found that the information entropy is strongly associated with the uncertainty of the fragility. (author)
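A grid-based sketch of the updating scheme: a lognormal fragility curve P(fail | a) = Φ(ln(a/A_m)/β), pass/fail qualification-test outcomes treated as binomial evidence, and the entropy of the posterior over the median capacity A_m as a figure-of-merit. The parameterization and numbers below are generic illustrations, not the paper's:

    import numpy as np
    from scipy.stats import norm

    beta = 0.4                                  # assumed lognormal standard deviation
    Am = np.linspace(0.5, 3.0, 400)             # grid of candidate median capacities (g)
    prior = np.exp(-0.5 * ((Am - 1.5) / 0.5) ** 2)
    prior /= prior.sum()

    def update(post, a, n, m):
        """Multiply in binomial evidence: n components tested at level a, m failed."""
        p_fail = norm.cdf(np.log(a / Am) / beta)
        post = post * p_fail ** m * (1 - p_fail) ** (n - m)
        return post / post.sum()

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    posterior = update(prior, a=1.2, n=5, m=1)
    print("entropy before/after update:", entropy(prior), entropy(posterior))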
Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems
Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark
2016-03-01
The Bayesian approach to inverse problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such inverse problems presents severe challenges to existing simulation-based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to explore the configuration space of the posterior measure more effectively. However, obtaining such geometric quantities usually requires extensive computational effort, which, despite their effectiveness, limits the applicability of these geometrically based Monte Carlo methods. In this paper we explore one way to address this issue: constructing an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian process emulator conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online, without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper demonstrate the significant improvement possible in terms of computational load, suggesting this is a promising avenue for further development.
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given.
Stress partitioning behavior in an fcc alloy evaluated by the in situ/ex situ EBSD-Wilkinson method
International Nuclear Information System (INIS)
Hierarchical stress partitioning behavior among grains in the elasto-plastic region of a polycrystalline material was studied by a combined technique of in situ/ex situ electron backscatter diffraction-based local strain measurement (the EBSD-Wilkinson method) and neutron diffraction measurements during tensile deformation. Elastic strains parallel to the tensile direction were measured both during loading (e11) and after unloading (e'11). The volume-averaged stress partitioning among [hkl] family grains measured by the EBSD-Wilkinson method was in good agreement with that measured by neutron diffraction, but a more complicated strain distribution occurred microscopically because of constraint from the surrounding grains.
O'Reilly, Joseph E; Puttick, Mark N; Parry, Luke; Tanner, Alastair R; Tarver, James E; Fleming, James; Pisani, Davide; Donoghue, Philip C J
2016-04-01
Different analytical methods can yield competing interpretations of evolutionary history and, currently, there is no definitive method for phylogenetic reconstruction using morphological data. Parsimony has been the primary method for analysing morphological data, but there has been a resurgence of interest in the likelihood-based Mk-model. Here, we test the performance of the Bayesian implementation of the Mk-model relative to both equal-weights and implied-weights implementations of parsimony. Using simulated morphological data, we demonstrate that the Mk-model outperforms equal-weights parsimony in terms of topological accuracy, and that implied-weights parsimony performs the most poorly. However, the Mk-model produces phylogenies that have less resolution than parsimony methods. This difference in the accuracy and precision of parsimony and Bayesian approaches to topology estimation needs to be considered when selecting a method for phylogeny reconstruction. PMID:27095266
Larson, Nicholas B; McDonnell, Shannon; Albright, Lisa Cannon; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham; MacInnis, Robert; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catolona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2016-09-01
Rare variants (RVs) have been shown to be significant contributors to complex disease risk. By definition, these variants have very low minor allele frequencies, and traditional single-marker methods for statistical analysis are underpowered for typical sequencing study sample sizes. Multimarker burden-type approaches attempt to identify aggregation of RVs across case-control status by analyzing relatively small partitions of the genome, such as genes. However, the aggregative measure is generally a mixture of causal and neutral variants, and these omnibus tests do not directly indicate which RVs may be driving a given association. Recently, Bayesian variable selection approaches have been proposed to identify RV associations from a large set of RVs under consideration. Although these approaches have been shown to be powerful at detecting associations at the RV level, there are often computational limitations on the total quantity of RVs under consideration, and compromises are necessary for large-scale application. Here, we propose a computationally efficient alternative formulation of this method using a probit regression approach specifically capable of simultaneously analyzing hundreds to thousands of RVs. We evaluate our approach's ability to detect causal variation on simulated data, examine sensitivity and specificity in instances of high RV dimensionality, and apply it to pathway-level RV analysis results from a prostate cancer (PC) risk case-control sequencing study. Finally, we discuss potential extensions and future directions of this work. PMID:27312771
An automated method for estimating reliability of grid systems using Bayesian networks
International Nuclear Information System (INIS)
Grid computing has become relevant due to its applications to large-scale resource sharing, wide-area information transfer, and multi-institutional collaboration. In general, in grid computing a service requests the use of a set of resources, available in a grid, to complete certain tasks. Although analysis tools and techniques for these types of systems have been studied, grid reliability estimates are generally computation-intensive to obtain due to the complexity of the system. Moreover, conventional reliability models have some common assumptions that cannot be applied to grid systems. Therefore, new analytical methods are needed for effective and accurate assessment of grid reliability. This study presents a new method for estimating grid service reliability which, unlike previous studies, does not require prior knowledge of the grid system structure. Moreover, the proposed method does not rely on any assumptions about the link and node failure rates. The approach uses a data-mining algorithm, K2, to discover the grid system structure from raw historical system data and find minimum resource spanning trees (MRST) within the grid; it then uses Bayesian networks (BN) to model the MRST and estimate grid service reliability.
Quantifying and reducing uncertainty in life cycle assessment using the Bayesian Monte Carlo method
International Nuclear Information System (INIS)
Traditional life cycle assessment (LCA) does not perform quantitative uncertainty analysis. However, without characterizing the associated uncertainty, the reliability of assessment results cannot be understood or ascertained. In this study, the Bayesian method, in combination with the Monte Carlo technique, is used to quantify and update the uncertainty in LCA results. A case study applying the method to a comparison of alternative waste treatment options, in terms of global warming potential due to greenhouse gas emissions, is presented. In the case study, the prior distributions of the parameters used for estimating the emission inventory and environmental impact in the LCA were based on expert judgment from the Intergovernmental Panel on Climate Change (IPCC) guideline and were subsequently updated using likelihood distributions derived from both national statistics and site-specific data. The posterior uncertainty distribution of the LCA results was generated using Monte Carlo simulations with the posterior parameter probability distributions. The results indicated that incorporating quantitative uncertainty analysis into LCA reveals more information than the deterministic LCA method, and the resulting decision may thus be different. In addition, in combination with the Monte Carlo simulation, calculation of correlation coefficients facilitated the identification of the parameters with major influence on the LCA results. Finally, by using national statistics and site-specific information to update the prior uncertainty distribution, the resultant uncertainty associated with the LCA results could be reduced. A better informed decision can therefore be made based on a clearer and more complete comparison of options.
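A minimal sketch of that workflow: a prior on an emission factor, a conjugate update with site-specific measurements, Monte Carlo propagation of the posterior through the impact calculation, and a correlation coefficient for parameter importance. All numbers are illustrative:

    import numpy as np

    rng = np.random.default_rng(3)

    # Prior on an emission factor (kg CO2e per kg waste), e.g. from an IPCC range.
    mu0, sd0 = 2.0, 0.5

    # Site-specific measurements of the same factor.
    data = np.array([1.6, 1.8, 1.7, 1.9])
    sd_meas = 0.2

    # Conjugate normal update for the mean of the emission factor.
    prec = 1 / sd0**2 + data.size / sd_meas**2
    mu_post = (mu0 / sd0**2 + data.sum() / sd_meas**2) / prec
    sd_post = prec ** -0.5

    # Monte Carlo propagation: impact = emission factor * uncertain activity level.
    factor = rng.normal(mu_post, sd_post, 10_000)
    activity = rng.lognormal(np.log(100.0), 0.1, 10_000)   # kg of waste treated
    impact = factor * activity
    print(np.percentile(impact, [2.5, 50, 97.5]))
    print("importance of factor:", np.corrcoef(factor, impact)[0, 1])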
Converse, Sarah J.; Chandler, J. N.; Olsen, G.H.; Shafer, C. C.
2010-01-01
In captive-rearing programs, small sample sizes can limit the quality of information on the performance of propagation methods. Bayesian updating can be used to increase information on method performance over time. We demonstrate an application to incubator testing at the USGS Patuxent Wildlife Research Center. A new type of incubator was purchased for use in the whooping crane (Grus americana) propagation program, which produces birds for release. We tested the new incubator for reliability, using sandhill crane (Grus canadensis) eggs as surrogates. We determined that the new incubator should result in hatching rates no more than 5% lower than the available incubators, with 95% confidence, before it would be used to incubate whooping crane eggs. In 2007, 5 healthy chicks hatched from 12 eggs in the new incubator, and 2 hatched from 5 in an available incubator, for a median posterior difference of …, judged by a method where a veterinarian determined whether eggs produced chicks that, at hatching, had no apparent health problems that would impede future release. We used the 2007 estimates as priors in the 2008 analysis. In 2008, 7 normal chicks hatched from 15 eggs in the new incubator, and 11 hatched from 15 in an available incubator, for a median posterior difference of 19%, with a 95% credible interval of (-8%, 44%). The increased sample size has improved our understanding of incubator performance. While additional data will be collected, at this time the new incubator does not appear adequate for use with whooping crane eggs.
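A beta-binomial sketch of this updating, seeding the 2008 analysis with the 2007 counts reported above. It reproduces the structure of the analysis (and roughly the reported 19% median difference), though not necessarily the authors' exact priors:

    import numpy as np

    rng = np.random.default_rng(4)

    def posterior_samples(successes, trials, a0, b0, n=100_000):
        # Beta prior + binomial data -> Beta posterior samples.
        return rng.beta(a0 + successes, b0 + trials - successes, n)

    # 2007 results (on top of a flat Beta(1, 1) prior) become the 2008 priors.
    new = posterior_samples(7, 15, a0=1 + 5, b0=1 + 7)    # new incubator: 5/12 then 7/15
    old = posterior_samples(11, 15, a0=1 + 2, b0=1 + 3)   # available: 2/5 then 11/15

    diff = old - new
    print("median difference:", round(float(np.median(diff)), 3))
    print("95% credible interval:", np.percentile(diff, [2.5, 97.5]))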
International Nuclear Information System (INIS)
The complex nature of inertial confinement fusion (ICF) experiments results in a very large number of experimental parameters that, when combined with the myriad physical models governing target evolution, make the reliable extraction of physics from experimental campaigns very difficult. We develop an inference method that allows all important experimental parameters, and previous knowledge, to be taken into account when investigating underlying microphysics models. The result is framed as a modified χ2 analysis that is easy to implement in existing analyses and quite portable. We present a first application to a recent convergent ablator experiment performed at the National Ignition Facility (NIF) and investigate the effect of variations in all physical dimensions of the target (very difficult to do using other methods). We show that for well-characterized targets in which dimensions vary at the 0.5% level there is little effect, but 3% variations change the results of inferences dramatically. Our Bayesian method allows particular inference results to be associated with prior errors in microphysics models; in our example, tuning the carbon opacity to match experimental data (i.e. ignoring prior knowledge) is equivalent to an assumed prior error of 400% in the tabop opacity tables. This large error is unreasonable, underlining the importance of including prior knowledge in the analysis of these experiments. (paper)
Russo, T. A.; Devineni, N.; Lall, U.
2015-12-01
The lasting success of the Green Revolution in Punjab, India relies on the continued availability of local water resources. Supplying primarily rice and wheat for the rest of India, Punjab supports crop irrigation with a canal system and groundwater, which is vastly over-exploited. The detailed data required to physically model future impacts on water supplies and agricultural production are not readily available for this region; therefore we use Bayesian methods to estimate hydrologic properties and irrigation requirements for an under-constrained mass balance model. Using measured values of historical precipitation, total canal water delivery, crop yield, and water table elevation, we present a method using a Markov chain Monte Carlo (MCMC) algorithm to solve for a distribution of values for each unknown parameter in a conceptual mass balance model. Due to heterogeneity across the state and the resolution of the input data, we estimate model parameters at the district scale using spatial pooling. The resulting model is used to predict the impact of precipitation-change scenarios on groundwater availability under multiple cropping options. Predicted groundwater declines vary across the state, suggesting that crop selection and water management strategies should be determined at a local scale. This computational method can be applied in data-scarce regions across the world, where water resource management is required to resolve competition between food security and available resources in a changing climate.
International Nuclear Information System (INIS)
Partitioning and Transmutation (P and T) of the Minor Actinide elements (MAs) arising from the back-end of the fuel cycle would be one of the key steps in any future sustainable nuclear fuel cycle. Several Member States and international organizations have recently been evaluating various fuel cycle scenarios that incorporate P and T techniques as improvements on conventional fuel cycles. Assessments of different fuel cycle strategies incorporating P and T, in comparison with the once-through fuel cycle, are being carried out globally, and crucial future requirements are being identified. Pyro-chemical separation methods would form a critical stage of some advanced fuel cycles (e.g. the so-called 'double strata'), providing for separation of long-lived fissionable elements and thereby reducing the potential environmental impact and proliferation risk of the back-end of the fuel cycle. Minimization of MAs in the disposable waste fraction during the separation process can contribute considerably to improved protection of the environment. The Agency has initiated a Coordinated Research Programme on this subject with the aim of co-ordinating R and D activities in the area of minimizing process losses in pyro-chemical partitioning methods. The presentation focuses on the identification of development needs in various pertinent areas, such as: i) appraisal of the minimization of process losses in separation processes and subsequent flow-sheet adjustment; ii) advanced characterization methods to characterize process losses; iii) establishing the criteria for the separation processes; iv) development of a list of critical radio-nuclide inventories; v) basic studies to compare dry partitioning (pyro-processing) with aqueous partitioning processes; vi) assessment of the environmental impact associated with partitioning processes; vii) identification of proliferation-resistance attributes of partitioning methods
Brody, Samuel; Lapata, Mirella
2009-01-01
Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...
Bayesian Inference for Functional Dynamics Exploring in fMRI Data
Directory of Open Access Journals (Sweden)
Xuan Guo
2016-01-01
This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. In particular, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference that has been shown to be a powerful tool for encoding dependence relationships among variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on the corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, namely the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will emerge and play increasingly important roles in modeling brain functions in the years to come.
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as well…
The Method of Oilfield Development Risk Forecasting and Early Warning Using Revised Bayesian Network
Directory of Open Access Journals (Sweden)
Yihua Zhong
2016-01-01
Oilfield development aiming at crude oil production is an extremely complex process that involves many uncertain risk factors affecting oil output. Risk prediction and early warning about oilfield development can thus help ensure that oilfields are operated and managed efficiently, meeting the national oil production plan and the sustainable development of oilfields. However, scholars and practitioners worldwide have seldom addressed the risk problem of oilfield block development. An early-warning index system for block development, which includes monitoring indexes and planning indexes, was refined and formulated on the basis of research into and analysis of the theory of risk forecasting and early warning as well as oilfield development. Based on the warning-situation indexes predicted by a neural network, a method for dividing the intervals of warning degrees was presented using the "3σ" rule, and a new method for risk forecasting and early warning was proposed by introducing neural networks into Bayesian networks. A case study shows that the results obtained in this paper are sound and helpful for the management of oilfield development risk.
Hoang, Nguyen Tien; Koike, Katsuaki
2015-10-01
It has been generally accepted that hyperspectral remote sensing is more effective and provides greater accuracy than multispectral remote sensing in many application fields. EO-1 Hyperion, a representative hyperspectral sensor, has many more spectral bands, while Landsat data have a much wider image scene and a longer continuous space-based record of Earth's land. This study aims to develop a new method, the Pseudo-Hyperspectral Image Synthesis Algorithm (PHISA), to transform Landsat imagery into pseudo-hyperspectral imagery using the correlation between Landsat and EO-1 Hyperion data. First, the Hyperion scene was precisely pre-processed and co-registered to the Landsat scene, and both datasets were corrected for atmospheric effects. The Bayesian model averaging (BMA) method was applied to select the best model from a class of several possible models, and this best model was then used to calculate the pseudo-hyperspectral data in R. Based on the selection results from BMA, we transform Landsat imagery into 155 bands of pseudo-hyperspectral imagery. Most models have multiple R-squared values higher than 90%, which assures high accuracy of the models. There are no significant visual differences between the pseudo- and original data. Most bands have high Pearson's coefficients; only a few, with coefficients < 0.93, appear as outliers in the data sets. Similarly, most root mean square error values are considerably low, smaller than 0.014. These observations strongly support that the proposed PHISA is valid for transforming Landsat data into pseudo-hyperspectral data from a statistical standpoint.
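A sketch of BMA-style model selection for a single pseudo-hyperspectral band: candidate linear regressions on subsets of predictor bands, weighted by approximate posterior model probabilities derived from BIC. The data and band subsets are synthetic stand-ins for the Landsat/Hyperion pairing:

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(5)

    # Synthetic stand-ins: 6 'Landsat' predictors, one 'Hyperion' target band.
    n, p = 200, 6
    X = rng.standard_normal((n, p))
    y = 0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(n)

    def bic_of(cols):
        A = np.column_stack([np.ones(n), X[:, list(cols)]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = float(np.sum((y - A @ beta) ** 2))
        return n * np.log(rss / n) + (len(cols) + 1) * np.log(n)

    models = [c for k in (1, 2, 3) for c in combinations(range(p), k)]
    bic = np.array([bic_of(c) for c in models])
    weights = np.exp(-0.5 * (bic - bic.min()))
    weights /= weights.sum()                    # approximate posterior probabilities
    best = models[int(np.argmax(weights))]
    print("selected predictor bands:", best, "weight:", round(float(weights.max()), 3))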
Bijaoui, A.
2013-03-01
Image restoration is today an important part of astrophysical data analysis. Denoising and deblurring can be performed efficiently using multiscale transforms, for which multiresolution analysis constitutes the fundamental pillar. The discrete wavelet transform is introduced from the theory of approximation by translated functions. The continuous wavelet transform generalizes multiscale representations to translated and dilated wavelets, and the à trous algorithm furnishes its discrete redundant transform. Image denoising is first considered without any hypothesis on the signal distribution, on the basis of a contrario detection, and different softening functions are introduced. The introduction of a regularization constraint may improve the results. The application of Bayesian methods leads to an automated adaptation of the softening function to the signal distribution. The MAP principle leads to basis pursuit, a sparse decomposition on redundant dictionaries; the posterior expectation, however, minimizes the quadratic error scale by scale. The proposed deconvolution algorithm is based on coupling the wavelet denoising with an iterative inversion algorithm. The different methods are illustrated by numerical experiments on a simulated image similar to images of the deep sky, with stationary white Gaussian noise added at three levels. In the conclusion, several important related problems are addressed.
Assessment of Agricultural Water Management in Punjab, India using Bayesian Methods
Russo, T. A.; Devineni, N.; Lall, U.; Sidhu, R.
2013-12-01
The success of the Green Revolution in Punjab, India is threatened by the declining water table (approximately 1 m/yr). Punjab, a major agricultural supplier for the rest of India, supports irrigation with a canal system and groundwater, which is vastly over-exploited. Groundwater development in many districts is greater than 200% of the annual recharge rate. The hydrologic data required to complete a mass-balance model are not available for this region; therefore we use Bayesian methods to estimate hydrologic properties and irrigation requirements. Using the known values of precipitation, total canal water delivery, crop yield, and water table elevation, we solve for each unknown parameter (often a coefficient) using a Markov chain Monte Carlo (MCMC) algorithm. The results provide regional estimates of irrigation requirements and groundwater recharge rates under observed climate conditions (1972 to 2002). Model results are used to estimate future water availability and demand to help inform agricultural management decisions under projected climate conditions. We find that changing cropping patterns for the region can maintain food production while balancing groundwater pumping with natural recharge. This computational method can be applied in data-scarce regions across the world, where agricultural water management is required to resolve competition between food security and changing resource availability.
Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.
2016-07-01
We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).
Hug, Sabine Carolin
2015-01-01
In this thesis we use differential equations to mathematically represent biological processes. For this we must infer the associated parameters in order to fit the differential equations to measurement data. If the structure of the ODE itself is uncertain, model selection methods have to be applied. We refine several existing Bayesian methods, ranging from an adaptive scheme for the computation of high-dimensional integrals to multi-chain Metropolis-Hastings algorithms for high-dimensional...
Bickel, David R.
2011-01-01
The following zero-sum game between nature and a statistician blends Bayesian methods with frequentist methods such as p-values and confidence intervals. Nature chooses a posterior distribution consistent with a set of possible priors. At the same time, the statistician selects a parameter distribution for inference with the goal of maximizing the minimum Kullback-Leibler information gained over a confidence distribution or other benchmark distribution. An application to testing a simple null...
Ferragina, A; de los Campos, G; Vazquez, A I; Cecchinato, A; Bittante, G
2015-11-01
The aim of this study was to assess the performance of Bayesian models commonly used for genomic selection to predict "difficult-to-predict" dairy traits, such as milk fatty acid (FA) expressed as percentage of total fatty acids, and technological properties, such as fresh cheese yield and protein recovery, using Fourier-transform infrared (FTIR) spectral data. Our main hypothesis was that Bayesian models that can estimate shrinkage and perform variable selection may improve our ability to predict FA traits and technological traits above and beyond what can be achieved using the current calibration models (e.g., partial least squares, PLS). To this end, we assessed a series of Bayesian methods and compared their prediction performance with that of PLS. The comparison between models was done using the same sets of data (i.e., same samples, same variability, same spectral treatment) for each trait. Data consisted of 1,264 individual milk samples collected from Brown Swiss cows for which gas chromatographic FA composition, milk coagulation properties, and cheese-yield traits were available. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm(-1) were available and averaged before data analysis. Three Bayesian models: Bayesian ridge regression (Bayes RR), Bayes A, and Bayes B, and 2 reference models: PLS and modified PLS (MPLS) procedures, were used to calibrate equations for each of the traits. The Bayesian models used were implemented in the R package BGLR (http://cran.r-project.org/web/packages/BGLR/index.html), whereas the PLS and MPLS were those implemented in the WinISI II software (Infrasoft International LLC, State College, PA). Prediction accuracy was estimated for each trait and model using 25 replicates of a training-testing validation procedure. Compared with PLS, which is currently the most widely used calibration method, MPLS and the 3 Bayesian methods showed significantly greater prediction accuracy. Accuracy increased in moving from
Directory of Open Access Journals (Sweden)
Mike Lonergan
2011-01-01
For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially, but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. We present an approximate Bayesian method for fitting pup trajectories, estimating adult population size, and investigating alternative biological models. The method is equivalent to fitting a density-dependent Leslie matrix model within a Bayesian framework, but with the forms of the density-dependent effects as outputs rather than assumptions. It requires fewer assumptions than the state-space models currently used and produces similar estimates. We discuss the potential and limitations of the method and suggest that this approach provides a useful tool for at least the preliminary analysis of similar datasets.
Octanol/water partition coefficient (logP) and aqueous solubility (logS) are two important parameters in pharmacology and toxicology studies, and experimental measurements are usually time-consuming and expensive. In the present research, novel methods are presented for the estim...
Mbakwe, Anthony C; Saka, Anthony A; Choi, Keechoo; Lee, Young-Jae
2016-08-01
Highway traffic accidents around the world result in more than 1.3 million fatalities annually, and an alarming number of these fatalities occur in developing countries. Many risk factors are associated with frequent accidents, heavy loss of life, and property damage in developing countries. Unfortunately, poor record-keeping practices are a very difficult obstacle to overcome in striving to obtain near-accurate casualty and safety data. In light of the fact that there are numerous accident causes, any attempt to curb the escalating death and injury rates in developing countries must include the identification of the primary accident causes. This paper, therefore, seeks to show that the Delphi Technique is a suitable alternative method that can be exploited to generate highway traffic accident data from which the major accident causes can be identified. To validate the technique, Korea, a country that underwent similar problems in its early stages of development and for which excellent highway safety records are available in its database, was chosen and utilized for this purpose. Validation of the methodology confirms that the technique is suitable for application in developing countries. Furthermore, the Delphi Technique, in combination with the Bayesian Network Model, is utilized in modeling highway traffic accidents and forecasting accident rates in the countries studied. PMID:27183516
Rapid purification method for fumonisin B1 using centrifugal partition chromatography.
Szekeres, A; Lorántfy, L; Bencsik, O; Kecskeméti, A; Szécsi, Á; Mesterházy, Á; Vágvölgyi, Cs
2013-01-01
Fumonisin B1 (FB1) is a highly toxic mycotoxin produced by fungal strains belonging to the Fusarium genus; it is found mainly in maize products and is of growing interest in food safety. To produce large amounts of pure FB1, a novel purification method was developed using centrifugal partition chromatography, a prominent liquid-liquid chromatographic technique. Rice cultured with Fusarium verticillioides was extracted with a methanol/water mixture and found to contain 0.87 mg of FB1 per gram. The crude extracts were purified on a strong anion-exchange column and then separated using a biphasic solvent system consisting of methyl tert-butyl ether-acetonitrile-0.1% formic acid in water. The collected fractions were analysed by flow injection-mass spectrometry and by high-performance liquid chromatography coupled with a Corona charged aerosol detector, and were identified by congruent retention times in high-performance liquid chromatography and by mass spectrometric data. This method produced approximately 120 mg of FB1 with a purity of more than 98% from 200 g of the rice culture. The whole purification process is able to produce large amounts of pure FB1 for analytical applications or for toxicological studies. PMID:23043634