WorldWideScience

Sample records for bayesian statistical control

  1. Bayesian statistics

    OpenAIRE

    新家, 健精

    2013-01-01

    © 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary Definition of the Subject and Introduction The Bayesian Statistical Paradigm Three Examples Comparison with the Frequentist Statistical Paradigm Future Directions Bibliography

  2. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for Multivariate Normal mean vector; Bayesian inference for Multiple Linear Regression Model; and Computati...

  3. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  4. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  5. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  6. Bayesian Methods for Statistical Analysis

    OpenAIRE

    Puza, Borek

    2015-01-01

    Bayesian methods for statistical analysis is a book on statistical methods for analysing a wide variety of data. The book consists of 12 chapters, starting with basic concepts and covering numerous topics, including Bayesian estimation, decision theory, prediction, hypothesis testing, hierarchical models, Markov chain Monte Carlo methods, finite population inference, biased sampling and nonignorable nonresponse. The book contains many exercises, all with worked solutions, including complete c...

  7. Introduction to Bayesian statistics

    CERN Document Server

    Koch, Karl-Rudolf

    2007-01-01

    This book presents Bayes' theorem, the estimation of unknown parameters, the determination of confidence regions and the derivation of tests of hypotheses for the unknown parameters. It does so in a simple manner that is easy to comprehend. The book compares traditional and Bayesian methods with the rules of probability presented in a logical way allowing an intuitive understanding of random variables and their probability distributions to be formed.

  8. How to practise Bayesian statistics outside the Bayesian church: What philosophy for Bayesian statistical modelling?

    NARCIS (Netherlands)

    Borsboom, D.; Haig, B.D.

    2013-01-01

    Unlike most other statistical frameworks, Bayesian statistical inference is wedded to a particular approach in the philosophy of science (see Howson & Urbach, 2006); this approach is called Bayesianism. Rather than being concerned with model fitting, this position in the philosophy of science primar

  9. Bayesian Model Selection and Statistical Modeling

    CERN Document Server

    Ando, Tomohiro

    2010-01-01

    Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal lik

  10. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by ISBrA, the Brazilian chapter of the International Society for Bayesian Analysis (ISBA) and one of its most active chapters. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  11. Bayesian Statistics for Biological Data: Pedigree Analysis

    Science.gov (United States)

    Stanfield, William D.; Carlton, Matthew A.

    2004-01-01

    The use of Bayes' formula is applied to the biological problem of pedigree analysis to show that Bayes' formula and non-Bayesian or "classical" methods of probability calculation give different answers. First-year college students of biology can be introduced to Bayesian statistics.
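    As a hedged illustration of the kind of calculation this record describes (the prior and likelihoods below are assumed teaching values, not taken from the article), Bayes' formula updates a pedigree probability as unaffected offspring are observed:

        # Minimal Python sketch: posterior probability that a woman is a
        # carrier of a recessive allele, given some number of healthy sons.
        # All numbers are illustrative assumptions.
        def carrier_posterior(prior=0.5, unaffected_sons=2):
            like_carrier = 0.5 ** unaffected_sons  # P(data | carrier)
            like_noncarrier = 1.0                  # P(data | non-carrier)
            num = prior * like_carrier
            return num / (num + (1.0 - prior) * like_noncarrier)

        # Two unaffected sons lower a 1/2 prior to 1/5; a calculation that
        # ignores the sons would leave the 1/2 untouched.
        print(carrier_posterior())  # 0.2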

  12. Controlling the degree of caution in statistical inference with the Bayesian and frequentist approaches as opposite extremes

    CERN Document Server

    Bickel, David R

    2011-01-01

    In statistical practice, whether a Bayesian or frequentist approach is used in inference depends not only on the availability of prior information but also on the attitude taken toward partial prior information, with frequentists tending to be more cautious than Bayesians. The proposed framework defines that attitude in terms of a specified amount of caution, thereby enabling data analysis at the level of caution desired and on the basis of any prior information. The caution parameter represents the attitude toward partial prior information in much the same way as a loss function represents the attitude toward risk. When there is very little prior information and nonzero caution, the resulting inferences correspond to those of the candidate confidence intervals and p-values that are most similar to the credible intervals and hypothesis probabilities of the specified Bayesian posterior. On the other hand, in the presence of a known physical distribution of the parameter, inferences are based only on the corres...

  13. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  14. Philosophy and the practice of Bayesian statistics.

    Science.gov (United States)

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2013-02-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework.

  15. Bayesian Inference in Statistical Analysis

    CERN Document Server

    Box, George E P

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob

  16. Approximate Bayesian computation with functional statistics.

    Science.gov (United States)

    Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K

    2013-03-26

    Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
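    A minimal sketch of ABC rejection with a weighted distance between observed and simulated summary statistics, in the spirit of the procedure described above; the toy model, equal weights, and tolerance are assumptions, not the paper's optimized choices:

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate(theta, n=100):
            # Toy stand-in for a spatial model: simulate data under theta
            # and return a vector of summary statistics.
            x = rng.normal(theta, 1.0, n)
            return np.array([x.mean(), x.std()])

        def abc_rejection(s_obs, prior_draws, weights, eps):
            kept = []
            for theta in prior_draws:
                d = np.sqrt(np.sum(weights * (simulate(theta) - s_obs) ** 2))
                if d < eps:  # keep draws whose statistics match s_obs
                    kept.append(theta)
            return np.array(kept)

        s_obs = simulate(1.5)                  # pretend observed statistics
        prior = rng.uniform(-5, 5, 5000)       # flat prior on theta
        post = abc_rejection(s_obs, prior, np.array([1.0, 1.0]), eps=0.2)
        print(post.mean(), post.std())         # approximate posterior for theta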

  17. Bayesian approach to inverse statistical mechanics.

    Science.gov (United States)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  18. Bayesian Statistics in Software Engineering: Practical Guide and Case Studies

    OpenAIRE

    Furia, Carlo A.

    2016-01-01

    Statistics comes in two main flavors: frequentist and Bayesian. For historical and technical reasons, frequentist statistics has dominated data analysis in the past; but Bayesian statistics is making a comeback at the forefront of science. In this paper, we give a practical overview of Bayesian statistics and illustrate its main advantages over frequentist statistics for the kinds of analyses that are common in empirical software engineering, where frequentist statistics still is standard. We...

  19. Bayesian Cosmological inference beyond statistical isotropy

    Science.gov (United States)

    Souradeep, Tarun; Das, Santanu; Wandelt, Benjamin

    2016-10-01

    With the advent of rich data sets, the computational challenge of inference in cosmology has come to rely on stochastic sampling methods. First, I review the widely used MCMC approach to inferring cosmological parameters and present an adaptive, improved implementation, SCoPE, developed by our group. Next, I present a general method for Bayesian inference of the underlying covariance structure of random fields on a sphere. We employ the Bipolar Spherical Harmonic (BipoSH) representation of general covariance structure on the sphere. We illustrate the efficacy of the method with a principled approach to assess violation of statistical isotropy (SI) in the sky maps of Cosmic Microwave Background (CMB) fluctuations. The general, principled approach to Bayesian inference of the covariance structure in a random field on a sphere presented here has huge potential for application to many other aspects of cosmology and astronomy, as well as more distant areas of research like geosciences and climate modelling.

  20. STATISTICAL BAYESIAN ANALYSIS OF EXPERIMENTAL DATA.

    Directory of Open Access Journals (Sweden)

    AHLAM LABDAOUI

    2012-12-01

    The Bayesian researcher should know the basic ideas underlying Bayesian methodology and the computational tools used in modern Bayesian econometrics. Some of the most important methods of posterior simulation are Monte Carlo integration, importance sampling, Gibbs sampling and the Metropolis-Hastings algorithm. The Bayesian should also be able to put the theory and computational tools together in the context of substantive empirical problems. We focus primarily on recent developments in Bayesian computation. Then we focus on particular models, inevitably combining theory and computation in the context of particular models. Although we have tried to be reasonably complete in terms of covering the basic ideas of Bayesian theory and the computational tools most commonly used by the Bayesian, there is no way we can cover all the classes of models used in econometrics. As illustrations, we propose to the user the analysis of variance and the linear regression model.
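    Of the posterior simulators the record lists, the Metropolis-Hastings algorithm is the easiest to sketch. A minimal random-walk version targeting a normal-mean posterior follows (the toy target and tuning constants are assumptions, not the article's):

        import numpy as np

        rng = np.random.default_rng(1)

        def log_post(mu, data, prior_sd=10.0):
            # N(mu, 1) likelihood with a wide normal prior on mu.
            return -0.5 * np.sum((data - mu) ** 2) - 0.5 * (mu / prior_sd) ** 2

        data = rng.normal(2.0, 1.0, 50)
        mu, chain = 0.0, []
        for _ in range(5000):
            prop = mu + rng.normal(0.0, 0.5)   # random-walk proposal
            if np.log(rng.uniform()) < log_post(prop, data) - log_post(mu, data):
                mu = prop                      # accept the move
            chain.append(mu)
        print(np.mean(chain[1000:]))           # posterior mean after burn-in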

  1. Bayesian credible interval construction for Poisson statistics

    Institute of Scientific and Technical Information of China (English)

    ZHU Yong-Sheng

    2008-01-01

    The construction of the Bayesian credible (confidence) interval for a Poisson observable including both the signal and background with and without systematic uncertainties is presented. Introducing the conditional probability satisfying the requirement of the background not larger than the observed events to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
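    A hedged numerical sketch of a Bayesian credible interval for a Poisson observable with known background (a flat prior on the signal and a central interval are assumed conventions here; the routine BPOCI referenced above implements the paper's own construction):

        import numpy as np

        def poisson_credible_interval(n_obs, bkg, cl=0.68, s_max=50.0, m=20001):
            # Posterior for signal s >= 0 with a flat prior:
            # p(s | n) proportional to (s + bkg)**n * exp(-(s + bkg)).
            s = np.linspace(0.0, s_max, m)
            log_post = n_obs * np.log(s + bkg) - (s + bkg)
            post = np.exp(log_post - log_post.max())
            post /= post.sum()
            cdf = np.cumsum(post)
            lo = s[np.searchsorted(cdf, (1 - cl) / 2)]
            hi = s[np.searchsorted(cdf, (1 + cl) / 2)]
            return lo, hi

        # 5 observed events over an expected background of 1.2
        print(poisson_credible_interval(n_obs=5, bkg=1.2))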

  2. An introduction to Bayesian statistics in health psychology.

    Science.gov (United States)

    Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske

    2017-09-01

    The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.

  3. Bayesian Information Criterion as an Alternative way of Statistical Inference

    Directory of Open Access Journals (Sweden)

    Nadejda Yu. Gubanova

    2012-05-01

    The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on NHST. A comparison of ANOVA and BIC results for a psychological experiment is discussed.

  4. Bayesian Information Criterion as an Alternative way of Statistical Inference

    OpenAIRE

    Nadejda Yu. Gubanova; Simon Zh. Simavoryan

    2012-01-01

    The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on NHST. A comparison of ANOVA and BIC results for a psychological experiment is discussed.

  5. Introduction to applied Bayesian statistics and estimation for social scientists

    CERN Document Server

    Lynch, Scott M

    2007-01-01

    ""Introduction to Applied Bayesian Statistics and Estimation for Social Scientists"" covers the complete process of Bayesian statistical analysis in great detail from the development of a model through the process of making statistical inference. The key feature of this book is that it covers models that are most commonly used in social science research - including the linear regression model, generalized linear models, hierarchical models, and multivariate regression models - and it thoroughly develops each real-data example in painstaking detail.The first part of the book provides a detailed

  6. Fully Bayesian tests of neutrality using genealogical summary statistics

    Directory of Open Access Journals (Sweden)

    Drummond Alexei J

    2008-10-01

    Background: Many data summary statistics have been developed to detect departures from neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Results: Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies and we demonstrate the utility of our method on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Conclusion: Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously ignored for limited availability of theory and methods.
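    The core of the posterior predictive simulation described above can be sketched in a few lines: simulate replicate data sets from posterior draws and compare a summary statistic of the replicates with the observed one (the toy normal model and statistic are assumptions, not the paper's genealogical models):

        import numpy as np

        rng = np.random.default_rng(2)

        def posterior_predictive_pvalue(data, posterior_draws, stat=np.var):
            # For each posterior draw of the mean, simulate a replicate
            # data set under the fitted model and compare statistics.
            t_obs = stat(data)
            t_rep = np.array([stat(rng.normal(mu, 1.0, data.size))
                              for mu in posterior_draws])
            return np.mean(t_rep >= t_obs)  # tail area under the model

        data = rng.normal(0.0, 2.0, 40)  # overdispersed relative to the model
        draws = rng.normal(data.mean(), 1 / np.sqrt(40), 2000)  # approx posterior
        print(posterior_predictive_pvalue(data, draws))  # near 0: poor fit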

  7. Understanding data better with Bayesian and global statistical methods

    CERN Document Server

    Press, W H

    1996-01-01

    To understand their data better, astronomers need to use statistical tools that are more advanced than traditional "freshman lab" statistics. As an illustration, the problem of combining apparently incompatible measurements of a quantity is presented from both the traditional, and a more sophisticated Bayesian, perspective. Explicit formulas are given for both treatments. Results are shown for the value of the Hubble Constant, and a 95% confidence interval of 66 < H0 < 82 (km/s/Mpc) is obtained.
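    One common Bayesian device for combining apparently incompatible measurements is to allow each measurement to be "good" or "bad" with some probability; a grid-based sketch in that spirit follows (the mixture weights, error inflation factor, and measurement values are assumptions for illustration, not the record's formulas or data):

        import numpy as np

        def mixture_posterior(grid, values, sigmas, p_good=0.5, bad_scale=10.0):
            # Each measurement is "good" (quoted error) with probability
            # p_good, otherwise "bad" (error inflated by bad_scale).
            logp = np.zeros_like(grid)
            for v, s in zip(values, sigmas):
                good = p_good * np.exp(-0.5 * ((grid - v) / s) ** 2) / s
                bad = (1 - p_good) * np.exp(
                    -0.5 * ((grid - v) / (bad_scale * s)) ** 2) / (bad_scale * s)
                logp += np.log(good + bad)
            post = np.exp(logp - logp.max())
            return post / post.sum()

        h = np.linspace(40.0, 110.0, 2001)          # grid for H0, km/s/Mpc
        vals = np.array([55.0, 72.0, 80.0, 74.0])   # made-up measurements
        sigs = np.array([4.0, 5.0, 6.0, 8.0])
        post = mixture_posterior(h, vals, sigs)
        print(h[np.argmax(post)])                   # posterior mode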

  8. Some Bayesian statistical techniques useful in estimating frequency and density

    Science.gov (United States)

    Johnson, D.H.

    1977-01-01

    This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which ensures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
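    For the frequency-of-occurrence case, Bayesian confidence limits fall out of a Beta posterior; a small sketch follows (the Jeffreys prior is an assumed choice, not necessarily the one used in the paper):

        from scipy import stats

        def bayes_frequency_limits(hits, plots, cl=0.95):
            # Under an assumed Jeffreys Beta(1/2, 1/2) prior, the posterior
            # for the occurrence frequency after observing `hits` occupied
            # plots out of `plots` is Beta(0.5 + hits, 0.5 + misses).
            post = stats.beta(0.5 + hits, 0.5 + plots - hits)
            a = (1 - cl) / 2
            return post.ppf(a), post.ppf(1 - a)

        # Species found on 7 of 50 sample plots
        print(bayes_frequency_limits(hits=7, plots=50))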

  9. Bayesian evaluation of inspectors in statistical attributes control

    Directory of Open Access Journals (Sweden)

    Roberto da Costa Quinino

    1997-12-01

    When testing for attributes, it is important to assess the efficiency of the inspectors who judge product quality by classifying it as conforming or non-conforming. This work presents a Bayesian method for evaluating inspectors in conformity and non-conformity tests; evaluations in which the true classification of the products is not available are also discussed.

  10. Use of Bayesian statistical approach in diagnosing secondary hypertension.

    Science.gov (United States)

    Krzych, Lukasz Jerzy

    2008-03-01

    Bayes's theorem is predominantly used in diagnosing based on the results of various diagnostic tests. This statistical approach is intuitive in differential diagnosis as it explicitly takes into consideration data from medical history, physical examination, laboratory findings and imaging. Bayes's theorem states that the probability of disease occurrence (or occurrence of another outcome) after new information is obtained, called the a posteriori probability, depends directly on the a priori probability and the value of the likelihood ratio associated with a given test result. This paper describes basic Bayesian analysis in relation to the diagnosis of two types of secondary hypertension: primary aldosteronism and pheochromocytoma. This choice is based on two facts: primary aldosteronism is believed to be the most common and the most commonly detected cause of symptomatic hypertension, and pheochromocytoma is thought to have rapid progression and a stormy clinical course. This article aims to draw physicians' attention to and increase the knowledge of Bayesian analysis, and to describe its use in everyday clinical decision making. On the basis of this theorem's foundations, the discussion of differential diagnosis between physicians, their patients, and medical students should also improve. When used in practice, one should be aware, however, of the limitations of Bayesian analysis concerning diagnostic test application, the limited knowledge of diagnostic test accuracy, and uncertain or faulty a priori probability estimates.
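    The update the article describes is easiest to see in odds form: posterior odds = prior odds × likelihood ratio. A small sketch with assumed illustrative numbers (not clinical values from the paper):

        def posterior_probability(prior, likelihood_ratio):
            # Bayes' theorem in odds form.
            prior_odds = prior / (1.0 - prior)
            post_odds = prior_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        # Assumed example: a 1% pre-test probability of pheochromocytoma
        # and a positive test with likelihood ratio 30.
        print(posterior_probability(0.01, 30.0))  # about 0.23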

  11. Bayesian modeling of flexible cognitive control.

    Science.gov (United States)

    Jiang, Jiefeng; Heller, Katherine; Egner, Tobias

    2014-10-01

    "Cognitive control" describes endogenous guidance of behavior in situations where routine stimulus-response associations are suboptimal for achieving a desired goal. The computational and neural mechanisms underlying this capacity remain poorly understood. We examine recent advances stemming from the application of a Bayesian learner perspective that provides optimal prediction for control processes. In reviewing the application of Bayesian models to cognitive control, we note that an important limitation in current models is a lack of a plausible mechanism for the flexible adjustment of control over conflict levels changing at varying temporal scales. We then show that flexible cognitive control can be achieved by a Bayesian model with a volatility-driven learning mechanism that modulates dynamically the relative dependence on recent and remote experiences in its prediction of future control demand. We conclude that the emergent Bayesian perspective on computational mechanisms of cognitive control holds considerable promise, especially if future studies can identify neural substrates of the variables encoded by these models, and determine the nature (Bayesian or otherwise) of their neural implementation.

  12. Bayesians versus frequentists a philosophical debate on statistical reasoning

    CERN Document Server

    Vallverdú, Jordi

    2016-01-01

    This book analyzes the origins of statistical thinking as well as its related philosophical questions, such as causality, determinism or chance. Bayesian and frequentist approaches are subjected to a historical, cognitive and epistemological analysis, making it possible to not only compare the two competing theories, but to also find a potential solution. The work pursues a naturalistic approach, proceeding from the existence of numerosity in natural environments to the existence of contemporary formulas and methodologies to heuristic pragmatism, a concept introduced in the book’s final section. This monograph will be of interest to philosophers and historians of science and students in related fields. Despite the mathematical nature of the topic, no statistical background is required, making the book a valuable read for anyone interested in the history of statistics and human cognition.

  13. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced a range of flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
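    The Markov-chain structure underlying such economic models is compact enough to sketch directly; a deterministic three-state cohort model is shown below (the states and transition probabilities are assumptions for illustration; a probabilistic version would draw the transition matrix from Beta or Dirichlet posteriors on each run):

        import numpy as np

        # Three-state health-economic Markov cohort model: Well, Sick, Dead.
        P = np.array([[0.90, 0.08, 0.02],   # transition probabilities
                      [0.00, 0.85, 0.15],   # (rows sum to 1; assumed values)
                      [0.00, 0.00, 1.00]])
        state = np.array([1.0, 0.0, 0.0])   # whole cohort starts Well
        for cycle in range(20):
            state = state @ P               # advance one yearly cycle
        print(state)                        # cohort shares after 20 years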

  14. Bayesian modeling of flexible cognitive control

    Science.gov (United States)

    Jiang, Jiefeng; Heller, Katherine; Egner, Tobias

    2014-01-01

    “Cognitive control” describes endogenous guidance of behavior in situations where routine stimulus-response associations are suboptimal for achieving a desired goal. The computational and neural mechanisms underlying this capacity remain poorly understood. We examine recent advances stemming from the application of a Bayesian learner perspective that provides optimal prediction for control processes. In reviewing the application of Bayesian models to cognitive control, we note that an important limitation in current models is a lack of a plausible mechanism for the flexible adjustment of control over conflict levels changing at varying temporal scales. We then show that flexible cognitive control can be achieved by a Bayesian model with a volatility-driven learning mechanism that modulates dynamically the relative dependence on recent and remote experiences in its prediction of future control demand. We conclude that the emergent Bayesian perspective on computational mechanisms of cognitive control holds considerable promise, especially if future studies can identify neural substrates of the variables encoded by these models, and determine the nature (Bayesian or otherwise) of their neural implementation. PMID:24929218

  15. Bayesian statistics and information fusion for GPS-denied navigation

    Science.gov (United States)

    Copp, Brian Lee

    It is well known that satellite navigation systems are vulnerable to disruption due to jamming, spoofing, or obstruction of the signal. The desire for robust navigation of aircraft in GPS-denied environments has motivated the development of feature-aided navigation systems, in which measurements of environmental features are used to complement the dead reckoning solution produced by an inertial navigation system. Examples of environmental features which can be exploited for navigation include star positions, terrain elevation, terrestrial wireless signals, and features extracted from photographic data. Feature-aided navigation represents a particularly challenging estimation problem because the measurements are often strongly nonlinear, and the quality of the navigation solution is limited by the knowledge of nuisance parameters which may be difficult to model accurately. As a result, integration approaches based on the Kalman filter and its variants may fail to give adequate performance. This project develops a framework for the integration of feature-aided navigation techniques using Bayesian statistics. In this approach, the probability density function for aircraft horizontal position (latitude and longitude) is approximated by a two-dimensional point mass function defined on a rectangular grid. Nuisance parameters are estimated using a hypothesis based approach (Multiple Model Adaptive Estimation) which continuously maintains an accurate probability density even in the presence of strong nonlinearities. The effectiveness of the proposed approach is illustrated by the simulated use of terrain referenced navigation and wireless time-of-arrival positioning to estimate a reference aircraft trajectory. Monte Carlo simulations have shown that accurate position estimates can be obtained in terrain referenced navigation even with a strongly nonlinear altitude bias. The integration of terrain referenced and wireless time-of-arrival measurements is described along with

  16. Fully Bayesian inference for structural MRI: application to segmentation and statistical analysis of T2-hypointensities.

    Science.gov (United States)

    Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark

    2013-01-01

    Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls. We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.

  17. Bayesian inference on the sphere beyond statistical isotropy

    CERN Document Server

    Das, Santanu; Souradeep, Tarun

    2015-01-01

    We present a general method for Bayesian inference of the underlying covariance structure of random fields on a sphere. We employ the Bipolar Spherical Harmonic (BipoSH) representation of general covariance structure on the sphere. We illustrate the efficacy of the method as a principled approach to assess violation of statistical isotropy (SI) in the sky maps of Cosmic Microwave Background (CMB) fluctuations. SI violation in observed CMB maps arises due to known physical effects such as Doppler boost and weak lensing; yet unknown theoretical possibilities such as cosmic topology and subtle violations of the cosmological principle; as well as expected observational artefacts of scanning the sky with a non-circular beam, masking, foreground residuals, anisotropic noise, etc. We explicitly demonstrate the recovery of the input SI violation signals with their full statistics in simulated CMB maps. Our formalism easily adapts to exploring parametric physical models with non-SI covariance, as we illustrate for the in...

  18. STATISTICAL ANALYSIS OF THE TM- MODEL VIA BAYESIAN APPROACH

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2012-11-01

    The method of paired comparisons calls for the comparison of treatments presented in pairs to judges who prefer the better one based on their sensory evaluations. Thurstone (1927) and Mosteller (1951) employ the method of maximum likelihood to estimate the parameters of the Thurstone-Mosteller model for paired comparisons. A Bayesian analysis of the said model using the non-informative reference (Jeffreys) prior is presented in this study. The posterior estimates (means and joint modes) of the parameters and the posterior probabilities comparing the two parameters are obtained for the analysis. The predictive probabilities that one treatment (Ti) is preferred to any other treatment (Tj) in a future single comparison are also computed. In addition, the graphs of the marginal posterior distributions of the individual parameters are drawn. The appropriateness of the model is also tested using the Chi-Square test statistic.

  19. Bayesian Analysis of Multiple Populations I: Statistical and Computational Methods

    CERN Document Server

    Stenning, D C; Robinson, E; van Dyk, D A; von Hippel, T; Sarajedini, A; Stein, N

    2016-01-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations (van Dyk et al. 2009; Stein et al. 2013). Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties (age, metallicity, helium abundance, distance, absorption, and initial mass) are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and al...

  20. Statistical Engine Knock Control

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    A new statistical concept of the knock control of a spark ignition automotive engine is proposed. The control aim is associated with the statistical hypothesis test which compares the threshold value to the average value of the maximal amplitude of the knock sensor signal at a given frequency...

  1. Possible Solution to Publication Bias Through Bayesian Statistics, Including Proper Null Hypothesis Testing

    NARCIS (Netherlands)

    Konijn, Elly A.; van de Schoot, Rens; Winter, Sonja D.; Ferguson, Christopher J.

    2015-01-01

    The present paper argues that an important cause of publication bias resides in traditional frequentist statistics forcing binary decisions. An alternative approach through Bayesian statistics provides various degrees of support for any hypothesis, allowing balanced decisions and proper null hypothesis testing...

  2. Multivariate Statistical Process Control

    DEFF Research Database (Denmark)

    Kulahci, Murat

    2013-01-01

    As sensor and computer technology continues to improve, it has become a normal occurrence that we are confronted with high dimensional data sets. As in many areas of industrial statistics, this brings forth various challenges in statistical process control (SPC) and monitoring, for which the aim is to identify the “out-of-control” state of a process using control charts in order to reduce the excessive variation caused by so-called assignable causes. In practice, the most common method of monitoring multivariate data is through a statistic akin to the Hotelling’s T2. For high dimensional data with excessive...
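    A minimal sketch of Hotelling's T2 monitoring as described above (the Phase I reference data, the asymptotic chi-square control limit, and the simulated shift are assumptions for illustration):

        import numpy as np
        from scipy import stats

        def hotelling_t2(x, mean_ref, cov_ref):
            # T2 distance of one new observation from the reference fit.
            d = x - mean_ref
            return float(d @ np.linalg.solve(cov_ref, d))

        rng = np.random.default_rng(3)
        ref = rng.normal(size=(200, 4))           # Phase I data, p = 4
        mu0, S0 = ref.mean(axis=0), np.cov(ref, rowvar=False)
        ucl = stats.chi2.ppf(0.9973, df=4)        # asymptotic control limit
        x_new = rng.normal(size=4) + np.array([0.0, 0.0, 5.0, 0.0])
        t2 = hotelling_t2(x_new, mu0, S0)
        print(t2, t2 > ucl)                       # flag out-of-control point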

  3. Statistical Engine Knock Control

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    A new statistical concept of the knock control of a spark ignition automotive engine is proposed. The control aim is associated with the statistical hypothesis test which compares the threshold value to the average value of the maximal amplitude of the knock sensor signal at a given frequency. A control algorithm which is used for minimization of the regulation error realizes a simple count-up-count-down logic. A new adaptation algorithm for the knock detection threshold is also developed. The confidence interval method is used as the basis for adaptation. A simple statistical model...
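    The count-up-count-down logic mentioned above can be sketched as a stochastic threshold adaptation that settles where the detection rate matches a target (the step size and target rate are assumed tuning values, not the paper's):

        import random

        def adapt_threshold(amplitudes, threshold, step=0.02, target_rate=0.05):
            # Raise the threshold after each detection, lower it slightly
            # otherwise; it converges near the (1 - target_rate) quantile.
            for a in amplitudes:
                if a > threshold:
                    threshold += step * (1.0 - target_rate)  # count up
                else:
                    threshold -= step * target_rate          # count down
            return threshold

        random.seed(4)
        amps = [abs(random.gauss(0.0, 1.0)) for _ in range(5000)]
        print(adapt_threshold(amps, threshold=1.0))  # near 1.96 for a 5% rate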

  4. Bayesian nonparametric adaptive control using Gaussian processes.

    Science.gov (United States)

    Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A

    2015-03-01

    Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element are fixed a priori, often through expert judgment. An example of such an adaptive element is radial basis function networks (RBFNs), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become noneffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.

  5. Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach

    Science.gov (United States)

    Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.

    2010-12-01

    Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice such as the Antarctic Ice Sheet or the paleo ice sheets covering extensive parts of the Eurasian and Amerasian Arctic respectively, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalski Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for e.g. the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are however incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, calling thus for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves by using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective accounting for the fact that currently, ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existing in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic ocean basin during former glacial periods. Modeled Arctic ocean ice shelf configurations are compared with geological spatial

  6. Embracing Uncertainty: The Interface of Bayesian Statistics and Cognitive Psychology

    Directory of Open Access Journals (Sweden)

    Judith L. Anderson

    1998-06-01

    Ecologists working in conservation and resource management are discovering the importance of using Bayesian analytic methods to deal explicitly with uncertainty in data analyses and decision making. However, Bayesian procedures require, as inputs and outputs, an idea that is problematic for the human brain: the probability of a hypothesis ("single-event probability"). I describe several cognitive concepts closely related to single-event probabilities, and discuss how their interchangeability in the human mind results in "cognitive illusions," apparent deficits in reasoning about uncertainty. Each cognitive illusion implies specific possible pitfalls for the use of single-event probabilities in ecology and resource management. I then discuss recent research in cognitive psychology showing that simple tactics of communication, suggested by an evolutionary perspective on human cognition, help people to process uncertain information more effectively as they read and talk about probabilities. In addition, I suggest that carefully considered standards for methodology and conventions for presentation may also make Bayesian analyses easier to understand.

  7. An improved Bayesian matting method based on image statistic characteristics

    Science.gov (United States)

    Sun, Wei; Luo, Siwei; Wu, Lina

    2015-03-01

    Image matting is an important task in image and video editing and has been studied for more than 30 years. In this paper we propose an improved interactive matting method. Starting from a coarse user-guided trimap, we first perform a color estimation based on texture and color information and use the result to refine the original trimap. Then, with the new trimap, we apply a soft matting process, which is an improved Bayesian matting with smoothness constraints. Experimental results on natural images show that this method is useful, especially for images that have similar texture features in the background or images for which it is hard to give a precise trimap.

  8. My view on Bayesian statistics

    Institute of Scientific and Technical Information of China (English)

    殷羽

    2014-01-01

    Bayesian statistics and classical statistics are the two major schools of modern mathematical statistics, and the debate between them has played a positive role in promoting the development of modern statistical theory. Through a comparison of Bayesian statistics and classical statistics, this paper deepens the understanding of Bayesian statistics. It also introduces applications of Bayesian statistics in economic research and actuarial insurance.

  9. Statistical Mechanical Development of a Sparse Bayesian Classifier

    Science.gov (United States)

    Uda, Shinsuke; Kabashima, Yoshiyuki

    2005-08-01

    The demand for extracting rules from high dimensional real world data is increasing in various fields. However, the possible redundancy of such data sometimes makes it difficult to obtain a good generalization ability for novel samples. To resolve this problem, we provide a scheme that reduces the effective dimensions of data by pruning redundant components for bicategorical classification based on the Bayesian framework. First, the potential of the proposed method is confirmed in ideal situations using the replica method. Unfortunately, performing the scheme exactly is computationally difficult. So, we next develop a tractable approximation algorithm, which turns out to offer nearly optimal performance in ideal cases when the system size is large. Finally, the efficacy of the developed classifier is experimentally examined for a real world problem of colon cancer classification, which shows that the developed method can be practically useful.

  10. Bayesian statistics for the calibration of the LISA Pathfinder experiment

    Science.gov (United States)

    Armano, M.; Audley, H.; Auger, G.; Binetruy, P.; Born, M.; Bortoluzzi, D.; Brandt, N.; Bursi, A.; Caleno, M.; Cavalleri, A.; Cesarini, A.; Cruise, M.; Danzmann, K.; Diepholz, I.; Dolesi, R.; Dunbar, N.; Ferraioli, L.; Ferroni, V.; Fitzsimons, E.; Freschi, M.; García Marirrodriga, C.; Gerndt, R.; Gesa, L.; Gibert, F.; Giardini, D.; Giusteri, R.; Grimani, C.; Harrison, I.; Heinzel, G.; Hewitson, M.; Hollington, D.; Hueller, M.; Huesler, J.; Inchauspé, H.; Jennrich, O.; Jetzer, P.; Johlander, B.; Karnesis, N.; Kaune, B.; Korsakova, N.; Killow, C.; Lloro, I.; Maarschalkerweerd, R.; Madden, S.; Mance, D.; Martin, V.; Martin-Porqueras, F.; Mateos, I.; McNamara, P.; Mendes, J.; Mitchell, E.; Moroni, A.; Nofrarias, M.; Paczkowski, S.; Perreur-Lloyd, M.; Pivato, P.; Plagnol, E.; Prat, P.; Ragnit, U.; Ramos-Castro, J.; Reiche, J.; Romera Perez, J. A.; Robertson, D.; Rozemeijer, H.; Russano, G.; Sarra, P.; Schleicher, A.; Slutsky, J.; Sopuerta, C. F.; Sumner, T.; Texier, D.; Thorpe, J.; Trenkel, C.; Tu, H. B.; Vitale, S.; Wanner, G.; Ward, H.; Waschke, S.; Wass, P.; Wealthy, D.; Wen, S.; Weber, W.; Wittchen, A.; Zanoni, C.; Ziegler, T.; Zweifel, P.

    2015-05-01

    The main goal of the LISA Pathfinder (LPF) mission is to estimate the acceleration noise models of the overall LISA Technology Package (LTP) experiment on board. This will be of crucial importance for future space-based gravitational-wave (GW) detectors, like eLISA. Here, we present the Bayesian analysis framework to process the planned system identification experiments designed for that purpose. In particular, we focus on the analysis strategies to predict the accuracy of the parameters that describe the system in all degrees of freedom. The data sets were generated during the latest operational simulations organised by the data analysis team and this work is part of the LTPDA Matlab toolbox.

  11. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    Science.gov (United States)

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.

  12. Group sequential control of overall toxicity incidents in clinical trials - non-Bayesian and Bayesian approaches.

    Science.gov (United States)

    Yu, Jihnhee; Hutson, Alan D; Siddiqui, Adnan H; Kedron, Mary A

    2016-02-01

    In some small clinical trials, toxicity is not a primary endpoint; however, it often has dire effects on patients' quality of life and is even life-threatening. For such clinical trials, rigorous control of the overall incidence of adverse events is desirable, while simultaneously collecting safety information. In this article, we propose group sequential toxicity monitoring strategies to control overall toxicity incidents below a certain level as opposed to performing hypothesis testing, which can be incorporated into an existing study design based on the primary endpoint. We consider two sequential methods: a non-Bayesian approach in which stopping rules are obtained based on the 'future' probability of an excessive toxicity rate; and a Bayesian adaptation modifying the proposed non-Bayesian approach, which can use the information obtained at interim analyses. Through an extensive Monte Carlo study, we show that the Bayesian approach often provides better control of the overall toxicity rate than the non-Bayesian approach. We also investigate adequate toxicity estimation after the studies. We demonstrate the applicability of our proposed methods in controlling the symptomatic intracranial hemorrhage rate for treating acute ischemic stroke patients.
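    The Bayesian flavour of such monitoring is easy to sketch with a Beta-binomial posterior: stop when the posterior probability that the toxicity rate exceeds the limit is large (the prior, rate limit, and cutoff below are assumed values, not those of the article):

        from scipy import stats

        def stop_for_toxicity(events, n, rate_limit=0.15,
                              prior=(0.5, 1.5), prob_cut=0.8):
            # A Beta prior with a binomial toxicity count gives a Beta posterior.
            a, b = prior
            post = stats.beta(a + events, b + n - events)
            p_excess = 1.0 - post.cdf(rate_limit)  # P(rate > limit | data)
            return p_excess > prob_cut, p_excess

        # 4 toxicity events among the first 12 patients
        print(stop_for_toxicity(events=4, n=12))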

  13. Statistical detection of EEG synchrony using empirical bayesian inference.

    Directory of Open Access Journals (Sweden)

    Archana K Singh

    There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, high-dimensionality in PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit complex dependence structure between hypotheses that vary in spectral, temporal and spatial dimension. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied for PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) for PLV synchrony analysis to compute FDR as a posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach for PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that locFDR can effectively control false positives without compromising on the power of PLV synchrony inference. Our results from the application of locFDR on experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.

  14. Statistical detection of EEG synchrony using empirical bayesian inference.

    Science.gov (United States)

    Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven

    2015-01-01

    There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, high-dimensionality in PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit complex dependence structure between hypotheses that vary in spectral, temporal and spatial dimension. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied for PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) for PLV synchrony analysis to compute FDR as a posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach for PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that locFDR can effectively control false positives without compromising on the power of PLV synchrony inference. Our results from the application of locFDR on experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.

  15. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.
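
    A minimal sketch of the stochastic approximation Monte Carlo idea on which this record builds (Python with numpy; the bimodal toy target, the energy-bin edges, and the gain sequence are illustrative assumptions, not the paper's setup): the sampler partitions the energy range into subregions and learns log-weights so that all subregions are visited with the desired frequencies.

        import numpy as np

        rng = np.random.default_rng(0)

        def U(x):
            # Energy (negative log density) of a toy bimodal target.
            return -np.log(0.5 * np.exp(-0.5 * (x + 3) ** 2) +
                           0.5 * np.exp(-0.5 * (x - 3) ** 2))

        edges = np.linspace(0.0, 10.0, 21)   # energy subregions E_1..E_m
        m = len(edges) - 1
        theta = np.zeros(m)                  # log-weights, learned on the fly
        pi = np.full(m, 1.0 / m)             # desired sampling frequencies

        def region(x):
            return int(np.clip(np.searchsorted(edges, U(x)) - 1, 0, m - 1))

        x = 0.0
        for t in range(1, 20001):
            gamma = 10.0 / max(10, t)        # decreasing gain sequence
            y = x + rng.normal()             # random-walk proposal
            # Metropolis ratio for the weighted density exp(-U(x) - theta[J(x)])
            if np.log(rng.uniform()) < (theta[region(x)] + U(x)) - (theta[region(y)] + U(y)):
                x = y
            ind = np.zeros(m)
            ind[region(x)] = 1.0
            theta += gamma * (ind - pi)      # stochastic approximation update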

  16. Probabilistic risk assessment framework for structural systems under multiple hazards using Bayesian statistics

    Energy Technology Data Exchange (ETDEWEB)

    Kwag, Shinyoung [North Carolina State University, Raleigh, NC 27695 (United States); Korea Atomic Energy Research Institute, Daejeon 305-353 (Korea, Republic of); Gupta, Abhinav, E-mail: agupta1@ncsu.edu [North Carolina State University, Raleigh, NC 27695 (United States)

    2017-04-15

    Highlights: • This study presents the development of a Bayesian framework for probabilistic risk assessment (PRA) of structural systems under multiple hazards. • The concepts of Bayesian network and Bayesian inference are combined by mapping the traditionally used fault trees into a Bayesian network. • The proposed mapping allows for consideration of dependencies as well as correlations between events. • Incorporation of Bayesian inference permits a novel way for exploration of a scenario that is likely to result in a system level “vulnerability.” - Abstract: Conventional probabilistic risk assessment (PRA) methodologies (USNRC, 1983; IAEA, 1992; EPRI, 1994; Ellingwood, 2001) conduct risk assessment for different external hazards by considering each hazard separately and independently of each other. The risk metric for a specific hazard is evaluated by a convolution of the fragility and the hazard curves. The fragility curve for a basic event is obtained by using empirical, experimental, and/or numerical simulation data for a particular hazard. Treating each hazard independently can be inappropriate in some cases, as certain hazards are statistically correlated or dependent. Examples of such correlated events include, but are not limited to, flooding-induced fire, seismically induced internal or external flooding, or even seismically induced fire. In current practice, system-level risk and consequence sequences are typically calculated using logic trees to express the causative relationship between events. In this paper, we present the results from a study on multi-hazard risk assessment that is conducted using a Bayesian network (BN) with Bayesian inference. The framework can consider statistical dependencies among risks from multiple hazards, allows updating by considering the newly available data/information at any level, and provides a novel way to explore alternative failure scenarios that may exist due to vulnerabilities.
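
    A toy illustration of the fault-tree-to-BN mapping idea (Python; the events, probabilities, and the single shared hazard are hypothetical, and the paper's framework is far richer): an OR gate becomes a deterministic node whose parents are made dependent through a common hazard, the joint is evaluated by enumeration, and Bayesian updating follows from the same table.

        import itertools

        # Hypothetical events: hazard H raises the probability of both
        # flooding F and fire R; top event T = F OR R (an OR gate).
        p_h = 0.01
        p_f_given_h = {0: 0.001, 1: 0.05}
        p_r_given_h = {0: 0.002, 1: 0.03}

        p_top = 0.0
        p_h_and_top = 0.0
        for h, f, r in itertools.product((0, 1), repeat=3):
            p = ((p_h if h else 1 - p_h)
                 * (p_f_given_h[h] if f else 1 - p_f_given_h[h])
                 * (p_r_given_h[h] if r else 1 - p_r_given_h[h]))
            if f or r:
                p_top += p
                if h:
                    p_h_and_top += p

        print(p_top)                  # system-level risk with dependency included
        print(p_h_and_top / p_top)    # Bayesian update: P(H = 1 | T = 1)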

  17. Frontiers in statistical quality control

    CERN Document Server

    Wilrich, Peter-Theodor

    2004-01-01

    This volume treats the four main categories of Statistical Quality Control: General SQC Methodology, On-line Control including Sampling Inspection and Statistical Process Control, Off-line Control with Data Analysis and Experimental Design, and fields related to Reliability. Experts with international reputations present their newest contributions.

  18. Statistical assignment of DNA sequences using Bayesian phylogenetics

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Boomsma, Wouter Krogh; Huelsenbeck, John P.;

    2008-01-01

    -analysis of previously published ancient DNA data and show that, with high statistical confidence, most of the published sequences are in fact of Neanderthal origin. However, there are several cases of chimeric sequences that are comprised of a combination of both Neanderthal and modern human DNA....

  19. Bayesian statistical analysis of censored data in geotechnical engineering

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob; Denver, Hans

    2000-01-01

    The geotechnical engineer is often faced with the problem of how to assess the statistical properties of a soil parameter on the basis of a sample measured in-situ or in the laboratory with the defect that some values have been replaced by interval bounds because the corresponding soil parameter values...

  20. Applied Bayesian statistical studies in biology and medicine

    CERN Document Server

    D’Amore, G; Scalfari, F

    2004-01-01

    It was written on another occasion that "It is apparent that the scientific culture, if one means production of scientific papers, is growing exponentially, and chaotically, in almost every field of investigation". The biomedical sciences sensu lato and mathematical statistics are no exceptions. One might say then, and with good reason, that another collection of biostatistical papers would only add to the overflow and cause even more confusion. Nevertheless, this book may be greeted with some interest if we state that most of the papers in it are the result of a collaboration between biologists and statisticians, and partly the product of the Summer School "Statistical Inference in Human Biology", which reaches its 10th edition in 2003 (information about the School can be obtained at the Web site http://www2.stat.unibo.it/eventi/Sito%20scuola/index.htm). This is rather important. Indeed, it is common experience - and not only in Italy - that encounters between statisticians and researchers are spora...

  1. Bayesian analysis of rotating machines - A statistical approach to estimate and track the fundamental frequency

    DEFF Research Database (Denmark)

    Pedersen, Thorkild Find

    2003-01-01

    Rotating and reciprocating mechanical machines emit acoustic noise and vibrations when they operate. Typically, the noise and vibrations are concentrated in narrow frequency bands related to the running speed of the machine. The frequency of the running speed is referred to as the fundamental...... of an adaptive comb filter is derived for tracking non-stationary signals. The estimation problem is then rephrased in terms of the Bayesian statistical framework. In the Bayesian framework both parameters and observations are considered stochastic processes. The result of the estimation is an expression...

  2. Elicitation by design in ecology: using expert opinion to inform priors for Bayesian statistical models.

    Science.gov (United States)

    Choy, Samantha Low; O'Leary, Rebecca; Mengersen, Kerrie

    2009-01-01

    Bayesian statistical modeling has several benefits within an ecological context. In particular, when observed data are limited in sample size or representativeness, the Bayesian framework provides a mechanism to combine observed data with other "prior" information. Prior information may be obtained from earlier studies, or in their absence, from expert knowledge. This use of the Bayesian framework reflects the scientific "learning cycle," where prior or initial estimates are updated when new data become available. In this paper we outline a framework for statistical design of expert elicitation processes for quantifying such expert knowledge, in a form suitable for input as prior information into Bayesian models. We identify six key elements: determining the purpose and motivation for using prior information; specifying the relevant expert knowledge available; formulating the statistical model; designing effective and efficient numerical encoding; managing uncertainty; and designing a practical elicitation protocol. We demonstrate that this framework applies to a variety of situations, with two examples from the ecological literature and three from our experience. Analysis of these examples reveals several recurring important issues affecting practical design of elicitation in ecological problems.
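
    For the numerical-encoding step, one common device is to translate elicited quantiles into a parametric prior. A hedged sketch (Python with scipy; the elicited values, the Beta family, and the least-squares fit are illustrative assumptions, not the paper's protocol):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import beta

        # Hypothetical expert summaries for a probability of occurrence:
        # "median about 0.20, and 90% sure it is below 0.40".
        targets = {0.50: 0.20, 0.90: 0.40}

        def quantile_loss(log_ab):
            a, b = np.exp(log_ab)            # keep both parameters positive
            return sum((beta.ppf(q, a, b) - v) ** 2 for q, v in targets.items())

        fit = minimize(quantile_loss, x0=[0.5, 1.5], method="Nelder-Mead")
        a, b = np.exp(fit.x)
        print(a, b, beta.ppf([0.5, 0.9], a, b))   # check the encoded prior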

  3. Structural mapping in statistical word problems: A relational reasoning approach to Bayesian inference.

    Science.gov (United States)

    Johnson, Eric D; Tubau, Elisabet

    2016-09-27

    Presenting natural frequencies facilitates Bayesian inferences relative to using percentages. Nevertheless, many people, including highly educated and skilled reasoners, still fail to provide Bayesian responses to these computationally simple problems. We show that the complexity of relational reasoning (e.g., the structural mapping between the presented and requested relations) can help explain the remaining difficulties. With a non-Bayesian inference that required identical arithmetic but afforded a more direct structural mapping, performance was universally high. Furthermore, reducing the relational demands of the task through questions that directed reasoners to use the presented statistics, as compared with questions that prompted the representation of a second, similar sample, also significantly improved reasoning. Distinct error patterns were also observed between these presented- and similar-sample scenarios, which suggested differences in relational-reasoning strategies. On the other hand, while higher numeracy was associated with better Bayesian reasoning, higher-numerate reasoners were not immune to the relational complexity of the task. Together, these findings validate the relational-reasoning view of Bayesian problem solving and highlight the importance of considering not only the presented task structure, but also the complexity of the structural alignment between the presented and requested relations.
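
    The contrast the authors draw can be made concrete with the classic screening example (the numbers below are the familiar textbook mammography values, used here only as an illustration):

        # Natural-frequency framing: of 1,000 women, 10 have the disease and
        # 8 of them test positive; of the 990 without it, about 95 also test
        # positive. The Bayesian answer is then a simple ratio of frequencies.
        true_pos = 1000 * 0.01 * 0.80            # 8 women
        false_pos = 1000 * 0.99 * 0.096          # ~95 women
        print(true_pos / (true_pos + false_pos))     # ~0.078

        # The same posterior from percentages requires explicit Bayes' rule:
        print((0.01 * 0.80) / (0.01 * 0.80 + 0.99 * 0.096))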

  4. Bayesian inference – a way to combine statistical data and semantic analysis meaningfully

    Directory of Open Access Journals (Sweden)

    Eila Lindfors

    2011-11-01

    Full Text Available This article focuses on presenting the possibilities of Bayesian modelling (Finite Mixture Modelling) in the semantic analysis of statistically modelled data. The probability of a hypothesis in relation to the data available is an important question in inductive reasoning. Bayesian modelling allows the researcher to use many models at a time and provides tools to evaluate the goodness of different models. The researcher should always be aware that there is no such thing as the exact probability of an exact event. This is the reason for using probabilistic models. Each model presents a different perspective on the phenomenon in focus, and the researcher has to choose the most probable model with a view to previous research and the knowledge available. The idea of Bayesian modelling is illustrated here by presenting two different sets of data, one from craft science research (n=167) and the other (n=63) from educational research (Lindfors, 2007, 2002). The principles of how to build models and how to combine different profiles are described in the light of the research mentioned. Bayesian modelling is an analysis based on calculating probabilities in relation to a specific set of quantitative data. It is a tool for handling data and interpreting it semantically. The reliability of the analysis arises from an argumentation of which model can be selected from the model space as the basis for an interpretation, and on which arguments. Keywords: method, sloyd, Bayesian modelling, student teachers. URN:NBN:no-29959
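
    As a rough computational analogue of the model-space idea (Python with scikit-learn; the synthetic data and the BIC criterion are stand-ins for the article's Bayesian model comparison, not its actual analysis):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # Synthetic stand-in for questionnaire profiles from two latent groups.
        X = np.vstack([rng.normal(0.0, 1.0, size=(100, 3)),
                       rng.normal(2.0, 1.0, size=(60, 3))])

        # Fit several candidate mixture models and compare their goodness.
        bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
               for k in range(1, 6)}
        best_k = min(bic, key=bic.get)
        print(bic, best_k)    # the most plausible profile count, by this criterion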

  5. Bayesian Bigot? Statistical Discrimination, Stereotypes, and Employer Decision Making.

    Science.gov (United States)

    Pager, Devah; Karafin, Diana

    2009-01-01

    Much of the debate over the underlying causes of discrimination centers on the rationality of employer decision making. Economic models of statistical discrimination emphasize the cognitive utility of group estimates as a means of dealing with the problems of uncertainty. Sociological and social-psychological models, by contrast, question the accuracy of group-level attributions. Although mean differences may exist between groups on productivity-related characteristics, these differences are often inflated in their application, leading to much larger differences in individual evaluations than would be warranted by actual group-level trait distributions. In this study, the authors examine the nature of employer attitudes about black and white workers and the extent to which these views are calibrated against their direct experiences with workers from each group. They use data from fifty-five in-depth interviews with hiring managers to explore employers' group-level attributions and their direct observations to develop a model of attitude formation and employer learning.

  6. Frontiers in statistical quality control

    CERN Document Server

    Wilrich, Peter-Theodor

    2001-01-01

    The book is a collection of papers presented at the 5th International Workshop on Intelligent Statistical Quality Control in Würzburg, Germany. Contributions deal with methodology and successful industrial applications. They can be grouped in four categories: Sampling Inspection, Statistical Process Control, Data Analysis and Process Capability Studies, and Experimental Design.

  7. Reasoning with data an introduction to traditional and Bayesian statistics using R

    CERN Document Server

    Stanton, Jeffrey M

    2017-01-01

    Engaging and accessible, this book teaches readers how to use inferential statistical thinking to check their assumptions, assess evidence about their beliefs, and avoid overinterpreting results that may look more promising than they really are. It provides step-by-step guidance for using both classical (frequentist) and Bayesian approaches to inference. Statistical techniques covered side by side from both frequentist and Bayesian approaches include hypothesis testing, replication, analysis of variance, calculation of effect sizes, regression, time series analysis, and more. Students also get a complete introduction to the open-source R programming language and its key packages. Throughout the text, simple commands in R demonstrate essential data analysis skills using real-data examples. The companion website provides annotated R code for the book's examples, in-class exercises, supplemental reading lists, and links to online videos, interactive materials, and other resources.

  8. Stochastic structural optimization using particle swarm optimization, surrogate models and Bayesian statistics

    Institute of Scientific and Technical Information of China (English)

    Jongbin Im; Jungsun Park

    2013-01-01

    This paper focuses on a method to solve structural optimization problems using particle swarm optimization (PSO), surrogate models and Bayesian statistics. PSO is a random/stochastic search algorithm designed to find the global optimum. However, PSO needs many evaluations compared to gradient-based optimization. This means PSO increases the analysis costs of structural optimization. One of the methods to reduce computing costs in stochastic optimization is to use approximation techniques. In this work, surrogate models are used, including the response surface method (RSM) and Kriging. When surrogate models are used, there are some errors between exact values and approximated values. These errors decrease the reliability of the optimum values and undermine the realism of the surrogate-model approximation. In this paper, Bayesian statistics is used to obtain more reliable results. To verify and confirm the efficiency of the proposed method using surrogate models and Bayesian statistics for stochastic structural optimization, two numerical examples are optimized, and the optimization of a hub sleeve is demonstrated as a practical problem.

  9. A new model test in high energy physics in frequentist and Bayesian statistical formalisms

    CERN Document Server

    Kamenshchikov, Andrey

    2016-01-01

    Testing a new physical model against observed experimental data is a typical problem for modern high energy physics (HEP) experiments. A solution may be provided by two alternative statistical formalisms, frequentist and Bayesian, both widespread in contemporary HEP searches. A characteristic experimental situation is modeled from general considerations, and both approaches are utilized to test a new model. The results are juxtaposed, which demonstrates their consistency in this work. The effect of a systematic uncertainty treatment in the statistical analysis is also considered.

  10. A new model test in high energy physics in frequentist and Bayesian statistical formalisms

    Science.gov (United States)

    Kamenshchikov, A.

    2017-01-01

    Testing a new physical model against observed experimental data is a typical problem for modern high energy physics (HEP) experiments. A solution may be provided by two alternative statistical formalisms, frequentist and Bayesian, both widespread in contemporary HEP searches. A characteristic experimental situation is modeled from general considerations, and both approaches are utilized to test a new model. The results are juxtaposed, which demonstrates their consistency in this work. The effect of a systematic uncertainty treatment in the statistical analysis is also considered.
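
    A minimal counting-experiment sketch of the two formalisms being compared (Python with scipy; the observed count, the known background, and the flat signal prior on [0, 20] are invented for illustration, not the paper's benchmark):

        from scipy.integrate import quad
        from scipy.stats import poisson

        n_obs, b = 12, 5.0                  # hypothetical count, known background

        # Frequentist: tail probability under the background-only model.
        p_value = poisson.sf(n_obs - 1, b)  # P(N >= n_obs | b)

        # Bayesian: Bayes factor for signal+background versus background only,
        # with a flat prior on the signal strength s over [0, 20].
        evidence_sb = quad(lambda s: poisson.pmf(n_obs, b + s) / 20.0, 0.0, 20.0)[0]
        bayes_factor = evidence_sb / poisson.pmf(n_obs, b)

        print(p_value, bayes_factor)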

  11. Statistical Techniques for Project Control

    CERN Document Server

    Badiru, Adedeji B

    2012-01-01

    A project can be simple or complex. In each case, proven project management processes must be followed. In all cases of project management implementation, control must be exercised in order to assure that project objectives are achieved. Statistical Techniques for Project Control seamlessly integrates qualitative and quantitative tools and techniques for project control. It fills the void that exists in the application of statistical techniques to project control. The book begins by defining the fundamentals of project management then explores how to temper quantitative analysis with qualitati

  12. Technical Topic 3.2.2.d Bayesian and Non-Parametric Statistics: Integration of Neural Networks with Bayesian Networks for Data Fusion and Predictive Modeling

    Science.gov (United States)

    2016-05-31

    Final report (15-Apr-2014 to 14-Jan-2015) for Technical Topic 3.2.2.d, Bayesian and Non-Parametric Statistics: Integration of Neural Networks with Bayesian Networks for Data Fusion and Predictive Modeling; distribution unlimited. No abstract is available for this record beyond report-form metadata.

  13. Control of Complex Systems Using Bayesian Networks and Genetic Algorithm

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    A method based on Bayesian neural networks and a genetic algorithm is proposed to control the fermentation process. The relationship between input and output variables is modelled using a Bayesian neural network trained with the hybrid Monte Carlo method. A feedback loop based on a genetic algorithm is used to change the input variables so that the output variables are as close to the desired target as possible, without loss of confidence in the predictions that the neural network gives. The proposed procedure is found to reduce the distance between the desired target and measured outputs significantly.

  14. Elements of probability and statistics an introduction to probability with De Finetti’s approach and to Bayesian statistics

    CERN Document Server

    Biagini, Francesca

    2016-01-01

    This book provides an introduction to elementary probability and to Bayesian statistics using de Finetti's subjectivist approach. One of the features of this approach is that it does not require the introduction of a sample space – a non-intrinsic concept that makes the treatment of elementary probability unnecessarily complicated – but instead introduces as fundamental the concept of random numbers directly related to their interpretation in applications. Events become a particular case of random numbers, and probability a particular case of expectation when it is applied to events. The subjective evaluation of expectation and of conditional expectation is based on an economic choice of an acceptable bet or penalty. The properties of expectation and conditional expectation are derived by applying a coherence criterion that the evaluation has to follow. The book is suitable for all introductory courses in probability and statistics for students in Mathematics, Informatics, Engineering, and Physics.

  15. Bayesian Statistics at Work: the Troublesome Extraction of the CKM Phase α

    Energy Technology Data Exchange (ETDEWEB)

    Charles, J. [CPT, Luminy Case 907, F-13288 Marseille Cedex 9 (France); Hoecker, A. [CERN, CH-1211 Geneva 23 (Switzerland); Lacker, H. [TU Dresden, IKTP, D-01062 Dresden (Germany); Le Diberder, F.R. [LAL, CNRS/IN2P3, Universite Paris-Sud 11, Bat. 200, BP 34, F-91898 Orsay Cedex (France); T' Jampens, S. [LAPP, CNRS/IN2P3, Universite de Savoie, 9 Chemin de Bellevue, BP 110, F-74941 Annecy-le-Vieux Cedex (France)

    2007-04-15

    In Bayesian statistics, one's prior beliefs about underlying model parameters are revised with the information content of observed data from which, using Bayes' rule, a posterior belief is obtained. A non-trivial example taken from the isospin analysis of B → PP (P = π or ρ) decays in heavy-flavor physics is chosen to illustrate the effect of the naive 'objective' choice of flat priors in a multidimensional parameter space in the presence of mirror solutions. It is demonstrated that the posterior distribution for the parameter of interest, the phase α, strongly depends on the choice of the parameterization in which the priors are uniform, and on the validity range in which the (un-normalizable) priors are truncated. We prove that the most probable values found by the Bayesian treatment do not coincide with the explicit analytical solutions, in contrast to the frequentist approach. It is also shown in the appendix that the α → 0 limit cannot be consistently treated in the Bayesian paradigm, because the latter violates the physical symmetries of the problem. (authors)
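
    The prior-dependence effect criticized here is easy to reproduce in miniature (Python with numpy; a two-parameter toy, not the actual isospin analysis): a prior flat in Cartesian amplitude coordinates and a prior flat in the phase itself imply different distributions for the phase.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000

        # Prior flat in Cartesian coordinates (x, y) on a square...
        x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
        phase_from_cartesian = np.arctan2(y, x)

        # ...versus a prior flat in the phase directly.
        phase_flat = rng.uniform(-np.pi, np.pi, n)

        # The implied phase densities differ (mass piles up toward the
        # square's diagonals in the first case), so "uniform" is not
        # parameterization-invariant; compare the histograms:
        for sample in (phase_from_cartesian, phase_flat):
            hist, _ = np.histogram(sample, bins=8, range=(-np.pi, np.pi),
                                   density=True)
            print(np.round(hist, 3))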

  16. Bayesian statistical modeling of spatially correlated error structure in atmospheric tracer inverse analysis

    Directory of Open Access Journals (Sweden)

    C. Mukherjee

    2011-01-01

    Full Text Available Inverse modeling applications in atmospheric chemistry are increasingly addressing the challenging statistical issues of data synthesis by adopting refined statistical analysis methods. This paper advances this line of research by addressing several central questions in inverse modeling, focusing specifically on Bayesian statistical computation. Motivated by problems of refining bottom-up estimates of source/sink fluxes of trace gas and aerosols based on increasingly high-resolution satellite retrievals of atmospheric chemical concentrations, we address head-on the need for integrating formal spatial statistical methods of residual error structure in global scale inversion models. We do this using analytically and computationally tractable spatial statistical models, known as conditional autoregressive spatial models, as components of a global inversion framework. We develop Markov chain Monte Carlo methods to explore and fit these spatial structures in an overall statistical framework that simultaneously estimates source fluxes. Additional aspects of the study extend the statistical framework to utilize priors in a more physically realistic manner, and to formally address and deal with missing data in satellite retrievals. We demonstrate the analysis in the context of inferring carbon monoxide (CO) sources constrained by satellite retrievals of column CO from the Measurement of Pollution in the Troposphere (MOPITT) instrument on the TERRA satellite, paying special attention to evaluating performance of the inverse approach using various statistical diagnostic metrics. This is developed using synthetic data generated to resemble MOPITT data to define a proof-of-concept and model assessment, and then in analysis of real MOPITT data.
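
    A compact sketch of the conditional autoregressive building block used here (Python with numpy; the lattice size and the parameters tau and rho are arbitrary): a CAR prior is a multivariate normal with sparse precision Q = tau * (D - rho * W), from which draws are cheap via a Cholesky factor.

        import numpy as np

        n = 10                                   # 10 x 10 lattice of grid cells
        idx = lambda i, j: i * n + j
        W = np.zeros((n * n, n * n))             # rook-neighbour adjacency
        for i in range(n):
            for j in range(n):
                if i + 1 < n:
                    W[idx(i, j), idx(i + 1, j)] = W[idx(i + 1, j), idx(i, j)] = 1
                if j + 1 < n:
                    W[idx(i, j), idx(i, j + 1)] = W[idx(i, j + 1), idx(i, j)] = 1

        tau, rho = 1.0, 0.95
        Q = tau * (np.diag(W.sum(axis=1)) - rho * W)   # sparse CAR precision

        # One prior draw of the spatial residual field: solve L^T z = eps,
        # where Q = L L^T, so that z ~ N(0, Q^{-1}).
        L = np.linalg.cholesky(Q)
        eps = np.random.default_rng(3).standard_normal(n * n)
        field = np.linalg.solve(L.T, eps).reshape(n, n)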

  17. UNITY: Confronting Supernova Cosmology's Statistical and Systematic Uncertainties in a Unified Bayesian Framework

    CERN Document Server

    Rubin, David; Barbary, Kyle; Boone, Kyle; Chappell, Greta; Currie, Miles; Deustua, Susana; Fagrelius, Parker; Fruchter, Andrew; Hayden, Brian; Lidman, Chris; Nordin, Jakob; Perlmutter, Saul; Saunders, Clare; Sofiatti, Caroline

    2015-01-01

    While recent supernova cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current supernova cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, intrinsic dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real supernova observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was blinded, in that the method was first validated on simulated data, and no analysis changes were made after transiti...

  18. Spatial heterogeneity and risk factors for stunting among children under age five in Ethiopia: A Bayesian geo-statistical model

    Science.gov (United States)

    Hagos, Seifu; Hailemariam, Damen; WoldeHanna, Tasew; Lindtjørn, Bernt

    2017-01-01

    Background: Understanding the spatial distribution of stunting and the underlying factors operating at meso-scale is of paramount importance for intervention design and implementation. Yet little is known about the spatial distribution of stunting, and some discrepancies are documented on the relative importance of reported risk factors. Therefore, the present study aims at exploring the spatial distribution of stunting at meso- (district) scale, and evaluates the effect of spatial dependency on the identification of risk factors and their relative contribution to the occurrence of stunting and severe stunting in a rural area of Ethiopia. Methods: A community-based cross-sectional study was conducted to measure the occurrence of stunting and severe stunting among children aged 0–59 months. Additionally, we collected relevant information on anthropometric measures, dietary habits, and parent- and child-related demographic and socio-economic status. Latitude and longitude of surveyed households were also recorded. Anselin's local Moran's I was calculated to investigate the spatial variation of stunting prevalence and identify potential local pockets (hotspots) of high prevalence. Finally, we employed a Bayesian geo-statistical model, which accounted for the spatial dependency structure in the data, to identify potential risk factors for stunting in the study area. Results: Overall, the prevalence of stunting and severe stunting in the district was 43.7% [95%CI: 40.9, 46.4] and 21.3% [95%CI: 19.5, 23.3], respectively. We identified statistically significant clusters of high prevalence of stunting (hotspots) in the eastern part of the district and clusters of low prevalence (cold spots) in the western part. We found that including the spatial structure of the data in the Bayesian model improved the fit of the stunting model. The Bayesian geo-statistical model indicated that the risk of stunting increased as the child’s age increased (OR 4.74; 95% Bayesian credible
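
    The hotspot statistic used in the Results is straightforward to compute; a toy version (Python with numpy; the five districts, the weights, and the prevalences are invented, and significance would in practice come from permutation):

        import numpy as np

        def local_morans_i(values, W):
            # Anselin's local I_i = z_i * sum_j w_ij z_j on standardized values.
            z = (values - values.mean()) / values.std()
            return z * (W @ z)

        # Hypothetical five districts on a line, rook contiguity weights.
        W = np.array([[0, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 0]], dtype=float)
        W /= W.sum(axis=1, keepdims=True)        # row-standardize
        stunting = np.array([0.55, 0.52, 0.40, 0.30, 0.28])

        print(local_morans_i(stunting, W))       # large positive values flag clusters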

  19. Thomas: building Bayesian statistical expert systems to aid in clinical decision making.

    Science.gov (United States)

    Lehmann, H P; Shortliffe, E H

    1991-08-01

    Knowledge-based systems for classical statistical analysis must separate the task of analyzing data from that of using the results of the analysis. In contrast, a Bayesian framework for building biostatistical expert systems allows for the integration of the data-analytic and decision-making tasks. The architecture of such a framework entails enabling the system (1) to make its recommendations on decision-analytic grounds; (2) to construct statistical models dynamically; (3) to update a statistical model based on the user's prior beliefs, on data from the study, and on the methodological concerns evinced by the study. This architecture permits the knowledge engineer to represent a variety of types of statistical and domain knowledge. Construction of such systems requires that the knowledge engineer reinterpret traditional statistical concerns, such as by replacing the notion of statistical significance with that of a pragmatic clinical threshold. The clinical user of such a system can interact with the system at a semantic level appropriate to her fund of methodological knowledge, rather than at the level of statistical details. We demonstrate these issues with a prototype system called THOMAS, which helps a physician decision maker interpret the results of a published randomized clinical trial.

  20. Predictive data-derived Bayesian statistic-transport model and simulator of sunken oil mass

    Science.gov (United States)

    Echavarria Gregory, Maria Angelica

    Sunken oil is difficult to locate because remote sensing techniques cannot as yet provide views of sunken oil over large areas. Moreover, the oil may re-suspend and sink with changes in salinity, sediment load, and temperature, making deterministic fate models difficult to deploy and calibrate when even the presence of sunken oil is difficult to assess. For these reasons, together with the expense of field data collection, there is a need for a statistical technique integrating limited data collection with stochastic transport modeling. Predictive Bayesian modeling techniques have been developed and demonstrated for exploiting limited information for decision support in many other applications. These techniques are brought here to a multi-modal Lagrangian modeling framework, representing a near-real-time approach to locating and tracking sunken oil driven by intrinsic physical properties of field data collected following a spill, after oil has begun collecting on a relatively flat bay bottom. Methods include (1) development of the conceptual predictive Bayesian model and multi-modal Gaussian computational approach based on theory and literature review; (2) development of an object-oriented programming and combinatorial structure capable of managing data, integration and computation over an uncertain and high-dimensional parameter space; (3) creating a new bi-dimensional approach of the method of images to account for curved shoreline boundaries; (4) confirmation of model capability for locating sunken oil patches using available (partial) real field data and capability for temporal projections near curved boundaries using simulated field data; and (5) development of a stand-alone open-source computer application with a graphical user interface capable of calibrating instantaneous oil spill scenarios, obtaining sets of maps of relative probability profiles at different prediction times and user-selected geographic areas and resolution, and capable of performing post

  1. A Bayesian Formulation of Behavioral Control

    Science.gov (United States)

    Huys, Quentin J. M.; Dayan, Peter

    2009-01-01

    Helplessness, a belief that the world is not subject to behavioral control, has long been central to our understanding of depression, and has influenced cognitive theories, animal models and behavioral treatments. However, despite its importance, there is no fully accepted definition of helplessness or behavioral control in psychology or…

  2. MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control

    Science.gov (United States)

    Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming

    2017-09-01

    Increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under the circumstances of dynamic production. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on a Hadoop distributed architecture and the MapReduce parallel computing model, quality-related data of large volume and variety generated during the manufacturing process can be dealt with. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and the parallel computing power of MapReduce, Bayesian networks of impact factors on quality are built based on prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the increase in computing nodes. It is also proved that the proposed model is feasible for locating and reasoning about root causes, forecasting of manufacturing outcomes, and intelligent decisions for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.

  3. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    Science.gov (United States)

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference (noise) signal and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, good SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment for rolling element bearings demonstrates the effectiveness of the proposed method.

  4. Statistical analysis using the Bayesian nonparametric method for irradiation embrittlement of reactor pressure vessels

    Science.gov (United States)

    Takamizawa, Hisashi; Itoh, Hiroto; Nishiyama, Yutaka

    2016-10-01

    In order to understand neutron irradiation embrittlement in high fluence regions, statistical analysis using the Bayesian nonparametric (BNP) method was performed for the Japanese surveillance and material test reactor irradiation database. The BNP method is essentially expressed as an infinite summation of normal distributions, with input data being subdivided into clusters with identical statistical parameters, such as mean and standard deviation, for each cluster to estimate shifts in ductile-to-brittle transition temperature (DBTT). The clusters typically depend on chemical compositions, irradiation conditions, and the irradiation embrittlement. Specific variables contributing to the irradiation embrittlement include the content of Cu, Ni, P, Si, and Mn in the pressure vessel steels, neutron flux, neutron fluence, and irradiation temperatures. It was found that the measured shifts of DBTT correlated well with the calculated ones. Data associated with the same materials were subdivided into the same clusters even if neutron fluences were increased.
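
    The "infinite summation of normals" underlying such a Bayesian nonparametric model can be sketched with a truncated stick-breaking construction (Python with numpy; this is purely a prior draw with invented hyperparameters, not the paper's fitted embrittlement model):

        import numpy as np

        rng = np.random.default_rng(4)

        def draw_dp_mixture(alpha=1.0, n=500, trunc=50):
            # Truncated stick-breaking: weights w_k from Beta(1, alpha) sticks,
            # cluster parameters (mu_k, sigma_k) drawn from a base measure.
            v = rng.beta(1.0, alpha, trunc)
            w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
            mu = rng.normal(0.0, 5.0, trunc)
            sigma = 1.0 / np.sqrt(rng.gamma(2.0, 1.0, trunc))
            k = rng.choice(trunc, size=n, p=w / w.sum())   # cluster assignments
            return rng.normal(mu[k], sigma[k]), k

        data, clusters = draw_dp_mixture()
        print(len(np.unique(clusters)), "clusters realized in this draw")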

  5. Fast SAR Image Change Detection Using Bayesian Approach Based Difference Image and Modified Statistical Region Merging

    Directory of Open Access Journals (Sweden)

    Han Zhang

    2014-01-01

    Full Text Available A novel fast SAR image change detection method is presented in this paper. Based on a Bayesian approach, the prior information that speckles follow the Nakagami distribution is incorporated into the difference image (DI) generation process. The new DI performs much better than the familiar log ratio (LR) DI as well as the cumulant-based Kullback-Leibler divergence (CKLD) DI. The statistical region merging (SRM) approach is first introduced to the change detection context. A new clustering procedure with the region variance as the statistical inference variable is exhibited to tailor SAR image change detection purposes, with only two classes in the final map, the unchanged and changed classes. The most prominent advantages of the proposed modified SRM (MSRM) method are the ability to cope with noise corruption and the quick implementation. Experimental results show that the proposed method is superior in both change detection accuracy and operation efficiency.

  6. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    Science.gov (United States)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described
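
    Of the techniques surveyed, parallel tempering is the most compact to sketch (Python with numpy; the bimodal target and the temperature ladder are illustrative assumptions): hot replicas cross between modes easily, occasional swaps pass that mobility down, and the coldest replica samples the target.

        import numpy as np

        rng = np.random.default_rng(5)

        def log_p(x):
            # Bimodal target with well-separated modes at -4 and +4.
            return np.logaddexp(-0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2)

        temps = np.array([1.0, 2.0, 4.0, 8.0])   # one replica per temperature
        x = np.zeros(len(temps))
        cold = []
        for step in range(20000):
            for i, T in enumerate(temps):        # within-replica Metropolis move
                y = x[i] + rng.normal(0.0, 1.0)
                if np.log(rng.uniform()) < (log_p(y) - log_p(x[i])) / T:
                    x[i] = y
            j = rng.integers(len(temps) - 1)     # occasional neighbour swap
            delta = (1 / temps[j] - 1 / temps[j + 1]) * (log_p(x[j + 1]) - log_p(x[j]))
            if np.log(rng.uniform()) < delta:
                x[j], x[j + 1] = x[j + 1], x[j]
            cold.append(x[0])                    # the T=1 replica samples log_p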

  7. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…

  8. Quality assurance and statistical control

    DEFF Research Database (Denmark)

    Heydorn, K.

    1991-01-01

    In scientific research laboratories it is rarely possible to use quality assurance schemes, developed for large-scale analysis. Instead methods have been developed to control the quality of modest numbers of analytical results by relying on statistical control: Analysis of precision serves...... serves to detect analytical bias by comparing results obtained by two different analytical methods, each relying on a different detection principle and therefore exhibiting different influence from matrix elements; only 5-10 sets of results are required to establish whether a regression line passes...

  9. Bayesian versus frequentist statistical inference for investigating a one-off cancer cluster reported to a health department

    Directory of Open Access Journals (Sweden)

    Wills Rachael A

    2009-05-01

    Full Text Available Abstract. Background: The problem of silent multiple comparisons is one of the most difficult statistical problems faced by scientists. It is a particular problem for investigating a one-off cancer cluster reported to a health department because any one of hundreds, or possibly thousands, of neighbourhoods, schools, or workplaces could have reported a cluster, which could have been for any one of several types of cancer or any one of several time periods. Methods: This paper contrasts the frequentist approach with a Bayesian approach for dealing with silent multiple comparisons in the context of a one-off cluster reported to a health department. Two published cluster investigations were re-analysed using the Dunn-Sidak method to adjust frequentist p-values and confidence intervals for silent multiple comparisons. Bayesian methods were based on the Gamma distribution. Results: Bayesian analysis with non-informative priors produced results similar to the frequentist analysis, and suggested that both clusters represented a statistical excess. In the frequentist framework, the statistical significance of both clusters was extremely sensitive to the number of silent multiple comparisons, which can only ever be a subjective "guesstimate". The Bayesian approach is also subjective: whether there is an apparent statistical excess depends on the specified prior. Conclusion: In cluster investigations, the frequentist approach is just as subjective as the Bayesian approach, but the Bayesian approach is less ambitious in that it treats the analysis as a synthesis of data and personal judgements (possibly poor ones), rather than objective reality. Bayesian analysis is (arguably) a useful tool to support complicated decision-making, because it makes the uncertainty associated with silent multiple comparisons explicit.
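
    Both calculations contrasted in this paper are short; a sketch (Python with scipy; the counts, the expected value, the Gamma(0.5, 0.5) prior, and the guesstimated number of silent comparisons are all hypothetical):

        from scipy.stats import gamma, poisson

        observed, expected = 6, 1.5     # hypothetical cluster vs. expectation

        # Frequentist: Poisson p-value, then Dunn-Sidak adjustment for k
        # silent comparisons, where k itself can only be a guesstimate.
        p = poisson.sf(observed - 1, expected)
        k = 100
        p_adjusted = 1.0 - (1.0 - p) ** k

        # Bayesian: a Gamma(a, b) prior on the rate ratio theta gives the
        # conjugate posterior Gamma(a + observed, rate b + expected).
        a, b = 0.5, 0.5
        posterior = gamma(a + observed, scale=1.0 / (b + expected))
        print(p, p_adjusted, posterior.sf(1.0))   # P(theta > 1 | data)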

  10. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J.; Austin, R. Marshall

    2016-01-01

    Background: Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. Aim: The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. Materials and Methods: This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approach, such as the Kaplan–Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. Results: The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis discusses also several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Conclusion: Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) it offers an individualized risk assessment, which is more cumbersome for classical statistical approaches. PMID:28163973

  11. How to construct the optimal Bayesian measurement in quantum statistical decision theory

    Science.gov (United States)

    Tanaka, Fuyuhiko

    Recently, much more attention has been paid to studies aiming at the application of fundamental properties of quantum theory to information processing and technology. In particular, modern statistical methods have been recognized in quantum state tomography (QST), where we have to estimate a density matrix (a positive semidefinite matrix of trace one) representing a quantum system from finite data collected in a certain experiment. When the dimension of the density matrix gets large (from a few hundred to millions), this becomes a nontrivial problem. While a specific measurement is often given and fixed in QST, we are also able to choose a measurement itself according to the purpose of QST by using quantum statistical decision theory. Here we propose a practical method to find the best projective measurement in the Bayesian sense. We assume that a prior distribution (e.g., the uniform distribution) and a convex loss function (e.g., the squared error) are given. In many quantum experiments, these assumptions are not so restrictive. We show that the best projective measurement and the best statistical inference based on the measurement outcome exist and that they are obtained explicitly by using Monte Carlo optimization. The Grant-in-Aid for Scientific Research (B) (No. 26280005).

  12. Non-parametric Bayesian mixture of sparse regressions with application towards feature selection for statistical downscaling

    Directory of Open Access Journals (Sweden)

    D. Das

    2014-04-01

    Full Text Available Climate projections simulated by Global Climate Models (GCMs) are often used for assessing the impacts of climate change. However, the relatively coarse resolutions of GCM outputs often preclude their application towards accurately assessing the effects of climate change on finer regional scale phenomena. Downscaling of climate variables from coarser to finer regional scales using statistical methods is often performed for regional climate projections. Statistical downscaling (SD) is based on the understanding that the regional climate is influenced by two factors – the large scale climatic state and the regional or local features. A transfer function approach of SD involves learning a regression model which relates these features (predictors) to a climatic variable of interest (predictand) based on the past observations. However, often a single regression model is not sufficient to describe complex dynamic relationships between the predictors and predictand. We focus on the covariate selection part of the transfer function approach and propose a nonparametric Bayesian mixture of sparse regression models based on the Dirichlet Process (DP), for simultaneous clustering and discovery of covariates within the clusters while automatically finding the number of clusters. Sparse linear models are parsimonious and hence relatively more generalizable than non-sparse alternatives, and lend themselves to domain-relevant interpretation. Applications to synthetic data demonstrate the value of the new approach, and preliminary results related to feature selection for statistical downscaling show that our method can lead to new insights.

  13. Bayesian Software Health Management for Aircraft Guidance, Navigation, and Control

    Science.gov (United States)

    Schumann, Johann; Mbaya, Timmy; Menghoel, Ole

    2011-01-01

    Modern aircraft, both piloted fly-by-wire commercial aircraft as well as UAVs, more and more depend on highly complex safety critical software systems with many sensors and computer-controlled actuators. Despite careful design and V&V of the software, severe incidents have happened due to malfunctioning software. In this paper, we discuss the use of Bayesian networks (BNs) to monitor the health of the on-board software and sensor system, and to perform advanced on-board diagnostic reasoning. We will focus on the approach to develop reliable and robust health models for the combined software and sensor systems.

  14. To be certain about the uncertainty: Bayesian statistics for 13C metabolic flux analysis.

    Science.gov (United States)

    Theorell, Axel; Leweke, Samuel; Wiechert, Wolfgang; Nöh, Katharina

    2017-07-11

    13C metabolic flux analysis (13C MFA) remains the most powerful approach to determine intracellular metabolic reaction rates. Decisions on strain engineering and experimentation heavily rely upon the certainty with which these fluxes are estimated. For uncertainty quantification, the vast majority of 13C MFA studies relies on confidence intervals from the paradigm of Frequentist statistics. However, it is well known that the confidence intervals for a given experimental outcome are not uniquely defined. As a result, confidence intervals produced by different methods can be different, but nevertheless equally valid. This is of high relevance to 13C MFA, since practitioners regularly use three different approximate approaches for calculating confidence intervals. By means of a computational study with a realistic model of the central carbon metabolism of E. coli, we provide strong evidence that confidence intervals used in the field depend strongly on the technique with which they were calculated and, thus, their use leads to misinterpretation of the flux uncertainty. In order to provide a better alternative to confidence intervals in 13C MFA, we demonstrate that credible intervals from the paradigm of Bayesian statistics give more reliable flux uncertainty quantifications which can be readily computed with high accuracy using Markov chain Monte Carlo. In addition, the widely applied chi-square test, as a means of testing whether the model reproduces the data, is examined closer. © 2017 Wiley Periodicals, Inc.
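
    The recommendation is easy to act on in miniature: a random-walk Metropolis sampler whose chain quantiles give the credible interval (Python with numpy; the one-flux toy model, the measurement, and the proposal width are invented, unlike the realistic E. coli network in the paper):

        import numpy as np

        rng = np.random.default_rng(6)
        y_obs, sigma = 0.63, 0.02            # hypothetical labeling measurement

        def log_post(v):
            if v <= 0.0:
                return -np.inf               # fluxes constrained positive
            y_model = v / (1.0 + v)          # toy flux-to-measurement map
            return -0.5 * ((y_obs - y_model) / sigma) ** 2

        v, chain = 1.0, []
        for step in range(50000):
            w = v + rng.normal(0.0, 0.1)     # random-walk proposal
            if np.log(rng.uniform()) < log_post(w) - log_post(v):
                v = w
            chain.append(v)

        print(np.percentile(chain[5000:], [2.5, 97.5]))   # 95% credible interval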

  15. Ionosonde measurements in Bayesian statistical ionospheric tomography with incoherent scatter radar validation

    Directory of Open Access Journals (Sweden)

    J. Norberg

    2015-09-01

    Full Text Available We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with a prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters, and use Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient and statistically clear inversion algorithm for tomography. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT UHF incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the height distribution of electron density, and outperform the alternative prior information sources. With an ionosonde at continuous disposal, the presented method significantly enhances stand-alone near real-time ionospheric tomography for the given conditions.

  16. Bayesian Analysis of Two Stellar Populations in Galactic Globular Clusters. I. Statistical and Computational Methods

    Science.gov (United States)

    Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.

    2016-07-01

    We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).

  17. Bayesian Reliability-Growth Analysis for Statistical Problems of Diverse Population Based on Non-homogeneous Poisson Process

    Institute of Scientific and Technical Information of China (English)

    MING Zhimao; TAO Junyong; ZHANG Yunan; YI Xiaoshan; CHEN Xun

    2009-01-01

    New armament systems demand methods for dealing with multi-stage system reliability-growth statistical problems of diverse populations in order to improve reliability before starting mass production. Aiming at the test process in the development of complex systems, which is expensive and has small sample sizes, specific methods are studied for processing the statistical information of Bayesian reliability growth regarding diverse populations. Firstly, according to the characteristics of reliability growth during product development, the Bayesian method is used to integrate the testing information of multiple stages and the order relations of distribution parameters. Then a Gamma-Beta prior distribution is proposed based on a non-homogeneous Poisson process (NHPP) corresponding to the reliability growth process. The posterior distribution of reliability parameters is obtained for different stages of the product, and the reliability parameters are evaluated based on the posterior distribution. Finally, the Bayesian approach proposed in this paper for multi-stage reliability growth tests is applied to a small-sample test process in the astronautics field. The results of a numerical example show that the presented model can make use of the diverse information synthetically, and pave the way for the application of the Bayesian model to multi-stage reliability growth test evaluation with small sample sizes. The method is useful for evaluating multi-stage system reliability and making reliability growth plans rationally.

  18. Bayesian Atmospheric Radiative Transfer (BART): Model, Statistics Driver, and Application to HD 209458b

    Science.gov (United States)

    Cubillos, Patricio; Harrington, Joseph; Blecic, Jasmina; Stemm, Madison M.; Lust, Nate B.; Foster, Andrew S.; Rojo, Patricio M.; Loredo, Thomas J.

    2014-11-01

    Multi-wavelength secondary-eclipse and transit depths probe the thermo-chemical properties of exoplanets. In recent years, several research groups have developed retrieval codes to analyze the existing data and study the prospects of future facilities. However, the scientific community has limited access to these packages. Here we premiere the open-source Bayesian Atmospheric Radiative Transfer (BART) code. We discuss the key aspects of the radiative-transfer algorithm and the statistical package. The radiation code includes line databases for all HITRAN molecules and for high-temperature H2O, TiO, and VO, with a preprocessor for adding additional line databases without recompiling the radiation code. Collision-induced absorption lines are available for H2-H2 and H2-He. The parameterized thermal and molecular abundance profiles can be modified arbitrarily without recompilation. The generated spectra are integrated over arbitrary bandpasses for comparison to data. BART's statistical package, Multi-core Markov-chain Monte Carlo (MC3), is a general-purpose MCMC module. MC3 implements the Differential-Evolution Markov-chain Monte Carlo algorithm (ter Braak 2006, 2009). MC3 converges 20-400 times faster than the usual Metropolis-Hastings MCMC algorithm, and in addition uses the Message Passing Interface (MPI) to parallelize the MCMC chains. We apply the BART retrieval code to the HD 209458b data set to estimate the planet's temperature profile and molecular abundances. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
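
    The core of the Differential-Evolution MCMC proposal (ter Braak 2006) is easy to sketch: each chain jumps along the difference of two other randomly chosen chains. The following is a minimal illustration, not MC3's actual implementation.

    ```python
    # Sketch of the Differential-Evolution MCMC proposal that MC3 builds on.
    import numpy as np

    def log_post(theta):
        return -0.5 * np.sum(theta**2)          # stand-in log-posterior

    rng = np.random.default_rng(2)
    n_chains, n_dim, n_steps = 8, 3, 5000
    chains = rng.normal(size=(n_chains, n_dim))
    gamma = 2.38 / np.sqrt(2 * n_dim)           # recommended jump scale

    for _ in range(n_steps):
        for i in range(n_chains):
            # pick two distinct other chains and jump along their difference
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i], 2,
                                replace=False)
            prop = (chains[i] + gamma * (chains[r1] - chains[r2])
                    + rng.normal(scale=1e-4, size=n_dim))   # small jitter term
            if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
                chains[i] = prop
    ```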

  19. Integration of association statistics over genomic regions using Bayesian adaptive regression splines

    Directory of Open Access Journals (Sweden)

    Zhang Xiaohua

    2003-11-01

    In the search for genetic determinants of complex disease, two approaches to association analysis are most often employed: testing single loci, or testing a small group of loci jointly via haplotypes for their relationship to disease status. It is still debatable which of these approaches is more favourable, and under what conditions. The former has the advantage of simplicity but suffers severely when alleles at the tested loci are not in linkage disequilibrium (LD) with liability alleles; the latter should capture more of the signal encoded in LD, but is far from simple. The complexity of haplotype analysis could be especially troublesome for association scans over large genomic regions, which, in fact, is becoming the standard design. For these reasons, the authors have been evaluating statistical methods that bridge the gap between single-locus and haplotype-based tests. In this article, they present one such method, which uses non-parametric regression techniques embodied by Bayesian adaptive regression splines (BARS). For a set of markers falling within a common genomic region and a corresponding set of single-locus association statistics, the BARS procedure integrates these results into a single test by examining the class of smooth curves consistent with the data. The non-parametric BARS procedure generally finds no signal when no liability allele exists in the tested region (i.e., it achieves the specified size of the test) and is sensitive enough to pick up signals when a liability allele is present. The BARS procedure provides a robust and potentially powerful alternative to classical tests of association, diminishes the multiple testing problem inherent in those tests, and can be applied to a wide range of data types, including genotype frequencies estimated from pooled samples.
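
    A rough fixed-knot stand-in for the idea, assuming -log10 p-values as the single-locus statistics, might look as follows; BARS itself places knots adaptively via Bayesian (reversible-jump) sampling.

    ```python
    # Rough illustration: smooth single-locus association statistics across a
    # genomic region and summarize the fitted curve. A penalized smoothing
    # spline stands in for the adaptive-knot BARS fit.
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(3)
    pos = np.sort(rng.uniform(0, 1e5, 80))            # marker positions (bp)
    signal = 3.0 * np.exp(-((pos - 5e4) / 8e3) ** 2)  # latent association signal
    neglog_p = signal + rng.normal(scale=0.8, size=pos.size)  # -log10 p per marker

    spline = UnivariateSpline(pos, neglog_p, s=60.0)  # penalized least-squares fit
    region_stat = spline(pos).max()                   # single regional summary
    print(f"max of fitted curve: {region_stat:.2f}")
    ```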

  20. Models for Prediction, Explanation and Control: Recursive Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Lorenzo Casini

    2011-01-01

    The Recursive Bayesian Net (RBN) formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, an RBN can be applied to all these tasks. We show in particular how a simple two-level RBN can be used to model a mechanism in cancer science. The higher level of our model contains variables at the clinical level, while the lower level maps the structure of the cell's mechanism for apoptosis.

  1. Frontiers in statistical quality control 11

    CERN Document Server

    Schmid, Wolfgang

    2015-01-01

    The main focus of this edited volume is on three major areas of statistical quality control: statistical process control (SPC), acceptance sampling and design of experiments. The majority of the papers deal with statistical process control, while acceptance sampling and design of experiments are also treated to a lesser extent. The book is organized into four thematic parts, with Part I addressing statistical process control. Part II is devoted to acceptance sampling. Part III covers the design of experiments, while Part IV discusses related fields. The twenty-three papers in this volume stem from The 11th International Workshop on Intelligent Statistical Quality Control, which was held in Sydney, Australia from August 20 to August 23, 2013. The event was hosted by Professor Ross Sparks, CSIRO Mathematics, Informatics and Statistics, North Ryde, Australia and was jointly organized by Professors S. Knoth, W. Schmid and Ross Sparks. The papers presented here were carefully selected and reviewed by the scientifi...

  2. Frontiers in statistical quality control

    CERN Document Server

    Wilrich, Peter-Theodor

    1997-01-01

    Like the preceding volumes, which met with a lively response, the present volume collects contributions stressing methodology or successful industrial applications. The papers are classified under four main headings: sampling inspection, process quality control, data analysis and process capability studies, and experimental design.

  3. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    Science.gov (United States)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and denoising is therefore an indispensable processing step. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation of the local observed variance, with a generalized Gamma density prior on that variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
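
    For intuition, the special case of Gaussian noise with a Laplacian prior on the clean coefficient gives a closed-form MAP rule, soft thresholding; the paper's generalized Gamma variance prior yields a more refined shrinkage function. The sketch below assumes this simplified model.

    ```python
    # Minimal sketch of a Bayesian MAP shrinkage rule for wavelet-domain
    # denoising: Gaussian noise plus a Laplacian prior on the clean coefficient
    # gives soft thresholding. Textbook special case, not the paper's rule.
    import numpy as np

    def map_soft_threshold(y, sigma_noise, sigma_signal):
        """MAP estimate of clean coefficients under a Laplacian prior."""
        thresh = np.sqrt(2.0) * sigma_noise**2 / sigma_signal
        return np.sign(y) * np.maximum(np.abs(y) - thresh, 0.0)

    rng = np.random.default_rng(4)
    clean = rng.laplace(scale=1.0 / np.sqrt(2.0), size=1000)  # unit-variance signal
    noisy = clean + rng.normal(scale=0.5, size=clean.size)
    denoised = map_soft_threshold(noisy, sigma_noise=0.5, sigma_signal=1.0)
    print("MSE noisy:", np.mean((noisy - clean) ** 2),
          "MSE denoised:", np.mean((denoised - clean) ** 2))
    ```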

  4. An overview of component qualification using Bayesian statistics and energy methods.

    Energy Technology Data Exchange (ETDEWEB)

    Dohner, Jeffrey Lynn

    2011-09-01

    The overview below is designed to give the reader a limited understanding of Bayesian and maximum likelihood (MLE) estimation; a basic understanding of some of the mathematical tools for evaluating the quality of an estimate; an introduction to energy methods; and a limited discussion of damage potential. The discussion then goes on to present how energy methods and Bayesian estimation are used together to qualify components. Example problems with solutions are supplied as a learning aid. Bold letters represent random variables; un-bolded letters represent deterministic values. A concluding section presents a discussion of attributes and concerns.
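
    The MLE-versus-Bayes contrast at the heart of the overview can be shown with a toy pass/fail qualification example; the Beta prior below is an assumption chosen purely for illustration.

    ```python
    # Toy illustration: estimating a component failure probability from a few
    # pass/fail trials, by maximum likelihood and by a conjugate Bayesian update.

    n_trials, n_failures = 10, 1

    # Maximum likelihood: the observed failure fraction
    p_mle = n_failures / n_trials

    # Bayesian: Beta(1, 9) prior (weak belief that failures are rare)
    a0, b0 = 1.0, 9.0
    a_post, b_post = a0 + n_failures, b0 + n_trials - n_failures
    p_bayes_mean = a_post / (a_post + b_post)

    print(f"MLE: {p_mle:.3f}, posterior mean: {p_bayes_mean:.3f}")
    ```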

  5. Complexity control in statistical learning

    Indian Academy of Sciences (India)

    Sameer M Jalnapurkar

    2006-04-01

    We consider the problem of determining a model for a given system on the basis of experimental data. The amount of data available is limited and, further, may be corrupted by noise. In this situation, it is important to control the complexity of the class of models from which we are to choose our model. In this paper, we first give a simplified overview of the principal features of learning theory. Then we describe how the method of regularization is used to control complexity in learning. We discuss two examples of regularization, one in which the function space used is finite dimensional, and another in which it is a reproducing kernel Hilbert space. Our exposition follows the formulation of Cucker and Smale. We give a new method of bounding the sample error in the regularization scenario, which avoids some difficulties in the derivation given by Cucker and Smale.
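
    A minimal sketch of regularization as complexity control, assuming a Gaussian kernel and squared loss (kernel ridge regression in an RKHS):

    ```python
    # Kernel ridge regression: the penalty weight lam trades data fit against
    # the RKHS norm of the fitted function, i.e. against model complexity.
    import numpy as np

    def gaussian_kernel(X1, X2, width=0.3):
        d2 = (X1[:, None] - X2[None, :]) ** 2
        return np.exp(-d2 / (2 * width**2))

    rng = np.random.default_rng(5)
    x = np.sort(rng.uniform(0, 1, 40))
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

    K = gaussian_kernel(x, x)
    for lam in (1e-6, 1e-2, 1.0):        # small lam -> complex fit, large -> smooth
        alpha = np.linalg.solve(K + lam * np.eye(x.size), y)
        train_mse = np.mean((K @ alpha - y) ** 2)
        print(f"lam={lam:g}: training MSE {train_mse:.3f}")
    ```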

  6. Optimum Inductive Methods. A study in Inductive Probability, Bayesian Statistics, and Verisimilitude.

    NARCIS (Netherlands)

    Festa, Roberto

    1992-01-01

    According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes'theorem. One of the most important

  7. Priors, Posterior Odds and Lagrange Multiplier Statistics in Bayesian Analyses of Cointegration

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); R. Paap (Richard)

    1996-01-01

    textabstractUsing the standard linear model as a base, a unified theory of Bayesian Analyses of Cointegration Models is constructed. This is achieved by defining (natural conjugate) priors in the linear model and using the implied priors for the cointegration model. Using these priors, posterior res

  9. Robust control charts in statistical process control

    NARCIS (Netherlands)

    Nazir, H.Z.

    2014-01-01

    The presence of outliers and contaminations in the output of the process highly affects the performance of the design structures of commonly used control charts and hence makes them of less practical use. One of the solutions to deal with this problem is to use control charts which are robust against

  10. Statistical Analysis of Firearms/Toolmarks Interpretation of Cartridge Case Evidence Using IBIS and Bayesian Networks

    Science.gov (United States)

    2015-10-24

    Cases, KB Morris, E Law, R Jefferys, & E Fabyanic, 67th AAFS Meeting, Orlando, FL, February 2015. Poster: Using likelihood ratios for source attribution...of Glock™ model 21 fired cartridge cases, C Hefner & KB Morris, 67th AAFS Meeting, Orlando, FL, February 2015. (c) Presentations. Number of...and known cartridge cases) to assess the performance of the Bayesian networks created during the study. In all cases the sets were submitted in a

  11. Bayesian Statistical Inference in Ion-Channel Models with Exact Missed Event Correction.

    Science.gov (United States)

    Epstein, Michael; Calderhead, Ben; Girolami, Mark A; Sivilotti, Lucia G

    2016-07-26

    The stochastic behavior of single ion channels is most often described as an aggregated continuous-time Markov process with discrete states. For ligand-gated channels each state can represent a different conformation of the channel protein or a different number of bound ligands. Single-channel recordings show only whether the channel is open or shut: states of equal conductance are aggregated, so transitions between them have to be inferred indirectly. The requirement to filter noise from the raw signal further complicates the modeling process, as it limits the time resolution of the data. The consequence of the reduced bandwidth is that openings or shuttings that are shorter than the resolution cannot be observed; these are known as missed events. Postulated models fitted using filtered data must therefore explicitly account for missed events to avoid bias in the estimation of rate parameters and therefore assess parameter identifiability accurately. In this article, we present the first, to our knowledge, Bayesian modeling of ion-channels with exact missed events correction. Bayesian analysis represents uncertain knowledge of the true value of model parameters by considering these parameters as random variables. This allows us to gain a full appreciation of parameter identifiability and uncertainty when estimating values for model parameters. However, Bayesian inference is particularly challenging in this context as the correction for missed events increases the computational complexity of the model likelihood. Nonetheless, we successfully implemented a two-step Markov chain Monte Carlo method that we called "BICME", which performs Bayesian inference in models of realistic complexity. The method is demonstrated on synthetic and real single-channel data from muscle nicotinic acetylcholine channels. We show that parameter uncertainty can be characterized more accurately than with maximum-likelihood methods. Our code for performing inference in these ion channel

  12. Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods

    Science.gov (United States)

    Davis, A. D.

    2015-12-01

    The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI)---here, the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost.Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity
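
    The push-forward step can be sketched generically: draw parameters from the posterior and run each draw through a forward model to get a predictive distribution for the QoI. The surrogate model and numbers below are invented for illustration.

    ```python
    # Generic sketch of posterior prediction: propagate posterior parameter
    # samples through a cheap surrogate forward model to get a distribution
    # over the quantity of interest (here, future grounded-ice volume).
    import numpy as np

    rng = np.random.default_rng(6)

    # Stand-in posterior over (basal friction, bed slope) - illustrative only
    post_samples = rng.multivariate_normal(
        mean=[1.0, 0.05], cov=[[0.04, 0.0], [0.0, 1e-4]], size=5000)

    def forward_volume(friction, slope, years=100.0):
        """Toy surrogate: grounded-ice volume after `years` (arbitrary units)."""
        retreat_rate = 0.5 * slope / friction
        return np.maximum(10.0 - retreat_rate * years, 0.0)

    qoi = forward_volume(post_samples[:, 0], post_samples[:, 1])
    lo, hi = np.percentile(qoi, [5, 95])
    print(f"posterior predictive 90% interval for ice volume: [{lo:.2f}, {hi:.2f}]")
    ```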

  13. Quantifying veterinarians' beliefs on disease control and exploring the effect of new evidence: a Bayesian approach.

    Science.gov (United States)

    Higgins, H M; Huxley, J N; Wapenaar, W; Green, M J

    2014-01-01

    The clinical beliefs (expectations and demands) of veterinarians regarding herd-level strategies to control mastitis, lameness, and Johne's disease were quantified in a numerical format; 94 veterinarians working in England (UK) were randomly selected and, during interviews, a statistical technique called probabilistic elicitation was used to capture their clinical expectations as probability distributions. The results revealed that markedly different clinical expectations existed for all 3 diseases, and many pairs of veterinarians had expectations with nonoverlapping 95% Bayesian credible intervals. For example, for a 3-yr lameness intervention, the most pessimistic veterinarian was centered at an 11% population mean reduction in lameness prevalence (95% credible interval: 0-21%); the most enthusiastic veterinarian was centered at a 58% reduction (95% credible interval: 38-78%). This suggests that a major change in beliefs would be required to achieve clinical agreement. Veterinarians' clinical expectations were used as priors in Bayesian models where they were combined with synthetic data (from randomized clinical trials of different sizes) to explore the effect of new evidence on current clinical opinion. The mathematical models make predictions based on the assumption that veterinarians will update their beliefs logically. For example, for the lameness intervention, a 200-farm clinical trial that estimated a 30% mean reduction in lameness prevalence was predicted to be reasonably convincing to the most pessimist veterinarian; that is, in light of this data, they were predicted to believe there would be a 0.92 probability of exceeding the median clinical demand of this sample of veterinarians, which was a 20% mean reduction in lameness. Currently, controversy exists over the extent to which veterinarians update their beliefs logically, and further research on this is needed. This study has demonstrated that probabilistic elicitation and a Bayesian framework are
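
    Under an assumed normal approximation for both the elicited prior and the trial estimate, the update described above is a precision-weighted average. The sketch uses the article's illustrative numbers; the variances are our assumptions.

    ```python
    # Conjugate normal-normal update: an elicited prior for the mean percentage
    # reduction in lameness is combined with a hypothetical trial estimate.
    import numpy as np
    from scipy.stats import norm

    prior_mean, prior_sd = 11.0, 5.4      # pessimistic vet, approx. 95% CI 0-21%
    trial_mean, trial_se = 30.0, 4.0      # assumed 200-farm trial result

    w_prior, w_data = 1 / prior_sd**2, 1 / trial_se**2
    post_mean = (w_prior * prior_mean + w_data * trial_mean) / (w_prior + w_data)
    post_sd = np.sqrt(1 / (w_prior + w_data))

    # Probability the updated belief exceeds the 20% median clinical demand
    p_exceed = 1 - norm.cdf(20.0, loc=post_mean, scale=post_sd)
    print(f"posterior: {post_mean:.1f} +/- {post_sd:.1f}, P(>20%) = {p_exceed:.2f}")
    ```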

  14. Quantifying veterinarians’ beliefs on disease control and exploring the effect of new evidence: A Bayesian approach

    Science.gov (United States)

    Higgins, H. M.; Huxley, J. N.; Wapenaar, W.; Green, M.J.

    2017-01-01

    The clinical beliefs (expectations and demands) of veterinarians regarding herd-level strategies to control mastitis, lameness and Johne’s disease were quantified in a numerical format; 94 veterinarians working in England (UK) were randomly selected and during interviews, a statistical technique called ‘probabilistic elicitation’ was used to capture their clinical expectations as probability distributions. The results revealed that markedly different clinical expectations existed for all 3 diseases, and many pairs of veterinarians had expectations with non-overlapping 95% Bayesian credible intervals; for example, for a 3 yr lameness intervention, the most pessimistic veterinarian was centred at an 11% population mean reduction in lameness prevalence (95% credible interval: 0-21%); the most enthusiastic veterinarian was centred at a 58% reduction (95% credible interval: 38-78%). This suggests that a major change in beliefs would be required to achieve clinical agreement. The veterinarians’ clinical expectations were used as priors in Bayesian models where they were combined with synthetic data (from randomized clinical trials of different sizes) in order to explore the impact of new evidence on current clinical opinion. The mathematical models make predictions based on the assumption that veterinarians will update their beliefs logically. For example, for the lameness intervention, a 200 farm clinical trial that estimated a 30% mean reduction in lameness prevalence was predicted to be reasonably convincing to the most pessimist veterinarian; i.e. in light of this data, they were predicted to believe there would be a 0.92 probability of exceeding the median clinical demand of this sample of veterinarians, which was a 20% mean reduction in lameness. Currently controversy exists over the extent to which veterinarians update their beliefs logically, and further research on this is needed. This study has demonstrated that probabilistic elicitation and a Bayesian

  15. Using Bayesian statistics for modeling PTSD through Latent Growth Mixture Modeling: implementation and discussion

    Directory of Open Access Journals (Sweden)

    Sarah Depaoli

    2015-03-01

    Background: After traumatic events, such as disaster, war trauma, and injuries including burns (which are the focus here), the risk of developing posttraumatic stress disorder (PTSD) is approximately 10% (Breslau & Davis, 1992). Latent Growth Mixture Modeling can be used to classify individuals into distinct groups exhibiting different patterns of PTSD (Galatzer-Levy, 2015). Currently, empirical evidence points to four distinct trajectories of PTSD patterns in those who have experienced burn trauma. These trajectories are labeled as: resilient, recovery, chronic, and delayed onset trajectories (e.g., Bonanno, 2004; Bonanno, Brewin, Kaniasty, & Greca, 2010; Maercker, Gäbler, O'Neil, Schützwohl, & Müller, 2013; Pietrzak et al., 2013). The delayed onset trajectory affects only a small group of individuals, about 4-5% (O'Donnell, Elliott, Lau, & Creamer, 2007). In addition to its low frequency, the later onset of this trajectory may contribute to the fact that these individuals can easily be overlooked by professionals. In this special symposium on estimating PTSD trajectories (Van de Schoot, 2015a), we illustrate how to properly identify this small group of individuals through the Bayesian estimation framework, using previous knowledge through priors (see, e.g., Depaoli & Boyajian, 2014; Van de Schoot, Broere, Perryck, Zondervan-Zwijnenburg, & Van Loey, 2015). Method: We used latent growth mixture modeling (LGMM; Van de Schoot, 2015b) to estimate PTSD trajectories across the 4 years following a traumatic burn. We demonstrate and compare results from traditional (maximum likelihood) and Bayesian estimation using priors (see Depaoli, 2012, 2013). Further, we discuss where priors come from and how to define them in the estimation process. Results: We demonstrate that only the Bayesian approach results in the desired theory-driven solution of PTSD trajectories. Since the priors are chosen subjectively, we also present a sensitivity analysis of the

  16. Gaussian-log-Gaussian wavelet trees, frequentist and Bayesian inference, and statistical signal processing applications

    DEFF Research Database (Denmark)

    Møller, Jesper; Jacobsen, Robert Dahl

    We introduce a promising alternative to the usual hidden Markov tree model for Gaussian wavelet coefficients, where their variances are specified by the hidden states and take values in a finite set. In our new model, the hidden states have a similar dependence structure but they are jointly Gaussian, and the wavelet coefficients have log-variances equal to the hidden states. We argue why this provides a flexible model where frequentist and Bayesian inference procedures become tractable for estimation of parameters and hidden states. Our methodology is illustrated for denoising and edge...

  17. Affine Invariant, Model-Based Object Recognition Using Robust Metrics and Bayesian Statistics

    CERN Document Server

    Zografos, Vasileios; 10.1007/11559573_51

    2010-01-01

    We revisit the problem of model-based object recognition for intensity images and attempt to address some of the shortcomings of existing Bayesian methods, such as unsuitable priors and the treatment of residuals with a non-robust error norm. We do so by using a reformulation of the Huber metric and carefully chosen prior distributions. Our proposed method is invariant to 2-dimensional affine transformations and, because it is relatively easy to train and use, it is suited for general object matching problems.

  18. Bayesian hierarchical clustering for studying cancer gene expression data with unknown statistics.

    Directory of Open Access Journals (Sweden)

    Korsuk Sirinukunwattana

    Clustering analysis is an important tool in studying gene expression data. The Bayesian hierarchical clustering (BHC) algorithm can automatically infer the number of clusters and uses Bayesian model selection to improve clustering quality. In this paper, we present an extension of the BHC algorithm. Our Gaussian BHC (GBHC) algorithm represents data as a mixture of Gaussian distributions. It uses a normal-gamma distribution as a conjugate prior on the mean and precision of each of the Gaussian components. We tested GBHC over 11 cancer and 3 synthetic datasets. The results on cancer datasets show that in sample clustering, GBHC on average produces a clustering partition that is more concordant with the ground truth than those obtained from other commonly used algorithms. Furthermore, GBHC frequently infers a number of clusters that is close to the ground truth. In gene clustering, GBHC also produces a clustering partition that is more biologically plausible than several other state-of-the-art methods. This suggests GBHC as an alternative tool for studying gene expression data. The implementation of GBHC is available at https://sites.google.com/site/gaussianbhc/
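
    The conjugate normal-gamma machinery underlying each Gaussian component admits a closed-form update; a minimal sketch, with illustrative hyperparameters, follows.

    ```python
    # Conjugate normal-gamma update for one Gaussian component: prior
    # NG(mu0, kappa0, alpha0, beta0) on (mean, precision), updated in closed form.
    import numpy as np

    def normal_gamma_update(x, mu0=0.0, kappa0=1.0, alpha0=2.0, beta0=2.0):
        n, xbar, var = x.size, x.mean(), x.var()
        kappa_n = kappa0 + n
        mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
        alpha_n = alpha0 + n / 2.0
        beta_n = (beta0 + 0.5 * n * var
                  + 0.5 * kappa0 * n * (xbar - mu0) ** 2 / kappa_n)
        return mu_n, kappa_n, alpha_n, beta_n

    rng = np.random.default_rng(7)
    expr = rng.normal(loc=2.0, scale=0.5, size=25)   # one gene's expression values
    mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update(expr)
    print(f"posterior mean of mu: {mu_n:.2f}, of precision: {alpha_n / beta_n:.2f}")
    ```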

  19. STATISTICAL PROCESS CONTROL IN SERBIAN FOOD PACKAGING

    Directory of Open Access Journals (Sweden)

    Djekic Ilija

    2014-09-01

    This paper gives an overview of the food packaging process in seven food companies in the dairy and confectionery sectors. A total of 23 production runs were analyzed with respect to the three packers' rules outlined in Serbian legislation and to process capability tests related to statistical process control. None of the companies had any type of statistical process control in place. The results confirmed that more companies show overweight packaging than underfilling. Production runs are more accurate than precise, although in some cases the production is both inaccurate and imprecise. Education and training of the new generation of food industry workers (at both the operational and managerial levels), with courses in the food area covering elements of quality assurance and statistical process control, can help in implementing effective food packaging.
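
    The accuracy-versus-precision distinction drawn above is what the usual capability indices quantify; a minimal sketch with invented filling-line numbers:

    ```python
    # Process capability indices for a filling line against a nominal weight
    # and tolerance. All numbers are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(8)
    weights = rng.normal(loc=251.2, scale=1.1, size=200)  # grams, one production run
    nominal, tol = 250.0, 4.5                             # spec limits 250 +/- 4.5 g
    lsl, usl = nominal - tol, nominal + tol

    mu, sigma = weights.mean(), weights.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)                        # precision (spread) only
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)           # penalizes off-center mean
    print(f"Cp = {cp:.2f} (precision), Cpk = {cpk:.2f} (accuracy + precision)")
    ```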

  20. Back to BaySICS: a user-friendly program for Bayesian Statistical Inference from Coalescent Simulations.

    Science.gov (United States)

    Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love

    2014-01-01

    Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical fundament and exceptional flexibility. However, limited availability of user-friendly programs that perform ABC analysis renders it difficult to implement, and hence programming skills are frequently required. In addition, there is limited availability of programs able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although providing specific features that improve inference from datasets with heterochronous data, BaySICS also has several capabilities making it a suitable tool for analysing contemporary genetic datasets. Those capabilities include joint analysis of independent tables, a graphical interface and the implementation of Markov-chain Monte Carlo without likelihoods.
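
    The ABC scheme that BaySICS automates can be reduced to a few lines of rejection sampling; the toy simulator below stands in for the coalescent simulations.

    ```python
    # Minimal ABC rejection sampler: simulate from the prior and keep parameters
    # whose simulated summary statistic falls close to the observed one.
    import numpy as np

    rng = np.random.default_rng(9)
    obs_summary = 4.2                  # observed summary, e.g. nucleotide diversity

    def simulate_summary(theta):
        """Toy, vectorized stand-in for a coalescent simulation."""
        return rng.normal(loc=theta, scale=1.0)

    n_sims, eps = 20000, 0.1
    theta_prior = rng.uniform(0.0, 10.0, size=n_sims)     # prior draws
    sims = simulate_summary(theta_prior)
    accepted = theta_prior[np.abs(sims - obs_summary) < eps]

    print(f"accepted {accepted.size} draws; posterior mean ~ {accepted.mean():.2f}")
    ```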

  1. Bayesian zero-failure reliability modeling and assessment method for multiple numerical control (NC) machine tools

    Institute of Scientific and Technical Information of China (English)

    阚英男; 杨兆军; 李国发; 何佳龙; 王彦鹍; 李洪洲

    2016-01-01

    A new problem that classical statistical methods are incapable of solving is reliability modeling and assessment when multiple numerical control machine tools (NCMTs) reveal zero failures after a reliability test. Thus, the zero-failure data form and corresponding Bayesian model are developed to solve the zero-failure problem of NCMTs, for which no previous suitable statistical model has been developed. An expert-judgment process that incorporates prior information is presented to solve the difficulty in obtaining reliable prior distributions of Weibull parameters. The equations for the posterior distribution of the parameter vector and the Markov chain Monte Carlo (MCMC) algorithm are derived to solve the difficulty of calculating high-dimensional integration and to obtain parameter estimators. The proposed method is applied to a real case; a corresponding programming code and trick are developed to implement an MCMC simulation in WinBUGS, and a mean time between failures (MTBF) of 1057.9 h is obtained. Given its ability to combine expert judgment, prior information, and data, the proposed reliability modeling and assessment method under the zero failure of NCMTs is validated.

  2. Bayesian integration and non-linear feedback control in a full-body motor task.

    Directory of Open Access Journals (Sweden)

    Ian H Stevenson

    2009-12-01

    A large number of experiments have asked to what degree human reaching movements can be understood as being close to optimal in a statistical sense. However, little is known about whether these principles are relevant for other classes of movements. Here we analyzed movement in a task that is similar to surfing or snowboarding. Human subjects stand on a force plate that measures their center of pressure. This center of pressure affects the acceleration of a cursor that is displayed in a noisy fashion (as a cloud of dots) on a projection screen, while the subject is incentivized to keep the cursor close to a fixed position. We find that salient aspects of observed behavior are well described by optimal control models in which a Bayesian estimation model (Kalman filter) is combined with an optimal controller (either a linear-quadratic regulator or a bang-bang controller). We find evidence that subjects integrate information over time, taking uncertainty into account. However, behavior in this continuous steering task appears to be a highly non-linear function of the visual feedback. While the nervous system appears to implement Bayes-like mechanisms for a full-body, dynamic task, it may additionally take into account the specific costs and constraints of the task.
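
    A minimal sketch of the model class referred to above, combining a steady-state Kalman filter with an LQR controller; the system matrices and noise levels are illustrative assumptions, not those of the experiment.

    ```python
    # Kalman filter + LQR: estimate the cursor state from noisy observations
    # and control it with a linear-quadratic regulator. Illustrative only.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    dt = 0.05
    A = np.array([[1.0, dt], [0.0, 1.0]])       # cursor position/velocity dynamics
    B = np.array([[0.0], [dt]])                 # center-of-pressure input
    C = np.array([[1.0, 0.0]])                  # only noisy position is displayed

    Q, R = np.diag([1.0, 0.1]), np.array([[0.01]])      # LQR state/control costs
    W, V = 1e-4 * np.eye(2), np.array([[0.05]])         # process/observation noise

    # LQR gain: u = -K x_hat
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    # Steady-state Kalman gain via the dual Riccati equation
    S = solve_discrete_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

    rng = np.random.default_rng(10)
    x, x_hat = np.array([1.0, 0.0]), np.zeros(2)
    for _ in range(200):                        # one closed-loop trial
        u = -K @ x_hat
        x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
        y = C @ x + rng.normal(scale=np.sqrt(V[0, 0]))
        x_pred = A @ x_hat + B @ u
        x_hat = x_pred + L @ (y - C @ x_pred)
    print("final estimated state:", x_hat)
    ```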

  3. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  4. Net analyte signal based statistical quality control

    NARCIS (Netherlands)

    Skibsted, E.T.S.; Boelens, H.F.M.; Westerhuis, J.A.; Smilde, A.K.; Broad, N.W.; Rees, D.R.; Witte, D.T.

    2005-01-01

    Net analyte signal statistical quality control (NAS-SQC) is a new methodology to perform multivariate product quality monitoring based on the net analyte signal approach. The main advantage of NAS-SQC is that the systematic variation in the product due to the analyte (or property) of interest is sep

  5. EWMA control charts in statistical process monitoring

    NARCIS (Netherlands)

    Zwetsloot, I.M.

    2016-01-01

    In today’s world, the amount of available data is steadily increasing, and it is often of interest to detect changes in the data. Statistical process monitoring (SPM) provides tools to monitor data streams and to signal changes in the data. One of these tools is the control chart. The topic of this
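
    For reference, the EWMA statistic and its time-varying control limits take only a few lines; the parameter choices below are common textbook values, not those of the thesis.

    ```python
    # EWMA control chart: z_t = lam * x_t + (1 - lam) * z_{t-1}, signalling
    # when z_t leaves its control limits.
    import numpy as np

    rng = np.random.default_rng(11)
    x = np.concatenate([rng.normal(0, 1, 60), rng.normal(0.8, 1, 40)])  # shift at t=60

    lam, L = 0.2, 2.7          # common smoothing and control-limit choices
    mu0, sigma = 0.0, 1.0
    z = mu0
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1 - lam) * z
        # time-varying limits; the asymptotic limit drops the (1-(1-lam)^(2t)) factor
        half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if abs(z - mu0) > half_width:
            print(f"signal at observation {t}, EWMA = {z:.2f}")
            break
    ```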

  6. SYNTHESIS OF THE SECONDARY STRUCTURE OF ALGEBRAIC BAYESIAN NETWORKS: AN INCREMENTAL ALGORITHM AND STATISTICAL ESTIMATION OF ITS COMPLEXITY

    Directory of Open Access Journals (Sweden)

    M. A. Zotov

    2016-01-01

    An improved algorithm for the synthesis of the secondary structure of algebraic Bayesian networks, represented by a minimal join graph, is proposed in the paper. The algorithm differs from the one offered previously in that it relies on the incremental principle, uses specially selected edges, and finally eliminates redundant edges with a greedy algorithm. The correct operation of the incremental algorithm is mathematically proved. The computational complexity of the new (incremental) algorithm implementation is compared with two well-known ones (greedy and direct) by means of statistical estimates of complexity, based on sample values of the runtime ratio of software implementations of the two compared algorithms. Theoretical complexity estimates of the greedy and direct algorithms were obtained earlier, but they are not suitable for comparative analysis, as they are based on hidden characteristics of the secondary structure that can be calculated only once it is built. To minimize the influence of random factors, the ratio is calculated from average program runtimes obtained from N launches on the same set of workloads. The sample values of the ratio are formed for M sets of equal power K. From the sample values the median is calculated, as well as other statistics that characterize the spread: the borders of the 97% confidence interval along with the first and third quartiles. Sets of loads are generated stochastically according to the specified parameters using the algorithm described in the paper. The stochastic algorithms for generating a set of loads with given power, for collecting the statistical data, and for calculating the statistical estimates of the ratio of the direct and greedy algorithm runtimes to the incremental algorithm runtime are described in the paper. A series of experiments is carried out in which N is varied over the range 1, 2 ... 9, 10, 26, 42 ... 170. They show that the incremental algorithm speed exceeds the

  7. Bayesian Statistics and Parameter Constraints on the Generalized Chaplygin Gas Model using SNe Ia Data

    CERN Document Server

    Colistete, R C; Goncalves, S V B

    2004-01-01

    The type Ia supernovae (SNe Ia) observational data are used to estimate the parameters of a cosmological model with cold dark matter and the generalized Chaplygin gas model (GCGM). The GCGM depends essentially on five parameters: the Hubble constant, the parameter $\bar{A}$ related to the velocity of the sound, the equation of state parameter $\alpha$, the curvature of the Universe and the fraction density of the generalized Chaplygin gas (or the cold dark matter). The parameter $\alpha$ is allowed to take negative values and to be greater than 1. The Bayesian parameter estimation yields $\alpha = - 0.86^{+6.01}_{-0.15}$, $H_0 = 62.0^{+1.32}_{-1.42}\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_{k0}=-1.26_{-1.42}^{+1.32}$, $\Omega_{m0} = 0.00^{+0.86}_{-0.00}$, $\Omega_{c0} = 1.39^{+1.21}_{-1.25}$, $\bar A =1.00^{+0.00}_{-0.39}$, $t_0 = 15.3^{+4.2}_{-3.2}$ and $q_0 = -0.80^{+0.86}_{-0.62}$, where $t_0$ is the age of the Universe and $q_0$ is the value of the deceleration parameter today. Our results indicate that a Universe completely ...

  8. Radiative Transfer meets Bayesian statistics: where does your Galaxy's [CII] come from?

    CERN Document Server

    Accurso, Gioacchino; Bisbas, Thomas G; Viti, Serena

    2016-01-01

    The [CII] 158$\\mu$m emission line can arise in all phases of the ISM, therefore being able to disentangle the different contributions is an important yet unresolved problem when undertaking galaxy-wide, integrated [CII] observations. We present a new multi-phase 3D radiative transfer interface that couples Starburst99, a stellar spectrophotometric code, with the photoionisation and astrochemistry codes Mocassin and 3D-PDR. We model entire star forming regions, including the ionised, atomic and molecular phases of the ISM, and apply a Bayesian inference methodology to parametrise how the fraction of the [CII] emission originating from molecular regions, $f_{[CII],mol}$, varies as a function of typical integrated properties of galaxies in the local Universe. The main parameters responsible for the variations of $f_{[CII],mol}$ are specific star formation rate (sSFR), gas phase metallicity, HII region electron number density ($n_e$), and dust mass fraction. For example, $f_{[CII],mol}$ can increase from 60% to 8...

  9. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    Science.gov (United States)

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.

  10. Variational Bayesian labeled multi-Bernoulli filter with unknown sensor noise statistics

    Directory of Open Access Journals (Sweden)

    Qiu Hao

    2016-10-01

    It is difficult to build an accurate model for the measurement noise covariance in complex backgrounds. For scenarios with unknown sensor noise variances, an adaptive multi-target tracking algorithm based on the labeled random finite set and variational Bayesian (VB) approximation is proposed. The variational approximation technique is introduced into the labeled multi-Bernoulli (LMB) filter to jointly estimate the states of targets and the sensor noise variances. Simulation results show that the proposed method gives unbiased estimation of cardinality and has better performance than the VB probability hypothesis density (VB-PHD) filter and the VB cardinality balanced multi-target multi-Bernoulli (VB-CBMeMBer) filter in harsh situations. The simulations also confirm the robustness of the proposed method against time-varying noise variances. The computational complexity of the proposed method is higher than that of the VB-PHD and VB-CBMeMBer filters in extreme cases, while the mean execution times of the three methods are close when targets are well separated.

  11. Statistical flaw characterization through Bayesian shape inversion from scattered wave observations

    Science.gov (United States)

    McMahan, Jerry A.; Criner, Amanda K.

    2016-02-01

    A method is discussed to characterize the shape of a flaw from noisy far-field measurements of a scattered wave. The scattering model employed is a two-dimensional Helmholtz equation which quantifies scattering due to interrogating signals from various physical phenomena such as acoustics or electromagnetics. The well-known inherent ill-posedness of the inverse scattering problem is addressed via Bayesian regularization. The method is loosely related to the approach described in [1] which uses the framework of [2] to prove the well-posedness of the infinite-dimensional problem and derive estimates of the error for a particular discretization approach. The method computes the posterior probability density for the flaw shape from the scattered field observations, taking into account prior assumptions which are used to describe any a priori knowledge of the flaw. We describe the computational approach to the forward problem as well as the Markov chain Monte Carlo (MCMC) based approach to approximating the posterior. We present simulation results for some hypothetical flaw shapes with varying levels of observation error and arrangement of observation points. The results show how the posterior probability density can be used to visualize the shape of the flaw taking into account the quantitative confidence in the quality of the estimation and how various arrangements of the measurements and interrogating signals affect the estimation.

  12. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    Science.gov (United States)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

    Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component

  13. Radiative transfer meets Bayesian statistics: where does a galaxy's [C II] emission come from?

    Science.gov (United States)

    Accurso, G.; Saintonge, A.; Bisbas, T. G.; Viti, S.

    2017-01-01

    The [C II] 158 μm emission line can arise in all phases of the interstellar medium (ISM), therefore being able to disentangle the different contributions is an important yet unresolved problem when undertaking galaxy-wide, integrated [C II] observations. We present a new multiphase 3D radiative transfer interface that couples STARBURST99, a stellar spectrophotometric code, with the photoionization and astrochemistry codes MOCASSIN and 3D-PDR. We model entire star-forming regions, including the ionized, atomic, and molecular phases of the ISM, and apply a Bayesian inference methodology to parametrize how the fraction of the [C II] emission originating from molecular regions, f_{[C II],mol}, varies as a function of typical integrated properties of galaxies in the local Universe. The main parameters responsible for the variations of f_{[C II],mol} are specific star formation rate (SSFR), gas phase metallicity, H II region electron number density (n_e), and dust mass fraction. For example, f_{[C II],mol} can increase from 60 to 80 per cent when either n_e increases from 10^{1.5} to 10^{2.5} cm^{-3}, or SSFR decreases from 10^{-9.6} to 10^{-10.6} yr^{-1}. Our model predicts for the Milky Way that f_{[C II],mol} = 75.8 ± 5.9 per cent, in agreement with the measured value of 75 per cent. When applying the new prescription to a complete sample of galaxies from the Herschel Reference Survey, we find that anywhere from 60 to 80 per cent of the total integrated [C II] emission arises from molecular regions.

  14. A Dynamic Bayesian Network Model for the Production and Inventory Control

    Science.gov (United States)

    Shin, Ji-Sun; Takazaki, Noriyuki; Lee, Tae-Hong; Kim, Jin-Il; Lee, Hee-Hyol

    In general, production quantities and delivered goods vary randomly, and the total stock therefore also varies randomly. This paper deals with production and inventory control using a Dynamic Bayesian Network. A Bayesian Network is a probabilistic model which represents the qualitative dependence between two or more random variables by the graph structure, and indicates the quantitative relations between individual variables by conditional probabilities. The probability distribution of the total stock is calculated through the propagation of probability on the network. Moreover, an adjusting rule for the production quantities is shown that maintains the probability of a lower limit and a ceiling of the total stock at certain values.
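
    The propagation of probability through such a network can be illustrated with a single discrete stock node with two parents; the probability tables below are invented.

    ```python
    # Toy propagation: the distribution of the total stock is obtained by
    # summing over random production and delivery quantities (a two-parent
    # discrete Bayesian network node). All numbers are illustrative.
    import itertools

    p_production = {80: 0.3, 100: 0.5, 120: 0.2}   # P(production quantity)
    p_delivery = {70: 0.4, 90: 0.4, 110: 0.2}      # P(delivered goods)
    initial_stock = 50

    p_stock = {}
    for (q, pq), (d, pd) in itertools.product(p_production.items(),
                                              p_delivery.items()):
        s = initial_stock + q - d                  # stock + production - delivery
        p_stock[s] = p_stock.get(s, 0.0) + pq * pd

    for s in sorted(p_stock):
        print(f"P(stock = {s}) = {p_stock[s]:.2f}")
    print("P(stock >= 60) =", sum(p for s, p in p_stock.items() if s >= 60))
    ```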

  15. Statistical quality control a loss minimization approach

    CERN Document Server

    Trietsch, Dan

    1999-01-01

    While many books on quality espouse the Taguchi loss function, they do not examine its impact on statistical quality control (SQC). But using the Taguchi loss function sheds new light on questions relating to SQC and calls for some changes. This book covers SQC in a way that conforms with the need to minimize loss. Subjects often not covered elsewhere include: (i) measurements, (ii) determining how many points to sample to obtain reliable control charts (for which purpose a new graphic tool, diffidence charts, is introduced), (iii) the connection between process capability and tolerances, (iv)

  16. Data Mining Foundations and Intelligent Paradigms Volume 2 Statistical, Bayesian, Time Series and other Theoretical Aspects

    CERN Document Server

    Jain, Lakhmi

    2012-01-01

    Data mining is one of the most rapidly growing research areas in computer science and statistics. In Volume 2 of this three volume series, we have brought together contributions from some of the most prestigious researchers in theoretical data mining. Each of the chapters is self contained. Statisticians and applied scientists/ engineers will find this volume valuable. Additionally, it provides a sourcebook for graduate students interested in the current direction of research in data mining.

  17. Statistical analysis of a Bayesian classifier based on the expression of miRNAs

    OpenAIRE

    Ricci, Leonardo; Del Vescovo, Valerio; Cantaloni, Chiara; Grasso, Margherita; Barbareschi, Mattia; Denti, Michela Alessandra

    2015-01-01

    Background During the last decade, many scientific works have concerned the possible use of miRNA levels as diagnostic and prognostic tools for different kinds of cancer. The development of reliable classifiers requires tackling several crucial aspects, some of which have been widely overlooked in the scientific literature: the distribution of the measured miRNA expressions and the statistical uncertainty that affects the parameters that characterize a classifier. In this paper, these topics ...

  18. Statistical quality control through overall vibration analysis

    Science.gov (United States)

    Carnero, M. a. Carmen; González-Palma, Rafael; Almorza, David; Mayorga, Pedro; López-Escobar, Carlos

    2010-05-01

    The present study introduces the concept of statistical quality control in automotive wheel bearing manufacturing processes. Defects in the products under analysis can have a direct influence on passengers' safety and comfort. At present, the use of vibration analysis on machine tools for quality control purposes is not very extensive in manufacturing facilities. Noise and vibration are common quality problems in bearings. These failure modes likely occur under certain operating conditions and do not require high vibration amplitudes but relate to certain vibration frequencies. The vibration frequencies are affected by the type of surface problems (chattering) of ball races that are generated through grinding processes. The purpose of this paper is to identify grinding process variables that affect the quality of bearings by using statistical principles in the field of machine tools. In addition, an evaluation of the quality results of the finished parts under different combinations of process variables is assessed. This paper intends to establish the foundations to predict the quality of the products through the analysis of self-induced vibrations during the contact between the grinding wheel and the parts. To achieve this goal, the overall self-induced vibration readings under different combinations of process variables are analysed using statistical tools. The analysis of data and design of experiments follows a classical approach, considering all potential interactions between variables. The analysis of data is conducted through analysis of variance (ANOVA) for data sets that meet normality and homoscedasticity criteria. This paper utilizes different statistical tools to support the conclusions, such as the chi-squared, Shapiro-Wilk, symmetry, kurtosis, Cochran, Bartlett, Hartley, and Kruskal-Wallis tests. The analysis presented is the starting point to extend the use of predictive techniques (vibration analysis) for quality control. This paper demonstrates the existence

  19. The statistical process control methods - SPC

    Directory of Open Access Journals (Sweden)

    Floreková Ľubica

    1998-03-01

    Methods of statistical quality evaluation, SPC (item 20 of the documentation system of quality control of the ISO 9000 series), for various processes, products and services belong among the basic qualitative methods that enable us to analyse and compare data pertaining to various quantitative parameters. They also enable us, based on the latter, to propose suitable interventions with the aim of improving these processes, products and services. The theoretical basis and applicability of the principles of: - diagnostics of cause and effect, - Pareto analysis and the Lorenz curve, - number distributions and frequency curves of random variable distributions, - Shewhart control charts, are presented in the contribution.

  20. Case-control studies of gene-environment interaction: Bayesian design and analysis.

    Science.gov (United States)

    Mukherjee, Bhramar; Ahn, Jaeil; Gruber, Stephen B; Ghosh, Malay; Chatterjee, Nilanjan

    2010-09-01

    With increasing frequency, epidemiologic studies are addressing hypotheses regarding gene-environment interaction. In many well-studied candidate genes and for standard dietary and behavioral epidemiologic exposures, there is often substantial prior information available that may be used to analyze current data as well as for designing a new study. In this article, first, we propose a proper full Bayesian approach for analyzing studies of gene-environment interaction. The Bayesian approach provides a natural way to incorporate uncertainties around the assumption of gene-environment independence, often used in such an analysis. We then consider Bayesian sample size determination criteria for both estimation and hypothesis testing regarding the multiplicative gene-environment interaction parameter. We illustrate our proposed methods using data from a large ongoing case-control study of colorectal cancer investigating the interaction of N-acetyl transferase type 2 (NAT2) with smoking and red meat consumption. We use the existing data to elicit a design prior and show how to use this information in allocating cases and controls in planning a future study that investigates the same interaction parameters. The Bayesian design and analysis strategies are compared with their corresponding frequentist counterparts.

  1. Memory-type control charts in statistical process control

    NARCIS (Netherlands)

    Abbas, N.

    2012-01-01

    The control chart is the most important statistical tool for managing business processes. It is a graph of measurements of a quality characteristic of the process on the vertical axis plotted against time on the horizontal axis. The graph is completed with control limits that mark the bounds of natural variation. Once

  2. Analyzing genome-wide association studies with an FDR controlling modification of the Bayesian Information Criterion.

    Science.gov (United States)

    Dolejsi, Erich; Bodenstorfer, Bernhard; Frommlet, Florian

    2014-01-01

    The prevailing method of analyzing GWAS data is still to test each marker individually, although from a statistical point of view it is quite obvious that in case of complex traits such single marker tests are not ideal. Recently several model selection approaches for GWAS have been suggested, most of them based on LASSO-type procedures. Here we will discuss an alternative model selection approach which is based on a modification of the Bayesian Information Criterion (mBIC2) which was previously shown to have certain asymptotic optimality properties in terms of minimizing the misclassification error. Heuristic search strategies are introduced which attempt to find the model which minimizes mBIC2, and which are efficient enough to allow the analysis of GWAS data. Our approach is implemented in a software package called MOSGWA. Its performance in case control GWAS is compared with the two algorithms HLASSO and d-GWASelect, as well as with single marker tests, where we performed a simulation study based on real SNP data from the POPRES sample. Our results show that MOSGWA performs slightly better than HLASSO, where specifically for more complex models MOSGWA is more powerful with only a slight increase in Type I error. On the other hand according to our simulations GWASelect does not at all control the type I error when used to automatically determine the number of important SNPs. We also reanalyze the GWAS data from the Wellcome Trust Case-Control Consortium and compare the findings of the different procedures, where MOSGWA detects for complex diseases a number of interesting SNPs which are not found by other methods.
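
    A sketch of a criterion of the mBIC2 type, with a greedy forward search, is given below. The penalty form (BIC plus a 2k log(p/c) multiple-testing term minus 2 log k!, with c = 4) follows a commonly cited formulation and is an assumption to be checked against the paper.

    ```python
    # Model selection with an mBIC2-style criterion over p candidate markers.
    # The exact penalty constants are assumptions, not taken from the paper.
    import numpy as np
    from math import lgamma, log

    def mbic2(rss, n, k, p, c=4.0):
        """mBIC2-style score for a linear model with k of p markers selected."""
        return (n * log(rss / n)        # fit term
                + k * log(n)            # classical BIC penalty
                + 2 * k * log(p / c)    # multiple-testing penalty over p markers
                - 2 * lgamma(k + 1))    # log(k!) relaxation (FDR-style)

    # Toy greedy forward search over simulated markers
    rng = np.random.default_rng(12)
    n, p = 500, 200
    X = rng.normal(size=(n, p))
    y = 1.5 * X[:, 3] - 1.0 * X[:, 17] + rng.normal(size=n)

    selected = []
    best = mbic2(float(y @ y), n, 0, p)          # score of the empty model
    improved = True
    while improved and len(selected) < p:
        improved = False
        scores = {}
        for j in range(p):
            if j in selected:
                continue
            Xs = X[:, selected + [j]]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = float(np.sum((y - Xs @ beta) ** 2))
            scores[j] = mbic2(rss, n, len(selected) + 1, p)
        j_best = min(scores, key=scores.get)
        if scores[j_best] < best:                # keep only improving additions
            best = scores[j_best]
            selected.append(j_best)
            improved = True
    print("selected markers:", sorted(selected))
    ```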

  4. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  5. Bayesian ranking of sites for engineering safety improvements: decision parameter, treatability concept, statistical criterion, and spatial dependence.

    Science.gov (United States)

    Miaou, Shaw-Pin; Song, Joon Jin

    2005-07-01

    limitation of using the naïve approach in ranking is illustrated. Second, following the model-based approach, the choice of decision parameters and the consideration of treatability are discussed. Third, several statistical ranking criteria that have been used in biomedical, health, and other scientific studies are presented from a Bayesian perspective. Their applications in roadway safety are then demonstrated using two data sets: one for individual urban intersections and one for rural two-lane roads at the county level. As part of the demonstration, it is shown how a multivariate spatial GLMM can be used to model traffic crashes of several injury severity types simultaneously and how the model can be used within a Bayesian framework to rank sites by crash cost per vehicle-mile traveled (instead of by crash frequency rate). Finally, the significant impact of spatial effects on the overall model goodness-of-fit and site ranking performance is discussed for the two data sets examined. The paper concludes with a discussion of possible directions in which the study can be extended.

  6. Bayesian networks modeling for thermal error of numerical control machine tools

    Institute of Scientific and Technical Information of China (English)

    Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN

    2008-01-01

    The interaction between the heat source location, its intensity, the thermal expansion coefficient, the machine system configuration and the running environment creates complex thermal behavior of a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method describes the causal relationships of the factors inducing thermal deformation by graph theory and estimates the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method can adapt to changes in the running state of the machine and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than least squares (LS) analysis in terms of modeling estimation accuracy.
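
    The core computation in this kind of BN model is marginalizing and conditioning over discrete factors. Below is a minimal hand-rolled sketch with a hypothetical three-node chain; the variables, states, and probability tables are invented for illustration and are not the paper's network or data.

```python
import numpy as np

# Hypothetical chain: Load -> SpindleTemp -> ThermalError.
p_load = np.array([0.6, 0.4])                  # P(Load): {low, high}

p_temp_given_load = np.array([[0.7, 0.25, 0.05],   # rows: load state
                              [0.1, 0.40, 0.50]])  # cols: temp {cool, warm, hot}

p_err_given_temp = np.array([[0.95, 0.05],     # rows: temp state
                             [0.70, 0.30],     # cols: error {small, large}
                             [0.20, 0.80]])

# Predictive distribution of thermal error by summing out latent variables.
p_temp = p_load @ p_temp_given_load            # marginal over temperature
p_err = p_temp @ p_err_given_temp              # marginal over thermal error
print("P(error) =", p_err)

# Diagnostic query via Bayes' rule: posterior over Load given a 'hot' spindle.
hot = 2
posterior_load = p_load * p_temp_given_load[:, hot]
posterior_load /= posterior_load.sum()
print("P(load | temp=hot) =", posterior_load)
```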

  7. Exploring neighborhood inequality in female breast cancer incidence in Tehran using Bayesian spatial models and a spatial scan statistic.

    Science.gov (United States)

    Ayubi, Erfan; Mansournia, Mohammad Ali; Motlagh, Ali Ghanbari; Mosavi-Jarrahi, Alireza; Hosseini, Ali; Yazdani, Kamran

    2017-01-01

    The aim of this study was to explore the spatial pattern of female breast cancer (BC) incidence at the neighborhood level in Tehran, Iran. The present study included all registered incident cases of female BC from March 2008 to March 2011. The raw standardized incidence ratio (SIR) of BC for each neighborhood was estimated by comparing observed to expected cases. The estimated raw SIRs were smoothed by a Besag, York, and Mollie spatial model and the spatial empirical Bayesian method. The purely spatial scan statistic was used to identify spatial clusters. There were 4,175 incident BC cases in the study area from 2008 to 2011, of which 3,080 were successfully geocoded to the neighborhood level. Higher than expected rates of BC were found in neighborhoods located in northern and central Tehran, whereas lower rates appeared in southern areas. The most likely cluster of higher than expected BC incidence involved neighborhoods in districts 3 and 6, with an observed-to-expected ratio of 3.92 (p<0.001), whereas the most likely cluster of lower than expected rates involved neighborhoods in districts 17, 18, and 19, with an observed-to-expected ratio of 0.05 (p<0.001). Neighborhood-level inequality in the incidence of BC exists in Tehran. These findings can serve as a basis for resource allocation and preventive strategies in at-risk areas.

  8. A Framework for the Statistical Analysis of Probability of Mission Success Based on Bayesian Theory

    Science.gov (United States)

    2014-06-01

    times, launch points and other controlled events, it is possible to optimise the result to find the values of the fixed events for which there is the...the function. To reduce the number of iterations in the function, a method was devised to assess whether each product in the summation was equal to...zero based on whether any of the individual probabilities calculated were equal to zero. These products are unnecessary for the summation, and thus

  9. Bayesian Theory

    CERN Document Server

    Bernardo, Jose M

    2000-01-01

    This highly acclaimed text, now available in paperback, provides a thorough account of key concepts and theoretical results, with particular emphasis on viewing statistical inference as a special case of decision theory. Information-theoretic concepts play a central role in the development of the theory, which provides, in particular, a detailed discussion of the problem of specification of so-called 'prior ignorance'. The work is written from the authors' committed Bayesian perspective, but an overview of non-Bayesian theories is also provided, and each chapter contains a wide-ranging critica

  10. The Statistical Dynamics of Nonequilibrium Control

    Science.gov (United States)

    Rotskoff, Grant Murray

    Living systems, even at the scale of single molecules, are constantly adapting to changing environmental conditions. The physical response of a nanoscale system to external gradients or changing thermodynamic conditions can be chaotic, nonlinear, and hence difficult to control or predict. Nevertheless, biology has evolved systems that reliably carry out the cell's vital functions efficiently enough to ensure survival. Moreover, the development of new experimental techniques to monitor and manipulate single biological molecules has provided a natural testbed for theoretical investigations of nonequilibrium dynamics. This work focuses on developing paradigms both for understanding the principles of nonequilibrium dynamics and for controlling such systems in the presence of thermal fluctuations. Throughout this work, I rely on a perspective based on two central ideas in nonequilibrium statistical mechanics: large deviation theory, which provides a formalism akin to thermodynamics for nonequilibrium systems, and the fluctuation theorems, which identify time symmetry breaking with entropy production. I use the tools of large deviation theory to explore concepts like efficiency and optimal coarse-graining in microscopic dynamical systems. The results point to the extreme importance of rare events in nonequilibrium dynamics. In the context of rare dynamical events, I outline a formal approach to predict efficient control protocols for nonequilibrium systems and develop computational tools to solve the resulting high dimensional optimization problems. The final chapters of this work focus on applications to self-assembly dynamics. I show that the yield of desired structures can be enhanced by driving a system away from equilibrium, using analysis inspired by the theory of the hydrophobic effect. Finally, I demonstrate that nanoscale protein shells can be modeled and controlled to robustly produce monodisperse, nonequilibrium structures strikingly similar to the

  11. Statistical process control for IMRT dosimetric verification.

    Science.gov (United States)

    Breen, Stephen L; Moseley, Douglas J; Zhang, Beibei; Sharpe, Michael B

    2008-10-01

    Patient-specific measurements are typically used to validate the dosimetry of intensity-modulated radiotherapy (IMRT). To evaluate the dosimetric performance of our IMRT process over time, we have used statistical process control (SPC) concepts to analyze the measurements from 330 head and neck (H&N) treatment plans. The objectives of the present work are to: (i) review the dosimetric measurements of a large series of consecutive head and neck treatment plans to better understand appropriate dosimetric tolerances; (ii) analyze the results with SPC to develop action levels for measured discrepancies; (iii) develop estimates for the number of measurements that are required to describe IMRT dosimetry in the clinical setting; and (iv) evaluate with SPC a new beam model in our planning system. H&N IMRT cases were planned with the PINNACLE treatment planning system versions 6.2b or 7.6c (Philips Medical Systems, Madison, WI) and treated on Varian (Palo Alto, CA) or Elekta (Crawley, UK) linacs. As part of regular quality assurance, plans were recalculated on a 20-cm-diam cylindrical phantom, and ion chamber measurements were made in high-dose volumes (the PTV with highest dose) and in low-dose volumes (spinal cord organ-at-risk, OR). Differences between the planned and measured doses were recorded as a percentage of the planned dose. Differences were stable over time. Measurements with PINNACLE3 6.2b and Varian linacs showed a mean difference of 0.6% for PTVs (n=149, range -4.3% to 6.6%), while OR measurements showed a larger systematic discrepancy (mean 4.5%, range -4.5% to 16.3%) that was due to well-known limitations of the MLC model in the earlier version of the planning system. Measurements with PINNACLE3 7.6c and Varian linacs demonstrated a mean difference of 0.2% for PTVs (n=160, range -3.0% to 5.0%) and -1.0% for ORs (range -5.8% to 4.4%). The capability index (ratio of specification range to range of the data) was 1.3 for the PTV data, indicating that almost
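
    The SPC machinery used here is standard: a control chart on the individual dose discrepancies, with limits estimated from the moving range, plus a capability index comparing the specification range to the process spread. A minimal sketch on synthetic data — the simulated discrepancies and the ±5% specification below are assumptions for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical PTV dose discrepancies (%) for consecutive IMRT plans.
diffs = rng.normal(loc=0.2, scale=1.5, size=160)

# Shewhart individuals chart: sigma estimated from the average moving range.
mbar = np.mean(np.abs(np.diff(diffs)))
sigma = mbar / 1.128                       # d2 constant for subgroups of size 2
center = diffs.mean()
ucl, lcl = center + 3 * sigma, center - 3 * sigma
signals = np.flatnonzero((diffs > ucl) | (diffs < lcl))
print(f"center={center:.2f}%, limits=({lcl:.2f}%, {ucl:.2f}%), signals={signals}")

# Capability index against a hypothetical +/-5% specification on discrepancy.
usl, lsl = 5.0, -5.0
cp = (usl - lsl) / (6 * sigma)
print(f"Cp = {cp:.2f}")                    # > 1 suggests the process fits the tolerance
```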

  12. Planetary micro-rover operations on Mars using a Bayesian framework for inference and control

    Science.gov (United States)

    Post, Mark A.; Li, Junquan; Quine, Brendan M.

    2016-03-01

    With the recent progress toward the application of commercially-available hardware to small-scale space missions, it is now becoming feasible for groups of small, efficient robots based on low-power embedded hardware to perform simple tasks on other planets in the place of large-scale, heavy and expensive robots. In this paper, we describe the design and programming of the Beaver micro-rover developed for Northern Light, a Canadian initiative to send a small lander and rover to Mars to study the Martian surface and subsurface. For a small, hardware-limited rover to handle an uncertain and mostly unknown environment without constant management by human operators, we use a Bayesian network of discrete random variables as an abstraction of expert knowledge about the rover and its environment, and inference operations for control. A framework for the efficient construction of, and inference in, a Bayesian network using only the C language and fixed-point mathematics on embedded hardware has been developed for the Beaver to make intelligent decisions with minimal sensor data. We study the performance of the Beaver as it probabilistically maps a simple outdoor environment with sensor models that include uncertainty. Results indicate that the Beaver and other small and simple robotic platforms can make use of a Bayesian network to make intelligent decisions in uncertain planetary environments.

  13. High-resolution chronology for the Mesoamerican urban center of Teotihuacan derived from Bayesian statistics of radiocarbon and archaeological data

    Science.gov (United States)

    Beramendi-Orosco, Laura E.; Gonzalez-Hernandez, Galia; Urrutia-Fucugauchi, Jaime; Manzanilla, Linda R.; Soler-Arechalde, Ana M.; Goguitchaishvili, Avto; Jarboe, Nick

    2009-03-01

    A high-resolution 14C chronology for the Teopancazco archaeological site in the Teotihuacan urban center of Mesoamerica was generated by Bayesian analysis of 33 radiocarbon dates and detailed archaeological information related to occupation stratigraphy, pottery and archaeomagnetic dates. The calibrated intervals obtained using the Bayesian model are up to ca. 70% shorter than those obtained with individual calibrations. For some samples, this is a consequence of plateaus in the part of the calibration curve covered by the sample dates (2500 to 1450 14C yr BP). Effects of outliers are explored by comparing the results from a Bayesian model that incorporates radiocarbon data for two outlier samples with the same model excluding them. The effect of outliers was more significant than expected. Inclusion of radiocarbon dates from two altered contexts, 500 14C yr earlier than those for the first occupational phase, results in model ages earlier than the archaeological record indicates. The Bayesian chronology excluding these outliers separates the first two Teopancazco occupational phases and suggests that the Xolalpan phase ended around cal AD 550, 100 yr earlier than previously estimated and in accordance with previously reported archaeomagnetic dates from lime plasters for the same site.

  14. Bayesian inversion of marine controlled source electromagnetic data offshore Vancouver Island, Canada

    Science.gov (United States)

    Gehrmann, Romina A. S.; Schwalenberg, Katrin; Riedel, Michael; Spence, George D.; Spieß, Volkhard; Dosso, Stan E.

    2016-01-01

    This paper applies nonlinear Bayesian inversion to marine controlled source electromagnetic (CSEM) data collected near two sites of the Integrated Ocean Drilling Program (IODP) Expedition 311 on the northern Cascadia Margin to investigate subseafloor resistivity structure related to gas hydrate deposits and cold vents. The Cascadia margin, off the west coast of Vancouver Island, Canada, has a large accretionary prism where sediments are under pressure due to convergent plate boundary tectonics. Gas hydrate deposits and cold vent structures have previously been investigated by various geophysical methods and seabed drilling. Here, we invert time-domain CSEM data collected at Sites U1328 and U1329 of IODP Expedition 311 using Bayesian methods to derive subsurface resistivity model parameters and uncertainties. The Bayesian information criterion is applied to determine the amount of structure (number of layers in a depth-dependent model) that can be resolved by the data. The parameter space is sampled with the Metropolis-Hastings algorithm in principal-component space, utilizing parallel tempering to ensure wider and efficient sampling and convergence. Nonlinear inversion allows analysis of uncertain acquisition parameters such as time delays between receiver and transmitter clocks as well as input electrical current amplitude. Marginalizing over these instrument parameters in the inversion accounts for their contribution to the geophysical model uncertainties. One-dimensional inversion of time-domain CSEM data collected at measurement sites along a survey line allows interpretation of the subsurface resistivity structure. The data sets can be generally explained by models with 1 to 3 layers. Inversion results at U1329, at the landward edge of the gas hydrate stability zone, indicate a sediment unconformity as well as potential cold vents which were previously unknown. The resistivities generally increase upslope due to sediment erosion along the slope. Inversion
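
    The sampling engine behind such an inversion is, at its core, a Metropolis-Hastings random walk over model parameters (the paper adds principal-component proposals and parallel tempering on top). A stripped-down sketch with a toy one-parameter forward model — the forward function, noise level, and prior bounds below are invented stand-ins, not the CSEM physics:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(log_rho: float) -> float:
    """Toy forward model: signal decays with subsurface log-resistivity."""
    return np.exp(-0.5 * log_rho)

# Synthetic observation with known noise (stand-in for real transient data).
true_log_rho, noise = np.log(10.0), 0.05
d_obs = forward(true_log_rho) + rng.normal(0.0, noise)

def log_posterior(log_rho: float) -> float:
    if not (-2.0 < log_rho < 8.0):         # uniform prior bounds
        return -np.inf
    return -0.5 * ((forward(log_rho) - d_obs) / noise) ** 2

# Random-walk Metropolis-Hastings.
samples, x, lp = [], 0.0, log_posterior(0.0)
for _ in range(20000):
    prop = x + rng.normal(0.0, 0.3)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob ratio
        x, lp = prop, lp_prop
    samples.append(x)

post = np.array(samples[5000:])            # discard burn-in
print(f"log10-resistivity: {np.mean(post)/np.log(10):.2f} "
      f"+/- {np.std(post)/np.log(10):.2f}")
```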

  15. Artificial Intelligence Approach to Support Statistical Quality Control Teaching

    Science.gov (United States)

    Reis, Marcelo Menezes; Paladini, Edson Pacheco; Khator, Suresh; Sommer, Willy Arno

    2006-01-01

    Statistical quality control--SQC (consisting of Statistical Process Control, Process Capability Studies, Acceptance Sampling and Design of Experiments) is a very important tool to obtain, maintain and improve the Quality level of goods and services produced by an organization. Despite its importance, and the fact that it is taught in technical and…

  16. Using Statistical Process Control to Enhance Student Progression

    Science.gov (United States)

    Hanna, Mark D.; Raichura, Nilesh; Bernardes, Ednilson

    2012-01-01

    Public interest in educational outcomes has markedly increased in the most recent decade; however, quality management and statistical process control have not deeply penetrated the management of academic institutions. This paper presents results of an attempt to use Statistical Process Control (SPC) to identify a key impediment to continuous…

  17. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  18. Use of genomic DNA control features and predicted operon structure in microarray data analysis: ArrayLeaRNA – a Bayesian approach

    Directory of Open Access Journals (Sweden)

    Pin Carmen

    2007-11-01

    Full Text Available Abstract Background Microarrays are widely used for the study of gene expression; however, deciding whether observed differences in expression are significant remains a challenge. Results A computing tool (ArrayLeaRNA) has been developed for gene expression analysis. It implements a Bayesian approach based on the Gumbel distribution and uses printed genomic DNA control features for normalization and for estimation of the parameters of the Bayesian model, together with prior knowledge from predicted operon structure. The method is compared with two other approaches: the classical LOWESS normalization followed by a two-fold cut-off criterion, and the OpWise method (Price et al., 2006, BMC Bioinformatics 7:19), a published Bayesian approach also using predicted operon structure. The three methods were compared on experimental datasets with prior knowledge of gene expression. With ArrayLeaRNA, data normalization is carried out according to the genomic features, which reflect the results of equally transcribed genes; the statistical significance of the difference in expression is also based on the variability of the equally transcribed genes. The operon information helps the classification of genes with low-confidence measurements. ArrayLeaRNA is implemented in Visual Basic and freely available as an Excel add-in at http://www.ifr.ac.uk/safety/ArrayLeaRNA/ Conclusion We have introduced a novel Bayesian model and demonstrated that it is a robust method for analysing microarray expression profiles. ArrayLeaRNA showed a considerable improvement in data normalization, in the estimation of the experimental variability intrinsic to each hybridization and in the establishment of a clear boundary between non-changing and differentially expressed genes. The method is applicable to data derived from hybridizations of labelled cDNA samples as well as from hybridizations of labelled cDNA with genomic DNA and can be used for the analysis of datasets where

  19. Comparison of Three Statistical Downscaling Methods and Ensemble Downscaling Method Based on Bayesian Model Averaging in Upper Hanjiang River Basin, China

    Directory of Open Access Journals (Sweden)

    Jiaming Liu

    2016-01-01

    Full Text Available Many downscaling techniques have been developed in the past few years for projection of station-scale hydrological variables from large-scale atmospheric variables to assess the hydrological impacts of climate change. To improve the simulation accuracy of downscaling methods, the Bayesian Model Averaging (BMA) method combined with three statistical downscaling methods, namely support vector machine (SVM), BCC/RCG-Weather Generators (BCC/RCG-WG), and the Statistical DownScaling Model (SDSM), is proposed in this study, based on the statistical relationship between the large-scale climate predictors and observed precipitation in the upper Hanjiang River Basin (HRB). The statistical analysis of three performance criteria (the Nash-Sutcliffe coefficient of efficiency, the coefficient of correlation, and the relative error) shows that the performance of the ensemble downscaling method based on BMA for rainfall is better than that of each single statistical downscaling method. Moreover, the performance for the runoff modelled by the SWAT rainfall-runoff model using the daily rainfall downscaled by the four methods is also compared, and the ensemble downscaling method has better simulation accuracy. The ensemble downscaling technology based on BMA can provide a scientific basis for the study of runoff response to climate change.
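
    The BMA step combines the member forecasts with weights reflecting each method's posterior probability given training data. A minimal sketch follows; note that real BMA estimates weights and member variances by EM (e.g., following Raftery et al.), so the fixed-variance shortcut and all numbers below are illustrative assumptions:

```python
import numpy as np

# Hypothetical daily rainfall hindcasts (mm) from three downscaling methods
# and the corresponding observations over a short training period.
obs = np.array([0.0, 5.2, 12.1, 3.3, 0.0, 8.9, 1.1])
preds = {
    "SVM":        np.array([0.4, 4.0, 10.5, 2.9, 0.3, 7.2, 1.5]),
    "BCC/RCG-WG": np.array([0.0, 6.8, 14.0, 4.1, 0.0, 10.2, 0.8]),
    "SDSM":       np.array([1.0, 5.0, 11.0, 3.0, 0.5, 9.5, 1.0]),
}

# Gaussian-likelihood weights with a fixed error variance (EM-free shortcut):
# each member's weight is proportional to its marginal likelihood.
s2 = 2.0 ** 2
sse = {k: np.sum((v - obs) ** 2) for k, v in preds.items()}
w = np.array([np.exp(-0.5 * sse[k] / s2) for k in preds])
w /= w.sum()

# Ensemble hindcast = weighted average of the member forecasts.
bma_forecast = sum(wi * preds[k] for wi, k in zip(w, preds))
print(dict(zip(preds, np.round(w, 3))))
print("BMA ensemble hindcast:", np.round(bma_forecast, 1))
```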

  20. Bayesian Networks to Compare Pest Control Interventions on Commodities Along Agricultural Production Chains.

    Science.gov (United States)

    Holt, J; Leach, A W; Johnson, S; Tu, D M; Nhu, D T; Anh, N T; Quinlan, M M; Whittle, P J L; Mengersen, K; Mumford, J D

    2017-07-13

    The production of an agricultural commodity involves a sequence of processes: planting/growing, harvesting, sorting/grading, postharvest treatment, packing, and exporting. A Bayesian network has been developed to represent the level of potential infestation of an agricultural commodity by a specified pest along an agricultural production chain. It reflects the dependency of this infestation on the predicted level of pest challenge, the anticipated susceptibility of the commodity to the pest, the level of impact from pest control measures as designed, and any variation from that due to uncertainty in measure efficacy. The objective of this Bayesian network is to facilitate agreement between national governments of the exporters and importers on a set of phytosanitary measures to meet specific phytosanitary measure requirements to achieve target levels of protection against regulated pests. The model can be used to compare the performance of different combinations of measures under different scenarios of pest challenge, making use of available measure performance data. A case study is presented using a model developed for a fruit fly pest on dragon fruit in Vietnam; the model parameters and results are illustrative and do not imply a particular level of fruit fly infestation of these exports; rather, they provide the most likely, alternative, or worst-case scenarios of the impact of measures. As a means to facilitate agreement for trade, the model provides a framework to support communication between exporters and importers about any differences in perceptions of the risk reduction achieved by pest control measures deployed during the commodity production chain. © 2017 Society for Risk Analysis.

  1. Inference of Environmental Factor-Microbe and Microbe-Microbe Associations from Metagenomic Data Using a Hierarchical Bayesian Statistical Model.

    Science.gov (United States)

    Yang, Yuqing; Chen, Ning; Chen, Ting

    2017-01-25

    The inference of associations between environmental factors and microbes and among microbes is critical to interpreting metagenomic data, but compositional bias, indirect associations resulting from common factors, and variance within metagenomic sequencing data limit the discovery of associations. To account for these problems, we propose metagenomic Lognormal-Dirichlet-Multinomial (mLDM), a hierarchical Bayesian model with sparsity constraints, to estimate absolute microbial abundance and simultaneously infer both conditionally dependent associations among microbes and direct associations between microbes and environmental factors. We empirically show the effectiveness of the mLDM model using synthetic data, data from the TARA Oceans project, and a colorectal cancer dataset. Finally, we apply mLDM to 16S sequencing data from the western English Channel and report several associations. Our model can be used on both natural environmental and human metagenomic datasets, promoting the understanding of associations in the microbial community.

  2. IZI: INFERRING THE GAS PHASE METALLICITY (Z) AND IONIZATION PARAMETER (q) OF IONIZED NEBULAE USING BAYESIAN STATISTICS

    Energy Technology Data Exchange (ETDEWEB)

    Blanc, Guillermo A. [Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91101 (United States); Kewley, Lisa; Vogt, Frédéric P. A.; Dopita, Michael A. [Research School of Astronomy and Astrophysics, Australian National University, Cotter Road, Weston, ACT 2611 (Australia)

    2015-01-10

    We present a new method for inferring the metallicity (Z) and ionization parameter (q) of H II regions and star-forming galaxies using strong nebular emission lines (SELs). We use Bayesian inference to derive the joint and marginalized posterior probability density functions for Z and q given a set of observed line fluxes and an input photoionization model. Our approach allows the use of arbitrary sets of SELs and the inclusion of flux upper limits. The method provides a self-consistent way of determining the physical conditions of ionized nebulae that is not tied to the arbitrary choice of a particular SEL diagnostic and uses all the available information. Unlike theoretically calibrated SEL diagnostics, the method is flexible and not tied to a particular photoionization model. We describe our algorithm, validate it against other methods, and present a tool that implements it called IZI. Using a sample of nearby extragalactic H II regions, we assess the performance of commonly used SEL abundance diagnostics. We also use a sample of 22 local H II regions having both direct and recombination line (RL) oxygen abundance measurements in the literature to study discrepancies in the abundance scale between different methods. We find that oxygen abundances derived through Bayesian inference using currently available photoionization models in the literature can be in good (∼30%) agreement with RL abundances, although some models perform significantly better than others. We also confirm that abundances measured using the direct method are typically ∼0.2 dex lower than both RL and photoionization-model-based abundances.
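
    The heart of such a method is evaluating the likelihood of the observed line fluxes over a photoionization-model grid in (Z, q) and marginalizing the resulting posterior. A toy sketch with an invented two-line grid — IZI itself uses real model grids (e.g., from photoionization codes) and handles upper limits, so every number and functional form below is a stand-in:

```python
import numpy as np

# Hypothetical model grid: predicted line ratios over (Z, q).
Z = np.linspace(7.5, 9.3, 60)              # 12 + log(O/H)
q = np.linspace(6.5, 8.5, 50)              # log ionization parameter
ZZ, QQ = np.meshgrid(Z, q, indexing="ij")

# Stand-in emission-line predictions (real grids come from model codes).
model_r23 = 0.8 - 0.5 * (ZZ - 8.5) ** 2 + 0.05 * (QQ - 7.5)
model_o32 = -0.3 * (ZZ - 8.0) + 0.6 * (QQ - 7.5)

# Observed ratios with measurement uncertainties.
obs = {"r23": (0.65, 0.05), "o32": (0.10, 0.08)}

# Gaussian log-likelihood on the grid; flat prior over the grid range.
loglike = (-0.5 * ((model_r23 - obs["r23"][0]) / obs["r23"][1]) ** 2
           - 0.5 * ((model_o32 - obs["o32"][0]) / obs["o32"][1]) ** 2)
post = np.exp(loglike - loglike.max())
post /= post.sum()

# Marginalized posteriors and their means, as IZI reports for Z and q.
pz, pq = post.sum(axis=1), post.sum(axis=0)
print(f"Z = {np.sum(Z * pz):.2f}, q = {np.sum(q * pq):.2f}")
```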

  3. A simple 2D non-parametric resampling statistical approach to assess confidence in species identification in DNA barcoding--an alternative to likelihood and Bayesian approaches.

    Science.gov (United States)

    Jin, Qian; He, Li-Jun; Zhang, Ai-Bing

    2012-01-01

    In the recent worldwide campaign for the global biodiversity inventory via DNA barcoding, a simple and easily used measure of confidence for assigning sequences to species in DNA barcoding has not been established so far, although the likelihood ratio test and the Bayesian approach have been proposed to address this issue from a statistical point of view. The TDR (Two-Dimensional non-parametric Resampling) measure newly proposed in this study offers users a simple and easy approach to evaluate the confidence of species membership in DNA barcoding projects. We assessed the validity and robustness of the TDR approach using datasets simulated under coalescent models, and an empirical dataset, and found that the TDR measure is very robust in assessing species membership in DNA barcoding. In contrast to the likelihood ratio test and the Bayesian approach, the TDR method stands out due to its simplicity in both concept and calculation, with little in the way of restrictive population genetic assumptions. To implement this approach we have developed a computer program package (TDR1.0beta) freely available from ftp://202.204.209.200/education/video/TDR1.0beta.rar.
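
    While the TDR measure itself is defined in the paper, its ingredients are generic: resample the data nonparametrically, reassign the query each time, and report how often each species wins. A simplified sketch along those lines — the column-bootstrap assignment below is an illustrative stand-in, not the paper's two-dimensional TDR procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def p_distance(a: str, b: str) -> float:
    """Proportion of mismatched sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def resampling_confidence(query: str, refs: dict, n_boot: int = 1000) -> dict:
    """Bootstrap alignment columns; record how often each species is closest."""
    sites = len(query)
    wins = {sp: 0 for sp in refs}
    for _ in range(n_boot):
        idx = rng.integers(0, sites, size=sites)      # resample columns
        q = "".join(query[i] for i in idx)
        dists = {sp: min(p_distance(q, "".join(r[i] for i in idx))
                         for r in seqs)
                 for sp, seqs in refs.items()}
        wins[min(dists, key=dists.get)] += 1
    return {sp: w / n_boot for sp, w in wins.items()}

refs = {"sp_A": ["ACGTACGTAC", "ACGTACGTCC"], "sp_B": ["TCGAACGTAA"]}
print(resampling_confidence("ACGTACGTAA", refs))
```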

  4. Modular autopilot design and development featuring Bayesian non-parametric adaptive control

    Science.gov (United States)

    Stockton, Jacob

    Over the last few decades, Unmanned Aircraft Systems, or UAS, have become a critical part of the defense of our nation and the growth of the aerospace sector. UAS have great potential for the agricultural industry, first response, and ecological monitoring. However, the wide range of applications requires many mission-specific vehicle platforms. These platforms must operate reliably in a range of environments, and in the presence of significant uncertainties. The accepted practice for enabling autonomously flying UAS today relies on extensive manual tuning of the UAS autopilot parameters, or time-consuming approximate modeling of the dynamics of the UAS. These methods may lead to overly conservative controllers or excessive development times. A comprehensive approach to the development of an adaptive, airframe-independent controller is presented. The control algorithm leverages a nonparametric, Bayesian approach to adaptation, and is used as a cornerstone for the development of a new modular autopilot. Promising simulation results are presented for the adaptive controller, as well as flight test results for the modular autopilot.

  5. Uniformity microsprinkler irrigation system using statistical quality control

    OpenAIRE

    Maurício Guy de Andrade; Marcio Antonio Vilas Boas; Jair Antonio Cruz Siqueira; Mireille Sato; Jonathan Dieter; Eliane Hermes; Erivelto Mercante

    2017-01-01

    ABSTRACT: The objective of this study was to evaluate the use of statistical quality control tools in the analysis of the uniformity of a microsprinkler irrigation system. For the analysis of irrigation, the Christiansen uniformity coefficient (CUC) and the distribution uniformity coefficient (DU) were statistically analyzed by means of Shewhart control charts and the process capability index (Cp). For the experiment, 25 tests were carried out with a single microsprinkler and subsequently seven di...

  6. Systematic and statistical errors in a bayesian approach to the estimation of the neutron-star equation of state using advanced gravitational wave detectors

    CERN Document Server

    Wade, Leslie; Ochsner, Evan; Lackey, Benjamin D; Farr, Benjamin F; Littenberg, Tyson B; Raymond, Vivien

    2014-01-01

    Advanced ground-based gravitational-wave detectors are capable of measuring tidal influences in binary neutron-star systems. In this work, we report on the statistical uncertainties in measuring tidal deformability with a full Bayesian parameter estimation implementation. We show how simultaneous measurements of chirp mass and tidal deformability can be used to constrain the neutron-star equation of state. We also study the effects of waveform modeling bias and individual instances of detector noise on these measurements. We notably find that systematic error between post-Newtonian waveform families can significantly bias the estimation of tidal parameters, thus motivating the continued development of waveform models that are more reliable at high frequencies.

  7. Robust Control Methods for On-Line Statistical Learning

    Directory of Open Access Journals (Sweden)

    Capobianco Enrico

    2001-01-01

    Full Text Available Ensuring that the data processing in an experiment is not affected by the presence of outliers is relevant for statistical control and learning studies. Learning schemes should thus be tested for their capacity to handle outliers in the observed training set, so as to achieve reliable estimates with respect to the crucial bias and variance aspects. We describe possible ways of endowing neural networks with statistically robust properties by defining feasible error criteria. It is convenient to cast neural nets in state-space representations and apply both Kalman filter and stochastic approximation procedures in order to suggest statistically robustified solutions for on-line learning.

  8. Epistatic module detection for case-control studies: a Bayesian model with a Gibbs sampling strategy.

    Directory of Open Access Journals (Sweden)

    Wanwan Tang

    2009-05-01

    Full Text Available The detection of epistatic interactive effects of multiple genetic variants on the susceptibility of human complex diseases is a great challenge in genome-wide association studies (GWAS). Although methods have been proposed to identify such interactions, the lack of an explicit definition of epistatic effects, together with computational difficulties, makes the development of new methods indispensable. In this paper, we introduce epistatic modules to describe epistatic interactive effects of multiple loci on diseases. On the basis of this notion, we put forward a Bayesian marker partition model to explain observed case-control data, and we develop a Gibbs sampling strategy to facilitate the detection of epistatic modules. Comparisons of the proposed approach with three existing methods on seven simulated disease models demonstrate the superior performance of our approach. When applied to a genome-wide case-control data set for Age-related Macular Degeneration (AMD), the proposed approach successfully identifies two known susceptible loci and suggests that a combination of two other loci -- one in the gene SGCD and the other in SCAPER -- is associated with the disease. Further functional analysis supports the speculation that the interaction of these two genetic variants may be responsible for the susceptibility of AMD. When applied to a genome-wide case-control data set for Parkinson's disease, the proposed method identifies seven suspicious loci that may contribute independently to the disease.

  9. IZI: Inferring the Gas Phase Metallicity (Z) and Ionization Parameter (q) of Ionized Nebulae using Bayesian Statistics

    CERN Document Server

    Blanc, Guillermo A; Vogt, Frédéric P A; Dopita, Michael A

    2014-01-01

    We present a new method for inferring the metallicity (Z) and ionization parameter (q) of HII regions and star-forming galaxies using strong nebular emission lines (SEL). We use Bayesian inference to derive the joint and marginalized posterior probability density functions for Z and q given a set of observed line fluxes and an input photo-ionization model. Our approach allows the use of arbitrary sets of SELs and the inclusion of flux upper limits. The method provides a self-consistent way of determining the physical conditions of ionized nebulae that is not tied to the arbitrary choice of a particular SEL diagnostic and uses all the available information. Unlike theoretically calibrated SEL diagnostics, the method is flexible and not tied to a particular photo-ionization model. We describe our algorithm, validate it against other methods, and present a tool that implements it called IZI. Using a sample of nearby extra-galactic HII regions, we assess the performance of commonly used SEL abundance diagnostics. W...

  10. STATISTICAL CONTROL OF PROCESSES AND PRODUCTS IN AGRICULTURE

    Directory of Open Access Journals (Sweden)

    D. Horvat

    2006-06-01

    Full Text Available The fundamental concept of statistical process control is decision-making about a process based on a comparison of data collected from the process with calculated control limits. Statistical process and quality control of agricultural products is used to provide agricultural products that satisfy customer requirements in terms of quality as well as cost. Quality standards for processes and products are defined in accordance with ISO 9000, and many institutions in Croatia work in accordance with these standards. Implementation of statistical process control and the use of control charts can greatly help in converging to these standards and in decreasing production costs. To illustrate the above, we tested the work quality of a nozzle on an eighteen-meter clutch sprayer.

  11. A statistical assessment of seismic models of the U.S. continental crust using Bayesian inversion of ambient noise surface wave dispersion data

    Science.gov (United States)

    Olugboji, T. M.; Lekic, V.; McDonough, W.

    2017-07-01

    We present a new approach for evaluating existing crustal models using ambient noise data sets and its associated uncertainties. We use a transdimensional hierarchical Bayesian inversion approach to invert ambient noise surface wave phase dispersion maps for Love and Rayleigh waves using measurements obtained from Ekström (2014). Spatiospectral analysis shows that our results are comparable to a linear least squares inverse approach (except at higher harmonic degrees), but the procedure has additional advantages: (1) it yields an autoadaptive parameterization that follows Earth structure without making restricting assumptions on model resolution (regularization or damping) and data errors; (2) it can recover non-Gaussian phase velocity probability distributions while quantifying the sources of uncertainties in the data measurements and modeling procedure; and (3) it enables statistical assessments of different crustal models (e.g., CRUST1.0, LITHO1.0, and NACr14) using variable resolution residual and standard deviation maps estimated from the ensemble. These assessments show that in the stable old crust of the Archean, the misfits are statistically negligible, requiring no significant update to crustal models from the ambient noise data set. In other regions of the U.S., significant updates to regionalization and crustal structure are expected especially in the shallow sedimentary basins and the tectonically active regions, where the differences between model predictions and data are statistically significant.

  12. The accuracy and clinical feasibility of a new Bayesian-based closed-loop control system for propofol administration using the bispectral index as a controlled variable

    NARCIS (Netherlands)

    De Smet, Tom; Struys, Michel M. R. F.; Neckebroek, Martine M.; Van den Hauwe, Kristof; Bonte, Sjoert; Mortier, Eric P.

    2008-01-01

    BACKGROUND: Closed-loop control of the hypnotic component of anesthesia has been proposed in an attempt to optimize drug delivery. Here, we introduce a newly developed Bayesian-based, patient-individualized, model-based, adaptive control method for bispectral index (BIS) guided propofol infusion

  13. STATISTIC LINEARIZATION CONTROL FOR HYDRAULIC ACTIVE DAMPING SUSPENSION

    Institute of Scientific and Technical Information of China (English)

    Wang Qingfeng; Zhao Ju; Yang Botao

    2000-01-01

    A statistical linearization analysis method for a strongly nonlinear hydraulic active damping suspension is provided. An optimum control strategy for the semi-active suspension, and a graded control strategy based on it, are also put forward. Experimental research was carried out on a 2-DOF (degree of freedom) hydraulic active damping suspension test system. The results showed that excellent control effectiveness can be obtained with statistical linearization optimum control, which unfortunately requires continuously regulating the damping in an accurate way and is costly in engineering applications. By contrast, the results also showed that graded control is more practicable; it achieves control effectiveness close to that of the optimum control at lower cost.

  14. Manufacturing Squares: An Integrative Statistical Process Control Exercise

    Science.gov (United States)

    Coy, Steven P.

    2016-01-01

    In the exercise, students in a junior-level operations management class are asked to manufacture a simple product. Given product specifications, they must design a production process, create roles and design jobs for each team member, and develop a statistical process control plan that efficiently and effectively controls quality during…

  15. Statistical Design Model (SDM) of satellite thermal control subsystem

    Science.gov (United States)

    Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi

    2016-07-01

    Satellite thermal control is the subsystem whose main task is keeping the satellite components within their survival and operating temperature ranges. The capability of satellite thermal control plays a key role in satisfying the satellite's operational requirements, and designing this subsystem is part of satellite design. On the other hand, due to the lack of information provided by companies and designers, this subsystem still does not have a specific design process, although it is one of the fundamental subsystems. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem using the SDM design method, which analyzes statistical data with a particular procedure. To implement the SDM method, a complete database is required. Therefore, we first collect spacecraft data to create a database, then extract statistical graphs using Microsoft Excel, from which we further extract mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. To this end, the thermal control subsystem is first introduced, and the hardware used in this subsystem and its variants is surveyed. Next, different statistical models are presented and briefly compared. Finally, a particular statistical model is extracted from the collected statistical data. The accuracy of the method is tested and verified with a case study: comparisons between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results proved the methodology effective. Key Words: thermal control subsystem design, statistical design model (SDM), satellite conceptual design, thermal hardware

  16. Multivariate Statistical Process Control Process Monitoring Methods and Applications

    CERN Document Server

    Ge, Zhiqiang

    2013-01-01

      Given their key position in the process control industry, process monitoring techniques have been extensively investigated by industrial practitioners and academic control researchers. Multivariate statistical process control (MSPC) is one of the most popular data-based methods for process monitoring and is widely used in various industrial areas. Effective routines for process monitoring can help operators run industrial processes efficiently at the same time as maintaining high product quality. Multivariate Statistical Process Control reviews the developments and improvements that have been made to MSPC over the last decade, and goes on to propose a series of new MSPC-based approaches for complex process monitoring. These new methods are demonstrated in several case studies from the chemical, biological, and semiconductor industrial areas.   Control and process engineers, and academic researchers in the process monitoring, process control and fault detection and isolation (FDI) disciplines will be inter...

  17. Reinforcement learning for adaptive threshold control of restorative brain-computer interfaces: a Bayesian simulation.

    Science.gov (United States)

    Bauer, Robert; Gharabaghi, Alireza

    2015-01-01

    Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information theory, we provided an explanation for the achieved benefits of adaptive threshold setting.
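
    The entropy computation at the center of this simulation is simple to reproduce: for a candidate threshold, the probability of a positive feedback decision determines the binary entropy of the resulting action. A sketch on a synthetic feature distribution — the Gaussian feature model and the threshold grid are assumptions, not the paper's data:

```python
import numpy as np

def action_entropy(feature_samples: np.ndarray, threshold: float) -> float:
    """Entropy (bits) of the binary feedback decision induced by a threshold."""
    p = np.mean(feature_samples > threshold)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(3)
# Hypothetical distribution of a single constrained classifier feature
# (e.g. sensorimotor band power) for one feedback run.
feature = rng.normal(loc=0.0, scale=1.0, size=5000)

# Scan candidate thresholds: minimal entropy means the most deterministic
# feedback, while a 50/50 split (maximum entropy) makes rewards least predictable.
grid = np.linspace(-2, 2, 81)
ent = np.array([action_entropy(feature, t) for t in grid])
print("threshold minimizing action entropy:", grid[np.argmin(ent)])
```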

  18. Reinforcement learning for adaptive threshold control of restorative brain-computer interfaces: a Bayesian simulation

    Directory of Open Access Journals (Sweden)

    Robert eBauer

    2015-02-01

    Full Text Available Restorative brain-computer interfaces (BCI) are increasingly used to provide feedback of neuronal states in a bid to normalize pathological brain activity and achieve behavioral gains. However, patients and healthy subjects alike often show a large variability, or even inability, of brain self-regulation for BCI control, known as BCI illiteracy. Although current co-adaptive algorithms are powerful for assistive BCIs, their inherent class switching clashes with the operant conditioning goal of restorative BCIs. Moreover, due to the treatment rationale, the classifier of restorative BCIs usually has a constrained feature space, thus limiting the possibility of classifier adaptation. In this context, we applied a Bayesian model of neurofeedback and reinforcement learning for different threshold selection strategies to study the impact of threshold adaptation of a linear classifier on optimizing restorative BCIs. For each feedback iteration, we first determined the thresholds that result in minimal action entropy and maximal instructional efficiency. We then used the resulting vector for the simulation of continuous threshold adaptation. We could thus show that threshold adaptation can improve reinforcement learning, particularly in cases of BCI illiteracy. Finally, on the basis of information theory, we provided an explanation for the achieved benefits of adaptive threshold setting.

  19. No control genes required: Bayesian analysis of qRT-PCR data.

    Directory of Open Access Journals (Sweden)

    Mikhail V Matz

    Full Text Available BACKGROUND: Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. RESULTS: In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. CONCLUSIONS: Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been

  20. Statistical process control methods for expert system performance monitoring.

    Science.gov (United States)

    Kahn, M G; Bailey, T C; Steib, S A; Fraser, V J; Dunagan, W C

    1996-01-01

    The literature on the performance evaluation of medical expert systems is extensive, yet most of the techniques used in the early stages of system development are inappropriate for deployed expert systems. Because extensive clinical and informatics expertise and resources are required to perform evaluations, efficient yet effective methods of monitoring performance during the long-term maintenance phase of the expert system life cycle must be devised. Statistical process control techniques provide a well-established methodology that can be used to define policies and procedures for continuous, concurrent performance evaluation. Although the field of statistical process control has been developed for monitoring industrial processes, its tools, techniques, and theory are easily transferred to the evaluation of expert systems. Statistical process tools provide convenient visual methods and heuristic guidelines for detecting meaningful changes in expert system performance. The underlying statistical theory provides estimates of the detection capabilities of alternative evaluation strategies. This paper describes a set of statistical process control tools that can be used to monitor the performance of a number of deployed medical expert systems. It describes how p-charts are used in practice to monitor the GermWatcher expert system. The case volume and error rate of GermWatcher are then used to demonstrate how different inspection strategies would perform.
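
    For a deployed system such as GermWatcher, a p-chart tracks the per-period error proportion against limits derived from the pooled rate; periods outside the limits trigger review. A minimal sketch with invented audit counts (the case volumes and error counts below are hypothetical, not GermWatcher data):

```python
import numpy as np

# Hypothetical monthly audits: cases reviewed and disagreements with experts.
n = np.array([120, 95, 130, 110, 105, 140, 90, 125])      # cases per month
errors = np.array([6, 4, 7, 5, 14, 6, 3, 5])              # errors per month

pbar = errors.sum() / n.sum()                              # pooled error rate
p = errors / n

# p-chart limits vary with each month's case volume.
se = np.sqrt(pbar * (1 - pbar) / n)
ucl = pbar + 3 * se
lcl = np.clip(pbar - 3 * se, 0.0, None)

for month, (pi, lo, hi) in enumerate(zip(p, lcl, ucl), start=1):
    flag = "  <-- investigate" if not (lo <= pi <= hi) else ""
    print(f"month {month}: p={pi:.3f}, limits=({lo:.3f}, {hi:.3f}){flag}")
```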

  1. Uniformity microsprinkler irrigation system using statistical quality control

    Directory of Open Access Journals (Sweden)

    Maurício Guy de Andrade

    Full Text Available ABSTRACT: The objective of this study was to evaluate the use of statistical quality control tools in the analysis of the uniformity of a microsprinkler irrigation system. For the analysis of irrigation, the Christiansen uniformity coefficient (CUC) and the distribution uniformity coefficient (DU) were statistically analyzed by means of Shewhart control charts and the process capability index (Cp). For the experiment, 25 tests were carried out with a single microsprinkler and subsequently seven different spacings between microsprinklers were simulated. Control charts contributed to the diagnosis of the treatments being under control and having satisfactory uniformity outcomes. The increase in the process capability index was directly proportional to the average of CUC and DU.
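
    Both coefficients are simple functions of catch-can depths: CUC penalizes the mean absolute deviation from the mean, and DU compares the mean of the lowest quarter of catches to the overall mean. A sketch with invented test data:

```python
import numpy as np

def cuc(depths: np.ndarray) -> float:
    """Christiansen uniformity coefficient (%) from catch-can depths."""
    m = depths.mean()
    return 100.0 * (1.0 - np.abs(depths - m).sum() / (depths.size * m))

def du(depths: np.ndarray) -> float:
    """Distribution uniformity (%): mean of the lowest quarter over the mean."""
    low_quarter = np.sort(depths)[: max(1, depths.size // 4)]
    return 100.0 * low_quarter.mean() / depths.mean()

# Hypothetical catch-can test (mm) for one microsprinkler layout.
catch = np.array([4.1, 3.8, 4.4, 4.0, 3.5, 4.2, 3.9, 4.3, 3.6, 4.0,
                  4.1, 3.7, 4.5, 3.9, 4.0, 4.2])
print(f"CUC = {cuc(catch):.1f}%, DU = {du(catch):.1f}%")
```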

  2. Statistical Data Mining for Efficient Quality Control in Manufacturing

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Knudsen, Torben Steen

    2015-01-01

    of the process, e.g. sensor measurements, machine readings etc., and the major contributors to these big data sets are different quality control processes. In this article we present a methodology to extract valuable insight from manufacturing data. The proposed methodology is based on comparison of probabilities... and an extension of likelihood principles in statistics as a performance function for a Genetic Algorithm.

  3. Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Skolkovo Inst. of Science and Technology, Moscow (Russia); Chernyak, Vladimir [Wayne State Univ., Detroit, MI (United States). Dept. of Chemistry

    2017-01-17

    Thermostatically Controlled Loads (TCL), e.g. air-conditioners and heaters, are by far the most wide-spread consumers of electricity. Normally the devices are calibrated to provide the so-called bang-bang control of temperature - changing from on to off, and vice versa, depending on temperature. Aggregation of a large group of similar devices into a statistical ensemble is considered, where the devices operate following the same dynamics subject to stochastic perturbations and a randomized, Poisson on/off switching policy. We analyze, using theoretical and computational tools of statistical physics, how the ensemble relaxes to a stationary distribution and establish a relation between the relaxation and the statistics of the probability flux associated with devices' cycling in the mixed (discrete, switch on/off, and continuous, temperature) phase space. This allowed us to derive and analyze the spectrum of the non-equilibrium (detailed balance broken) statistical system and uncover how the switching policy affects the oscillatory trend and speed of the relaxation. Relaxation of the ensemble is of practical interest because it describes how the ensemble recovers from significant perturbations, e.g. forceful temporary switching off aimed at utilizing the flexibility of the ensemble in providing "demand response" services, relieving consumption temporarily to balance a larger power grid. We discuss how the statistical analysis can guide further development of the emerging demand response technology.

  4. Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach.

    Science.gov (United States)

    Chertkov, Michael; Chernyak, Vladimir

    2017-08-17

    Thermostatically controlled loads, e.g., air conditioners and heaters, are by far the most widespread consumers of electricity. Normally the devices are calibrated to provide the so-called bang-bang control - changing from on to off, and vice versa, depending on temperature. We considered aggregation of a large group of similar devices into a statistical ensemble, where the devices operate following the same dynamics, subject to stochastic perturbations and randomized, Poisson on/off switching policy. Using theoretical and computational tools of statistical physics, we analyzed how the ensemble relaxes to a stationary distribution and established a relationship between the relaxation and the statistics of the probability flux associated with devices' cycling in the mixed (discrete, switch on/off, and continuous temperature) phase space. This allowed us to derive the spectrum of the non-equilibrium (detailed balance broken) statistical system and uncover how switching policy affects oscillatory trends and the speed of the relaxation. Relaxation of the ensemble is of practical interest because it describes how the ensemble recovers from significant perturbations, e.g., forced temporary switching off aimed at utilizing the flexibility of the ensemble to provide "demand response" services to change consumption temporarily to balance a larger power grid. We discuss how the statistical analysis can guide further development of the emerging demand response technology.

  5. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  6. A Statistical Project Control Tool for Engineering Managers

    Science.gov (United States)

    Bauch, Garland T.

    2001-01-01

    This slide presentation reviews the use of a Statistical Project Control Tool (SPCT) for managing engineering projects. A literature review pointed to a definition of project success (i.e., a project is successful when the cost, schedule, technical performance, and quality satisfy the customer), as well as to project success factors, traditional project control tools, and performance measures, which are detailed in the report. The essential problem is that, with resources becoming more limited and the number of projects increasing, project failure is rising; existing methods are limited and systematic methods are required. The objective of the work is to provide a new statistical project control tool for project managers. Graphs plotting SPCT results for 3 successful projects and 3 failed projects are reviewed, with success and failure being defined by the owner.

  7. Practical Bayesian Tomography

    CERN Document Server

    Granade, Christopher; Cory, D G

    2015-01-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  8. Preliminary statistical assessment towards characterization of biobotic control.

    Science.gov (United States)

    Latif, Tahmid; Meng Yang; Lobaton, Edgar; Bozkurt, Alper

    2016-08-01

    Biobotic research involving neurostimulation of instrumented insects to control their locomotion is showing potential as an alternative route towards centimeter-scale distributed swarm robotics. To improve the reliability of biobotic agents, their control mechanism needs to be precisely characterized. Towards this goal, this paper presents our initial efforts at a statistical assessment of the angular response of roach biobots to the applied bioelectrical stimulus. Subsequent findings can help clarify the effect of each stimulation parameter, individually or collectively, and eventually lead to reliable and consistent biobotic control suitable for real-life scenarios.

  9. Monitoring Software Reliability using Statistical Process Control: An MMLE Approach

    Directory of Open Access Journals (Sweden)

    Bandla Sreenivasa Rao

    2011-11-01

    Full Text Available This paper considers an MMLE (Modified Maximum Likelihood Estimation) based scheme to estimate software reliability using the exponential distribution. The MMLE is one of the generalized frameworks of software reliability models of Non-Homogeneous Poisson Processes (NHPPs). The MMLE gives analytical estimators rather than an iterative approximation to estimate the parameters. In this paper we propose an SPC (Statistical Process Control) chart mechanism to determine software quality using inter-failure times data. The control charts can be used to measure whether the software process is statistically under control or not.
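
    As a rough sketch of the control-chart idea (generic SPC practice for exponential inter-failure times, not the paper's exact MMLE estimators), one can place probability limits directly on the fitted exponential distribution; the data and limits below are illustrative.

```python
import numpy as np

# Hedged sketch: probability-limit control chart for exponential
# inter-failure times (a generic construction, not the paper's MMLE).
def exp_control_limits(theta, alpha=0.0027):
    """Probability limits for Exp(mean=theta) inter-failure times."""
    lcl = -theta * np.log(1 - alpha / 2)   # unusually short gap -> degradation
    cl = -theta * np.log(0.5)              # median as center line
    ucl = -theta * np.log(alpha / 2)       # unusually long gap -> improvement
    return lcl, cl, ucl

# example: inter-failure times in CPU hours (made-up data)
t = np.array([12.0, 30.5, 8.2, 45.1, 19.9, 60.3, 25.4])
theta_hat = t.mean()                       # MLE of the exponential mean
lcl, cl, ucl = exp_control_limits(theta_hat)
for i, x in enumerate(t, 1):
    flag = "out of control" if (x < lcl or x > ucl) else "in control"
    print(f"failure {i}: gap {x:6.1f} h -> {flag}")
```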

  10. Mapping malaria risk among children in Côte d’Ivoire using Bayesian geo-statistical models

    Directory of Open Access Journals (Sweden)

    Raso Giovanna

    2012-05-01

    Full Text Available Abstract Background In Côte d’Ivoire, an estimated 767,000 disability-adjusted life years are due to malaria, placing the country at position number 14 with regard to the global burden of malaria. Risk maps are important to guide control interventions, and hence, the aim of this study was to predict the geographical distribution of malaria infection risk in children aged Methods Using different data sources, a systematic review was carried out to compile and geo-reference survey data on Plasmodium spp. infection prevalence in Côte d’Ivoire, focusing on children aged Plasmodium spp. infection risk for entire Côte d’Ivoire, including uncertainty. Results Overall, 235 data points at 170 unique survey locations with malaria prevalence data for individuals aged Conclusion The malaria risk map at high spatial resolution gives an important overview of the geographical distribution of the disease in Côte d’Ivoire. It is a useful tool for the national malaria control programme and can be utilized for spatial targeting of control interventions and rational resource allocation.

  11. Bayesian feedback control of a two-atom spin-state in an atom-cavity system

    CERN Document Server

    Brakhane, Stefan; Kampschulte, Tobias; Martinez-Dorantes, Miguel; Reimann, René; Yoon, Seokchan; Widera, Artur; Meschede, Dieter

    2012-01-01

    We experimentally demonstrate real-time feedback control of the joint spin-state of two neutral caesium atoms inside a high-finesse optical cavity. The quantum states are discriminated by their different cavity transmission levels. A Bayesian update formalism is used to estimate state occupation probabilities as well as transition rates. We stabilize the balanced two-atom mixed state, which is deterministically inaccessible, via feedback control, and find very good agreement with Monte Carlo simulations. On average, the feedback loop achieves near-optimal conditions by steering the system to the target state in a time marginally exceeding the time needed to retrieve information about its state.
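
    The Bayesian update formalism described here can be sketched with a toy discrete filter: assume Poisson photon counts whose rate depends on how many atoms occupy the coupled state (the rates, binning and state labels below are assumptions, not the experiment's values).

```python
import numpy as np

# Hedged sketch of the Bayesian update idea: discriminate spin states by
# cavity transmission and update occupation probabilities from photon
# counts. Rates and binning are illustrative assumptions.
rates = np.array([50.0, 20.0, 5.0])   # counts/bin for 0, 1, 2 coupled atoms
p = np.array([0.25, 0.50, 0.25])      # prior occupation probabilities

def bayes_update(p, k, rates):
    """Posterior over states after observing k photons in one time bin."""
    like = rates**k * np.exp(-rates)   # Poisson likelihood (k! cancels)
    post = p * like
    return post / post.sum()

rng = np.random.default_rng(1)
true_state = 1
for _ in range(20):                    # stream of transmission measurements
    k = rng.poisson(rates[true_state])
    p = bayes_update(p, k, rates)

print("posterior:", np.round(p, 3))    # should concentrate on state 1
# a feedback controller would act (e.g., apply a pulse) when the
# posterior probability of the target state drops below a threshold
```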

  12. PROCESS VARIABILITY REDUCTION THROUGH STATISTICAL PROCESS CONTROL FOR QUALITY IMPROVEMENT

    Directory of Open Access Journals (Sweden)

    B.P. Mahesh

    2010-09-01

    Full Text Available Quality has become one of the most important customer decision factors in the selection among competing products and services. Consequently, understanding and improving quality is a key factor leading to business success, growth and an enhanced competitive position. Hence a quality improvement program should be an integral part of the overall business strategy. According to TQM, the effective way to improve the quality of a product or service is to improve the process used to build the product. Hence, TQM focuses on process rather than results, as the results are driven by the processes. Many techniques are available for quality improvement. Statistical Process Control (SPC) is one such TQM technique which is widely accepted for analyzing quality problems and improving the performance of the production process. This article illustrates the step-by-step procedure adopted at a soap manufacturing company to improve quality by reducing process variability using Statistical Process Control.

  13. Statistical disclosure control for microdata methods and applications in R

    CERN Document Server

    Templ, Matthias

    2017-01-01

    This book on statistical disclosure control presents the theory, applications and software implementation of the traditional approach to (micro)data anonymization, including data perturbation methods, disclosure risk, data utility, information loss and methods for simulating synthetic data. Introducing readers to the R packages sdcMicro and simPop, the book also features numerous examples and exercises with solutions, as well as case studies with real-world data, accompanied by the underlying R code to allow readers to reproduce all results. The demand for and volume of data from surveys, registers or other sources containing sensitive information on persons or enterprises have increased significantly over the last several years. At the same time, privacy protection principles and regulations have imposed restrictions on the access and use of individual data. Proper and secure microdata dissemination calls for the application of statistical disclosure control methods to the data before release. This book is in...

  14. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees)... decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability....

  15. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  16. A statistical combustion phase control approach of SI engines

    Science.gov (United States)

    Gao, Jinwu; Wu, Yuhu; Shen, Tielong

    2017-02-01

    In order to maximize the performance of an internal combustion engine, the combustion phase is usually controlled to track a desired reference. However, owing to the cyclic variability of combustion, it is difficult but meaningful to control the mean of the combustion phase and constrain its variance. As a combustion phase indicator, the location of peak pressure (LPP) is utilized for real-time combustion phase control in this research. The purpose of the proposed method is to ensure that the mean of LPP statistically tracks its reference and to constrain the standard deviation of the LPP distribution. To achieve this, LPP is first calculated from the cylinder pressure sensor and its characteristics are analyzed at steady-state operating conditions; the distribution of LPP is then examined online using a hypothesis test criterion. On the basis of the presented statistical algorithm, the current mean of LPP is applied in the feedback channel to design the spark advance adjustment law, and the stability of the closed-loop system is theoretically ensured according to a steady statistical model. Finally, the proposed strategy is verified on a spark-ignition gasoline engine.
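
    A minimal sketch of this statistical feedback idea is given below, assuming a z-test on a window of LPP samples and a simple proportional spark-advance correction; the gains, references and the toy engine response are invented, not the paper's design.

```python
import numpy as np

# Hedged sketch: test whether the mean LPP has drifted from its
# reference, and if so apply a proportional spark-advance correction.
LPP_REF = 8.0      # desired mean LPP [deg ATDC] (assumption)
K_P = 0.2          # proportional gain [deg/deg] (assumption)
Z_CRIT = 1.96      # 5% two-sided test on the window mean

def spark_correction(lpp_window, spark_advance):
    m = np.mean(lpp_window)
    s = np.std(lpp_window, ddof=1)
    z = (m - LPP_REF) / (s / np.sqrt(len(lpp_window)))  # test on the mean
    if abs(z) > Z_CRIT:                    # statistically significant drift
        spark_advance += K_P * (m - LPP_REF)  # late peak -> advance spark
    return spark_advance

rng = np.random.default_rng(2)
spark = 20.0                               # spark advance [deg BTDC]
for batch in range(5):
    # toy plant: mean LPP decreases as the spark is advanced
    window = rng.normal(28.0 - 0.4 * spark, 2.5, size=100)
    spark = spark_correction(window, spark)
    print(f"batch {batch}: spark advance = {spark:.2f} deg BTDC")
```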

  17. Application of statistical process control to qualitative molecular diagnostic assays.

    Directory of Open Access Journals (Sweden)

    Cathal P. O'Brien

    2014-11-01

    Full Text Available Modern pathology laboratories, and in particular high-throughput laboratories such as clinical chemistry, have developed a reliable system for statistical process control. Such a system is absent from the majority of molecular laboratories, and where present is confined to quantitative assays. As the inability to apply statistical process control to an assay is an obvious disadvantage, this study aimed to solve this problem by using a frequency estimate coupled with a confidence interval calculation to detect deviations from an expected mutation frequency. The results of this study demonstrate the strengths and weaknesses of this approach and highlight minimum sample number requirements. Notably, assays with low mutation frequencies and detection of small deviations from an expected value require greater sample numbers, with a resultant protracted time to detection. Modelled laboratory data were also used to highlight how this approach might be applied in a routine molecular laboratory. This article is the first to describe the application of statistical process control to qualitative laboratory data.
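
    The frequency-plus-confidence-interval check can be sketched in a few lines; the Wald interval, expected frequency and counts below are assumptions for illustration, not the paper's data or exact interval choice.

```python
from statistics import NormalDist

# Hedged sketch: binomial confidence interval on an observed mutation
# frequency; flag the assay if the interval excludes the expected value.
def frequency_in_control(k, n, expected, conf=0.95):
    """Wald binomial CI on the observed frequency (an assumption; the
    paper may use a different interval)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p_hat = k / n
    half = z * (p_hat * (1 - p_hat) / n) ** 0.5
    lo, hi = p_hat - half, p_hat + half
    return lo <= expected <= hi, (lo, hi)

# e.g. a mutation test with an assumed expected positivity of 40%
ok, ci = frequency_in_control(k=22, n=80, expected=0.40)
print(f"observed CI = ({ci[0]:.3f}, {ci[1]:.3f}) ->",
      "in control" if ok else "investigate assay")
```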

  18. Statistical issues in randomised controlled trials: a narrative synthesis

    Directory of Open Access Journals (Sweden)

    Bolaji Emmanuel Egbewale

    2015-05-01

    Full Text Available Randomised controlled trials (RCTs) are the gold standard in the evaluation of treatment efficacy in medical investigations, but only if well designed and implemented. To date, distorted views and misapplications of the statistical procedures involved in RCTs are still in practice. Hence, clarification of concepts and acceptable practices related to certain statistical issues involved in the design, conduct and reporting of randomised controlled trials is needed. This narrative synthesis aimed at providing succinct but clear information on the concepts and practices of selected statistical issues in RCTs to inform correct applications. The use of tests of significance is no longer acceptable as a means to compare baseline similarity between treatment groups or to determine which covariate(s) should be included in the model for adjustment. The distribution of baseline attributes, simply presented in tabular form, is rather preferred. Regarding covariate selection, an approach that makes use of information on the degree of correlation between the covariate(s) and the outcome variable is more in tandem with statistical principles than one based on tests of significance. Stratification and minimisation are not alternatives to covariate-adjusted analysis; in fact they establish the need for one. Intention-to-treat is the preferred approach for the evaluation of primary outcome measures, and researchers have a responsibility to report whether or not the procedure was followed. A major use of results from subgroup analysis is to generate hypotheses for future clinical trials. Since RCTs are the gold standard in the comparison of medical interventions, researchers cannot afford the practice of distorted allocation or statistical procedures in this all-important experimental design method.

  19. Statistical issues in randomised controlled trials: a narrative synthesis

    Institute of Scientific and Technical Information of China (English)

    Bolaji Emmanuel Egbewale

    2015-01-01

    Randomised controlled trials (RCTs) are the gold standard in the evaluation of treatment efficacy in medical investigations, but only if well designed and implemented. To date, distorted views and misapplications of statistical procedures involved in RCTs are still in practice. Hence, clarification of concepts and acceptable practices related to certain statistical issues involved in the design, conduct and reporting of randomised controlled trials is needed. This narrative synthesis aimed at providing succinct but clear information on the concepts and practices of selected statistical issues in RCTs to inform correct applications. The use of tests of significance is no longer acceptable as a means to compare baseline similarity between treatment groups or to determine which covariate(s) should be included in the model for adjustment. The distribution of baseline attributes, simply presented in tabular form, is rather preferred. Regarding covariate selection, an approach that makes use of information on the degree of correlation between the covariate(s) and the outcome variable is more in tandem with statistical principles than one based on tests of significance. Stratification and minimisation are not alternatives to covariate-adjusted analysis; in fact they establish the need for one. Intention-to-treat is the preferred approach for the evaluation of primary outcome measures, and researchers have a responsibility to report whether or not the procedure was followed. A major use of results from subgroup analysis is to generate hypotheses for future clinical trials. Since RCTs are the gold standard in the comparison of medical interventions, researchers cannot afford the practice of distorted allocation or statistical procedures in this all-important experimental design method.

  20. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    Science.gov (United States)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
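
    A hedged two-stage sketch of this pipeline is shown below; the paper uses Treed Gaussian Processes, for which plain scikit-learn GPs stand in here, and the flight data are synthetic stand-ins rather than the NASA dataset.

```python
import numpy as np
from sklearn.gaussian_process import (GaussianProcessClassifier,
                                      GaussianProcessRegressor)

# Hedged two-stage sketch of the classify-then-regress idea:
# 1) classify a controller input as stable/unstable,
# 2) for predicted failures, regress the time-to-failure.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 4))           # controller parameters (toy)
fails = (X[:, 0] + X[:, 1] ** 2 > 0.8)          # synthetic failure boundary
ttf = 10.0 - 5.0 * X[:, 0]                      # synthetic time-to-failure

clf = GaussianProcessClassifier().fit(X, fails)
reg = GaussianProcessRegressor().fit(X[fails], ttf[fails])

x_new = np.array([[0.9, 0.5, 0.0, 0.0]])
if clf.predict(x_new)[0]:
    print("predicted unstable; time-to-failure ~", reg.predict(x_new)[0])
else:
    print("predicted stable")
```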

  1. Bayesian Inference in Polling Technique: 1992 Presidential Polls.

    Science.gov (United States)

    Satake, Eiki

    1994-01-01

    Explores the potential utility of Bayesian statistical methods in determining the predictability of multiple polls. Compares Bayesian techniques to the classical statistical method employed by pollsters. Considers these questions in the context of the 1992 presidential elections. (HB)

  2. Application of Statistical Process Control Methods for IDS

    Directory of Open Access Journals (Sweden)

    Muhammad Sadiq Ali Khan

    2012-11-01

    Full Text Available As technology improves, attackers try to gain access to network system resources by many means. Open loopholes in the network allow them to penetrate the network more easily; statistical methods have great importance in the area of computer and network security for detecting malfunctioning of the network system. Development of an internet security solution is needed to protect the system and to withstand prolonged and diverse attacks. In this paper a statistical approach has been used: conventionally, statistical control charts are applied to quality characteristics, but in an IDS abnormal access can likewise be detected and appropriate control limits established. Two different charts are investigated, and the Shewhart chart based on averages produced better accuracy. The approach used here for intrusion detection is that if a data packet is drastically different from normal variation then it can be classified as an attack; in other words, a system variation may be due to some special cause. If these causes are investigated, then natural variation and abnormal variation can be distinguished, which can be used to differentiate behaviours of the system.
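
    The Shewhart-chart-on-averages variant that the paper reports as more accurate can be sketched as follows, with invented traffic rates and subgroup sizes.

```python
import numpy as np

# Hedged sketch: Shewhart chart on subgroup means of a traffic statistic;
# all traffic numbers here are invented.
def shewhart_limits(baseline, m=5):
    """3-sigma limits for means of subgroups of size m."""
    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    return mu - 3 * sigma / np.sqrt(m), mu + 3 * sigma / np.sqrt(m)

rng = np.random.default_rng(4)
normal_traffic = rng.poisson(120, size=500).astype(float)  # packets/s, training
lcl, ucl = shewhart_limits(normal_traffic)

live = rng.poisson(120, size=(20, 5)).astype(float)
live[15:] += 40                       # injected attack burst
for i, subgroup in enumerate(live):
    xbar = subgroup.mean()
    if not (lcl <= xbar <= ucl):
        print(f"window {i}: mean {xbar:.1f} outside ({lcl:.1f}, {ucl:.1f}) -> alert")
```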

  3. Projection of future climate change conditions using IPCC simulations, neural networks and Bayesian statistics. Part 2: Precipitation mean state and seasonal cycle in South America

    Energy Technology Data Exchange (ETDEWEB)

    Boulanger, Jean-Philippe [LODYC, UMR CNRS/IRD/UPMC, Tour 45-55/Etage 4/Case 100, UPMC, Paris Cedex 05 (France); University of Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina); Martinez, Fernando; Segura, Enrique C. [University of Buenos Aires, Departamento de Computacion, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina)

    2007-02-15

    Evaluating the response of climate to greenhouse gas forcing is a major objective of the climate community, and the use of large ensembles of simulations is considered a significant step toward that goal. The present paper thus discusses a new methodology based on neural networks to mix ensembles of climate model simulations. Our analysis consists of one simulation from each of seven Atmosphere-Ocean Global Climate Models, which participated in the IPCC Project and provided at least one simulation for the twentieth century (20c3m) and one simulation for each of three SRES scenarios: A2, A1B and B1. Our statistical method based on neural networks and Bayesian statistics computes a transfer function between models and observations. Such a transfer function was then used to project future conditions and to derive what we would call the optimal ensemble combination for twenty-first century climate change projections. Our approach is therefore based on one statement and one hypothesis. The statement is that an optimal ensemble projection should be built by giving larger weights to models which have more skill in representing present climate conditions. The hypothesis is that our method based on neural networks actually weights the models that way. While the statement is actually an open question, whose answer may vary according to the region or climate signal under study, our results demonstrate that the neural network approach indeed allows weighting models according to their skill. As such, our method is an improvement on existing Bayesian methods developed to mix ensembles of simulations. However, the generally low skill of climate models in simulating precipitation mean climatology implies that the final projection maps (whatever the method used to compute them) may change significantly in the future as models improve. Therefore, the projection results for late twenty-first century conditions are presented as possible projections based on the...

  4. Comparison of Bayesian and classical methods in the analysis of cluster randomized controlled trials with a binary outcome: the Community Hypertension Assessment Trial (CHAT).

    Science.gov (United States)

    Ma, Jinhui; Thabane, Lehana; Kaczorowski, Janusz; Chambers, Larry; Dolovich, Lisa; Karwalajtys, Tina; Levitt, Cheryl

    2009-06-16

    Cluster randomized trials (CRTs) are increasingly used to assess the effectiveness of interventions to improve health outcomes or prevent diseases. However, the efficiency and consistency of using different analytical methods in the analysis of binary outcomes have received little attention. We described and compared various statistical approaches in the analysis of CRTs using the Community Hypertension Assessment Trial (CHAT) as an example. The CHAT study was a cluster randomized controlled trial aimed at investigating the effectiveness of pharmacy-based blood pressure clinics led by peer health educators, with feedback to family physicians (CHAT intervention), against the Usual Practice model (Control), on the monitoring and management of BP among older adults. We compared three cluster-level and six individual-level statistical analysis methods in the analysis of binary outcomes from the CHAT study. The three cluster-level analysis methods were: i) un-weighted linear regression, ii) weighted linear regression, and iii) random-effects meta-regression. The six individual-level analysis methods were: i) standard logistic regression, ii) robust standard errors approach, iii) generalized estimating equations, iv) random-effects meta-analytic approach, v) random-effects logistic regression, and vi) Bayesian random-effects regression. We also investigated the robustness of the estimates after adjustment for the cluster- and individual-level covariates. Among all the statistical methods assessed, the Bayesian random-effects logistic regression method yielded the widest 95% interval estimate for the odds ratio and consequently led to the most conservative conclusion. However, the results remained robust under all methods - showing sufficient evidence in support of the hypothesis of no effect for the CHAT intervention against the Usual Practice control model for management of blood pressure among seniors in primary care. The individual-level standard logistic regression is the
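
    One of the individual-level methods compared here (generalized estimating equations) can be sketched with simulated data; the CHAT data are not reproduced, so the cluster counts, effect sizes and exchangeable working correlation below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hedged sketch: GEE with an exchangeable working correlation for a
# cluster-randomized binary outcome. Simulated data, not the CHAT data.
rng = np.random.default_rng(9)
n_clusters, per = 30, 40
cluster = np.repeat(np.arange(n_clusters), per)
treat = np.repeat(rng.integers(0, 2, n_clusters), per)   # arm assigned per cluster
u = np.repeat(rng.normal(0, 0.5, n_clusters), per)       # random cluster effect
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.3 * treat + u))))

df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})
model = sm.GEE.from_formula("y ~ treat", groups="cluster", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(res.params)                              # log-odds scale
print("odds ratio:", np.exp(res.params["treat"]))
```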

  5. Comparison of Bayesian and classical methods in the analysis of cluster randomized controlled trials with a binary outcome: The Community Hypertension Assessment Trial (CHAT

    Directory of Open Access Journals (Sweden)

    Dolovich Lisa

    2009-06-01

    Full Text Available Abstract Background Cluster randomized trials (CRTs) are increasingly used to assess the effectiveness of interventions to improve health outcomes or prevent diseases. However, the efficiency and consistency of using different analytical methods in the analysis of binary outcomes have received little attention. We described and compared various statistical approaches in the analysis of CRTs using the Community Hypertension Assessment Trial (CHAT) as an example. The CHAT study was a cluster randomized controlled trial aimed at investigating the effectiveness of pharmacy-based blood pressure clinics led by peer health educators, with feedback to family physicians (CHAT intervention), against the Usual Practice model (Control), on the monitoring and management of BP among older adults. Methods We compared three cluster-level and six individual-level statistical analysis methods in the analysis of binary outcomes from the CHAT study. The three cluster-level analysis methods were: i) un-weighted linear regression, ii) weighted linear regression, and iii) random-effects meta-regression. The six individual-level analysis methods were: i) standard logistic regression, ii) robust standard errors approach, iii) generalized estimating equations, iv) random-effects meta-analytic approach, v) random-effects logistic regression, and vi) Bayesian random-effects regression. We also investigated the robustness of the estimates after adjustment for the cluster- and individual-level covariates. Results Among all the statistical methods assessed, the Bayesian random-effects logistic regression method yielded the widest 95% interval estimate for the odds ratio and consequently led to the most conservative conclusion. However, the results remained robust under all methods – showing sufficient evidence in support of the hypothesis of no effect for the CHAT intervention against the Usual Practice control model for management of blood pressure among seniors in primary care. The

  6. Monte Carlo Bayesian Inference on a Statistical Model of Sub-gridcolumn Moisture Variability Using High-resolution Cloud Observations. Part II: Sensitivity Tests and Results

    Science.gov (United States)

    da Silva, Arlindo M.; Norris, Peter M.

    2013-01-01

    Part I presented a Monte Carlo Bayesian method for constraining a complex statistical model of GCM sub-gridcolumn moisture variability using high-resolution MODIS cloud data, thereby permitting large-scale model parameter estimation and cloud data assimilation. This part performs some basic testing of this new approach, verifying that it does indeed significantly reduce mean and standard deviation biases with respect to the assimilated MODIS cloud optical depth, brightness temperature and cloud top pressure, and that it also improves the simulated rotational-Raman scattering cloud optical centroid pressure (OCP) against independent (non-assimilated) retrievals from the OMI instrument. Of particular interest, the Monte Carlo method does show skill in the especially difficult case where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach allows finite jumps into regions of non-zero cloud probability. In the example provided, the method is able to restore marine stratocumulus near the Californian coast where the background state has a clear swath. This paper also examines a number of algorithmic and physical sensitivities of the new method and provides guidance for its cost-effective implementation. One obvious difficulty for the method, and other cloud data assimilation methods as well, is the lack of information content in the cloud observables on cloud vertical structure, beyond cloud top pressure and optical thickness, thus necessitating strong dependence on the background vertical moisture structure. It is found that a simple flow-dependent correlation modification due to Riishojgaard (1998) provides some help in this respect, by better honoring inversion structures in the background state.

  7. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    Full Text Available The Bayesian approach is becoming increasingly popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration through a couple of simplified examples.

  8. Bayesian estimation of prevalence of paratuberculosis in dairy herds enrolled in a voluntary Johne's Disease Control Programme in Ireland.

    Science.gov (United States)

    McAloon, Conor G; Doherty, Michael L; Whyte, Paul; O'Grady, Luke; More, Simon J; Messam, Locksley L McV; Good, Margaret; Mullowney, Peter; Strain, Sam; Green, Martin J

    2016-06-01

    Bovine paratuberculosis is a disease characterised by chronic granulomatous enteritis which manifests clinically as a protein-losing enteropathy causing diarrhoea, hypoproteinaemia, emaciation and, eventually, death. Some evidence exists to suggest a possible zoonotic link, and a national voluntary Johne's Disease Control Programme was initiated by Animal Health Ireland in 2013. The objective of this study was to estimate the herd-level true prevalence (HTP) and animal-level true prevalence (ATP) of paratuberculosis in Irish herds enrolled in the national voluntary JD control programme during 2013-14. Two datasets were used in this study. The first dataset had been collected in Ireland during 2005 (5822 animals from 119 herds) and was used to construct model priors. Model priors were updated with a primary (2013-14) dataset, which included test records from 99,101 animals in 1039 dairy herds and was generated as part of the national voluntary JD control programme. The posterior estimate of HTP from the final Bayesian model was 0.23-0.34 with 95% probability. Across all herds, the median ATP was found to be 0.032 (0.009, 0.145). This study represents the first use of Bayesian methodology to estimate the prevalence of paratuberculosis in Irish dairy herds. The HTP estimate was higher than previous Irish estimates but still lower than estimates from other major dairy producing countries.
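
    The core Bayesian prevalence idea can be sketched by grid approximation, assuming an imperfect test with known sensitivity and specificity; all numbers below are invented and stand in for the study's full hierarchical model.

```python
import numpy as np

# Hedged sketch: posterior for animal-level true prevalence under an
# imperfect test, via grid approximation with a flat prior. The
# sensitivity/specificity and counts are illustrative assumptions.
Se, Sp = 0.60, 0.99          # assumed test sensitivity / specificity
k, n = 310, 9000             # test-positive animals out of n tested (toy)

p = np.linspace(0, 0.2, 2001)            # grid over true prevalence
ap = p * Se + (1 - p) * (1 - Sp)         # apparent prevalence
log_like = k * np.log(ap) + (n - k) * np.log(1 - ap)
post = np.exp(log_like - log_like.max())
post /= post.sum()                       # normalized posterior on the grid

mean = (p * post).sum()
cdf = post.cumsum()
lo, hi = p[cdf.searchsorted(0.025)], p[cdf.searchsorted(0.975)]
print(f"true prevalence ~ {mean:.3f} (95% CrI {lo:.3f}-{hi:.3f})")
```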

  9. Statistical Process Control of a Kalman Filter Model

    Science.gov (United States)

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A.

    2014-01-01

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of state transition matrix and observing the properties of the a posteriori system state covariance matrix and the properties of the Kalman gain matrix. The statistical tests include the convergence of standard deviations of the system state components and normal distribution beside standard tests. Especially, computing controllability and observability matrices and controlling the normal distribution of residuals are not the standard procedures in the implementation of KF. Practical implementation is done on geodetic kinematic observations.
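
    A minimal sketch of SPC applied to a Kalman filter is given below: a one-dimensional constant-velocity filter whose standardized innovations are monitored with a 3-sigma rule. The model, noise levels and single test are illustrative assumptions, not the paper's full battery of checks.

```python
import numpy as np

# Hedged sketch: monitor standardized Kalman innovations; an in-control
# filter has innovations ~ N(0, 1). Model and noise are illustrative.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(5)
truth = np.cumsum(rng.normal(1.0, 0.1, 50))     # toy trajectory
z_all = truth + rng.normal(0, 0.5, 50)          # noisy observations

for t, z in enumerate(z_all):
    x, P = F @ x, F @ P @ F.T + Q               # predict
    S = H @ P @ H.T + R                         # innovation covariance
    nu = z - (H @ x)[0]                         # innovation
    if abs(nu / np.sqrt(S[0, 0])) > 3:          # 3-sigma SPC test
        print(f"t={t}: standardized innovation out of control")
    K = P @ H.T @ np.linalg.inv(S)              # update
    x = x + (K * nu).ravel()
    P = (np.eye(2) - K @ H) @ P
```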

  10. Statistical Process Control of a Kalman Filter Model

    Directory of Open Access Journals (Sweden)

    Sonja Gamse

    2014-09-01

    Full Text Available For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of state transition matrix and observing the properties of the a posteriori system state covariance matrix and the properties of the Kalman gain matrix. The statistical tests include the convergence of standard deviations of the system state components and normal distribution beside standard tests. Especially, computing controllability and observability matrices and controlling the normal distribution of residuals are not the standard procedures in the implementation of KF. Practical implementation is done on geodetic kinematic observations.

  11. Statistical process control of a Kalman filter model.

    Science.gov (United States)

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A

    2014-09-26

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of state transition matrix and observing the properties of the a posteriori system state covariance matrix and the properties of the Kalman gain matrix. The statistical tests include the convergence of standard deviations of the system state components and normal distribution beside standard tests. Especially, computing controllability and observability matrices and controlling the normal distribution of residuals are not the standard procedures in the implementation of KF. Practical implementation is done on geodetic kinematic observations.

  12. The Bayesian Inventory Problem

    Science.gov (United States)

    1984-05-01

    Bayesian Approach to Demand Estimation and Inventory Provisioning," Naval Research Logistics Quarterly, Vol. 20, 1973 (pp. 607-624). 4. DeGroot, Morris H. ... APPENDIX A: SUFFICIENT STATISTICS. A convenient reference for most of this material is DeGroot [4]. Suppose that we are sampling from a

  13. Stochastic processes and feedback-linearisation for online identification and Bayesian adaptive control of fully-actuated mechanical systems

    CERN Document Server

    Calliess, Jan-Peter; Roberts, Stephen J

    2013-01-01

    This work proposes a new method for simultaneous probabilistic identification and control of an observable, fully-actuated mechanical system. Identification is achieved by conditioning stochastic process priors on observations of configurations and noisy estimates of configuration derivatives. In contrast to previous work that has used stochastic processes for identification, we leverage the structural knowledge afforded by Lagrangian mechanics and learn the drift and control input matrix functions of the control-affine system separately. We utilise feedback-linearisation to reduce, in expectation, the uncertain nonlinear control problem to one that is easy to regulate in a desired manner. Thereby, our method combines the flexibility of nonparametric Bayesian learning with epistemological guarantees on the expected closed-loop trajectory. We illustrate our method in the context of torque-actuated pendula where the dynamics are learned with a combination of normal and log-normal processes.

  14. Statistical and Variational Methods for Problems in Visual Control

    Science.gov (United States)

    2009-03-02

    invariant visual tracking by particle filtering" (with A. Nakhmani), SPIE, 2008. 81. "Adaptive Bayesian shrinkage model for spherical wavelet based denoising..." ...vision. In recent work, we have described a random particle system, evolving on the discretized unit circle, whose profile converges toward the Gauss... Specifically, we have chosen to smooth by evolving P according to a discretized version of a partial differential equation...

  15. Bayesian Watermark Attacks

    OpenAIRE

    Shterev, Ivo; Dunson, David

    2012-01-01

    This paper presents an application of statistical machine learning to the field of watermarking. We propose a new attack model on additive spread-spectrum watermarking systems. The proposed attack is based on Bayesian statistics. We consider the scenario in which a watermark signal is repeatedly embedded in specific, possibly chosen based on a secret message bitstream, segments (signals) of the host data. The host signal can represent a patch of pixels from an image or a video frame. We propo...

  16. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  17. Structure Learning in Bayesian Sensorimotor Integration.

    Directory of Open Access Journals (Sweden)

    Tim Genewein

    2015-08-01

    Full Text Available Previous studies have shown that sensorimotor processing can often be described by Bayesian learning, in particular the integration of prior and feedback information depending on its degree of reliability. Here we test the hypothesis that the integration process itself can be tuned to the statistical structure of the environment. We exposed human participants to a reaching task in a three-dimensional virtual reality environment where we could displace the visual feedback of their hand position in a two-dimensional plane. When introducing statistical structure between the two dimensions of the displacement, we found that over the course of several days participants adapted their feedback integration process in order to exploit this structure for performance improvement. In control experiments we found that this adaptation process critically depended on performance feedback and could not be induced by verbal instructions. Our results suggest that structural learning is an important meta-learning component of Bayesian sensorimotor integration.

  18. Bayesian demography 250 years after Bayes.

    Science.gov (United States)

    Bijak, Jakub; Bryant, John

    2016-01-01

    Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms.

  19. LOWER LEVEL INFERENCE CONTROL IN STATISTICAL DATABASE SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    Lipton, D.L.; Wong, H.K.T.

    1984-02-01

    An inference is the process of transforming unclassified data values into confidential data values. Most previous research in inference control has studied the use of statistical aggregates to deduce individual records. However, several other types of inference are also possible. Unknown functional dependencies may be apparent to users who have 'expert' knowledge about the characteristics of a population. Some correlations between attributes may be concluded from 'commonly-known' facts about the world. To counter these threats, security managers should use random sampling of databases of similar populations, as well as expert systems. 'Expert' users of the database system may form inferences from the variable performance of the user interface. Users may observe on-line turn-around time, accounting statistics, the error messages received, and the point at which an interactive protocol sequence fails. One may obtain information about the frequency distributions of attribute values, and the validity of data object names, from this information. At the back-end of a database system, improved software engineering practices will reduce opportunities to bypass functional units of the database system. The term 'data object' should be expanded to incorporate those data object types which generate new classes of threats. The security of databases and database systems must be recognized as separate but related problems. Thus, by increased awareness of lower-level inferences, system security managers may effectively nullify the threat posed by lower-level inferences.

  20. Statistically Controlling for Confounding Constructs Is Harder than You Think.

    Directory of Open Access Journals (Sweden)

    Jacob Westfall

    Full Text Available Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest--in some cases approaching 100%--when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity.
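
    The paper's central phenomenon is easy to reproduce in a few lines of simulation: with two unreliable indicators of one latent construct, an ordinary regression "detects" incremental validity for the second indicator far more often than the nominal 5%. The sample size, reliability and simulation counts below are assumptions, not the paper's exact settings.

```python
import numpy as np

# Hedged re-creation of the effect: Y depends only on the latent T, yet
# the observed indicator x2 shows a spurious "incremental" effect.
rng = np.random.default_rng(6)
n, rel, n_sims, hits = 1000, 0.7, 500, 0

for _ in range(n_sims):
    T = rng.standard_normal(n)                       # one latent construct
    x1 = np.sqrt(rel) * T + np.sqrt(1 - rel) * rng.standard_normal(n)
    x2 = np.sqrt(rel) * T + np.sqrt(1 - rel) * rng.standard_normal(n)
    y = T + rng.standard_normal(n)                   # Y depends on T only
    X = np.column_stack([np.ones(n), x1, x2])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (n - 3)                        # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
    if abs(beta[2] / se) > 1.96:                     # "x2 adds validity"
        hits += 1

print(f"Type I error rate for the incremental claim: {hits / n_sims:.2f}")
```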

  1. Application of statistical process control to qualitative molecular diagnostic assays

    LENUS (Irish Health Repository)

    O'Brien, Cathal P.

    2014-11-01

    Modern pathology laboratories, and in particular high-throughput laboratories such as clinical chemistry, have developed a reliable system for statistical process control (SPC). Such a system is absent from the majority of molecular laboratories, and where present is confined to quantitative assays. As the inability to apply SPC to an assay is an obvious disadvantage, this study aimed to solve this problem by using a frequency estimate coupled with a confidence interval calculation to detect deviations from an expected mutation frequency. The results of this study demonstrate the strengths and weaknesses of this approach and highlight minimum sample number requirements. Notably, assays with low mutation frequencies and detection of small deviations from an expected value require greater sample numbers to mitigate a protracted time to detection. Modeled laboratory data was also used to highlight how this approach might be applied in a routine molecular laboratory. This article is the first to describe the application of SPC to qualitative laboratory data.

  2. Bayesian analysis of a mastitis control plan to investigate the influence of veterinary prior beliefs on clinical interpretation

    Science.gov (United States)

    Green, M.J.; Browne, W.J.; Green, L.E.; Bradley, A.J.; Leach, K.A.; Breen, J.E.; Medley, G.F.

    2009-01-01

    The fundamental objective for health research is to determine whether changes should be made to clinical decisions. Decisions made by veterinary surgeons in the light of new research evidence are known to be influenced by their prior beliefs, especially their initial opinions about the plausibility of possible results. In this paper, clinical trial results for a bovine mastitis control plan were evaluated within a Bayesian context, to incorporate a community of prior distributions that represented a spectrum of clinical prior beliefs. The aim was to quantify the effect of veterinary surgeons’ initial viewpoints on the interpretation of the trial results. A Bayesian analysis was conducted using Markov chain Monte Carlo procedures. Stochastic models included a financial cost attributed to a change in clinical mastitis following implementation of the control plan. Prior distributions were incorporated that covered a realistic range of possible clinical viewpoints, including scepticism, enthusiasm and uncertainty. Posterior distributions revealed important differences in the financial gain that clinicians with different starting viewpoints would anticipate from the mastitis control plan, given the actual research results. For example, a severe sceptic would ascribe a probability of 0.50 for a return of £20 per cow. Simulations using increased trial sizes indicated that if the original study was four times as large, an initial sceptic would be more convinced about the efficacy of the control plan but would still anticipate less financial return than an initial enthusiast would anticipate after the original study. In conclusion, it is possible to estimate how clinicians’ prior beliefs influence their interpretation of research evidence. Further research on the extent to which different interpretations of evidence result in changes to clinical practice would be worthwhile.

  3. Geological Controls on Glacier Surging?: Statistics and Speculation

    Science.gov (United States)

    Flowers, G. E.; Crompton, J. W.

    2015-12-01

    Glacier surging represents an end-member behavior in the spectrum of ice dynamics, involving marked acceleration and high flow speeds due to abrupt changes in basal mechanics. Though much effort has been devoted to understanding the role of basal hydrology and thermal regime in fast glacier flow, fewer studies have addressed the potential role of the geologic substrate. One interesting observation is that surge-type glaciers appear almost universally associated with unconsolidated (till) beds, and several large-scale statistical studies have revealed correlations between glacier surging and bedrock properties. We revisit this relationship using field measurements. We selected 20 individual glaciers for sampling in a 40x40 km region of the St. Elias Mountains of Yukon, Canada. Eleven of these glaciers are known to surge and nine are not. The 20 study glaciers are underlain by lithologies that we have broadly classified into two types: metasedimentary only and mixed metasedimentary-granodiorite. We characterized geological and geotechnical properties of the bedrock in each basin, and analyzed the hydrochemistry and mineralogy and grain size distribution (GSD) of the suspended sediments in the proglacial streams. Here we focus on some intriguing results of the GSD analysis. Using statistical techniques, including significance testing and principal component analysis, we find that: (1) lithology determines GSD for non-surge-type glaciers, with metasedimentary basins associated with finer mean grain sizes and mixed-lithology basins with coarser mean grain sizes, but (2) the GSDs associated with surge-type glaciers are intermediate between the distributions described above, and are statistically indistinguishable between metasedimentary and mixed lithology basins. The latter suggests either that surge-type glaciers in our study area occur preferentially in basins where various processes conspire to produce a characteristic GSD, or that the surge cycle itself exerts an

  4. Advances in Bayesian Modeling in Educational Research

    Science.gov (United States)

    Levy, Roy

    2016-01-01

    In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…

  5. Improved statistical method for temperature and salinity quality control

    Science.gov (United States)

    Gourrion, Jérôme; Szekely, Tanguy

    2017-04-01

    Climate research and ocean monitoring benefit from the continuous development of global in-situ hydrographic networks in the last decades. Apart from the increasing volume of observations available on a large range of temporal and spatial scales, a critical aspect concerns the ability to constantly improve the quality of the datasets. In the context of the Coriolis Dataset for ReAnalysis (CORA) version 4.2, a new quality control method based on a local comparison to historical extreme values ever observed is developed, implemented and validated. Temperature, salinity and potential density validity intervals are directly estimated from minimum and maximum values of an historical reference dataset, rather than from traditional mean and standard deviation estimates. Such an approach avoids strong statistical assumptions on the data distributions such as unimodality, absence of skewness and spatially homogeneous kurtosis. As a new feature, it also allows addressing simultaneously the two main objectives of an automatic quality control strategy, i.e. maximizing the number of good detections while minimizing the number of false alarms. The reference dataset is presently built from the fusion of (1) all Argo profiles up to late 2015, (2) three historical CTD datasets and (3) the sea-mammal CTD profiles from the MEOP database. All datasets are extensively and manually quality controlled. In this communication, the latest method validation results are also presented.
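
    A minimal sketch of the min/max validity-interval idea follows, assuming per-cell historical extremes with a small tolerance; the data, cell structure and tolerance are invented for illustration.

```python
import numpy as np

# Hedged sketch: flag a new value if it falls outside the historical
# [min, max] range ever observed in its region/depth cell. Invented data.
def build_intervals(ref_values, cell_ids, tol=0.1):
    """Per-cell [min - tol, max + tol] from a reference dataset."""
    intervals = {}
    for cid in np.unique(cell_ids):
        v = ref_values[cell_ids == cid]
        intervals[cid] = (v.min() - tol, v.max() + tol)
    return intervals

rng = np.random.default_rng(7)
ref_T = rng.normal(15, 3, 10000)            # historical temperatures [C]
cells = rng.integers(0, 50, 10000)          # spatial/depth cell index
iv = build_intervals(ref_T, cells)

lo, hi = iv[12]
for obs in (14.2, 31.5):                    # new observations in cell 12
    ok = lo <= obs <= hi
    print(f"T={obs:5.1f} C in cell 12 ->", "good" if ok else "flag for review")
```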

  6. Bayesian Statistics in Adjustment of Premium

    Institute of Scientific and Technical Information of China (English)

    陈正; 汪飞飞

    2012-01-01

    Adjustment of premium according to the market situation is very important for an insurance company. This paper illustrates a Bayesian premium adjustment method using worked examples and analyzes the feasibility of premium valuation under the Bayesian premium adjustment method. Both the method and the conclusions can be applied to small-sample premium valuation in non-life insurance practice.
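
    One standard way to make such an adjustment concrete is a conjugate Poisson-gamma credibility model (a textbook construction consistent with, but not necessarily identical to, the paper's method); all figures below are illustrative.

```python
# Hedged sketch: Bayesian premium adjustment via a conjugate
# Poisson-gamma model. The posterior mean claim frequency is a
# credibility blend of the prior (market) rate and own experience.
alpha0, beta0 = 4.0, 20.0          # gamma prior: mean frequency 0.20/policy-year
claims, exposure = 13, 100.0       # observed claims over policy-years (toy)

alpha1, beta1 = alpha0 + claims, beta0 + exposure
prior_mean = alpha0 / beta0
post_mean = alpha1 / beta1         # credibility-weighted blend
z = exposure / (beta0 + exposure)  # credibility factor on own experience

severity = 1200.0                  # assumed average claim cost
print(f"prior frequency {prior_mean:.3f}, posterior {post_mean:.3f} (z={z:.2f})")
print(f"adjusted pure premium: {post_mean * severity:.2f} per policy-year")
```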

  7. Impact Angle Control of Interplanetary Shock Geoeffectiveness: A Statistical Study

    CERN Document Server

    Oliveira, D M

    2015-01-01

    We present a survey of interplanetary (IP) shocks using WIND and ACE satellite data from January 1995 to December 2013 to study how IP shock geoeffectiveness is controlled by IP shock impact angles. A shock list covering one and a half solar cycles is compiled. The yearly number of IP shocks is found to correlate well with the monthly sunspot number. We use data from SuperMAG, a large chain of more than 300 geomagnetic stations, to study the geoeffectiveness triggered by IP shocks. The SuperMAG SML index, an enhanced version of the familiar AL index, is used in our statistical analysis. The jumps of the SML index triggered by IP shock impacts on the Earth's magnetosphere are investigated in terms of IP shock orientation and speed. We find that, in general, strong (high speed) and almost frontal (small impact angle) shocks are more geoeffective than inclined shocks with low speed. The strongest correlation (correlation coefficient R = 0.70) occurs for fixed IP shock speed and varying IP shock impact angle. We ...

  8. Bayesian Inference: with ecological applications

    Science.gov (United States)

    Link, William A.; Barker, Richard J.

    2010-01-01

    This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference, with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies, as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.

  9. Bayesian analysis of CCDM Models

    OpenAIRE

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2016-01-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, leads to a negative creation pressure, which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical tools, in light of SN Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These approaches allow one to compare models considering goodness of fit and numbe...

  10. Mindfulness-Based Cognitive Therapy for Trichotillomania: A Bayesian Case-Control Study

    Directory of Open Access Journals (Sweden)

    Alexandre Heeren

    2015-07-01

    Full Text Available Over recent years, mindfulness-based interventions combined with habit reversal training have been shown to be particularly suitable for addressing trichotillomania. However, because these studies always combined mindfulness training with habit reversal, without including either a mindfulness or a habit reversal condition alone, it is still unclear whether the clinical benefits are a consequence of mindfulness or merely result from habit reversal training. The primary purpose of the present study was thus to examine whether a mindfulness training procedure without habit reversal could alleviate trichotillomania. Using a Bayesian probabilistic approach for single-case designs, the client’s hair-loss severity and level of mindfulness were compared to a normative sample (n = 15) before treatment, after treatment, and at six-month follow-up. An improvement in mindfulness occurred first, and that beneficial effect then transferred to hair-pulling. Indeed, compared with the normative sample, the client exhibited, from baseline to post-treatment, an improvement in mindfulness. Although a marginal trend towards improvement was already evident at post-treatment, the mindfulness program had a significant beneficial effect on hair-loss severity only at six-month follow-up. Although it remains particularly difficult to infer generalization from one client, the data from the present case study are the first to suggest that mindfulness training per se might be a suitable clinical intervention for trichotillomania.

  11. Application of Bayesian Dynamic Forecast in Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    YAN Hui; CAO Yuanda

    2005-01-01

    A macroscopic anomaly detection method based on intrusion statistics and Bayesian dynamic forecasting is presented. Control centers of large-scale intrusion detection systems accumulate large volumes of alert data that cannot be processed in time. In order to improve the efficiency and accuracy of intrusion analysis, intrusion intensity values are extracted from the alert data and a Bayesian dynamic forecasting method is used to detect anomalies. Experiments show that the new method is effective at detecting macroscopic anomalies in large-scale intrusion detection systems.
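
    The record above flags an anomaly when the observed intrusion intensity departs too far from a Bayesian one-step-ahead forecast. A minimal Python sketch of that idea, using a local-level dynamic linear model with known variances (the variances, starting values, threshold, and data below are illustrative assumptions, not values from the paper):

        import math

        def dlm_anomaly_flags(y, obs_var=4.0, state_var=1.0, m0=0.0, c0=100.0, z=3.0):
            # Local-level dynamic linear model: level_t = level_{t-1} + w_t,
            # y_t = level_t + v_t. Flag y_t when it falls outside the central
            # one-step-ahead forecast interval of +/- z forecast std deviations.
            m, c = m0, c0                      # posterior mean/variance of the level
            flags = []
            for yt in y:
                r = c + state_var              # prior variance of the level at time t
                f, q = m, r + obs_var          # one-step forecast mean and variance
                flags.append(abs(yt - f) > z * math.sqrt(q))
                k = r / q                      # adaptive (Kalman) gain
                m = m + k * (yt - f)           # update the level estimate
                c = r * obs_var / q            # posterior variance of the level
            return flags

        # Toy alert-intensity series with a burst in the final period;
        # only the final observation should be flagged once the filter settles.
        print(dlm_anomaly_flags([10, 11, 9, 10, 12, 10, 11, 35]))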

  12. A Bayesian framework to assess the potential for controlling classical scrapie in sheep flocks using a live diagnostic test.

    Science.gov (United States)

    Gryspeirt, Aiko; Gubbins, Simon

    2013-09-01

    Current strategies to control classical scrapie remove animals at risk of scrapie rather than those known to be infected with the scrapie agent. Advances in diagnostic tests, however, suggest that a more targeted approach involving the application of a rapid live test may be feasible in future. Here we consider the use of two diagnostic tests: recto-anal mucosa-associated lymphatic tissue (RAMALT) biopsies; and a blood-based assay. To assess their impact we developed a stochastic age- and prion protein (PrP) genotype-structured model for the dynamics of scrapie within a sheep flock. Parameters were estimated in a Bayesian framework to facilitate integration of a number of disparate datasets and to allow parameter uncertainty to be incorporated in model predictions. In small flocks a control strategy based on removal of clinical cases was sufficient to control disease and more stringent measures (including the use of a live diagnostic test) did not significantly reduce outbreak size or duration. In medium or large flocks strategies in which a large proportion of animals are tested with either live diagnostic test significantly reduced outbreak size, but not always duration, compared with removal of clinical cases. However, the current Compulsory Scrapie Flocks Scheme (CSFS) significantly reduced outbreak size and duration compared with both removal of clinical cases and all strategies using a live diagnostic test. Accordingly, under the assumptions made in the present study there is little benefit from implementing a control strategy which makes use of a live diagnostic test.

  13. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...

  14. Bayesian SPLDA

    OpenAIRE

    Villalba, Jesús

    2015-01-01

    In this document we are going to derive the equations needed to implement a Variational Bayes estimation of the parameters of the simplified probabilistic linear discriminant analysis (SPLDA) model. This can be used to adapt SPLDA from one database to another with few development data or to implement the fully Bayesian recipe. Our approach is similar to Bishop's VB PPCA.

  15. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    Directory of Open Access Journals (Sweden)

    Adrion Christine

    2012-09-01

    Full Text Available Abstract Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on, and justification of, many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex, not easily implemented, and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results The instruments under study

  16. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU

  17. A Total Quality-Control Plan with Right-Sized Statistical Quality-Control.

    Science.gov (United States)

    Westgard, James O

    2017-03-01

    A new Clinical Laboratory Improvement Amendments option for risk-based quality-control (QC) plans became effective in January, 2016. Called an Individualized QC Plan, this option requires the laboratory to perform a risk assessment, develop a QC plan, and implement a QC program to monitor ongoing performance of the QC plan. Difficulties in performing a risk assessment may limit validity of an Individualized QC Plan. A better alternative is to develop a Total QC Plan including a right-sized statistical QC procedure to detect medically important errors. Westgard Sigma Rules provides a simple way to select the right control rules and the right number of control measurements.
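
    As a rough illustration of the right-sizing idea, the sigma metric is computed from the allowable total error, the observed bias, and the observed imprecision. The Python sketch below uses a simplified paraphrase of published Westgard Sigma Rules cutoffs; both the cutoffs and the example numbers are illustrative assumptions to be checked against the original guidance before use.

        def sigma_metric(tea_pct, bias_pct, cv_pct):
            # Sigma = (allowable total error - |bias|) / CV, all in percent.
            return (tea_pct - abs(bias_pct)) / cv_pct

        def suggest_qc(sigma):
            # Coarse, assumption-laden mapping from sigma to QC effort: the
            # better the method (higher sigma), the fewer rules and control
            # measurements (N) are needed.
            if sigma >= 6:
                return "single 1:3s rule, N=2"
            if sigma >= 5:
                return "1:3s / 2:2s / R:4s multirule, N=2"
            if sigma >= 4:
                return "multirule with 4:1s added, N=4"
            return "full multirule, N=6-8; consider improving the method"

        s = sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=2.0)  # invented values
        print(round(s, 2), "->", suggest_qc(s))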

  18. A Goal-Directed Bayesian Framework for Categorization

    Science.gov (United States)

    Rigoli, Francesco; Pezzulo, Giovanni; Dolan, Raymond; Friston, Karl

    2017-01-01

    Categorization is a fundamental ability for efficient behavioral control. It allows organisms to remember the correct responses to categorical cues rather than to every stimulus encountered (hence avoiding computational cost and complexity), and to generalize appropriate responses to novel stimuli dependent on category assignment. Assuming the brain performs Bayesian inference, based on a generative model of the external world and future goals, we propose a computational model of categorization in which important properties emerge. These properties comprise the ability to infer latent causes of sensory experience, a hierarchical organization of latent causes, and an explicit inclusion of context and action representations. Crucially, these aspects derive from considering the environmental statistics that are relevant to achieve goals, and from the fundamental Bayesian principle that any generative model should be preferred over alternative models based on an accuracy-complexity trade-off. Our account is a step toward elucidating computational principles of categorization and its role within the Bayesian brain hypothesis.

  19. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  20. Application of Bayesian statistical decision theory to the optimization of generating set maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-07-01

    Statistical decision theory could be an alternative for the optimization of preventive maintenance periodicity. In effect, this theory addresses the situation in which a decision maker has to choose among a set of reasonable decisions, and where the loss associated with a given decision depends on a probabilistic risk, called the state of nature. In the case of maintenance optimization, the decisions to be analyzed are the different periodicities proposed by the experts given the observed operating experience, the states of nature are the associated failure probabilities, and the losses are the expected costs of maintenance and of the consequences of failures. As failure probabilities concern rare events at the ultimate level of RCM analysis (failure of a sub-component), and as the expected foreseeable behaviour of equipment has to be evaluated by experts, a Bayesian approach is successfully used to compute the states of nature. In Bayesian decision theory, a prior distribution for failure probabilities is modeled from expert knowledge and is combined with the sparse stochastic information provided by operating experience, giving a posterior distribution of failure probabilities. The optimized decision is the one that minimizes the expected loss over the posterior distribution. This methodology has been applied to the inspection and maintenance optimization of cylinders of diesel generator engines in 900 MW nuclear plants. In these plants, auxiliary electric power is supplied by 2 redundant diesel generators which are tested every 2 weeks for about 1 hour. Until now, during the yearly refueling of each plant, one endoscopic inspection of diesel cylinders has been performed, and every 5 operating years all cylinders are replaced. RCM has shown that cylinder failures could be critical. So Bayesian decision theory has been applied, taking into account expert opinions and the possibility of aging when the maintenance periodicity is extended. (authors). 8 refs., 5 figs., 1 tab.
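
    A minimal numerical sketch of that decision logic in Python: a hypothetical posterior over an aging (Weibull) failure model stands in for the expert-prior-plus-feedback step, and the periodicity minimizing the posterior expected cost rate is selected. All distributions, parameters, and costs below are invented for illustration and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical posterior over the Weibull scale (years), e.g. obtained by
        # combining an expert prior with sparse feedback; shape beta assumed known.
        eta = rng.lognormal(mean=np.log(8.0), sigma=0.2, size=5_000)
        BETA, C_PREV, C_FAIL = 3.0, 10.0, 500.0   # invented shape and costs

        def expected_cost_rate(T):
            # Age-replacement policy: preventive maintenance at age T (cost C_PREV)
            # or corrective action on failure (cost C_FAIL), whichever comes first.
            # Cost rate = E[cost per cycle] / E[cycle length], averaged over the
            # posterior samples of eta.
            t = np.linspace(0.0, T, 200)
            R = np.exp(-np.outer(eta**-BETA, t**BETA))   # survival curve per sample
            cycle_len = np.trapz(R, t, axis=1)           # expected cycle length
            R_T = R[:, -1]                               # probability of surviving to T
            rate = (C_PREV * R_T + C_FAIL * (1.0 - R_T)) / cycle_len
            return float(rate.mean())

        rates = {T: round(expected_cost_rate(T), 2) for T in (1, 2, 3, 5, 8)}
        print(rates, "-> optimal periodicity:", min(rates, key=rates.get), "year(s)")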

  1. Reaming process improvement and control: An application of statistical engineering

    DEFF Research Database (Denmark)

    Müller, Pavel; Genta, G.; Barbato, G.

    2012-01-01

    A reaming operation had to be performed within given technological and economical constraints. Process improvement under realistic conditions was the goal of a statistical engineering project, supported by a comprehensive experimental investigation providing detailed information on single and combined effects of several parameters on key responses. Results supported the selection of production parameters meeting specified quality and cost targets, as well as substantial improvements....

  2. Bayesian signaling

    OpenAIRE

    Hedlund, Jonas

    2014-01-01

    This paper introduces private sender information into a sender-receiver game of Bayesian persuasion with monotonic sender preferences. I derive properties of increasing differences related to the precision of signals and use these to fully characterize the set of equilibria robust to the intuitive criterion. In particular, all such equilibria are either separating, i.e., the sender's choice of signal reveals his private information to the receiver, or fully disclosing, i.e., the outcome of th...

  3. Bayesian Monitoring.

    OpenAIRE

    Kirstein, Roland

    2005-01-01

    This paper presents a modification of the inspection game: the "Bayesian Monitoring" model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...

  4. Use of statistical process control in the production of blood components

    DEFF Research Database (Denmark)

    Magnussen, K; Quere, S; Winkel, P

    2008-01-01

    Introduction of statistical process control in the setting of a small blood centre was tested, both on the regular red blood cell production and specifically to test if a difference was seen in the quality of the platelets produced when a change was made from a relatively large inexperienced staff ... by an experienced staff with four technologists. We applied statistical process control to examine if time series of quality control values were in statistical control. Leucocyte count in red blood cells was out of statistical control. Platelet concentration and volume of the platelets produced by the occasional...

  5. CONTROL BASED ON NUMERICAL METHODS AND RECURSIVE BAYESIAN ESTIMATION IN A CONTINUOUS ALCOHOLIC FERMENTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Olga L. Quintero

    Full Text Available Biotechnological processes represent a challenge in the control field due to their high nonlinearity. In particular, continuous alcoholic fermentation with Zymomonas mobilis (Z.m.) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits oscillatory behavior in the process variables, due to the influence of inhibition dynamics (the effect of ethanol concentration on biomass, substrate, and product concentrations). In this work a new solution for the control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance in the proposed control loop are demonstrated. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator in Z.m. and without the use of complex calculations.

  6. Computational Advances for and from Bayesian Analysis

    OpenAIRE

    Andrieu, C.; Doucet, A.; Robert, C. P.

    2004-01-01

    The emergence in the past years of Bayesian analysis in many methodological and applied fields as the solution to the modeling of complex problems cannot be dissociated from major changes in its computational implementation. We show in this review how the advances in Bayesian analysis and statistical computation are intermingled.

  7. Statistical Process Control in a Modern Production Environment

    DEFF Research Database (Denmark)

    Windfeldt, Gitte Bjørg

    ...gathered here and standard statistical software. In Paper 2 a new method for process monitoring is introduced. The method uses a statistical model of the quality characteristic and a sliding window of observations to estimate the probability that the next item will not respect the specifications. If the estimated probability exceeds a pre-determined threshold, the process will be stopped. The method is flexible, allowing a complexity in modeling that remains invisible to the end user. Furthermore, the method allows diagnostic plots to be built from the parameter estimates that can provide valuable insight into the process. The method is explored numerically and a case study is provided. In Paper 3 the method is explored in a bivariate setting. Paper 4 is a case study on a problem regarding missing values in an industrial process. The impact of the missing values on the quality measures of the process is assessed...

  8. Statistical process control in industrial painting

    Directory of Open Access Journals (Sweden)

    Valentina de Lourdes Milani de Paula Soares

    2006-07-01

    Full Text Available This study applied statistical control tools in one sector of a transformer factory. The quality characteristic selected for the study was the paint thickness on the various parts of the transformer (body, radiator, cover and support). Once the sector and quality characteristic had been defined, a check sheet was used to collect the data, followed by a flowchart, and a new methodology was suggested for this production line. The proposed methodology was based on a model presented by Soares (2001, p. 66), with the modifications needed to fit the company under study. The data were analysed using individuals (X) and moving-range control charts for the several parts of the transformer. The analysis of the collected data resulted in several action plans for process improvement, culminating, in some cases, in statistical control of the process. Once statistical control was obtained, control limits were established that will allow these processes to be monitored from now on, and their capability indices to be calculated. For the analyses in which the process was diagnosed as remaining out of control, the study of the causes of variability will need to continue.

  9. 3rd Bayesian Young Statisticians Meeting

    CERN Document Server

    Lanzarone, Ettore; Villalobos, Isadora; Mattei, Alessandra

    2017-01-01

    This book is a selection of peer-reviewed contributions presented at the third Bayesian Young Statisticians Meeting, BAYSM 2016, Florence, Italy, June 19-21. The meeting provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and postdocs dealing with Bayesian statistics to connect with the Bayesian community at large, to exchange ideas, and to network with others working in the same field. The contributions develop and apply Bayesian methods in a variety of fields, ranging from the traditional (e.g., biostatistics and reliability) to the most innovative ones (e.g., big data and networks).

  10. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
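
    A toy rejection-ABC sketch in Python for the simplest possible setting, estimating a normal mean without ever evaluating the likelihood; the prior, summary statistic, and tolerance are illustrative choices:

        import numpy as np

        rng = np.random.default_rng(1)
        observed = rng.normal(3.0, 1.0, size=50)      # stand-in for real data
        s_obs = observed.mean()                       # summary statistic

        accepted = []
        for _ in range(50_000):
            theta = rng.uniform(-10.0, 10.0)          # draw parameter from the prior
            sim = rng.normal(theta, 1.0, size=50)     # simulate data under theta
            if abs(sim.mean() - s_obs) < 0.1:         # accept if summaries are close
                accepted.append(theta)

        # Accepted draws approximate the posterior; their mean should be near 3.
        print(len(accepted), float(np.mean(accepted)))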

  11. A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research

    NARCIS (Netherlands)

    Van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, t

  12. Prior approval: the growth of Bayesian methods in psychology.

    Science.gov (United States)

    Andrews, Mark; Baguley, Thom

    2013-02-01

    Within the last few years, Bayesian methods of data analysis in psychology have proliferated. In this paper, we briefly review the history of the Bayesian approach to statistics, and consider the implications that Bayesian methods have for the theory and practice of data analysis in psychology.

  13. Statistical process control (SPC) for coordinate measurement machines

    Energy Technology Data Exchange (ETDEWEB)

    Escher, R.N.

    2000-01-04

    The application of process capability analysis, using designed experiments, and gage capability studies as they apply to coordinate measurement machine (CMM) uncertainty analysis and control will be demonstrated. The use of control standards in designed experiments, and the use of range charts and moving range charts to separate measurement error into its discrete components, will be discussed. The method used to monitor and analyze the components of repeatability and reproducibility will be presented, with specific emphasis on how to use control charts to determine and monitor CMM performance and capability, and to stay within your uncertainty assumptions.

  14. Medicaid Fraud Control Units (MFCU) Annual Spending and Performance Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — Medicaid Fraud Control Units (MFCU or Unit) investigate and prosecute Medicaid fraud as well as patient abuse and neglect in health care facilities. OIG certifies,...

  15. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic. While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data. Emphasizing probability as an alternative to Boolean

  16. Structure learning for Bayesian networks as models of biological networks.

    Science.gov (United States)

    Larjo, Antti; Shmulevich, Ilya; Lähdesmäki, Harri

    2013-01-01

    Bayesian networks are probabilistic graphical models suitable for modeling several kinds of biological systems. In many cases, the structure of a Bayesian network represents causal molecular mechanisms or statistical associations of the underlying system. Bayesian networks have been applied, for example, for inferring the structure of many biological networks from experimental data. We present some recent progress in learning the structure of static and dynamic Bayesian networks from data.

  17. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes' Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental economics, with careful controls for the confounding effects of risk aversion. Our results show that risk aversion significantly alters inferences on deviations from Bayes' Rule.
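
    For reference, the updating rule whose use these experiments test is, in odds form,

        \frac{P(H_1 \mid D)}{P(H_2 \mid D)} = \frac{P(D \mid H_1)}{P(D \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)},

    i.e., posterior odds equal prior odds multiplied by the likelihood ratio of the observed sample; deviations from this product rule are what the risk-aversion controls help to identify.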

  18. Reduced Bayesian Inversion

    OpenAIRE

    Himpe, Christian; Ohlberger, Mario

    2014-01-01

    Bayesian inversion of models with large state and parameter spaces proves to be computationally complex. A combined state and parameter reduction can significantly decrease the computational time and cost required for the parameter estimation. The presented technique is based on the well-known balanced truncation approach. Classically, the balancing of the controllability and observability gramians allows a truncation of discardable states. Here the underlying model, being a linear or nonline...

  19. Bayesian Analysis of Individual Level Personality Dynamics

    Directory of Open Access Journals (Sweden)

    Edward Cripps

    2016-07-01

    Full Text Available A Bayesian technique with analyses of within-person processes at the level of the individual is presented. The approach is used to examine whether the patterns of within-person responses on a 12-trial simulation task are consistent with the predictions of ITA theory (Dweck, 1999). ITA theory states that the performance of an individual with an entity theory of ability is more likely to spiral down following a failure experience than the performance of an individual with an incremental theory of ability. This is because entity theorists interpret failure experiences as evidence of a lack of ability, which they believe is largely innate and therefore relatively fixed; whilst incremental theorists believe in the malleability of abilities and interpret failure experiences as evidence of more controllable factors such as poor strategy or lack of effort. The results of our analyses support ITA theory at both the within- and between-person levels of analysis and demonstrate the benefits of Bayesian techniques for the analysis of within-person processes. These include more formal specification of the theory and the ability to draw inferences about each individual, which allows for more nuanced interpretations of individuals within a personality category, such as differences in the individual probabilities of spiralling. While Bayesian techniques have many potential advantages for the analyses of within-person processes at the individual level, ease of use is not one of them for psychologists trained in traditional frequentist statistical techniques.

  20. Application of Fragment Ion Information as Further Evidence in Probabilistic Compound Screening Using Bayesian Statistics and Machine Learning: A Leap Toward Automation.

    Science.gov (United States)

    Woldegebriel, Michael; Zomer, Paul; Mol, Hans G J; Vivó-Truyols, Gabriel

    2016-08-02

    In this work, we introduce an automated, efficient, and elegant model to combine all pieces of evidence (e.g., expected retention times, peak shapes, isotope distributions, fragment-to-parent ratio) obtained from liquid chromatography-tandem mass spectrometry (LC-MS/MS) data for screening purposes. Combining all these pieces of evidence requires a careful assessment of the uncertainties in the analytical system as well as all possible outcomes. To date, the majority of existing algorithms are highly dependent on user input parameters. Additionally, the screening process is tackled as a deterministic problem. In this work we present a Bayesian framework to deal with the combination of all these pieces of evidence. Contrary to conventional algorithms, the information is treated in a probabilistic way, and a final probability assessment of the presence/absence of a compound feature is computed. Additionally, all the necessary parameters except the chromatographic band broadening are learned from the data in the training and learning phase of the algorithm, avoiding the introduction of a large number of user-defined parameters. The proposed method was validated with a large data set and has shown improved sensitivity and specificity in comparison to a threshold-based commercial software package.
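
    A schematic Python sketch of this kind of probabilistic evidence combination, treating the pieces of evidence as conditionally independent given presence/absence (a naive-Bayes simplification of the paper's framework; the prior and likelihood ratios are invented):

        import math

        def posterior_presence(prior, likelihood_ratios):
            # Combine evidence on the log-odds scale; each likelihood ratio is
            # P(evidence | compound present) / P(evidence | compound absent).
            log_odds = math.log(prior / (1.0 - prior))
            log_odds += sum(math.log(lr) for lr in likelihood_ratios)
            return 1.0 / (1.0 + math.exp(-log_odds))

        # Hypothetical likelihood ratios for retention-time match, isotope
        # pattern, and fragment-to-parent-ratio evidence:
        print(posterior_presence(0.01, [30.0, 12.0, 8.0]))   # about 0.967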

  1. A semiparametric Wald statistic for testing logistic regression models based on case-control data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    We propose a semiparametric Wald statistic to test the validity of logistic regression models based on case-control data. The test statistic is constructed using a semiparametric ROC curve estimator and a nonparametric ROC curve estimator. The statistic has an asymptotic chi-squared distribution and is an alternative to the Kolmogorov-Smirnov-type statistic proposed by Qin and Zhang in 1997, the chi-squared-type statistic proposed by Zhang in 1999 and the information matrix test statistic proposed by Zhang in 2001. The statistic is easy to compute in the sense that it requires none of the following: using a bootstrap method to find its critical values, partitioning the sample data, or inverting a high-dimensional matrix. We present simulation results and analyses of two real examples. Moreover, we discuss how to extend our statistic to a family of statistics and how to construct its Kolmogorov-Smirnov counterpart.
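
    For orientation, a Wald statistic in its generic form measures the distance between an estimate and its hypothesized value on the scale of the estimate's variance,

        W = (\hat{\theta} - \theta_0)^{\top}\, \widehat{\mathrm{Var}}(\hat{\theta})^{-1}\, (\hat{\theta} - \theta_0) \xrightarrow{d} \chi^2_q,

    where q is the dimension of \theta; plausibly, the statistic above follows this recipe with the contrast between the semiparametric and nonparametric ROC curve estimators playing the role of the discrepancy \hat{\theta} - \theta_0.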

  2. Bayesian stable isotope mixing models

    Science.gov (United States)

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
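
    A minimal grid-based Python sketch of the simplest two-source Bayesian mixing model of the kind reviewed, with a uniform prior on the source proportion; the source signatures, noise level, and mixture data are invented:

        import numpy as np

        mu_a, mu_b, sd = -2.0, 6.0, 1.0                # hypothetical source signatures
        mix_obs = np.array([3.1, 2.5, 3.6, 2.9])       # hypothetical mixture values

        p_grid = np.linspace(0.0, 1.0, 1001)           # proportion from source A
        means = p_grid * mu_a + (1.0 - p_grid) * mu_b  # implied mixture mean per p

        # Gaussian log-likelihood of all observations at each grid value; with a
        # uniform prior, the normalized likelihood is the posterior.
        loglik = -0.5 * ((mix_obs[:, None] - means) ** 2 / sd**2).sum(axis=0)
        post = np.exp(loglik - loglik.max())
        post /= post.sum()

        print("posterior mean proportion of source A:", float((p_grid * post).sum()))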

  3. Statistical Data Mining for Efficient Quality Control in Manufacturing

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Knudsen, Torben Steen

    2015-01-01

    ...of the process (e.g. sensor measurements, machine readings, etc.), and the major contributors to these big data sets are the different quality control processes. In this article we present a methodology to extract valuable insight from manufacturing data. The proposed methodology is based on a comparison of probabilities...

  4. Statistical Process Control for Evaluating Contract Service at Army Installations

    Science.gov (United States)

    1990-09-01

    In addition to their usage in fault diagnosis and process improvement, process control methods are recommended for supporting acceptance

  5. Statistical Process Control Charts for Public Health Monitoring

    Science.gov (United States)

    2014-12-01

    ...Poisson counts) [21-23]. Cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control charts are often used with Phase II data. These charts have been shown to detect small changes more quickly than traditional Shewhart charts. There have been several applications of CUSUM charts in ... distribution, a CUSUM or EWMA chart would be required. Risk adjustment for health data has been applied when monitoring variables that can be
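
    A small Python sketch of the tabular one-sided CUSUM referred to above, tuned to detect an upward shift in a monitored rate; the reference value k and decision interval h are conventional textbook choices (k roughly half the shift of interest in standard-deviation units, h around 4-5), and the data are invented:

        def cusum_upper(x, target, sigma, k=0.5, h=4.0):
            # Tabular upper CUSUM on standardized observations: returns the
            # indices at which the chart signals an upward shift.
            s, signals = 0.0, []
            for i, xi in enumerate(x):
                z = (xi - target) / sigma
                s = max(0.0, s + z - k)      # accumulate deviations above target
                if s > h:
                    signals.append(i)
                    s = 0.0                  # restart after a signal (one convention)
            return signals

        weekly_counts = [10, 9, 11, 10, 12, 13, 14, 15, 16, 15]   # toy case counts
        print(cusum_upper(weekly_counts, target=10, sigma=2))     # -> [7, 9]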

  6. A Statistical Approach for Obtaining the Controlled Woven Fabric Width

    Directory of Open Access Journals (Sweden)

    Shaker Khubab

    2015-12-01

    Full Text Available A common problem faced in fabric manufacturing is the production of inconsistent fabric width on shuttleless looms in spite of the same fabric specifications. Weft-wise crimp controls the fabric width and it depends on a number of factors, including warp tension, temple type, fabric take-up pressing tension and loom working width. The aim of this study is to investigate the effect of these parameters on the fabric width produced. Taguchi’s orthogonal design was used to optimise the weaving parameters for obtaining controlled fabric width. On the basis of signal to noise ratios, it could be concluded that controlled fabric width could be produced using medium temple type and intense take-up pressing tension at relatively lower warp tension and smaller loom working width. The analysis of variance revealed that temple needle size was the most significant factor affecting the fabric width, followed by loom working width and warp tension, whereas take-up pressing tension was least significant of all the factors investigated in the study.

  7. Perception, illusions and Bayesian inference.

    Science.gov (United States)

    Nour, Matthew M; Nour, Joseph M

    2015-01-01

    Descriptive psychopathology makes a distinction between veridical perception and illusory perception. In both cases a perception is tied to a sensory stimulus, but in illusions the perception is of a false object. This article re-examines this distinction in light of new work in theoretical and computational neurobiology, which views all perception as a form of Bayesian statistical inference that combines sensory signals with prior expectations. Bayesian perceptual inference can solve the 'inverse optics' problem of veridical perception and provides a biologically plausible account of a number of illusory phenomena, suggesting that veridical and illusory perceptions are generated by precisely the same inferential mechanisms.

  8. Machining Error Control by Integrating Multivariate Statistical Process Control and Stream of Variations Methodology

    Institute of Scientific and Technical Information of China (English)

    WANG Pei; ZHANG Dinghua; LI Shan; CHEN Bing

    2012-01-01

    For aircraft manufacturing industries, the analysis and prediction of part machining error during the machining process are very important for controlling and improving part machining quality. In order to effectively control machining error, a method integrating multivariate statistical process control (MSPC) and stream of variations (SoV) is proposed. Firstly, machining error is modeled by multi-operation approaches for the part machining process. SoV is adopted to establish the mathematical model of the relationship between the error of upstream operations and the error of downstream operations. Here error sources include not only the influence of upstream operations but also many other error sources. The standard model and the predicted model of SoV are built, respectively, according to whether the operation has been done or not, to satisfy different requests during the part machining process. Secondly, the method of one-step-ahead forecast error (OSFE) is used to eliminate autocorrelation in the sample data from the SoV model, and the T² control chart in MSPC is built to realize machining error detection according to the data characteristics of the above error model, which can judge whether the operation is out of control or not. If it is, feedback is sent to the operations. The error model is modified by adjusting the out-of-control operation, and it continues to be used to monitor operations. Finally, a machining instance containing two operations demonstrates the effectiveness of the machining error control method presented in this paper.
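
    For the MSPC ingredient, a minimal Hotelling T² computation for individual multivariate observations is sketched below in Python; the chi-square control limit is a common large-sample approximation (exact Phase I/II limits use beta or F distributions), and the data are simulated for illustration.

        import numpy as np
        from scipy import stats

        def t2_chart(X, alpha=0.0027):
            # Hotelling T^2 for individual observations, with an approximate
            # chi-square upper control limit.
            X = np.asarray(X, dtype=float)
            d = X - X.mean(axis=0)
            S_inv = np.linalg.inv(np.cov(X, rowvar=False))
            t2 = np.einsum('ij,jk,ik->i', d, S_inv, d)   # (x-m)' S^-1 (x-m) per row
            ucl = stats.chi2.ppf(1.0 - alpha, df=X.shape[1])
            return t2, ucl, np.where(t2 > ucl)[0]

        rng = np.random.default_rng(2)
        X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=50)
        X[-1] = [4.0, -4.0]                              # inject an out-of-control point
        t2, ucl, out_of_control = t2_chart(X)
        print(round(ucl, 2), out_of_control)             # the last index should flag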

  9. GASP cloud encounter statistics - Implications for laminar flow control flight

    Science.gov (United States)

    Jasperson, W. H.; Nastrom, G. D.; Davis, R. E.; Holdeman, J. D.

    1984-01-01

    The cloud observation archive from the NASA Global Atmospheric Sampling Program (GASP) is analyzed in order to derive the probability of cloud encounter at altitudes normally flown by commercial airliners, for application to a determination of the feasibility of Laminar Flow Control (LFC) on long-range routes. The probability of cloud encounter is found to vary significantly with season. Several meteorological circulation features are apparent in the latitudinal distribution of cloud cover. The cloud encounter data are shown to be consistent with the classical midlatitude cyclone model with more clouds encountered in highs than in lows. Aircraft measurements of route-averaged time-in-clouds fit a gamma probability distribution model which is applied to estimate the probability of extended cloud encounter, and the associated loss of LFC effectiveness along seven high-density routes. The probability is demonstrated to be low.

  10. Use of statistical process control in evaluation of academic performance

    Directory of Open Access Journals (Sweden)

    Ezequiel Gibbon Gautério

    2014-05-01

    Full Text Available The aim of this article was to study some indicators of academic performance (number of students per class, dropout rate, failure rate and scores obtained by the students) in order to identify a pattern of behavior that would enable improvements to be implemented in the teaching-learning process. The sample was composed of five classes of undergraduate courses in Engineering. The data were collected over three years. Initially an exploratory analysis with analytical and graphical techniques was performed. An analysis of variance and Tukey's test investigated some sources of variability. This information was used in the construction of control charts. We found evidence that classes with more students are associated with higher failure rates and lower mean scores. Moreover, when the course came later in the curriculum, the students had higher scores. The results showed that, although some special causes interfering with the process were detected, it was possible to stabilize it and to monitor it.

  11. Multisensor-multitarget sensor management: a unified Bayesian approach

    Science.gov (United States)

    Mahler, Ronald P. S.

    2003-08-01

    Multisensor-multitarget sensor management is at root a problem in nonlinear control theory. This paper develops a potentially computationally tractable approximation of an earlier (1996) Bayesian control-theoretic foundation for sensor management based on "finite-set statistics" (FISST) and the Bayes recursive filter for the entire multisensor-multitarget system. I analyze possible Bayesian control-theoretic objective functions: Csiszar information-theoretic functionals (which generalize Kullback-Leibler discrimination) and "geometric" functionals. I show that some of these objective functions lead to potentially tractable sensor management algorithms when used in conjunction with MHC (multi-hypothesis correlator)-like algorithms. I also take this opportunity to comment on recent misrepresentations of FISST involving so-called "joint multitarget probabilities" (JMP).

  12. Statistical Control Charts: Performances of Short Term Stock Trading in Croatia

    Directory of Open Access Journals (Sweden)

    Dumičić Ksenija

    2015-03-01

    Full Text Available Background: The stock exchange, as a regulated financial market, reflects the economic development level of modern economies. The stock market indicates the mood of investors in the development of a country and is an important ingredient for growth. Objectives: This paper aims to introduce an additional statistical tool to support the decision-making process in stock trading, and it investigates the use of statistical process control (SPC) methods in the stock trading process. Methods/Approach: The individual (I), exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) control charts were used to generate trade signals. The open and average prices of CROBEX10 index stocks on the Zagreb Stock Exchange were used in the analysis. The capabilities of statistical control charts for short-run stock trading were analysed. Results: The statistical control chart analysis produced too many signals to buy or sell stocks, most of which are considered false alarms, so statistical control charts proved not to be very useful in stock trading or in portfolio analysis. Conclusions: The presence of non-normality and autocorrelation has a great impact on the performance of statistical control charts. It is assumed that if these two problems were solved, the use of statistical control charts in portfolio analysis could be greatly improved.

  13. Statistical applications for chemistry, manufacturing and controls (CMC) in the pharmaceutical industry

    CERN Document Server

    Burdick, Richard K; Pfahler, Lori B; Quiroz, Jorge; Sidor, Leslie; Vukovinsky, Kimberly; Zhang, Lanju

    2017-01-01

    This book examines statistical techniques that are critically important to Chemistry, Manufacturing, and Control (CMC) activities. Statistical methods are presented with a focus on applications unique to the CMC in the pharmaceutical industry. The target audience consists of statisticians and other scientists who are responsible for performing statistical analyses within a CMC environment. Basic statistical concepts are addressed in Chapter 2 followed by applications to specific topics related to development and manufacturing. The mathematical level assumes an elementary understanding of statistical methods. The ability to use Excel or statistical packages such as Minitab, JMP, SAS, or R will provide more value to the reader. The motivation for this book came from an American Association of Pharmaceutical Scientists (AAPS) short course on statistical methods applied to CMC applications presented by four of the authors. One of the course participants asked us for a good reference book, and the only book recomm...

  14. Towards Bayesian Inference of the Fast-Ion Distribution Function

    DEFF Research Database (Denmark)

    Stagner, L.; Heidbrink, W.W.; Salewski, Mirko

    2012-01-01

    ...However, when theory and experiment disagree (for one or more diagnostics), it is unclear how to proceed. Bayesian statistics provides a framework to infer the DF, quantify errors, and reconcile discrepant diagnostic measurements. Diagnostic errors and "weight functions" that describe the phase space sensitivity of the measurements are incorporated into Bayesian likelihood probabilities, while prior probabilities enforce physical constraints. As an initial step, this poster uses Bayesian statistics to infer the DIII-D electron density profile from multiple diagnostic measurements. Likelihood functions...

  15. Measurement of Work Processes Using Statistical Process Control: Instructor’s Manual

    Science.gov (United States)

    1987-03-01

  16. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  17. SU-E-T-144: Bayesian Inference of Local Relapse Data Using a Poisson-Based Tumour Control Probability Model

    Energy Technology Data Exchange (ETDEWEB)

    La Russa, D [The Ottawa Hospital Cancer Centre, Ottawa, ON (Canada)

    2015-06-15

    Purpose: The purpose of this project is to develop a robust method of parameter estimation for a Poisson-based TCP model using Bayesian inference. Methods: Bayesian inference was performed using the PyMC3 probabilistic programming framework written in Python. A Poisson-based TCP regression model that accounts for clonogen proliferation was fit to observed rates of local relapse as a function of equivalent dose in 2 Gy fractions for a population of 623 stage-I non-small-cell lung cancer patients. The Slice Markov Chain Monte Carlo sampling algorithm was used to sample the posterior distributions, and was initiated using the maximum of the posterior distributions found by optimization. The calculation of TCP with each sample step required integration over the free parameter α, which was performed using an adaptive 24-point Gauss-Legendre quadrature. Convergence was verified via inspection of the trace plot and posterior distribution for each of the fit parameters, as well as with comparisons of the most probable parameter values with their respective maximum likelihood estimates. Results: Posterior distributions for α, the standard deviation of α (σ), the average tumour cell-doubling time (Td), and the repopulation delay time (Tk) were generated assuming α/β = 10 Gy and a fixed clonogen density of 10^7 cm^-3. Posterior predictive plots generated from samples from these posterior distributions are in excellent agreement with the observed rates of local relapse used in the Bayesian inference. The most probable values of the model parameters also agree well with maximum likelihood estimates. Conclusion: A robust method of performing Bayesian inference of TCP data using a complex TCP model has been established.
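
    For orientation, a Poisson TCP model with clonogen proliferation of the general kind fitted above is commonly written as

        \mathrm{TCP} = \int \exp\left[ -\rho V \exp\left( -\alpha D \left(1 + \frac{d}{\alpha/\beta}\right) + \frac{\ln 2}{T_d}(T - T_k) \right) \right] \pi(\alpha)\, d\alpha,

    where ρV is the clonogen number, D the total dose delivered in fractions of size d, T the overall treatment time, and π(α) the (here normal, with standard deviation σ) distribution of radiosensitivity over which the calculation is marginalized. This generic form is given for context only and may differ in detail from the exact parameterization used in the abstract.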

  18. 77 FR 46096 - Statistical Process Controls for Blood Establishments; Public Workshop

    Science.gov (United States)

    2012-08-02

    ...-available basis beginning at 7:30 a.m. If you need special accommodations due to a disability, please... workshop is to educate participants on statistical process control theory and options for the...

  19. Experience in statistical quality control for road construction in South Africa

    CSIR Research Space (South Africa)

    Mitchell, MF

    1977-06-01

    Full Text Available The application of statistically oriented acceptance control procedures to a major road construction project is examined, and it is concluded that such procedures promise to be of benefit to both the client and the contractor....

  20. REGULATION AND ROLLING QUALITY CONTROL ON THE BASIS OF STATISTICAL METHODS

    Directory of Open Access Journals (Sweden)

    A. N. Polobovets

    2009-01-01

    Full Text Available It is shown that the introduction of statistical control methods will reduce the effort required to produce and deliver samples to the mechanical testing laboratory, and will reduce expenses as well.

  1. What Is the Probability You Are a Bayesian?

    Science.gov (United States)

    Wulff, Shaun S.; Robinson, Timothy J.

    2014-01-01

    Bayesian methodology continues to be widely used in statistical applications. As a result, it is increasingly important to introduce students to Bayesian thinking at early stages in their mathematics and statistics education. While many students in upper level probability courses can recite the differences in the Frequentist and Bayesian…

  2. Quality control of high-dose-rate brachytherapy: treatment delivery analysis using statistical process control.

    Science.gov (United States)

    Able, Charles M; Bright, Megan; Frizzell, Bart

    2013-03-01

    Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy. Copyright © 2013 Elsevier Inc. All rights reserved.

  3. Quality Control of High-Dose-Rate Brachytherapy: Treatment Delivery Analysis Using Statistical Process Control

    Energy Technology Data Exchange (ETDEWEB)

    Able, Charles M., E-mail: cable@wfubmc.edu [Department of Radiation Oncology, Wake Forest School of Medicine, Winston-Salem, North Carolina (United States); Bright, Megan; Frizzell, Bart [Department of Radiation Oncology, Wake Forest School of Medicine, Winston-Salem, North Carolina (United States)

    2013-03-01

    Purpose: Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. Methods and Materials: A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. Results: There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. Conclusions: SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy.
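
    A compact Python sketch of the individual (I) and moving-range (MR) process behavior charts used in these two studies, with the standard control-chart constants for a moving range of two points (2.66 = 3/d2 with d2 = 1.128, and D4 = 3.267); the dose readings are invented, and in practice the limits would be set from a baseline of accurately delivered treatments:

        import numpy as np

        def imr_limits(x):
            # Control limits for individuals (I) and moving-range (MR) charts.
            x = np.asarray(x, dtype=float)
            mr = np.abs(np.diff(x))                  # moving ranges of successive points
            mr_bar = mr.mean()
            i_limits = (x.mean() - 2.66 * mr_bar,    # I-chart lower control limit
                        x.mean() + 2.66 * mr_bar)    # I-chart upper control limit
            mr_ucl = 3.267 * mr_bar                  # MR-chart upper control limit
            return i_limits, mr_ucl, mr

        # Toy per-treatment dose readings at one dosimeter position (cGy);
        # the final low reading should breach both the I and MR limits.
        doses = [201, 199, 202, 198, 200, 201, 199, 185]
        (i_lo, i_hi), mr_ucl, mr = imr_limits(doses)
        print((round(i_lo, 1), round(i_hi, 1)), round(mr_ucl, 1), mr)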

  4. Development of nuclear power plant online monitoring system using statistical quality control

    Energy Technology Data Exchange (ETDEWEB)

    An, Sang Ha

    2006-02-15

    Statistical quality control techniques have been applied to many aspects of industrial engineering. An application to nuclear power plant maintenance and control is also presented that can greatly improve plant safety. As a demonstration of such an approach, a specific system is analyzed: the reactor coolant pumps (RCPs) and the fouling resistance of a heat exchanger. This research uses Shewhart X-bar and R charts, cumulative sum (CUSUM) charts, and the Sequential Probability Ratio Test (SPRT) to analyze the process for the state of statistical control, and a Control Chart Analyzer (CCA) was developed to support these analyses and to decide whether the process is in error. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with enough time to respond to possible emergency situations and thus improve plant safety and reliability.
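
    The SPRT mentioned above can be sketched in a few lines of Python; the thresholds follow Wald's classical approximations A = ln((1 − β)/α) and B = ln(β/(1 − α)), while the Gaussian hypotheses, error rates, and readings are illustrative assumptions:

        import math

        def sprt_gaussian(x, mu0, mu1, sigma, alpha=0.01, beta=0.01):
            # Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1, known sigma.
            upper = math.log((1 - beta) / alpha)     # accept H1 above this
            lower = math.log(beta / (1 - alpha))     # accept H0 below this
            llr = 0.0                                # cumulative log-likelihood ratio
            for i, xi in enumerate(x):
                llr += (mu1 - mu0) * (xi - (mu0 + mu1) / 2) / sigma**2
                if llr >= upper:
                    return "accept H1 (degraded)", i
                if llr <= lower:
                    return "accept H0 (normal)", i
            return "continue sampling", len(x) - 1

        # Toy drift in a monitored residual (e.g., fouling-resistance deviation).
        readings = [0.1, 0.3, 0.5, 0.6, 0.8, 0.9, 1.1, 1.2]
        print(sprt_gaussian(readings, mu0=0.0, mu1=1.0, sigma=0.5))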

  5. Parameter Control of Genetic Algorithms by Learning and Simulation of Bayesian Networks——A Case Study for the Optimal Ordering of Tables

    Institute of Scientific and Technical Information of China (English)

    Concha Bielza; Juan A.Fernández del Pozo; Pedro Larra(n)aga

    2013-01-01

    Parameter setting for evolutionary algorithms is still an important issue in evolutionary computation. There are two main approaches to parameter setting: parameter tuning and parameter control. In this paper, we introduce self-adaptive parameter control of a genetic algorithm based on Bayesian network learning and simulation. The nodes of this Bayesian network are genetic algorithm parameters to be controlled. Its structure captures probabilistic conditional (in)dependence relationships between the parameters. They are learned from the best individuals, i.e., the best configurations of the genetic algorithm. Individuals are evaluated by running the genetic algorithm for the respective parameter configuration. Since all these runs are time-consuming tasks, each genetic algorithm uses a small-sized population and is stopped before convergence. In this way promising individuals should not be lost. Experiments with an optimal search problem for simultaneous row and column orderings yield the same optima as state-of-the-art methods but with a sharp reduction in computational time. Moreover, our approach can cope with as yet unsolved high-dimensional problems.

  6. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: new chapter on Bayesian network classifiers; new section on object-oriented ...

  7. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2003-01-01

    As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and Bayesian network learning, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.

  8. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical model is the Bayesian network. The structural part of a Bayesian graphical model is a graph consisting of nodes... Their widespread use is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  9. Bayesian analysis for the social sciences

    CERN Document Server

    Jackman, Simon

    2009-01-01

    Bayesian methods are increasingly being used in the social sciences, as the problems encountered lend themselves so naturally to the subjective qualities of Bayesian methodology. This book provides an accessible introduction to Bayesian methods, tailored specifically for social science students. It contains lots of real examples from political science, psychology, sociology, and economics, exercises in all chapters, and detailed descriptions of all the key concepts, without assuming any background in statistics beyond a first course. It features examples of how to implement the methods using WinBUGS - the most-widely used Bayesian analysis software in the world - and R - an open-source statistical software. The book is supported by a Website featuring WinBUGS and R code, and data sets.

  10. Toward improved prediction of the bedrock depth underneath hillslopes: Bayesian inference of the bottom-up control hypothesis using high-resolution topographic data

    Science.gov (United States)

    Gomes, Guilherme J. C.; Vrugt, Jasper A.; Vargas, Eurípedes A.

    2016-04-01

    The depth to bedrock controls a myriad of processes by influencing subsurface flow paths, erosion rates, soil moisture, and water uptake by plant roots. As hillslope interiors are very difficult and costly to illuminate and access, the topography of the bedrock surface is largely unknown. This essay is concerned with the prediction of spatial patterns in the depth to bedrock (DTB) using high-resolution topographic data, numerical modeling, and Bayesian analysis. Our DTB model builds on the bottom-up control on fresh-bedrock topography hypothesis of Rempe and Dietrich (2014) and includes a mass movement and bedrock-valley morphology term to extend the usefulness and general applicability of the model. We reconcile the DTB model with field observations using Bayesian analysis with the DREAM algorithm. We investigate explicitly the benefits of using spatially distributed parameter values to account implicitly, and in a relatively simple way, for rock mass heterogeneities that are very difficult, if not impossible, to characterize adequately in the field. We illustrate our method using an artificial data set of bedrock depth observations and then evaluate our DTB model with real-world data collected at the Papagaio river basin in Rio de Janeiro, Brazil. Our results demonstrate that the DTB model accurately predicts the observed bedrock depth data. The posterior mean DTB simulation is shown to be in good agreement with the measured data. The posterior prediction uncertainty of the DTB model can be propagated forward through hydromechanical models to derive probabilistic estimates of factors of safety.
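
    The study reconciles its model with observations using the DREAM algorithm; as a stand-in, the sketch below runs a plain random-walk Metropolis sampler in R on a toy one-parameter model, purely to illustrate the Bayesian calibration step (the data, likelihood, and prior are placeholders, not the DTB model itself).

      set.seed(2)
      obs <- rnorm(30, mean = 5, sd = 1)          # stand-in "depth" observations
      logpost <- function(mu)                     # Gaussian likelihood + vague normal prior
        sum(dnorm(obs, mu, 1, log = TRUE)) + dnorm(mu, 0, 10, log = TRUE)
      n_iter <- 5000
      chain  <- numeric(n_iter)
      for (i in 2:n_iter) {
        prop   <- chain[i - 1] + rnorm(1, sd = 0.5)    # random-walk proposal
        accept <- log(runif(1)) < logpost(prop) - logpost(chain[i - 1])
        chain[i] <- if (accept) prop else chain[i - 1]
      }
      mean(chain[-(1:1000)])                      # posterior mean after burn-in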

  11. Applied Bayesian Hierarchical Methods

    CERN Document Server

    Congdon, Peter D

    2010-01-01

    Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.

  12. Initial investigation using statistical process control for quality control of accelerator beam steering

    Directory of Open Access Journals (Sweden)

    Able Charles M

    2011-12-01

    Background: This study seeks to increase clinical operational efficiency and accelerator beam consistency by retrospectively investigating the application of statistical process control (SPC) to linear accelerator beam steering parameters to determine the utility of such a methodology in detecting changes prior to equipment failure (interlocks actuated). Methods: Steering coil currents (SCC) for the transverse and radial planes are set such that a reproducibly useful photon or electron beam is available. SCC are sampled and stored in the control console computer each day during the morning warm-up. The transverse and radial positioning and angle SCC for photon beam energies were evaluated using average and range (Xbar-R) process control charts (PCC). The weekly average and range values (subgroup n = 5) for each steering coil were used to develop the PCC. SCC from September 2009 (annual calibration) until two weeks following a beam steering failure in June 2010 were evaluated. PCC limits were calculated using the first twenty subgroups. Appropriate action limits were developed using conventional SPC guidelines. Results: The PCC high-alarm action limit was set at 6 standard deviations from the mean. A value exceeding this limit would require beam scanning and evaluation by the physicist and engineer. Two low alarms were used to indicate negative trends. Alarms received following establishment of limits (week 20) are indicative of a non-random cause for deviation (Xbar chart) and/or an uncontrolled process (R chart). Transverse angle SCC for 6 MV and 15 MV indicated a high alarm 90 and 108 days prior to equipment failure, respectively. A downward trend in this parameter continued, with high alarm, until failure. Transverse position and radial angle SCC for 6 and 15 MV indicated low alarms starting as early as 124 and 116 days prior to failure, respectively. Conclusion: Radiotherapy clinical efficiency and accelerator beam consistency may be improved by ...
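
    A minimal R sketch of the Xbar-R limit calculation used above, with simulated steering-coil currents in place of the clinical data; A2 = 0.577, D3 = 0, and D4 = 2.114 are the standard chart constants for subgroups of n = 5.

      set.seed(3)
      scc  <- matrix(rnorm(20 * 5, mean = 1.50, sd = 0.02), ncol = 5)  # 20 weekly subgroups
      xbar <- rowMeans(scc)                        # subgroup averages
      rng  <- apply(scc, 1, function(w) diff(range(w)))   # subgroup ranges
      xbar_bar <- mean(xbar); r_bar <- mean(rng)
      c(lcl_x = xbar_bar - 0.577 * r_bar,          # Xbar chart limits
        ucl_x = xbar_bar + 0.577 * r_bar,
        ucl_r = 2.114 * r_bar)                     # R chart upper limit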

  13. A semiparametric Wald statistic for testing logistic regression models based on case-control data

    Institute of Scientific and Technical Information of China (English)

    WAN ShuWen

    2008-01-01

    We propose a semiparametric Wald statistic to test the validity of logistic regression models based on case-control data. The test statistic is constructed using a semiparametric ROC curve estimator and a nonparametric ROC curve estimator. The statistic has an asymptotic chi-squared distribution and is an alternative to the Kolmogorov-Smirnov-type statistic proposed by Qin and Zhang in 1997, the chi-squared-type statistic proposed by Zhang in 1999, and the information matrix test statistic proposed by Zhang in 2001. The statistic is easy to compute in the sense that it requires none of the following: using a bootstrap method to find its critical values, partitioning the sample data, or inverting a high-dimensional matrix. We present some results on simulation and on analysis of two real examples. Moreover, we discuss how to extend our statistic to a family of statistics and how to construct its Kolmogorov-Smirnov counterpart.

  14. Bayesian data analysis

    CERN Document Server

    Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B

    2013-01-01

    FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear ...

  15. A practical statistical quality control scheme for the industrial hygiene chemistry laboratory.

    Science.gov (United States)

    Burkart, J A; Eggenberger, L M; Nelson, J H; Nicholson, P R

    1984-06-01

    A computerized statistical quality control system has been developed for use in the industrial hygiene chemistry laboratory. The system is practical and sufficiently flexible to allow for multiple analytes, concentrations, replicate sizes, and sample types. The computerized system provides an immediate evaluation of the quality of analytical results and automatically produces simple but informative accuracy and precision quality control charts.

  16. [Application of multivariate statistical analysis and thinking in quality control of Chinese medicine].

    Science.gov (United States)

    Liu, Na; Li, Jun; Li, Bao-Guo

    2014-11-01

    The quality control of Chinese medicine has always been a hot spot and a difficulty in the development of traditional Chinese medicine (TCM), and it is one of the key problems restricting the modernization and internationalization of Chinese medicine. Multivariate statistical analysis is an analytical method well suited to the characteristics of TCM, and it has been used widely in the study of its quality control. Multivariate statistical analysis is applied to the multiple indicators and variables that appear in quality control studies and are correlated with one another, in order to uncover hidden laws or relationships in the data, which can then support decision-making and enable effective quality evaluation of TCM. In this paper, the application of multivariate statistical analysis in the quality control of Chinese medicine is summarized, providing a basis for further study.

  17. Bayesian analysis for kaon photoproduction

    Energy Technology Data Exchange (ETDEWEB)

    Marsainy, T., E-mail: tmart@fisika.ui.ac.id; Mart, T., E-mail: tmart@fisika.ui.ac.id [Department Fisika, FMIPA, Universitas Indonesia, Depok 16424 (Indonesia)

    2014-09-25

    We have investigated the contribution of nucleon resonances in the kaon photoproduction process by using an established statistical decision-making method, i.e., the Bayesian method. This method not only evaluates the model over its entire parameter space, but also takes the prior information and experimental data into account. The result indicates that certain resonances have larger probabilities to contribute to the process.

  18. Bayesian Causal Induction

    CERN Document Server

    Ortega, Pedro A

    2011-01-01

    Discovering causal relationships is a hard task, often hindered by the need for intervention, and often requiring large amounts of data to resolve statistical uncertainty. However, humans quickly arrive at useful causal relationships. One possible reason is that humans use strong prior knowledge; and rather than encoding hard causal relationships, they encode beliefs over causal structures, allowing for sound generalization from the observations they obtain from directly acting in the world. In this work we propose a Bayesian approach to causal induction which allows modeling beliefs over multiple causal hypotheses and predicting the behavior of the world under causal interventions. We then illustrate how this method extracts causal information from data containing interventions and observations.

  19. Management of Uncertainty by Statistical Process Control and a Genetic Tuned Fuzzy System

    Directory of Open Access Journals (Sweden)

    Stephan Birle

    2016-01-01

    In the food industry, bioprocesses like fermentation are often a crucial part of the manufacturing process and decisive for the final product quality. In general, they are characterized by highly nonlinear dynamics and uncertainties that make it difficult to control these processes with traditional control techniques. In this context, fuzzy logic controllers offer quite a straightforward way to control processes that are affected by nonlinear behavior and uncertain process knowledge. However, in order to maintain process safety and product quality, it is necessary to specify the controller performance and to tune the controller parameters. In this work, an approach is presented to establish an intelligent control system for oxidoreductive yeast propagation as a representative process biased by the aforementioned uncertainties. The presented approach is based on statistical process control and fuzzy logic feedback control. As the cognitive uncertainty among different experts about the limits that define the control performance as still acceptable may differ a lot, a data-driven design method is performed. Based upon a historic data pool, statistical process corridors are derived for the controller inputs, control error and change in control error. This approach follows the hypothesis that if the control performance criteria stay within predefined statistical boundaries, the final process state meets the required quality definition. In order to keep the process on its optimal growth trajectory (model-based reference trajectory), a fuzzy logic controller is used that adjusts the process temperature. Additionally, in order to stay within the process corridors, a genetic algorithm was applied to tune the input and output fuzzy sets of a preliminarily parameterized fuzzy controller. The presented experimental results show that the genetically tuned fuzzy controller is able to keep the process within its allowed limits. The average absolute error to the ...

  20. Bayesian structural equation modeling in sport and exercise psychology.

    Science.gov (United States)

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  1. Matched case-control studies: a review of reported statistical methodology

    Directory of Open Access Journals (Sweden)

    Niven DJ

    2012-04-01

    Daniel J Niven (1), Luc R Berthiaume (2), Gordon H Fick (1), Kevin B Laupland (1); (1) Department of Critical Care Medicine, Peter Lougheed Centre, Calgary; (2) Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada. Background: Case-control studies are a common and efficient means of studying rare diseases or illnesses with long latency periods. Matching of cases and controls is frequently employed to control the effects of known potential confounding variables. The analysis of matched data requires specific statistical methods. Methods: The objective of this study was to determine the proportion of published, peer-reviewed matched case-control studies that used statistical methods appropriate for matched data. Using a comprehensive set of search criteria, we identified 37 matched case-control studies for detailed analysis. Results: Among these 37 articles, only 16 studies (43%) were analyzed with proper statistical techniques. Studies that were properly analyzed were more likely to have included case patients with cancer and cardiovascular disease compared to those that did not use proper statistics (10/16 or 63%, versus 5/21 or 24%, P = 0.02). They were also more likely to have matched multiple controls for each case (14/16 or 88%, versus 13/21 or 62%, P = 0.08). In addition, studies with properly analyzed data were more likely to have been published in a journal with an impact factor listed in the top 100 according to the Journal Citation Reports index (12/16 or 69%, versus 1/21 or 5%, P ≤ 0.0001). Conclusion: The findings of this study raise concern that the majority of matched case-control studies report results that are derived from improper statistical analyses. This may lead to errors in estimating the relationship between a disease and exposure, as well as the incorrect adaptation of emerging medical literature. Keywords: case-control, matched, dependent data, statistics
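
    Conditional logistic regression is the standard proper analysis for matched data of this kind; a minimal R sketch using survival::clogit on simulated 1:1 matched pairs (variable names are hypothetical) is shown below.

      library(survival)
      set.seed(4)
      d <- data.frame(pair_id  = rep(1:50, each = 2),
                      case     = rep(c(1, 0), times = 50),   # one case, one control per pair
                      exposure = rbinom(100, 1, 0.4))
      fit <- clogit(case ~ exposure + strata(pair_id), data = d)
      summary(fit)    # odds ratio estimated within matched pairs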

  2. Statistical issues in quality control of proteomic analyses: good experimental design and planning.

    Science.gov (United States)

    Cairns, David A

    2011-03-01

    Quality control is becoming increasingly important in proteomic investigations as experiments become more multivariate and quantitative. Quality control applies to all stages of an investigation and statistics can play a key role. In this review, the role of statistical ideas in the design and planning of an investigation is described. This involves the design of unbiased experiments using key concepts from statistical experimental design, the understanding of the biological and analytical variation in a system using variance components analysis and the determination of a required sample size to perform a statistically powerful investigation. These concepts are described through simple examples and an example data set from a 2-D DIGE pilot experiment. Each of these concepts can prove useful in producing better and more reproducible data. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Statistical Control Charts: Performances of Short Term Stock Trading in Croatia

    OpenAIRE

    Dumičić Ksenija; Žmuk Berislav

    2015-01-01

    Background: The stock exchange, as a regulated financial market, in modern economies reflects their economic development level. The stock market indicates the mood of investors in the development of a country and is an important ingredient for growth. Objectives: This paper aims to introduce an additional statistical tool used to support the decision-making process in stock trading, and it investigates the use of statistical process control (SPC) methods in the stock trading process. Methods: ...

  4. Use of a programmable desk-top calculator for the statistical quality control of radioimmunoassays.

    Science.gov (United States)

    Cernosek, S F; Gutierrez-Cernosek, R M

    1978-07-01

    We have developed an interactive statistical quality-control system for the small- to medium-sized radioimmunoassay laboratory, which can be used in a programmable desk-top calculator instead of the medium- or large-scale computer systems usually required. The design of this quality-control system is modeled after the suggestions of Rodbard and has three components. The first component evaluates the relationship between the measured response variable of the radioimmunoassay and the precision (or variance) of these measurements. This derived relationship is then used in the second component of the system as the basis for the weighting function used to calculate an iterative, weighted, least squares regression of the logit-log transformation of the dose-response curve. The third component uses the quality-control parameters statistically calculated from the linearized dose-response curve to monitor whether the assay is "in control". The calculator tabulates the means and confidence limits for the various parameters and can plot the statistical quality-control charts. The major benefit of this statistical quality-control system is that it allows the real-time computation and plotting of quality-control data with a programmable desk-top calculator.
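
    A minimal R sketch of the logit-log linearization at the heart of the second component, fitted by weighted least squares on simulated standard-curve data; the weighting here is a crude surrogate for the response-error relationship estimated in the first component.

      dose <- c(0.1, 0.3, 1, 3, 10, 30)               # standards, arbitrary units
      b_b0 <- c(0.92, 0.81, 0.60, 0.38, 0.19, 0.08)   # illustrative bound fractions B/B0
      y <- log(b_b0 / (1 - b_b0))                     # logit of B/B0
      x <- log(dose)
      w <- (b_b0 * (1 - b_b0))^2                      # assumed precision weights
      fit <- lm(y ~ x, weights = w)                   # weighted linear dose-response fit
      coef(fit)                                       # intercept and slope of standard curve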

  5. Use of Approximate Bayesian Computation to Assess and Fit Models of Mycobacterium leprae to Predict Outcomes of the Brazilian Control Program.

    Directory of Open Access Journals (Sweden)

    Rebecca Lee Smith

    Hansen's disease (leprosy) elimination has proven difficult in several countries, including Brazil, and there is a need for a mathematical model that can predict control program efficacy. This study applied the Approximate Bayesian Computation algorithm to fit 6 different proposed models to each of the 5 regions of Brazil, then fitted hierarchical models based on the best-fit regional models to the entire country. The best model proposed for most regions was a simple model. Posterior checks found that the model results were more similar to the observed incidence after fitting than before, and that parameters varied slightly by region. Current control programs were predicted to require additional measures to eliminate Hansen's disease as a public health problem in Brazil.

  6. Bayesian approach to noninferiority trials for proportions.

    Science.gov (United States)

    Gamalo, Mark A; Wu, Rui; Tiwari, Ram C

    2011-09-01

    Noninferiority trials are unique because they are dependent upon historical information in order to make meaningful interpretation of their results. Hence, a direct application of the Bayesian paradigm in sequential learning becomes apparently useful in the analysis. This paper describes a Bayesian procedure for testing noninferiority in two-arm studies with a binary primary endpoint that allows the incorporation of historical data on an active control via the use of informative priors. In particular, the posteriors of the response in historical trials are assumed as priors for its corresponding parameters in the current trial, where that treatment serves as the active control. The Bayesian procedure includes a fully Bayesian method and two normal approximation methods on the prior and/or on the posterior distributions. Then a common Bayesian decision criterion is used but with two prespecified cutoff levels, one for the approximation methods and the other for the fully Bayesian method, to determine whether the experimental treatment is noninferior to the active control. This criterion is evaluated and compared with the frequentist method using simulation studies, in keeping with the regulatory framework that new methods must protect type I error and arrive at a similar conclusion with existing standard strategies. Results show that both methods arrive at comparable conclusions of noninferiority when applied to a modified real data set. The advantage of the proposed Bayesian approach lies in its ability to provide posterior probabilities for effect sizes of the experimental treatment over the active control.
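
    A minimal R sketch of the fully Bayesian variant, with hypothetical counts: the historical control posterior serves as an informative beta prior for the active control, and noninferiority is assessed from the posterior probability that the treatment difference exceeds the negative margin.

      set.seed(5)
      m <- 1e5; margin <- 0.10
      x_e <- 58; n_e <- 100                       # experimental arm (hypothetical)
      x_c <- 61; n_c <- 100                       # active control arm (hypothetical)
      a0 <- 120; b0 <- 80                         # informative prior from historical control data
      p_e <- rbeta(m, 1 + x_e, 1 + n_e - x_e)     # flat prior on experimental arm
      p_c <- rbeta(m, a0 + x_c, b0 + n_c - x_c)   # informative prior on control arm
      mean(p_e - p_c > -margin)                   # posterior probability of noninferiority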

  7. Evaluation of a partial genome screening of two asthma susceptibility regions using Bayesian network based Bayesian multilevel analysis of relevance.

    Directory of Open Access Journals (Sweden)

    Ildikó Ungvári

    Genetic studies indicate a high number of potential factors related to asthma. Based on earlier linkage analyses, we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods, and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). This method uses Bayesian network representation to provide detailed characterization of the relevance of factors, such as joint significance, the type of dependency, and multi-target aspects. We estimated posteriors for these relations within the Bayesian statistical framework, in order to assess whether a variable is directly relevant or its association is only mediated. With frequentist methods, one SNP (rs3751464 in the FRMD6 gene) provided evidence for an association with asthma (OR = 1.43 (1.2-1.8); p = 3×10^(-4)). The possible role of the FRMD6 gene in asthma was also confirmed in an animal model and in human asthmatics. In the BN-BMLA analysis, altogether 5 SNPs in 4 genes were found relevant in connection with the asthma phenotype: PRPF19 on chromosome 11, and FRMD6, PTGER2, and PTGDR on chromosome 14. In a subsequent step, a partial dataset containing rhinitis and further clinical parameters was used, which allowed the analysis of the relevance of SNPs for asthma and multiple targets. These analyses suggested that SNPs in the AHNAK and MS4A2 genes were indirectly associated with asthma. This paper indicates that BN-BMLA explores the relevant factors more comprehensively than traditional statistical methods and extends the scope of strong-relevance-based methods to include partial relevance, global characterization of relevance, and multi-target relevance.

  8. The Bayesian bridge between simple and universal kriging

    Energy Technology Data Exchange (ETDEWEB)

    Omre, H.; Halvorsen, K.B. (Norwegian Computing Center, Oslo (Norway))

    1989-10-01

    Kriging techniques are well suited for the evaluation of continuous, spatial phenomena. Bayesian statistics is characterized by using prior qualified guesses on the model parameters. By merging kriging techniques and Bayesian theory, prior guesses may be used in a spatial setting. Partial knowledge of model parameters defines a continuum of models between what are named simple and universal kriging in geostatistical terminology. The Bayesian approach to kriging is developed and discussed, and a case study concerning depth conversion of seismic reflection times is presented.

  9. A Bayesian Method for Short-Term Probabilistic Forecasting of Photovoltaic Generation in Smart Grid Operation and Control

    Directory of Open Access Journals (Sweden)

    Gabriella Ferruzzi

    2013-02-01

    A new short-term probabilistic forecasting method is proposed to predict the probability density function of the hourly active power generated by a photovoltaic system. First, the probability density function of the hourly clearness index is forecast using a Bayesian autoregressive time series model; the model takes into account the dependence of the solar radiation on meteorological variables, such as cloud cover and humidity. Then, a Monte Carlo simulation procedure is used to evaluate the predictive probability density function of the hourly active power by applying the photovoltaic system model to random samples from the clearness index distribution. A numerical application demonstrates the effectiveness and advantages of the proposed forecasting method.
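
    A minimal R sketch of the Monte Carlo step: a forecast clearness-index distribution (stubbed here as a truncated normal rather than the paper's Bayesian autoregressive predictive) is pushed through a toy PV power model to obtain a predictive distribution for hourly power; all numbers are assumed.

      set.seed(6)
      m  <- 1e4
      kt <- pmin(pmax(rnorm(m, 0.55, 0.12), 0), 1)   # sampled clearness index in [0, 1]
      g_extra <- 900                                 # extraterrestrial irradiance, W/m^2
      area <- 50; eff <- 0.15                        # array area (m^2) and efficiency
      p_ac <- kt * g_extra * area * eff / 1000       # toy PV model, kW
      quantile(p_ac, c(0.05, 0.50, 0.95))            # predictive distribution summary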

  10. Intraplate volcanism controlled by back-arc and continental structures in NE Asia inferred from transdimensional Bayesian ambient noise tomography

    Science.gov (United States)

    Kim, Seongryong; Tkalčić, Hrvoje; Rhie, Junkee; Chen, Youlin

    2016-08-01

    Intraplate volcanism adjacent to active continental margins is not simply explained by plate tectonics or plume interaction. Recent volcanoes in northeast (NE) Asia, including NE China and the Korean Peninsula, are characterized by heterogeneous tectonic structures and geochemical compositions. Here we apply a transdimensional Bayesian tomography to estimate high-resolution images of group and phase velocity variations (with periods between 8 and 70 s). The method provides robust estimations of velocity maps, and the reliability of results is tested through carefully designed synthetic recovery experiments. Our maps reveal two sublithospheric low-velocity anomalies that connect back-arc regions (in Japan and Ryukyu Trench) with current margins of continental lithosphere where the volcanoes are distributed. Combined with evidence from previous geochemical and geophysical studies, we argue that the volcanoes are related to the low-velocity structures associated with back-arc processes and preexisting continental lithosphere.

  11. Adaptive statistic tracking control based on two-step neural networks with time delays.

    Science.gov (United States)

    Yi, Yang; Guo, Lei; Wang, Hong

    2009-03-01

    This paper presents a new type of control framework for dynamical stochastic systems, called statistic tracking control (STC). The system considered is general and non-Gaussian, and the tracking objective is the statistical information of a given target probability density function (pdf), rather than a deterministic signal. The control aims at making the statistical information of the output pdfs follow that of a target pdf. For such a control framework, a variable-structure adaptive tracking control strategy is first established using two-step neural network models. Following the B-spline neural network approximation of the integrated performance function, the problem is transformed into the tracking of given weights. The dynamic neural network (DNN) is employed to identify the unknown nonlinear dynamics between the control input and the weights related to the integrated function. To achieve the required control objective, an adaptive controller based on the proposed DNN is developed so as to track a reference trajectory. Stability analysis for both the identification and tracking errors is developed via the use of the Lyapunov stability criterion. Simulations are given to demonstrate the efficiency of the proposed approach.

  12. Human-centered sensor-based Bayesian control: Increased energy efficiency and user satisfaction in commercial lighting

    Science.gov (United States)

    Granderson, Jessica Ann

    2007-12-01

    The need for sustainable, efficient energy systems is the motivation that drove this research, which targeted the design of an intelligent commercial lighting system. Lighting in commercial buildings consumes approximately 13% of all the electricity generated in the US. Advanced lighting controls intended for use in commercial office spaces have proven to save up to 45% in electricity consumption. However, they currently comprise only a fraction of the market share, resulting in a missed opportunity to conserve energy. The research goals driving this dissertation relate directly to barriers hindering widespread adoption: increase user satisfaction, and provide increased energy savings through more sophisticated control. To satisfy these goals, an influence diagram was developed to perform daylighting actuation. This algorithm was designed to balance the potentially conflicting lighting preferences of building occupants with the efficiency desires of building facilities management. A supervisory control policy was designed to implement load shedding under a demand response tariff. Such tariffs offer incentives for customers to reduce their consumption during periods of peak demand through price reductions. In developing the value function, occupant user testing was conducted to determine that computer and paper tasks require different illuminance levels, and that user preferences are sufficiently consistent to attain statistical significance. Approximately ten facilities managers were also interviewed and surveyed to isolate their lighting preferences with respect to measures of lighting quality and energy savings. Results from both simulation and physical implementation and user testing indicate that the intelligent controller can increase occupant satisfaction, efficiency, cost savings, and management satisfaction with respect to existing commercial daylighting systems. Several important contributions were realized by satisfying the research goals. A general ...

  13. Statistical methods for quality assurance basics, measurement, control, capability, and improvement

    CERN Document Server

    Vardeman, Stephen B

    2016-01-01

    This undergraduate statistical quality assurance textbook clearly shows with real projects, cases and data sets how statistical quality control tools are used in practice. Among the topics covered is a practical evaluation of measurement effectiveness for both continuous and discrete data. Gauge Reproducibility and Repeatability methodology (including confidence intervals for Repeatability, Reproducibility and the Gauge Capability Ratio) is thoroughly developed. Process capability indices and corresponding confidence intervals are also explained. In addition to process monitoring techniques, experimental design and analysis for process improvement are carefully presented. Factorial and Fractional Factorial arrangements of treatments and Response Surface methods are covered. Integrated throughout the book are rich sets of examples and problems that help readers gain a better understanding of where and how to apply statistical quality control tools. These large and realistic problem sets in combination with the...

  14. Determination Of The Wear Fault In Spur Gear System Using Statistical Process Control Method

    Directory of Open Access Journals (Sweden)

    Sinan Maraş

    2014-01-01

    Vibration analysis is one of the early warning methods widely used to obtain information about faults occurring in machine elements and structures. With this method, gear fault detection can be performed by analyzing vibration test results using signal processing, artificial intelligence, and statistical analysis methods. The objective of this study is to detect the existence of wear by examining changes in the vibrations of spur gears due to wear faults using statistical process control charts. In this study, artificial wear was created on the surfaces of spur gears for examination in a gear test rig. The gears were then mounted, and vibration data were recorded by operating the system under various loading and cycle-count conditions. Fault detection was demonstrated by analyzing data from undamaged and worn gears in statistical process control charts in real-time experimental studies.

  15. Methods and applications of statistics in engineering, quality control, and the physical sciences

    CERN Document Server

    Balakrishnan, N

    2011-01-01

    Inspired by the Encyclopedia of Statistical Sciences, Second Edition (ESS2e), this volume presents a concise, well-rounded focus on the statistical concepts and applications that are essential for understanding gathered data in the fields of engineering, quality control, and the physical sciences. The book successfully upholds the goals of ESS2e by combining both previously published and newly developed contributions written by over 100 leading academics, researchers, and practitioners in a comprehensive, approachable format. The result is a succinct reference that unveils modern, cutting-edge approaches to acquiring and analyzing data across diverse subject areas within these three disciplines, including operations research, chemistry, physics, the earth sciences, electrical engineering, and quality assurance. In addition, techniques related to survey methodology, computational statistics, and operations research are discussed, where applicable. Topics of coverage include: optimal and stochastic control, arti...

  16. COMPARISON OF STATISTICALLY CONTROLLED MACHINING SOLUTIONS OF TITANIUM ALLOYS USING USM

    Directory of Open Access Journals (Sweden)

    R. Singh

    2010-06-01

    The purpose of the present investigation is to compare statistically controlled machining solutions for titanium alloys using ultrasonic machining (USM). In this study, the previously developed Taguchi model for USM of titanium and its alloys has been investigated and compared. Relationships between the material removal rate, tool wear rate, surface roughness, and other controllable machining parameters (power rating, tool type, slurry concentration, slurry type, slurry temperature, and slurry size) have been deduced. The results of this study suggest that at the best settings of controllable machining parameters for titanium alloys (based upon the Taguchi design), the machining solution with USM is statistically controlled, which is not observed for other settings of input parameters on USM.

  17. Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis

    DEFF Research Database (Denmark)

    Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.

    2007-01-01

    The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant...

  18. Using Statistical Process Control Charts to Study Stuttering Frequency Variability during a Single Day

    Science.gov (United States)

    Karimi, Hamid; O'Brian, Sue; Onslow, Mark; Jones, Mark; Menzies, Ross; Packman, Ann

    2013-01-01

    Purpose: Stuttering varies between and within speaking situations. In this study, the authors used statistical process control charts with 10 case studies to investigate variability of stuttering frequency. Method: Participants were 10 adults who stutter. The authors counted the percentage of syllables stuttered (%SS) for segments of their speech…

  20. Mainstreaming Remedial Mathematics Students in Introductory Statistics: Results Using a Randomized Controlled Trial

    Science.gov (United States)

    Logue, Alexandra W.; Watanabe-Rose, Mari

    2014-01-01

    This study used a randomized controlled trial to determine whether students, assessed by their community colleges as needing an elementary algebra (remedial) mathematics course, could instead succeed at least as well in a college-level, credit-bearing introductory statistics course with extra support (a weekly workshop). Researchers randomly…

  1. Analyzing a Mature Software Inspection Process Using Statistical Process Control (SPC)

    Science.gov (United States)

    Barnard, Julie; Carleton, Anita; Stamper, Darrell E. (Technical Monitor)

    1999-01-01

    This paper presents a cooperative effort where the Software Engineering Institute and the Space Shuttle Onboard Software Project could experiment applying Statistical Process Control (SPC) analysis to inspection activities. The topics include: 1) SPC Collaboration Overview; 2) SPC Collaboration Approach and Results; and 3) Lessons Learned.

  2. Must a process be in statistical control before conducting designed experiments?

    NARCIS (Netherlands)

    Bisgaard, S.

    2008-01-01

    Fisher demonstrated three quarters of a century ago that the three key concepts of randomization, blocking, and replication make it possible to conduct experiments on processes that are not necessarily in a state of statistical control. However, even today there persists confusion about whether stat...

  4. Fieldcrest Cannon, Inc. Advanced Technical Preparation. Statistical Process Control (SPC). PRE-SPC I. Instructor Book.

    Science.gov (United States)

    Averitt, Sallie D.

    This instructor guide, which was developed for use in a manufacturing firm's advanced technical preparation program, contains the materials required to present a learning module that is designed to prepare trainees for the program's statistical process control module by improving their basic math skills and instructing them in basic calculator…

  5. Impact of Autocorrelation on Principal Components and Their Use in Statistical Process Control

    DEFF Research Database (Denmark)

    Vanhatalo, Erik; Kulahci, Murat

    2015-01-01

    A basic assumption when using principal component analysis (PCA) for inferential purposes, such as in statistical process control (SPC), is that the data are independent in time. In many industrial processes, frequent sampling and process dynamics make this assumption unrealistic, rendering sampled...

  6. Bayesian Source Separation and Localization

    CERN Document Server

    Knuth, K H

    1998-01-01

    The problem of mixed signals occurs in many different contexts; one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work, it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes...

  7. Bayesian inference on proportional elections.

    Directory of Open Access Journals (Sweden)

    Gabriel Hideki Vatanabe Brunello

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to answer the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian setting using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
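
    A minimal R sketch of one way such an inference can be set up (the paper's exact model is not reproduced here): a Dirichlet posterior over party vote shares is simulated by Monte Carlo and pushed through a d'Hondt-style seat allocation to estimate each party's probability of winning representation; the poll counts and seat total are hypothetical.

      set.seed(7)
      polls <- c(A = 420, B = 310, C = 180, D = 90)   # respondents preferring each party
      seats <- 10; m <- 5000
      rdirichlet <- function(alpha) { g <- rgamma(length(alpha), alpha); g / sum(g) }
      dhondt <- function(v, s) {                      # seats per party under d'Hondt
        q   <- outer(v, 1:s, "/")                     # quotients v/1, v/2, ..., v/s
        thr <- sort(q, decreasing = TRUE)[s]          # s-th largest quotient
        rowSums(q >= thr)
      }
      won <- replicate(m, dhondt(rdirichlet(1 + polls), seats) >= 1)
      setNames(rowMeans(won), names(polls))   # posterior probability of at least one seat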

  8. Software for Spatial Statistics

    Directory of Open Access Journals (Sweden)

    Edzer Pebesma

    2015-02-01

    Full Text Available We give an overview of the papers published in this special issue on spatial statistics, of the Journal of Statistical Software. 21 papers address issues covering visualization (micromaps, links to Google Maps or Google Earth, point pattern analysis, geostatistics, analysis of areal aggregated or lattice data, spatio-temporal statistics, Bayesian spatial statistics, and Laplace approximations. We also point to earlier publications in this journal on the same topic.

  10. Inductive Logic and Statistics

    NARCIS (Netherlands)

    Romeijn, J. -W.

    2009-01-01

    This chapter concerns inductive logic in relation to mathematical statistics. I start by introducing a general notion of probabilistic inductive inference. Then I introduce Carnapian inductive logic, and I show that it can be related to Bayesian statistical inference via de Finetti's representation...

  11. Variations on Bayesian Prediction and Inference

    Science.gov (United States)

    2016-05-09

    "Variations on Bayesian prediction and inference," Ryan Martin, Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago. ... using statistical ideas/methods. We recently learned that this new project will be supported, in part, by the National Science Foundation. ...

  12. spTimer: Spatio-Temporal Bayesian Modeling Using R

    Directory of Open Access Journals (Sweden)

    Khandoker Shuvo Bakar

    2015-02-01

    Hierarchical Bayesian modeling of large point-referenced space-time data is increasingly becoming feasible in many environmental applications due to the recent advances in both statistical methodology and computation power. Implementation of these methods using Markov chain Monte Carlo (MCMC) computational techniques, however, requires development of problem-specific and user-written computer code, possibly in a low-level language. This programming requirement is hindering the widespread use of Bayesian model-based methods among practitioners and, hence, there is an urgent need to develop high-level software that can analyze large data sets rich in both space and time. This paper develops the package spTimer for hierarchical Bayesian modeling of stylized environmental space-time monitoring data as a contributed software package in the R language, which is fast becoming a very popular statistical computing platform. The package is able to fit, and spatially and temporally predict, large amounts of space-time data using three recently developed Bayesian models. The user is given control over many options regarding covariance function selection, distance calculation, prior selection, and tuning of the implemented MCMC algorithms, although suitable defaults are provided. The package has many other attractive features, such as on-the-fly transformations and an ability to spatially predict temporally aggregated summaries on the original scale, which avoids the storage problem when using MCMC methods for large datasets. A simulation example, with more than a million observations, and a real-life data example are used to validate the underlying code and to illustrate the software capabilities.
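
    A hypothetical usage sketch: the call pattern follows the package's published examples as we recall them, but 'mydata' and the covariate names below are invented and should be adapted to a real space-time data frame after checking the package manual.

      library(spTimer)
      ## Fit a Gaussian-process space-time model by Gibbs sampling (assumed interface)
      fit <- spT.Gibbs(formula = y ~ temperature + wind,
                       data    = mydata,
                       model   = "GP",
                       coords  = ~ longitude + latitude)
      summary(fit)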

  13. Using Alien Coins to Test Whether Simple Inference Is Bayesian

    Science.gov (United States)

    Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.

    2016-01-01

    Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…

  14. Bayesian Games with Intentions

    Directory of Open Access Journals (Sweden)

    Adam Bjorndahl

    2016-06-01

    We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

  15. Doing bayesian data analysis a tutorial with R and BUGS

    CERN Document Server

    Kruschke, John K

    2011-01-01

    There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis obtainable to a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all ...
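
    In the spirit of the book's opening examples, here is a minimal beta-binomial update in base R (illustrative numbers, no BUGS required): the conjugate posterior is available in closed form, so a grid or sampler is only needed for more complex models.

      z <- 14; n <- 20                           # 14 heads in 20 flips
      a <- 2;  b <- 2                            # beta(2, 2) prior on the bias
      theta <- seq(0, 1, length.out = 1001)
      post  <- dbeta(theta, a + z, b + n - z)    # conjugate posterior density on a grid
      qbeta(c(0.025, 0.975), a + z, b + n - z)   # 95% posterior interval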

  16. Statistical-Based Joint Power Control for Wireless Ad Hoc CDMA Networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Shu; RONG Mongtian; CHEN Bo

    2005-01-01

    Current power control algorithms for CDMA-based ad hoc networks rely on SIR and interference measurements, which are based on historical information. However, an important statistical property of the traffic in today's and future networks is burstiness. As a consequence, the interference at a given receiving node may fluctuate dramatically, so the convergence of power control is slow and performance degrades. This paper presents a joint power control model. For a given receiving node, all transmitting nodes assigned to the same time slot adjust their transmitter power based on current information, which takes into account the adjustments of transmitter power of the other transmitting nodes. Based on the joint power control model, this paper proposes a statistical-based power control algorithm. With this new algorithm, the interference is estimated more accurately. The simulation results indicate that the proposed power control algorithm outperforms the old algorithm.

  17. Bayesian modeling in conjoint analysis

    Directory of Open Access Journals (Sweden)

    Janković-Milić Vesna

    2010-01-01

    Statistical analysis in marketing is largely influenced by the availability of various types of data. There has been a sudden increase in the number and types of information available to market researchers in the last decade. In such conditions, traditional statistical methods have limited ability to solve problems related to the expression of market uncertainty. The aim of this paper is to highlight the advantages of Bayesian inference as an alternative approach to classical inference. Multivariate statistical methods offer extremely powerful tools to achieve many goals of marketing research. One of these methods is conjoint analysis, which provides a quantitative measure of the relative importance of product or service attributes in relation to the other attributes. The application of this method involves interviewing consumers, where they express their preferences, and statistical analysis provides numerical indicators of each attribute's utility. One of the main objections to the discrete choice method in conjoint analysis is that it estimates utility only at the aggregate level, expressing the average utility for all respondents in the survey. The application of hierarchical Bayesian models enables capturing individual utility ratings for each attribute level.

  18. Approach to the Correlation Discovery of Chinese Linguistic Parameters Based on Bayesian Method

    Institute of Scientific and Technical Information of China (English)

    WANG Wei (王玮); CAI LianHong (蔡莲红)

    2003-01-01

    The Bayesian approach is an important method in statistics. The Bayesian belief network is a powerful knowledge representation and reasoning tool under conditions of uncertainty. It is a graphical model that encodes probabilistic relationships among variables of interest. In this paper, an approach to Bayesian network construction is given for discovering relationships among Chinese linguistic parameters in a corpus.

  19. Empirical Bayesian significance measure of neuronal spike response.

    Science.gov (United States)

    Oba, Shigeyuki; Nakae, Ken; Ikegaya, Yuji; Aki, Shunsuke; Yoshimoto, Junichiro; Ishii, Shin

    2016-05-21

    Functional connectivity analyses of multiple neurons provide a powerful bottom-up approach to reveal functions of local neuronal circuits by using simultaneous recording of neuronal activity. A statistical methodology, generalized linear modeling (GLM) of the spike response function, is one of the most promising methodologies to reduce false link discoveries arising from pseudo-correlation based on common inputs. Although recent advancement of fluorescent imaging techniques has increased the number of simultaneously recorded neurons to the hundreds or thousands, the amount of information per pair of neurons has not correspondingly increased, partly because of the instruments' limitations, and partly because the number of neuron pairs increases in a quadratic manner. Consequently, the estimation of the GLM suffers from large statistical uncertainty caused by the shortage of effective information. In this study, we propose a new combination of GLM and empirical Bayesian testing for the estimation of spike response functions that enables both conservative false discovery control and powerful functional connectivity detection. We compared our proposed method's performance with those of sparse estimation of GLM and classical Granger causality testing. Our method achieved high detection performance of functional connectivity with conservative estimation of the false discovery rate and q values in cases of information shortage due to short observation time. We also showed that empirical Bayesian testing on arbitrary statistics in place of likelihood-ratio statistics reduces the computational cost without decreasing the detection performance. When our proposed method was applied to a functional multi-neuron calcium imaging dataset from the rat hippocampal region, we found significant functional connections that are possibly mediated by AMPA and NMDA receptors. The proposed empirical Bayesian testing framework with GLM is promising, especially when the amount of information per ...

  20. Evaluation of logistic Bayesian LASSO for identifying association with rare haplotypes.

    Science.gov (United States)

    Biswas, Swati; Papachristou, Charalampos

    2014-01-01

    It has been hypothesized that rare variants may hold the key to unraveling the genetic transmission mechanism of many common complex traits. Currently, there is a dearth of statistical methods that are powerful enough to detect association with rare haplotypes. One of the recently proposed methods is logistic Bayesian LASSO for case-control data. By penalizing the regression coefficients through appropriate priors, logistic Bayesian LASSO weeds out the unassociated haplotypes, making it possible for the associated rare haplotypes to be detected with higher power. We used the Genetic Analysis Workshop 18 simulated data to evaluate the behavior of logistic Bayesian LASSO in terms of its power and type I error under a complex disease model. We obtained knowledge of the simulation model, including the locations of the functional variants, and we chose to focus on two genomic regions in the MAP4 gene on chromosome 3. The sample size was 142 individuals and there were 200 replicates. Despite the small sample size, logistic Bayesian LASSO showed high power to detect two haplotypes containing functional variants in these regions while maintaining a low type I error rate. At the same time, a commonly used approach for haplotype association implemented in the software hapassoc failed to converge because of the presence of rare haplotypes. Thus, we conclude that logistic Bayesian LASSO can play an important role in the search for rare haplotypes.
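
    The penalization described above corresponds to double-exponential (Laplace) priors on the haplotype effects. A minimal sketch in Python/PyMC under invented data (a haplotype count matrix and case-control labels), not the authors' software:

    ```python
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(2)
    n, n_hap = 142, 10
    X = rng.integers(0, 3, size=(n, n_hap)).astype(float)  # haplotype counts (invented)
    y = rng.integers(0, 2, size=n)                          # 1 = case, 0 = control

    with pm.Model() as logistic_bayesian_lasso:
        b0 = pm.Normal("b0", 0.0, 5.0)
        # Laplace priors shrink unassociated haplotype effects toward zero,
        # letting associated rare haplotypes stand out.
        beta = pm.Laplace("beta", mu=0.0, b=0.5, shape=n_hap)
        pm.Bernoulli("y", logit_p=b0 + pm.math.dot(X, beta), observed=y)
        idata = pm.sample(1000, tune=1000)
    ```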

  1. Bayesian astrostatistics: a backward look to the future

    CERN Document Server

    Loredo, Thomas J

    2012-01-01

    This perspective chapter briefly surveys: (1) past growth in the use of Bayesian methods in astrophysics; (2) current misconceptions about both frequentist and Bayesian statistical inference that hinder wider adoption of Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian modeling as a major future direction for research in Bayesian astrostatistics, exemplified in part by presentations at the first ISI invited session on astrostatistics, commemorated in this volume. It closes with an intentionally provocative recommendation for astronomical survey data reporting, motivated by the multilevel Bayesian perspective on modeling cosmic populations: that astronomers cease producing catalogs of estimated fluxes and other source properties from surveys. Instead, summaries of likelihood functions (or marginal likelihood functions) for source properties should be reported (not posterior probability density functions), including nontrivial summaries (not simply upper limits) for candidate objects ...

  2. An introduction to using Bayesian linear regression with clinical data.

    Science.gov (United States)

    Baldwin, Scott A; Larson, Michael J

    2017-11-01

    Statistical training in psychology focuses on frequentist methods. Bayesian methods are an alternative to standard frequentist methods. This article provides researchers with an introduction to fundamental ideas in Bayesian modeling. We use data from an electroencephalogram (EEG) and anxiety study to illustrate Bayesian models. Specifically, the models examine the relationship between error-related negativity (ERN), a particular event-related potential, and trait anxiety. Methodological topics covered include: how to set up a regression model in a Bayesian framework, specifying priors, examining convergence of the model, visualizing and interpreting posterior distributions, interval estimates, expected and predicted values, and model comparison tools. We also discuss situations where Bayesian methods can outperform frequentist methods, as well as how to specify more complicated regression models. Finally, we conclude with recommendations about reporting guidelines for those using Bayesian methods in their own research. We provide data and R code for replicating our analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
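
    A minimal sketch of the kind of model the article walks through (regressing ERN on trait anxiety), written here in Python/PyMC rather than the authors' R code; the data are simulated placeholders:

    ```python
    import arviz as az
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(3)
    anxiety = rng.normal(size=100)               # standardized trait anxiety (placeholder)
    ern = -0.4 * anxiety + rng.normal(size=100)  # ERN amplitude (placeholder)

    with pm.Model() as bayes_lr:
        alpha = pm.Normal("alpha", 0.0, 1.0)     # weakly informative priors
        beta = pm.Normal("beta", 0.0, 1.0)
        sigma = pm.HalfNormal("sigma", 1.0)
        pm.Normal("obs", mu=alpha + beta * anxiety, sigma=sigma, observed=ern)
        idata = pm.sample(1000, tune=1000)

    print(az.summary(idata))  # posterior means, credible intervals, r_hat convergence
    ```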

  3. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    Science.gov (United States)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  4. Bayesian Modeling of a Human MMORPG Player

    CERN Document Server

    Synnaeve, Gabriel

    2010-01-01

    This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and selecting a target in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions.

  5. Bayesian Modeling of a Human MMORPG Player

    Science.gov (United States)

    Synnaeve, Gabriel; Bessière, Pierre

    2011-03-01

    This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and selecting a target in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions.

  6. Capabilities of Statistical Residual-Based Control Charts in Short- and Long-Term Stock Trading

    Directory of Open Access Journals (Sweden)

    Žmuk Berislav

    2016-03-01

    Full Text Available The aim of this paper is to introduce and develop additional statistical tools to support the decision-making process in stock trading. The prices of CROBEX10 index stocks on the Zagreb Stock Exchange were used in the paper. The trading simulations conducted, based on residual-based control charts, led to an investor's profit in 67.92% of cases. In the short run, the residual-based cumulative sum (CUSUM) control chart led to the highest portfolio profits. In the long run, when average stock prices were used and 2-sigma control limits were set, the residual-based exponentially weighted moving average (EWMA) control chart had the highest portfolio profit. In all other cases in the long run, the CUSUM control chart appeared to be the best choice. The acknowledgment that SPC methods can be successfully used in stock trading will, hopefully, increase their use in this field.
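
    For reference, a minimal sketch of the two residual-based charts compared in the paper, applied to residuals from a fitted price model. The residual series is simulated, and the chart parameters (k, h, lam) are common textbook choices rather than the paper's settings; the 2-sigma EWMA limit follows the long-run setup described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    resid = rng.normal(0.0, 1.0, 250)  # residuals from a fitted stock-price model
    resid[150:] += 1.0                 # simulated upward level shift

    # Tabular CUSUM with reference value k and decision interval h (sigma units).
    k, h = 0.5, 5.0
    cusum = np.zeros_like(resid)
    for t in range(1, len(resid)):
        cusum[t] = max(0.0, cusum[t - 1] + resid[t] - k)
    print("CUSUM signals at t =", int(np.argmax(cusum > h)))

    # EWMA chart with smoothing constant lam and 2-sigma control limits.
    lam = 0.2
    ewma = np.zeros_like(resid)
    for t in range(1, len(resid)):
        ewma[t] = lam * resid[t] + (1 - lam) * ewma[t - 1]
    limit = 2.0 * np.sqrt(lam / (2.0 - lam))  # asymptotic EWMA standard deviation
    print("EWMA signals at t =", int(np.argmax(np.abs(ewma) > limit)))
    ```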

  7. ANALYSIS OF OIL LOSSES IN CRUDE PALM OIL (CPO) USING THE STATISTICAL PROCESS CONTROL METHOD

    Directory of Open Access Journals (Sweden)

    Vera Devani

    2014-06-01

    Full Text Available PKS “XYZ” is a company engaged in palm oil processing. Its products are Crude Palm Oil (CPO) and Palm Kernel Oil (PKO). The aim of this study is to analyze oil losses and their contributing factors using the Statistical Process Control method. Statistical Process Control is a set of strategies, techniques, and actions taken by an organization to ensure that it produces quality products or provides quality services. The CPO oil-loss samples studied were empty fruit bunches, nuts, fiber, and final sludge. Based on the I-MR control charts, it can be concluded that all four types of CPO oil losses were within the control limits and consistent. Meanwhile, the Cpk value of total oil losses was outside the process-average control limits; this means that the CPO produced has met customer requirements, with total oil losses below the maximum limit set by the company, namely 1.65%.
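
    A minimal sketch of the I-MR chart limits and the capability index used in the study, computed with the standard formulas; the loss measurements are simulated, and the 1.65% upper limit is the company specification quoted above.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    loss = rng.normal(1.2, 0.1, 30)  # total oil losses per batch, in % (simulated)

    # Individuals (I) chart: center line +/- 2.66 * mean moving range.
    mr_bar = np.abs(np.diff(loss)).mean()
    xbar = loss.mean()
    print(f"I-chart limits: [{xbar - 2.66 * mr_bar:.3f}, {xbar + 2.66 * mr_bar:.3f}]")

    # One-sided Cpk against the upper specification of 1.65% total losses.
    sigma_hat = mr_bar / 1.128       # d2 = 1.128 for moving ranges of size 2
    cpk = (1.65 - xbar) / (3.0 * sigma_hat)
    print(f"Cpk = {cpk:.2f}")
    ```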

  8. Bayesian second law of thermodynamics.

    Science.gov (United States)

    Bartolotta, Anthony; Carroll, Sean M; Leichenauer, Stefan; Pollack, Jason

    2016-08-01

    We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρ_{m},ρ)+〈Q〉_{F|m}≥0, where ΔH(ρ_{m},ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_{m} and 〈Q〉_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.

  9. Bayesian second law of thermodynamics

    Science.gov (United States)

    Bartolotta, Anthony; Carroll, Sean M.; Leichenauer, Stefan; Pollack, Jason

    2016-08-01

    We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρm,ρ) + 〈Q〉F|m ≥ 0, where ΔH(ρm,ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρm and 〈Q〉F|m is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.

  10. On-the-Fly Massively Multitemporal Change Detection Using Statistical Quality Control Charts and Landsat Data

    OpenAIRE

    2014-01-01

    One challenge to implementing spectral change detection algorithms using multitemporal Landsat data is that key dates and periods are often missing from the record due to weather disturbances and lapses in continuous coverage. This paper presents a method that utilizes residuals from harmonic regression over years of Landsat data, in conjunction with statistical quality control charts, to signal subtle disturbances in vegetative cover. These charts are able to detect changes from both defores...

  11. Multivariate Statistical Process Control and Case-Based Reasoning for situation assessment of Sequencing Batch Reactors

    OpenAIRE

    Ruiz Ordóñez, Magda Liliana

    2008-01-01

    ABSTRACT: This thesis focuses on the monitoring, fault detection and diagnosis of Wastewater Treatment Plants (WWTP), which are important fields of research for a wide range of engineering disciplines. The main objective is to evaluate and apply a novel artificial intelligence methodology based on situation assessment for monitoring and diagnosis of Sequencing Batch Reactor (SBR) operation. To this end, Multivariate Statistical Process Control (MSPC) in combination with Case-Based Reasoning (CBR)...

  12. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  13. Pitch Motion Stabilization by Propeller Speed Control Using Statistical Controller Design

    DEFF Research Database (Denmark)

    Nakatani, Toshihiko; Blanke, Mogens; Galeazzi, Roberto

    2006-01-01

    This paper describes a dynamics analysis of a small training boat and the possibility of ship pitch stabilization by control of propeller speed. After upgrading the navigational system of an actual small training boat, in order to identify the model of the ship, the real data collected by sea trials...

  14. A Bayesian analysis of the chromosome architecture of human disorders by integrating reductionist data

    OpenAIRE

    Emmert-Streib, Frank; de Matos Simoes, Ricardo; Tripathi, Shailesh; Glazko, Galina V.; Dehmer, Matthias

    2012-01-01

    In this paper, we present a Bayesian approach to estimate a chromosome and a disorder network from the Online Mendelian Inheritance in Man (OMIM) database. In contrast to other approaches, we obtain statistical rather than deterministic networks, enabling parametric control of the uncertainty in the underlying disorder-disease gene associations contained in the OMIM, on which the networks are based. From a structural investigation of the chromosome network, we identify three chromosome subgrou...

  15. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  16. Neural network classification - A Bayesian interpretation

    Science.gov (United States)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.

  17. Automatic Thesaurus Construction Using Bayesian Networks.

    Science.gov (United States)

    Park, Young C.; Choi, Key-Sun

    1996-01-01

    Discusses automatic thesaurus construction and characterizes the statistical behavior of terms by using an inference network. Highlights include low-frequency terms and data sparseness, Bayesian networks, collocation maps and term similarity, constructing a thesaurus from a collocation map, and experiments with test collections. (Author/LRW)

  18. Adaptive bayesian analysis for binomial proportions

    CSIR Research Space (South Africa)

    Das, Sonali

    2008-10-01

    Full Text Available The authors consider the problem of statistical inference of binomial proportions for non-matched, correlated samples, under the Bayesian framework. Such inference can arise when the same group is observed at a different number of times with the aim...
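
    For the simplest building block of such an analysis, a Beta prior is conjugate to the binomial likelihood, so the posterior of a single proportion is available in closed form. A minimal sketch with invented counts (the paper's correlated-samples setting is more involved):

    ```python
    from scipy import stats

    # Beta(a0, b0) prior updated with s successes out of n trials.
    a0, b0 = 1.0, 1.0   # uniform prior on the proportion
    s, n = 37, 120
    posterior = stats.beta(a0 + s, b0 + n - s)

    print("posterior mean:", posterior.mean())
    print("95% credible interval:", posterior.ppf([0.025, 0.975]))
    ```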

  19. Approximation for Bayesian Ability Estimation.

    Science.gov (United States)

    1987-02-18

    [OCR-garbled excerpt; only fragments are recoverable.] The posterior pdfs of the ability and item parameters are given by equations (4) and (5). As shown in Tsutakawa and Lin, the marginal posterior pdf of the ability parameter is obtained, under regularity conditions, using the inverse Hessian of the log-posterior evaluated at its mode. Recoverable references: Journal of Educational Statistics, 11, 33-56; Lindley, D.V. (1980). Approximate Bayesian methods. Trabajos de Estadística, 31.

  20. The role of control groups in mutagenicity studies: matching biological and statistical relevance.

    Science.gov (United States)

    Hauschke, Dieter; Hothorn, Torsten; Schäfer, Juliane

    2003-06-01

    The statistical test of the conventional hypothesis of "no treatment effect" is commonly used in the evaluation of mutagenicity experiments. Failing to reject the hypothesis often leads to the conclusion in favour of safety. The major drawback of this indirect approach is that what is controlled by a prespecified level alpha is the probability of erroneously concluding hazard (producer risk). However, the primary concern of safety assessment is the control of the consumer risk, i.e. limiting the probability of erroneously concluding that a product is safe. In order to restrict this risk, safety has to be formulated as the alternative, and hazard, i.e. the opposite, has to be formulated as the hypothesis. The direct safety approach is examined for the case when the corresponding threshold value is expressed either as a fraction of the population mean for the negative control, or as a fraction of the difference between the positive and negative controls.
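
    A minimal numeric sketch of the inversion the authors advocate: formulate hazard as the null hypothesis (treated mean at least a fraction f above the negative control) and safety as the alternative, so that the prespecified level alpha bounds the consumer risk. The counts and the threshold fraction are invented, and rescaling the control sample is a crude approximation noted in the comments.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    neg_ctrl = rng.normal(100.0, 10.0, 20)  # negative-control mutation counts (invented)
    treated = rng.normal(104.0, 10.0, 20)   # treated-group counts (invented)
    f = 0.25                                # safety threshold: 25% above the control mean

    # Direct safety approach: H0 (hazard) mu_treated >= (1 + f) * mu_neg,
    #                         H1 (safe)   mu_treated <  (1 + f) * mu_neg.
    # Crude sketch via a one-sided Welch test against the rescaled control sample;
    # note that rescaling also inflates the control variance, which a proper
    # ratio-based test would handle explicitly.
    res = stats.ttest_ind(treated, (1 + f) * neg_ctrl, alternative="less", equal_var=False)
    print(f"t = {res.statistic:.2f}, one-sided p = {res.pvalue:.4f}")
    print("conclude 'safe'" if res.pvalue < 0.05 else "cannot conclude safety")
    ```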

  1. Proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology

    CERN Document Server

    Hortúa, Héctor J

    2014-01-01

    These are the proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology, held in Bogotá D.C., Colombia, June 9-13, 2014. The first astrostatistics school was the first event in Colombia where statisticians and cosmologists from several universities in Bogotá met to discuss the statistical methods applied to cosmology, especially the use of Bayesian statistics in the study of the Cosmic Microwave Background (CMB), Baryonic Acoustic Oscillations (BAO), Large Scale Structure (LSS) and weak lensing.

  2. Construction of a Bayesian Network with the Bayesian Association Rule Mining Network Algorithm

    OpenAIRE

    Octavian

    2015-01-01

    In recent years, the Bayesian Network has become a popular concept used in many areas of life, such as making decisions and determining the probability that an event can occur. Unfortunately, constructing the structure of a Bayesian Network is itself not a simple matter. This study therefore introduces the Bayesian Association Rule Mining Network algorithm to make it easier to construct a Bayesian Network based on data ...

  3. Statistical Inference: The Big Picture.

    Science.gov (United States)

    Kass, Robert E

    2011-02-01

    Statistics has moved beyond the frequentist-Bayesian controversies of the past. Where does this leave our ability to interpret results? I suggest that a philosophy compatible with statistical practice, labelled here statistical pragmatism, serves as a foundation for inference. Statistical pragmatism is inclusive and emphasizes the assumptions that connect statistical models with observed data. I argue that introductory courses often mis-characterize the process of statistical inference and I propose an alternative "big picture" depiction.

  4. ARBITRATION CORRECTNESS OF STATISTICAL QUALITY CONTROL PROCEDURES ACCORDING TO PRIORITY DISTRIBUTION PRINCIPLE

    Directory of Open Access Journals (Sweden)

    V. A. Khavruk

    2010-01-01

    Full Text Available Application of the priority distribution principle (PDP) makes it possible to establish the rights and obligations of the parties while executing statistical quality control of products. Quality control procedures on the side of the manufacturer and of the consumer, carried out according to the PDP, are accompanied by certain decisions taken by the parties, which in turn are components of the arbitration characteristic. Arbitration characteristics show how the probability that an arbitration situation occurs depends on quality indices, the parameters of the control plans, and the decision-making rules. The paper considers cases in which the control procedures of the supplier and the consumer are correct. It also shows that unified control plans and decision-making rules do not in general guarantee the correctness of double control procedures, and reveals that an intervention of standardization bodies, whose cost varies with the choice of control plans and results, is required in order to ensure the correctness of the control procedures of the parties.

  5. Machine learning a Bayesian and optimization perspective

    CERN Document Server

    Theodoridis, Sergios

    2015-01-01

    This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches, which rely on optimization techniques, as well as Bayesian inference, which is based on a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as shor...

  6. IMPROVING QUALITY OF STATISTICAL PROCESS CONTROL BY DEALING WITH NON‐NORMAL DATA IN AUTOMOTIVE INDUSTRY

    Directory of Open Access Journals (Sweden)

    Zuzana ANDRÁSSYOVÁ

    2012-07-01

    Full Text Available The study deals with an analysis of data intended to improve the quality of statistical tools in the assembly processes of automobile seats. Normal distribution of variables is one of the unavoidable conditions for the analysis, examination, and improvement of manufacturing processes (e.g., manufacturing process capability), although there are increasingly more approaches to handling non-normal data. An appropriate probability distribution of the measured data is first tested by the goodness of fit of the empirical distribution to the theoretical normal distribution, on the basis of hypothesis testing using the programme StatGraphics Centurion XV.II. Data are collected from the assembly process of first-row automobile seats for each quality characteristic (Safety Regulation, S/R individually. The study closely processes the measured data of an airbag assembly and aims to obtain normally distributed data and apply statistical process control to them. The results of the contribution conclude in a rejection of the null hypothesis (the measured variables do not follow a normal distribution; it is therefore necessary to begin work on data transformation, supported by Minitab 15. Even this approach does not yield normally distributed data, and so a procedure should be proposed that leads to quality output of the whole statistical control of manufacturing processes.
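
    A minimal sketch of the normality check and the transformation step described above, using SciPy in place of the StatGraphics/Minitab workflow; the measurements are simulated.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    x = rng.lognormal(mean=0.5, sigma=0.4, size=100)  # skewed measurements (simulated)

    # Goodness of fit of the empirical distribution to the normal distribution.
    _, p = stats.shapiro(x)
    print(f"Shapiro-Wilk p = {p:.4f} -> {'normal' if p > 0.05 else 'non-normal'}")

    # Box-Cox transformation toward normality (requires positive data).
    x_bc, lam = stats.boxcox(x)
    _, p_bc = stats.shapiro(x_bc)
    print(f"lambda = {lam:.2f}, transformed p = {p_bc:.4f}")
    ```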

  7. "APEC Blue" association with emission control and meteorological conditions detected by multi-scale statistics

    Science.gov (United States)

    Wang, Ping; Dai, Xin-Gang

    2016-09-01

    The term "APEC Blue" was created to describe the clear-sky days seen around the Asia-Pacific Economic Cooperation (APEC) summit held in Beijing during November 5-11, 2014. The duration of the APEC Blue is detected as November 1 to November 14 (hereafter the Blue Window) by a moving t-test. Observations show that APEC Blue corresponds to low air pollution with respect to PM2.5, PM10, SO2, and NO2 under the strict emission-control measures (ECMs) implemented in Beijing and the surrounding areas. Quantitative assessment shows that the ECMs were more effective at reducing aerosols than the chemical constituents. Statistical investigation reveals that the window also resulted from intensified wind variability, as well as weakened static stability of the atmosphere (SSA). The wind and the ECMs played key roles in reducing air pollution during November 1-7 and 11-13, while strict ECMs and weak SSA became dominant during November 7-10 under a weak wind environment. Moving correlation shows that the emission reduction for aerosols can increase the apparent wind cleanup effect, leading to significant negative correlations between them, and the period-wise changes in emission rate can be well identified by multi-scale correlations based on wavelet decomposition. In short, this case study shows statistically how human interference modified air quality in the megacity by controlling local and surrounding emissions in association with meteorological conditions.

  8. A Statistical Quality Control System in a Medium-Scale Weaving Mill: I. Control of Fabric Defects

    Directory of Open Access Journals (Sweden)

    Özlem DÜLGEROĞLU KISAOĞLU

    2010-03-01

    Full Text Available The establishment of a control system in a medium-scale weaving mill using statistical process control techniques was studied, with the expectation that this control system would also serve as a model for other weaving mills. To determine production parameters and control them according to a definite sampling plan, loom stoppages were investigated together with their causes, and defects on the fabric were detected during in-process inspection while the looms were running. For this purpose, three different kinds of cord fabric and a poplin fabric were observed. The analysis of loom stoppages gives information about problems present in the yarns, in weaving preparation, and in the weaving process, and particularly indicates yarn quality, weaving machine adjustments, and weaving preparation processes. The evaluation of woven fabric quality showed that the rate of defects originating from the yarn is considerably higher than that of defects originating from weaving preparation and the weaving process.

  9. A Statistical Quality Control System in a Medium-Scale Weaving Mill: II. Control of Loom Stoppages

    Directory of Open Access Journals (Sweden)

    Özlem DÜLGEROĞLU KISAOĞLU

    2010-03-01

    Full Text Available The establishment of a control system in a medium-scale weaving mill using statistical process control techniques was studied, with the expectation that this control system would also serve as a model for other weaving mills. To determine production parameters and control them according to a definite sampling plan, loom stoppages were investigated together with their causes, and defects on the fabric were detected during in-process inspection while the looms were running. For this purpose, three different kinds of cord fabric and a poplin fabric were observed. The analysis of loom stoppages gives information about problems present in the yarns, in weaving preparation, and in the weaving process, and particularly indicates yarn quality, weaving machine adjustments, and weaving preparation processes. The evaluation of woven fabric quality showed that the rate of defects originating from the yarn is considerably higher than that of defects originating from weaving preparation and the weaving process.

  10. X control charts for robust processes (Statistical control of robust processes)

    Directory of Open Access Journals (Sweden)

    Antonio Fernando Branco Costa

    1998-12-01

    Full Text Available Technological development has made production processes increasingly robust (robust in the sense of producing ever more uniform items) by reducing the variability between items produced on a large scale. In this context, small changes in the process can be critical and must therefore be eliminated quickly. Traditionally, the CUSUM and moving-mean charts are used to control such processes. The aim of this paper is to show that the Shewhart chart can also be used to control robust processes, provided that the constraint of fixed design parameters is relaxed.

  11. One approach in using multivariate statistical process control in analyzing cheese quality

    Directory of Open Access Journals (Sweden)

    Ilija Djekic

    2015-05-01

    Full Text Available The objective of this paper was to investigate the possibility of using multivariate statistical process control in analyzing cheese quality parameters. Two cheese types (white brined cheese and soft cheese made from ultra-filtered milk) were selected and analyzed for several quality parameters, such as dry matter, milk fat, protein content, pH, NaCl, fat in dry matter, and moisture in non-fat solids. The results showed significant variations for most of the quality characteristics examined between the two types of cheese. The only stable parameter in both types of cheese was moisture in non-fat solids; all the other quality characteristics fell above or below the control limits for most of the samples. Such results indicate high instability and variation within cheese production. Although the use of statistical process control is not mandatory in the dairy industry, it might provide benefits to organizations in improving the quality control of dairy products.

  12. Valid statistical inference methods for a case-control study with missing data.

    Science.gov (United States)

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2016-05-19

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.

  13. Ultrasonic attenuation spectroscopy for multivariate statistical process control in nanomaterial processing

    Institute of Scientific and Technical Information of China (English)

    Bundit Boonkhao; Xue Z. Wang

    2012-01-01

    Ultrasonic attenuation spectroscopy (UAS) is an attractive process analytical technology (PAT) for on-line real-time characterisation of slurries for particle size distribution (PSD) estimation. It is, however, only applicable to relatively low solid concentrations, since existing instrument process models still cannot fully take into account the phenomena of particle-particle interaction and multiple scattering, leading to errors in PSD estimation. This paper investigates an alternative use of the raw attenuation spectra for direct multivariate statistical process control (MSPC). The UAS raw spectra were processed using principal component analysis. The selected principal components were used to derive two MSPC statistics, the Hotelling's T2 and the square prediction error (SPE). The method is illustrated and demonstrated by reference to a wet milling process for processing nanoparticles.
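
    A minimal sketch of the two monitoring statistics named above: project each new spectrum onto a PCA model built from in-control spectra, compute Hotelling's T2 in the retained-score space, and compute the square prediction error (SPE) in the residual space. The dimensions and data are illustrative, and the control limits are left out.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(8)
    X_train = rng.normal(size=(200, 50))  # in-control attenuation spectra (simulated)
    x_new = rng.normal(size=50)           # incoming spectrum to monitor

    pca = PCA(n_components=3).fit(X_train)
    t = pca.transform(x_new.reshape(1, -1))[0]  # scores of the new spectrum

    # Hotelling's T2: scores weighted by the inverse variance of each component.
    T2 = float(np.sum(t**2 / pca.explained_variance_))

    # SPE (Q statistic): squared reconstruction error outside the PCA subspace.
    x_hat = pca.inverse_transform(t.reshape(1, -1))[0]
    SPE = float(np.sum((x_new - x_hat) ** 2))

    print(f"T2 = {T2:.2f}, SPE = {SPE:.2f}")  # compare against control limits
    ```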

  14. Model Diagnostics for Bayesian Networks

    Science.gov (United States)

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of works on assessing fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess fit of simple Bayesian networks. A…

  15. Communicating population health statistics through graphs: a randomised controlled trial of graph design interventions

    Directory of Open Access Journals (Sweden)

    Macdonald Robin

    2006-12-01

    Full Text Available Abstract Background Australian epidemiologists have recognised that lay readers have difficulty understanding statistical graphs in reports on population health. This study aimed to provide evidence for graph design improvements that increase comprehension by non-experts. Methods This was a double-blind, randomised, controlled trial of graph-design interventions, conducted as a postal survey. Control and intervention participants were randomly selected from telephone directories of health system employees. Eligible participants were on duty at the listed location during the study period. Controls received a booklet of 12 graphs from original publications, and intervention participants received a booklet of the same graphs with design modifications. A questionnaire with 39 interpretation tasks was included with the booklet. Interventions were assessed using the ratio of the prevalence of correct responses given by the intervention group to those given by the control group for each task. Results The response rate from 543 eligible participants (261 intervention and 282 control) was 67%. The prevalence of correct answers in the control group ranged from 13% for a task requiring knowledge of an acronym to 97% for a task identifying the largest category in a pie chart. Interventions producing the greatest improvement in comprehension were: changing a pie chart to a bar graph (3.6-fold increase in correct point reading), changing the y axis of a graph so that the upward direction represented an increase (2.9-fold increase in correct judgement of trend direction), a footnote to explain an acronym (2.5-fold increase in knowledge of the acronym), and matching the y axis range of two adjacent graphs (two-fold increase in correct comparison of the relative difference in prevalence between two population subgroups). Conclusion Profound population health messages can be lost through use of overly technical language and unfamiliar statistical measures. In our

  16. A statistical methodology to improve accuracy in differentiating schizophrenia patients from healthy controls.

    Science.gov (United States)

    Peters, Rosalind M; Gjini, Klevest; Templin, Thomas N; Boutros, Nash N

    2014-05-30

    We present a methodology to statistically discriminate among univariate and multivariate indices to improve accuracy in differentiating schizophrenia patients from healthy controls. Electroencephalogram data from 71 subjects (37 controls/34 patients) were analyzed. Data included P300 event-related response amplitudes and latencies as well as amplitudes and sensory gating indices derived from the P50, N100, and P200 auditory-evoked responses resulting in 20 indices analyzed. Receiver operator characteristic (ROC) curve analyses identified significant univariate indices; these underwent principal component analysis (PCA). Logistic regression of PCA components created a multivariate composite used in the final ROC. Eleven univariate ROCs were significant with area under the curve (AUC) >0.50. PCA of these indices resulted in a three-factor solution accounting for 76.96% of the variance. The first factor was defined primarily by P200 and P300 amplitudes, the second by P50 ratio and difference scores, and the third by P300 latency. ROC analysis using the logistic regression composite resulted in an AUC of 0.793 (0.06), p<0.001 (CI=0.685-0.901). A composite score of 0.456 had a sensitivity of 0.829 (correctly identifying schizophrenia patients) and a specificity of 0.703 (correctly identifying healthy controls). Results demonstrated the usefulness of combined statistical techniques in creating a multivariate composite that improves diagnostic accuracy.
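
    A minimal sketch of the reported pipeline (univariate screening by ROC, PCA of the significant indices, a logistic-regression composite, and a final ROC), with simulated EEG indices standing in for the real data; in practice the composite should be validated out of sample.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(9)
    n, n_idx = 71, 11
    X = rng.normal(size=(n, n_idx))  # significant univariate indices (simulated)
    y = rng.integers(0, 2, size=n)   # 1 = patient, 0 = control

    # Reduce the screened indices to a few components, then form a composite score.
    scores = PCA(n_components=3).fit_transform(X)
    composite = LogisticRegression().fit(scores, y).predict_proba(scores)[:, 1]

    print("AUC of composite:", round(roc_auc_score(y, composite), 3))
    ```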

  17. Bilingualism and inhibitory control influence statistical learning of novel word forms

    Directory of Open Access Journals (Sweden)

    James Bartolotti

    2011-11-01

    Full Text Available We examined the influence of bilingual experience and inhibitory control on the ability to learn a novel language. Using a statistical learning paradigm, participants learned words in two novel languages that were based on the International Morse Code. First, participants listened to a continuous stream of words in a Morse code language to test their ability to segment words from continuous speech. Since Morse code does not overlap in form with natural languages, interference from known languages was low. Next, participants listened to another Morse code language composed of new words that conflicted with the first Morse code language. Interference in this second language was high due to conflict between languages and due to the presence of two colliding cues (compressed pauses between words and statistical regularities that competed to define word boundaries. Results suggest that bilingual experience can improve word learning when interference from other languages is low, while inhibitory control ability can improve word learning when interference from other languages is high. We conclude that the ability to extract novel words from continuous speech is a skill that is affected both by linguistic factors, such as bilingual experience, and by cognitive abilities, such as inhibitory control.

  18. Using statistical process control to make data-based clinical decisions.

    Science.gov (United States)

    Pfadt, A; Wheeler, D J

    1995-01-01

    Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior, as well as the course of treatment, to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability, based on the writings of Walter Shewhart and W. Edwards Deming in the area of industrial quality control, helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.

  19. Bayesian Estimation of Thermonuclear Reaction Rates

    CERN Document Server

    Iliadis, Christian; Coc, Alain; Timmes, Frank; Starrfield, Sumner

    2016-01-01

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied in the past to this problem, all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extra-solar planets, gravitational waves, and type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present the first astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the d(p,$\\gamma$)$^3$He, $^3$He($^3$He,2p)$^4$He, and $^3$He($\\alpha$,$\\gamma$)$^7$Be reactions,...

  20. Bayesian Estimation of Thermonuclear Reaction Rates

    Science.gov (United States)

    Iliadis, C.; Anderson, K. S.; Coc, A.; Timmes, F. X.; Starrfield, S.

    2016-11-01

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)3He, 3He(3He,2p)4He, and 3He(α,γ)7Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.

  1. Impact analysis of critical success factors on the benefits from statistical process control implementation

    Directory of Open Access Journals (Sweden)

    Fabiano Rodrigues Soriano

    Full Text Available Abstract Statistical Process Control (SPC) is a set of statistical techniques focused on process control, monitoring, and analyzing causes of variation in quality characteristics and/or in the parameters used to control and improve processes. Implementing SPC in organizations is a complex task. The reasons for its failure are related to organizational or social factors, such as lack of top management commitment and little understanding of its potential benefits, as well as to technical factors, such as lack of training on and understanding of the statistical techniques. The main aim of the present article is to understand the interrelations between conditioning factors associated with top management commitment (Support), SPC Training, and Application, as well as the relationships between these factors and the benefits associated with implementing the program. Partial Least Squares Structural Equation Modeling (PLS-SEM) was used in the analysis, since the main goal is to establish causal relations. A cross-sectional survey was used to collect information from a sample of Brazilian auto-parts companies, selected according to guides from the auto-parts industry associations. A total of 170 companies were contacted by e-mail and by phone and invited to participate in the survey; 93 agreed to participate, and only 43 answered the questionnaire. The results showed that senior management support considerably affects the way companies develop their training programs; in turn, these trainings affect the way companies apply the techniques, and this is reflected in the benefits obtained from implementing the program. It was observed that the managerial and technical aspects are closely connected to each other and are represented by the relation between top management support and training. The technical aspects observed through SPC

  2. Bayesian data analysis for newcomers.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2017-04-12

    This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.

  3. A hybrid Bayesian hierarchical model combining cohort and case-control studies for meta-analysis of diagnostic tests: Accounting for partial verification bias.

    Science.gov (United States)

    Ma, Xiaoye; Chen, Yong; Cole, Stephen R; Chu, Haitao

    2014-05-26

    To account for between-study heterogeneity in meta-analysis of diagnostic accuracy studies, bivariate random effects models have been recommended to jointly model the sensitivities and specificities. As study design and population vary, the definition of disease status or severity could differ across studies. Consequently, sensitivity and specificity may be correlated with disease prevalence. To account for this dependence, a trivariate random effects model had been proposed. However, the proposed approach can only include cohort studies with information estimating study-specific disease prevalence. In addition, some diagnostic accuracy studies only select a subset of samples to be verified by the reference test. It is known that ignoring unverified subjects may lead to partial verification bias in the estimation of prevalence, sensitivities, and specificities in a single study. However, the impact of this bias on a meta-analysis has not been investigated. In this paper, we propose a novel hybrid Bayesian hierarchical model combining cohort and case-control studies and correcting partial verification bias at the same time. We investigate the performance of the proposed methods through a set of simulation studies. Two case studies on assessing the diagnostic accuracy of gadolinium-enhanced magnetic resonance imaging in detecting lymph node metastases and of adrenal fluorine-18 fluorodeoxyglucose positron emission tomography in characterizing adrenal masses are presented.

  4. Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.

    Science.gov (United States)

    Westgard, James O; Westgard, Sten A

    2017-03-01

    Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications.
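
    For concreteness, the sigma-metric mentioned above is conventionally computed from the allowable total error (TEa), the observed bias, and the imprecision (CV) at a medical decision concentration. A minimal sketch with invented values:

    ```python
    def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
        """Sigma = (TEa - |bias|) / CV, all in percent at the decision level."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Invented example: TEa = 10%, bias = 1.5%, CV = 2.0%  ->  sigma = 4.25
    print(sigma_metric(10.0, 1.5, 2.0))
    ```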

  5. Monitoring Actuarial Present Values of Term Life Insurance By a Statistical Process Control Chart

    Science.gov (United States)

    Hafidz Omar, M.

    2015-06-01

    Tracking the performance of a life insurance policy, or a similar insurance product, using a standard statistical process control chart is complex because of many factors. In this work, we present the difficulties involved in doing so. With some modifications of the SPC charting framework, however, the difficulty becomes manageable for actuaries. We therefore propose monitoring a simpler but natural actuarial quantity of the kind typically found in recursion formulas for reserves, profit testing, and present values. We share some simulation results for the monitoring process, and some advantages of this approach are also discussed.

  6. Using statistical process control methodology to improve the safe operating envelope

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, A.D.; Lunney, B.P.; McIntyre, C.M. [Atlantic Nuclear Services Ltd. (ANSL), Fredericton, New Brunswick (Canada); Prime, D.R. [New Brunswick Power Nuclear (NBPN), Lepreau, New Brunswick (Canada)

    2009-07-01

    Failure limits used to assess impairments from Operating Manual Tests (OMT) are often established using licensing limits from safety analysis. While these determine that licensing conditions are not violated, they do not provide pro-active indications of problems developing with system components. This paper discusses statistical process control (SPC) methods to define action limits useful in diagnosing system component problems prior to reaching impairment limits. Using data from a specific OMT, an example of one such application is provided. Application of SPC limits can provide an improvement to station operating economics through early detection of abnormal equipment behaviour. (author)

  7. Preliminary Retrospective Analysis of Daily Tomotherapy Output Constancy Checks Using Statistical Process Control.

    Directory of Open Access Journals (Sweden)

    Emilio Mezzenga

    Full Text Available The purpose of this study was to retrospectively evaluate the results from a Helical TomoTherapy Hi-Art treatment system relating to quality controls based on daily static and dynamic output checks using statistical process control methods. Individual value X-charts, exponentially weighted moving average charts, and process capability and acceptability indices were used to monitor the treatment system performance. Daily output values measured from January 2014 to January 2015 were considered. The results obtained showed that, although the process was in control, there was an out-of-control situation in the principal maintenance intervention for the treatment system. In particular, process capability indices showed a decreasing percentage of points in control which was, however, acceptable according to AAPM TG148 guidelines. Our findings underline the importance of restricting the acceptable range of daily output checks and suggest a future line of investigation for a detailed process control of daily output checks for the Helical TomoTherapy Hi-Art treatment system.

  8. When mechanism matters: Bayesian forecasting using models of ecological diffusion

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.; Russell, Robin E.; Walsh, Daniel P.; Powell, James A.

    2017-01-01

    Ecological diffusion is a theory that can be used to understand and forecast spatio-temporal processes such as dispersal, invasion, and the spread of disease. Hierarchical Bayesian modelling provides a framework to make statistical inference and probabilistic forecasts, using mechanistic ecological models. To illustrate, we show how hierarchical Bayesian models of ecological diffusion can be implemented for large data sets that are distributed densely across space and time. The hierarchical Bayesian approach is used to understand and forecast the growth and geographic spread in the prevalence of chronic wasting disease in white-tailed deer (Odocoileus virginianus). We compare statistical inference and forecasts from our hierarchical Bayesian model to phenomenological regression-based methods that are commonly used to analyse spatial occurrence data. The mechanistic statistical model based on ecological diffusion led to important ecological insights, obviated a commonly ignored type of collinearity, and was the most accurate method for forecasting.

  9. Bayesian psychometric scaling

    NARCIS (Netherlands)

    Fox, G.J.A.; Berg, van den S.M.; Veldkamp, B.P.; Irwing, P.; Booth, T.; Hughes, D.

    2015-01-01

    In educational and psychological studies, psychometric methods are involved in the measurement of constructs, and in constructing and validating measurement instruments. Assessment results are typically used to measure student proficiency levels and test characteristics. Recently, Bayesian item resp

  10. Bayesian psychometric scaling

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; van den Berg, Stéphanie Martine; Veldkamp, Bernard P.; Irwing, P.; Booth, T.; Hughes, D.

    2015-01-01

    In educational and psychological studies, psychometric methods are involved in the measurement of constructs, and in constructing and validating measurement instruments. Assessment results are typically used to measure student proficiency levels and test characteristics. Recently, Bayesian item

  11. Computationally efficient Bayesian inference for inverse problems.

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.

    2007-10-01

    Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.

  12. Bayesian networks in educational assessment

    CERN Document Server

    Almond, Russell G; Steinberg, Linda S; Yan, Duanli; Williamson, David M

    2015-01-01

    Bayesian inference networks, a synthesis of statistics and expert systems, have advanced reasoning under uncertainty in medicine, business, and social sciences. This innovative volume is the first comprehensive treatment exploring how they can be applied to design and analyze innovative educational assessments. Part I develops Bayes nets’ foundations in assessment, statistics, and graph theory, and works through the real-time updating algorithm. Part II addresses parametric forms for use with assessment, model-checking techniques, and estimation with the EM algorithm and Markov chain Monte Carlo (MCMC). A unique feature is the volume’s grounding in Evidence-Centered Design (ECD) framework for assessment design. This “design forward” approach enables designers to take full advantage of Bayes nets’ modularity and ability to model complex evidentiary relationships that arise from performance in interactive, technology-rich assessments such as simulations. Part III describes ECD, situates Bayes nets as ...

  13. Space Shuttle RTOS Bayesian Network

    Science.gov (United States)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

    With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network are extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores

  14. A statistical model of uplink inter-cell interference with slow and fast power control mechanisms

    KAUST Repository

    Tabassum, Hina

    2013-09-01

    Uplink power control is in essence an interference mitigation technique that aims at minimizing the inter-cell interference (ICI) in cellular networks by reducing the transmit power levels of the mobile users while maintaining their target received signal quality levels at base stations. Power control mechanisms directly impact the interference dynamics and, thus, affect the overall achievable capacity and consumed power in cellular networks. Due to the stochastic nature of wireless channels and mobile users' locations, it is important to derive theoretical models for ICI that can capture the impact of design alternatives related to power control mechanisms. To this end, we derive and verify a novel statistical model for uplink ICI in Generalized-K composite fading environments as a function of various slow and fast power control mechanisms. The derived expressions are then utilized to quantify numerically key network performance metrics that include average resource fairness, average reduction in power consumption, and ergodic capacity. The accuracy of the derived expressions is validated via Monte-Carlo simulations. Results are generated for multiple network scenarios, and insights are extracted to assess various power control mechanisms as a function of system parameters. © 1972-2012 IEEE.

  15. Data exploration, quality control and statistical analysis of ChIP-exo/nexus experiments.

    Science.gov (United States)

    Welch, Rene; Chung, Dongjun; Grass, Jeffrey; Landick, Robert; Keles, Sündüz

    2017-09-06

    ChIP-exo/nexus experiments rely on innovative modifications of the commonly used ChIP-seq protocol for high resolution mapping of transcription factor binding sites. Although many aspects of the ChIP-exo data analysis are similar to those of ChIP-seq, these high throughput experiments pose a number of unique quality control and analysis challenges. We develop a novel statistical quality control pipeline and accompanying R/Bioconductor package, ChIPexoQual, to enable exploration and analysis of ChIP-exo and related experiments. ChIPexoQual evaluates a number of key issues including strand imbalance, library complexity, and signal enrichment of data. Assessment of these features is facilitated through diagnostic plots and summary statistics computed over regions of the genome with varying levels of coverage. We evaluated our QC pipeline with both large collections of public ChIP-exo/nexus data and multiple, new ChIP-exo datasets from Escherichia coli. ChIPexoQual analysis of these datasets resulted in guidelines for using these QC metrics across a wide range of sequencing depths and provided further insights for modelling ChIP-exo data. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Statistical quality control for volumetric modulated arc therapy (VMAT) delivery using machine log data

    CERN Document Server

    Cheong, Kwang-Ho; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, Soah; Hwang, Taejin; Kim, Haeyoung; Kim, Kyoung Ju; Han, Tae Jin; Bae, Hoonsik

    2015-01-01

    The aim of this study is to set up statistical quality control for monitoring of volumetric modulated arc therapy (VMAT) delivery error using machine log data. Eclipse and Clinac iX linac with the RapidArc system (Varian Medical Systems, Palo Alto, USA) is used for delivery of the VMAT plan. During the delivery of the RapidArc fields, the machine determines the delivered monitor units (MUs) and gantry angle position accuracy and the standard deviations of MU (sigma_MU; dosimetric error) and gantry angle (sigma_GA; geometric error) are displayed on the console monitor after completion of the RapidArc delivery. In the present study, first, the log data was analyzed to confirm its validity and usability; then, statistical process control (SPC) was applied to monitor the sigma_MU and sigma_GA in a timely manner for all RapidArc fields: a total of 195 arc fields for 99 patients. The sigma_MU and sigma_GA were determined twice for all fields, that is, first during the patient-specific plan QA and then again during th...

  17. Bayesian microsaccade detection

    Science.gov (United States)

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
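
    As a hedged stand-in for BMD's posterior sampling over state sequences, the sketch below decodes a two-state (drift/microsaccade) model of eye velocity with the Viterbi algorithm. The transition probabilities and velocity distributions are made-up illustrative values; the actual BMD method operates on eye positions and returns posterior samples rather than a single decoded path.

      import numpy as np

      rng = np.random.default_rng(2)

      # Simulate eye velocity: drift (low speed) with one microsaccade burst.
      v = rng.normal(0.0, 1.0, 300)
      v[140:150] += 12.0                     # microsaccade: high-velocity episode

      # Two-state model: 0 = drift, 1 = microsaccade, Gaussian velocity emissions.
      means, sds = np.array([0.0, 12.0]), np.array([1.0, 3.0])
      log_trans = np.log(np.array([[0.99, 0.01],
                                   [0.10, 0.90]]))

      def log_emit(x):
          return -0.5 * ((x - means) / sds) ** 2 - np.log(sds)

      # Viterbi decoding of the most probable state sequence.
      T = len(v)
      score = np.zeros((T, 2)); back = np.zeros((T, 2), dtype=int)
      score[0] = np.log([0.99, 0.01]) + log_emit(v[0])
      for t in range(1, T):
          cand = score[t - 1][:, None] + log_trans      # 2x2: prev -> next
          back[t] = cand.argmax(axis=0)
          score[t] = cand.max(axis=0) + log_emit(v[t])
      states = np.zeros(T, dtype=int)
      states[-1] = score[-1].argmax()
      for t in range(T - 2, -1, -1):
          states[t] = back[t + 1, states[t + 1]]
      print("detected microsaccade samples:", np.flatnonzero(states == 1))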

  18. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    Science.gov (United States)

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, its root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured for the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions and then the upper control limits were calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using sub-groups of 3 patients, three times each shift. These values were plotted on a control chart in real time. Control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the set-up process stability and, if and when the stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of set-up error in each and every patient, which may only ensure that just those sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces the control costs. Copyright © 2009 SECA. Published by Elsevier Espana. All rights reserved.
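
    The subgroup monitoring described above amounts to a standard X-bar chart. A minimal sketch, assuming synthetic set-up errors in millimetres and subgroups of 3 patients as in the study:

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical medial-lateral set-up errors (mm), subgroups of 3 patients.
      subgroups = rng.normal(0.0, 1.5, size=(30, 3))

      xbar = subgroups.mean(axis=1)
      R = subgroups.max(axis=1) - subgroups.min(axis=1)

      # Standard X-bar chart constant for subgroup size n = 3.
      A2 = 1.023
      center = xbar.mean()
      ucl = center + A2 * R.mean()
      lcl = center - A2 * R.mean()

      out = np.flatnonzero((xbar > ucl) | (xbar < lcl))
      print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}  out-of-control: {out}")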

  19. Bayesian Tracking within a Feedback Sensing Environment: Estimating Interacting, Spatially Constrained Complex Dynamical Systems from Multiple Sources of Controllable Devices

    Science.gov (United States)

    2014-07-25

    Using the proposed model, we were able to segment dances of honey bees and volatility in the IBOVESPA stock index, formulate a target tracking application, and demonstrate results on the standard NIST dataset.

  20. Performance of Statistical Control Charts with Bilateral Limits of Probability to Monitor Processes Weibull in Maintenance

    Directory of Open Access Journals (Sweden)

    Quintana Alicia Esther

    2015-01-01

    Full Text Available Manufacturing to optimal quality standards rests, among other essential pillars, on the high reliability of equipment and systems. Maintenance Engineering is responsible for the planning, control and continuous improvement of critical equipment under approaches such as Six Sigma, which draw on numerous statistical tools; statistical process control charts stand out among them. While their first applications were in production, other designs have emerged to meet new needs, such as monitoring equipment and systems in the manufacturing environment. The time between failures usually fits an exponential or Weibull model. The t chart and the adjusted t chart, with probabilistic control limits, are suitable alternatives for monitoring the mean time between failures. Unfortunately, publications applying them to Weibull models, which are very useful in contexts such as maintenance, are hard to find. In addition, the literature limits the study of their performance to the standard metric, average run length, which gives only a partial view. The aim of this paper is to explore the performance of the t chart and the adjusted t chart using three metrics, two of them unconventional. To do this, it incorporates the concept of lateral variability, in its left and right forms. A more precise description of the behavior of these charts clarifies the conditions under which each is suitable: if the main objective of monitoring is to detect deterioration, the adjusted t chart is recommended. On the other hand, when the priority is to detect improvements, the unadjusted t chart is the better choice. However, the response speed of both charts varies considerably from run to run.

  1. Multivariate statistical process control in product quality review assessment - A case study.

    Science.gov (United States)

    Kharbach, M; Cherrah, Y; Vander Heyden, Y; Bouklouze, A

    2017-08-07

    According to the Food and Drug Administration and the European Good Manufacturing Practices (GMP) guidelines, the Annual Product Review (APR) is a mandatory requirement in GMP. It consists of evaluating a large collection of qualitative or quantitative data in order to verify the consistency of an existing process. According to the Code of Federal Regulations (21 CFR 211.180), all finished products should be reviewed annually against their quality standards to determine the need for any change in the specification or manufacturing of drug products. Conventional Statistical Process Control (SPC) evaluates the pharmaceutical production process by examining the effect of only a single factor at a time using a Shewhart chart; it does not take into account interactions between the variables. In order to overcome this issue, Multivariate Statistical Process Control (MSPC) can be used. Our case study concerns an APR assessment, where 164 historical batches containing six active ingredients, manufactured in Morocco, were collected during one year. Each batch was checked by assaying the six active ingredients by High Performance Liquid Chromatography according to European Pharmacopoeia monographs. The data matrix was evaluated both by SPC and MSPC. The SPC indicated that all batches were under control, while the MSPC, based on Principal Component Analysis (PCA), with the data either autoscaled or robust scaled, showed four and seven batches, respectively, outside the Hotelling T² 95% ellipse. An improvement in the capability of the process is also observed when the most extreme batches are excluded. MSPC can be used for monitoring subtle changes in the manufacturing process during an APR assessment. Copyright © 2017 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
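
    A minimal sketch of PCA-based MSPC with a Hotelling T² limit, in the spirit of the assessment described above; the assay data below are simulated, and the number of retained components and the F-based 95% limit follow one common textbook formulation rather than the paper's exact settings.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      # Hypothetical assay results: 164 batches x 6 active ingredients (% label).
      X = rng.normal(100.0, 1.2, size=(164, 6))
      X[40] += 3.0                               # one aberrant batch for illustration

      # Autoscale, then PCA via SVD.
      Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
      U, s, Vt = np.linalg.svd(Z, full_matrices=False)
      k = 2                                      # retained principal components
      scores = Z @ Vt[:k].T
      lam = (s[:k] ** 2) / (len(X) - 1)          # variances of the retained scores

      # Hotelling T^2 per batch and its F-based 95% control limit.
      T2 = ((scores ** 2) / lam).sum(axis=1)
      n = len(X)
      lim = k * (n - 1) * (n + 1) / (n * (n - k)) * stats.f.ppf(0.95, k, n - k)
      print("batches outside the T2 95% limit:", np.flatnonzero(T2 > lim))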

  2. Bayesian phylogeography finds its roots.

    Directory of Open Access Journals (Sweden)

    Philippe Lemey

    2009-09-01

    Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in a Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can be easily generalized to infer biogeography from genetic data for many organisms.

  3. Characterization of a Saccharomyces cerevisiae fermentation process for production of a therapeutic recombinant protein using a multivariate Bayesian approach.

    Science.gov (United States)

    Fu, Zhibiao; Baker, Daniel; Cheng, Aili; Leighton, Julie; Appelbaum, Edward; Aon, Juan

    2016-05-01

    The principle of quality by design (QbD) has been widely applied to biopharmaceutical manufacturing processes. Process characterization is an essential step to implement the QbD concept to establish the design space and to define the proven acceptable ranges (PAR) for critical process parameters (CPPs). In this study, we present characterization of a Saccharomyces cerevisiae fermentation process using risk assessment analysis, statistical design of experiments (DoE), and the multivariate Bayesian predictive approach. The critical quality attributes (CQAs) and CPPs were identified with a risk assessment. The statistical model for each attribute was established using the results from the DoE study with consideration given to interactions between CPPs. Both the conventional overlapping contour plot and the multivariate Bayesian predictive approaches were used to establish the region of process operating conditions where all attributes met their specifications simultaneously. The quantitative Bayesian predictive approach was chosen to define the PARs for the CPPs, which apply to the manufacturing control strategy. Experience from the 10,000 L manufacturing scale process validation, including 64 continued process verification batches, indicates that the CPPs remain under a state of control and within the established PARs. The end product quality attributes were within their drug substance specifications. The probability generated with the Bayesian approach was also used as a tool to assess CPP deviations. This approach can be extended to characterize other production processes and to quantify a reliable operating region. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:799-812, 2016.

  4. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control.

    Science.gov (United States)

    Adeleke, Jude Adekunle; Moodley, Deshendran; Rens, Gavin; Adewumi, Aderemi Oluyinka

    2017-04-09

    Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 is achieved over half hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of Semantic Sensor Web.
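
    A minimal sketch of the sliding-window prediction step, assuming a synthetic PM2.5 series, a 12-reading window, a half-hour horizon, and scikit-learn's MLPRegressor; the study's actual features, sampling rate, and alarm threshold may differ.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(5)

      # Synthetic PM2.5-like series sampled every 5 minutes.
      t = np.arange(2000)
      pm25 = 20 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 1.5, t.size)

      # Sliding window: the last 12 readings predict the value 6 steps
      # (half an hour) ahead.
      win, horizon = 12, 6
      X = np.stack([pm25[i:i + win] for i in range(len(pm25) - win - horizon)])
      y = pm25[win + horizon:]

      split = 1500
      model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
      model.fit(X[:split], y[:split])

      pred = model.predict(X[split:])
      # A binary "pollution situation" alarm when the forecast crosses a threshold.
      threshold = 28.0
      alarm = pred > threshold
      print("predicted exceedances in test window:", alarm.sum())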

  5. A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests.

    Science.gov (United States)

    Sassenhagen, Jona; Alday, Phillip M

    2016-11-01

    Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with the primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance as referring not to inference about a particular population parameter, but to (1) the sample in question, or (2) the practical relevance of a sample difference (so that a nonsignificant test is taken as evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.

  6. Integrating Statistical Machine Learning in a Semantic Sensor Web for Proactive Monitoring and Control

    Directory of Open Access Journals (Sweden)

    Jude Adekunle Adeleke

    2017-04-01

    Full Text Available Proactive monitoring and control of our natural and built environments is important in various application scenarios. Semantic Sensor Web technologies have been well researched and used for environmental monitoring applications to expose sensor data for analysis in order to provide responsive actions in situations of interest. While these applications provide quick response to situations, to minimize their unwanted effects, research efforts are still necessary to provide techniques that can anticipate the future to support proactive control, such that unwanted situations can be averted altogether. This study integrates a statistical machine learning based predictive model in a Semantic Sensor Web using stream reasoning. The approach is evaluated in an indoor air quality monitoring case study. A sliding window approach that employs the Multilayer Perceptron model to predict short term PM2.5 pollution situations is integrated into the proactive monitoring and control framework. Results show that the proposed approach can effectively predict short term PM2.5 pollution situations: precision of up to 0.86 and sensitivity of up to 0.85 is achieved over half hour prediction horizons, making it possible for the system to warn occupants or even to autonomously avert the predicted pollution situations within the context of Semantic Sensor Web.

  7. Comparative efficacy and tolerability of duloxetine, pregabalin, and milnacipran for the treatment of fibromyalgia: a Bayesian network meta-analysis of randomized controlled trials.

    Science.gov (United States)

    Lee, Young Ho; Song, Gwan Gyu

    2016-05-01

    The aim of this study was to assess the relative efficacy and tolerability of duloxetine, pregabalin, and milnacipran at the recommended doses in patients with fibromyalgia. Randomized controlled trials (RCTs) examining the efficacy and safety of duloxetine 60 mg, pregabalin 300 mg, pregabalin 150 mg, milnacipran 200 mg, and milnacipran 100 mg compared to placebo in patients with fibromyalgia were included in this Bayesian network meta-analysis. Nine RCTs including 5140 patients met the inclusion criteria. The proportion of patients with >30 % improvement from baseline in pain was significantly higher in the duloxetine 60 mg, pregabalin 300 mg, milnacipran 100 mg, and milnacipran 200 mg groups than in the placebo group [pairwise odds ratio (OR) 2.33, 95 % credible interval (CrI) 1.50-3.67; OR 1.68, 95 % CrI 1.25-2.28; OR 1.62, 95 % CrI 1.16-2.25; and OR 1.61; 95 % CrI 1.15-2.24, respectively]. Ranking probability based on the surface under the cumulative ranking curve (SUCRA) indicated that duloxetine 60 mg had the highest probability of being the best treatment for achieving the response level (SUCRA = 0.9431), followed by pregabalin 300 mg (SUCRA = 0.6300), milnacipran 100 mg (SUCRA = 0.5680), milnacipran 200 mg (SUCRA = 0.5617), pregabalin 150 mg (SUCRA = 0.2392), and placebo (SUCRA = 0.0580). The risk of withdrawal due to adverse events was lower in the placebo group than in the pregabalin 300 mg, duloxetine 60 mg, milnacipran 100 mg, and milnacipran 200 mg groups. However, there was no significant difference in the efficacy and tolerability between the medications at the recommended doses. Duloxetine 60 mg, pregabalin 300 mg, milnacipran 100 mg, and milnacipran 200 mg were more efficacious than placebo. However, there was no significant difference in the efficacy and tolerability between the medications at the recommended doses.

  8. Statistical control chart and neural network classification for improving human fall detection

    KAUST Repository

    Harrou, Fouzi

    2017-01-05

    This paper proposes a statistical approach to detect and classify human falls based on both visual data from a camera and accelerometric data captured by an accelerometer. Specifically, we first use a Shewhart control chart to detect the presence of potential falls by using the accelerometric data. Unfortunately, this chart cannot distinguish real falls from fall-like actions, such as lying down. To bypass this difficulty, a neural network classifier is then applied, only on the detected cases, through visual data. To assess the performance of the proposed method, experiments are conducted on a publicly available fall detection database: the University of Rzeszow's fall detection (URFD) dataset. Results demonstrate that the detection phase plays a key role in reducing the number of sequences used as input into the neural network classifier for classification, significantly reducing the computational burden and achieving better accuracy.
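
    The detection stage can be illustrated with an individuals-type Shewhart chart on accelerometer magnitude; everything below (the signal, the calibration segment, the 3-sigma limit) is an assumed minimal sketch, not the paper's implementation.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic accelerometer magnitude (g): quiet activity plus one fall spike.
      acc = np.abs(rng.normal(1.0, 0.05, 500))
      acc[320:325] += 2.5                       # impulsive fall-like event

      # Shewhart individuals chart: limits from a fall-free calibration segment.
      calib = acc[:200]
      mu, sd = calib.mean(), calib.std(ddof=1)
      ucl = mu + 3 * sd

      candidates = np.flatnonzero(acc > ucl)
      print("potential falls flagged at samples:", candidates)
      # In the full pipeline, only these flagged segments would be passed to the
      # neural-network classifier on the video data to reject fall-like actions.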

  9. Controlling intrinsic alignments in weak lensing statistics: The nulling and boosting techniques

    CERN Document Server

    Joachimi, B

    2010-01-01

    The intrinsic alignment of galaxies constitutes the major astrophysical source of systematic errors in surveys of weak gravitational lensing by the large-scale structure. We discuss the principles, summarise the implementation, and highlight the performance of two model-independent methods that control intrinsic alignment signals in weak lensing data: the nulling technique which eliminates intrinsic alignments to ensure unbiased constraints on cosmology, and the boosting technique which extracts intrinsic alignments and hence allows one to further study this contribution. Making only use of the characteristic dependence on redshift of the signals, both approaches are robust, but reduce the statistical power due to the similar redshift scaling of intrinsic alignment and lensing signals.

  10. Secure Wireless Communication and Optimal Power Control under Statistical Queueing Constraints

    CERN Document Server

    Qiao, Deli; Velipasalar, Senem

    2010-01-01

    In this paper, secure transmission of information over fading broadcast channels is studied in the presence of statistical queueing constraints. Effective capacity is employed as a performance metric to identify the secure throughput of the system, i.e., effective secure throughput. It is assumed that perfect channel side information (CSI) is available at both the transmitter and the receivers. Initially, the scenario in which the transmitter sends common messages to two receivers and confidential messages to one receiver is considered. For this case, effective secure throughput region, which is the region of constant arrival rates of common and confidential messages that can be supported by the buffer-constrained transmitter and fading broadcast channel, is defined. It is proven that this effective throughput region is convex. Then, the optimal power control policies that achieve the boundary points of the effective secure throughput region are investigated and an algorithm for the numerical computation of t...

  11. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...

  12. Statistical process control analysis for patient-specific IMRT and VMAT QA.

    Science.gov (United States)

    Sanghangthum, Taweap; Suriyapee, Sivalee; Srisatit, Somyot; Pawlicki, Todd

    2013-05-01

    This work applied statistical process control to establish the control limits of the % gamma pass of patient-specific intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) quality assurance (QA), and to evaluate the efficiency of the QA process by using the process capability index (Cpml). A total of 278 IMRT QA plans in nasopharyngeal carcinoma were measured with MapCHECK, while 159 VMAT QA plans were undertaken with ArcCHECK. Six-megavolt beams with nine fields were used for the IMRT plans and 2.5 arcs were used to generate the VMAT plans. The gamma (3%/3 mm) criteria were used to evaluate the QA plans. The % gamma passes were plotted on a control chart. The first 50 data points were employed to calculate the control limits. The Cpml was calculated to evaluate the capability of the IMRT/VMAT QA process. The results showed higher systematic errors in IMRT QA than in VMAT QA due to the more complicated setup used in IMRT QA. The variation of random errors was also larger in IMRT QA than in VMAT QA because the VMAT plan has more continuity of dose distribution. The average % gamma pass was 93.7% ± 3.7% for IMRT and 96.7% ± 2.2% for VMAT. The Cpml value of IMRT QA was 1.60 and that of VMAT QA was 1.99, which implied that the VMAT QA process was more accurate than the IMRT QA process. Our lower control limit for the % gamma pass of IMRT is 85.0%, while the limit for VMAT is 90%. Both the IMRT and VMAT QA processes are of good quality because their Cpml values are higher than 1.0.
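
    A minimal sketch of the two computations named above, control limits from the first 50 QA results and a capability index, on simulated % gamma-pass data; the individuals-chart limits use the average moving range, and the Cpml line uses one common one-sided, target-based form, which may differ from the paper's exact definition.

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical % gamma-pass results for successive QA plans.
      gp = np.clip(rng.normal(96.5, 2.0, 150), 0, 100)

      # Individuals (I) chart: center line and limits from the first 50 points,
      # using the average moving range (d2 = 1.128 for a moving range of 2).
      base = gp[:50]
      mr = np.abs(np.diff(base))
      sigma_hat = mr.mean() / 1.128
      cl = base.mean()
      lcl, ucl = cl - 3 * sigma_hat, min(cl + 3 * sigma_hat, 100.0)

      # One-sided capability against a lower spec limit, with target T = 100%.
      lsl, target = 90.0, 100.0
      cpml = (gp.mean() - lsl) / (3 * np.sqrt(gp.var(ddof=1) + (gp.mean() - target) ** 2))
      print(f"CL={cl:.1f}  LCL={lcl:.1f}  UCL={ucl:.1f}  Cpml={cpml:.2f}")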

  13. A bibliometric analysis of 50 years of worldwide research on statistical process control

    Directory of Open Access Journals (Sweden)

    Fabiane Letícia Lizarelli

    Full Text Available An increasing number of papers on statistical process control (SPC) have emerged in the last fifty years, especially in the last fifteen years. This may be attributed to the increased global competitiveness generated by innovation and the continuous improvement of products and processes. In this sense, SPC has a fundamentally important role in quality and production systems. The research in this paper considers the context of technological improvement and innovation of products and processes to increase corporate competitiveness. There are several other statistical techniques and tools for assisting continuous improvement and innovation of products and processes but, despite the limitations in their use in improvement projects, there is growing concern about the use of SPC. A gap between the SPC techniques taught in engineering courses and their practical applications to industrial problems is observed in empirical research; thus, it is important to understand what has been done and identify the trends in SPC research. The bibliometric study in this paper is proposed in this direction and uses the Web of Science (WoS) database. Data analysis indicates that there was a growth rate of more than 90% in the number of publications on SPC after 1990. Our results reveal the countries where these publications have come from, the authors with the highest number of papers and their networks. The main sources of publications are also identified; it is observed that the publications of SPC papers are concentrated in some of the international research journals, not necessarily those with the highest impact factors. Furthermore, the papers are focused on the industrial engineering, operations research and management science fields. The most common term found in the papers was cumulative sum control charts, but new topics have emerged and have been researched in the past ten years, such as multivariate methods for process monitoring and nonparametric methods.

  14. Statistical quality control for volumetric modulated arc therapy (VMAT) delivery by using the machine's log data

    Science.gov (United States)

    Cheong, Kwang-Ho; Lee, Me-Yeon; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, Soah; Hwang, Taejin; Kim, Haeyoung; Kim, Kyoung Ju; Han, Tae Jin; Bae, Hoonsik

    2015-07-01

    The aim of this study is to set up statistical quality control for monitoring the volumetric modulated arc therapy (VMAT) delivery error by using the machine's log data. Eclipse and a Clinac iX linac with the RapidArc system (Varian Medical Systems, Palo Alto, USA) are used for delivery of the VMAT plan. During the delivery of the RapidArc fields, the machine determines the delivered monitor units (MUs) and the gantry angle's position accuracy, and the standard deviations of the MU (σMU: dosimetric error) and the gantry angle (σGA: geometric error) are displayed on the console monitor after completion of the RapidArc delivery. In the present study, first, the log data were analyzed to confirm their validity and usability; then, statistical process control (SPC) was applied to monitor the σMU and the σGA in a timely manner for all RapidArc fields: a total of 195 arc fields for 99 patients. The σMU and the σGA were determined twice for all fields, that is, first during the patient-specific plan QA and then again during the first treatment. The σMU and the σGA time series were quite stable irrespective of the treatment site; however, the σGA strongly depended on the gantry's rotation speed. The σGA of the RapidArc delivery for stereotactic body radiation therapy (SBRT) was smaller than that for the typical VMAT. Therefore, SPC was applied for SBRT cases and general cases separately. Moreover, the accuracy of the potentiometer of the gantry rotation is important because the σGA can change dramatically depending on its condition. By applying SPC to the σMU and σGA, we could monitor the delivery error efficiently. However, the upper and the lower limits of SPC need to be determined carefully with full knowledge of the machine and log data.

  15. Bayesian Face Sketch Synthesis.

    Science.gov (United States)

    Wang, Nannan; Gao, Xinbo; Sun, Leiyu; Li, Jie

    2017-03-01

    Exemplar-based face sketch synthesis has been widely applied to both digital entertainment and law enforcement. In this paper, we propose a Bayesian framework for face sketch synthesis, which provides a systematic interpretation for understanding the common properties and intrinsic difference in different methods from the perspective of probabilistic graphical models. The proposed Bayesian framework consists of two parts: the neighbor selection model and the weight computation model. Within the proposed framework, we further propose a Bayesian face sketch synthesis method. The essential rationale behind the proposed Bayesian method is that we take the spatial neighboring constraint between adjacent image patches into consideration for both aforementioned models, while the state-of-the-art methods neglect the constraint either in the neighbor selection model or in the weight computation model. Extensive experiments on the Chinese University of Hong Kong face sketch database demonstrate that the proposed Bayesian method could achieve superior performance compared with the state-of-the-art methods in terms of both subjective perceptions and objective evaluations.

  16. Editorial: Bayesian benefits for child psychology and psychiatry researchers.

    Science.gov (United States)

    Oldehinkel, Albertine J

    2016-09-01

    For many scientists, performing statistical tests has become an almost automated routine. However, p-values are frequently used and interpreted incorrectly; and even when used appropriately, p-values tend to provide answers that do not match researchers' questions and hypotheses well. Bayesian statistics present an elegant and often more suitable alternative. The Bayesian approach has rarely been applied in child psychology and psychiatry research so far, but the development of user-friendly software packages and tutorials has placed it well within reach now. Because Bayesian analyses require a more refined definition of hypothesized probabilities of possible outcomes than the classical approach, going Bayesian may offer the additional benefit of sparking the development and refinement of theoretical models in our field.

  17. Bayesian homeopathy: talking normal again.

    Science.gov (United States)

    Rutten, A L B

    2007-04-01

    Homeopathy has a communication problem: important homeopathic concepts are not understood by conventional colleagues. Homeopathic terminology seems to be comprehensible only after practical experience of homeopathy. The main problem lies in different handling of diagnosis. In conventional medicine diagnosis is the starting point for randomised controlled trials to determine the effect of treatment. In homeopathy diagnosis is combined with other symptoms and personal traits of the patient to guide treatment and predict response. Broadening our scope to include diagnostic as well as treatment research opens the possibility of multifactorial reasoning. Adopting Bayesian methodology opens the possibility of investigating homeopathy in everyday practice and of describing some aspects of homeopathy in conventional terms.

  18. A Gaussian Mixed Model for Learning Discrete Bayesian Networks.

    Science.gov (United States)

    Balov, Nikolay

    2011-02-01

    In this paper we address the problem of learning discrete Bayesian networks from noisy data. Considered is a graphical model based on a mixture of Gaussian distributions with a categorical mixing structure coming from a discrete Bayesian network. The network learning is formulated as a Maximum Likelihood estimation problem and performed by employing an EM algorithm. The proposed approach is relevant to a variety of statistical problems for which Bayesian network models are suitable, from simple regression analysis to learning gene/protein regulatory networks from microarray data.

  19. Mocapy++ - a toolkit for inference and learning in dynamic Bayesian networks

    DEFF Research Database (Denmark)

    Paluszewski, Martin; Hamelryck, Thomas Wim

    2010-01-01

    Background Mocapy++ is a toolkit for parameter learning and inference in dynamic Bayesian networks (DBNs). It supports a wide range of DBN architectures and probability distributions, including distributions from directional statistics (the statistics of angles, directions and orientations...

  20. An Efficient Bayesian Iterative Method for Solving Linear Systems

    Institute of Scientific and Technical Information of China (English)

    Deng DING; Kin Sio FONG; Ka Hou CHAN

    2012-01-01

    This paper is concerned with statistical methods for solving general linear systems. After a brief review of the Bayesian perspective on inverse problems, a new and efficient iterative method for general linear systems from a Bayesian perspective is proposed. The convergence of this iterative method is proved, and the corresponding error analysis is studied. Finally, numerical experiments are given to support the efficiency of this iterative method, and some conclusions are obtained.

  1. BAYESIAN NETWORKS FOR SUB-GROUPS OF MULTIPLE SCLEROSIS

    OpenAIRE

    2013-01-01

    In this study, the characteristics of multiple sclerosis sub-groups were determined statistically (using SPSS) and represented in a Bayesian network. The main objective of this study was to use information on the appearance of MRI lesions and/or the EDSS scores of patients with multiple sclerosis to investigate possible attacks in multiple sclerosis sub-groups. The Bayesian network reflects the structure of the sub-groups of multiple sclerosis patients. Analyses were conducted...

  2. A Bayesian Belief Network Approach to Explore Alternative Decisions for Sediment Control and water Storage Capacity at Lago Lucchetti, Puerto Rico

    Science.gov (United States)

    A Bayesian belief network (BBN) was developed to characterize the effects of sediment accumulation on the water storage capacity of Lago Lucchetti (located in southwest Puerto Rico) and to forecast the life expectancy (usefulness) of the reservoir under different management scena...

  3. Bayesian Visual Odometry

    Science.gov (United States)

    Center, Julian L.; Knuth, Kevin H.

    2011-03-01

    Visual odometry refers to tracking the motion of a body using an onboard vision system. Practical visual odometry systems combine the complementary accuracy characteristics of vision and inertial measurement units. The Mars Exploration Rovers, Spirit and Opportunity, used this type of visual odometry. The visual odometry algorithms in Spirit and Opportunity were based on Bayesian methods, but a number of simplifying approximations were needed to deal with onboard computer limitations. Furthermore, the allowable motion of the rover had to be severely limited so that computations could keep up. Recent advances in computer technology make it feasible to implement a fully Bayesian approach to visual odometry. This approach combines dense stereo vision, dense optical flow, and inertial measurements. As with all true Bayesian methods, it also determines error bars for all estimates. This approach also offers the possibility of using Micro-Electro Mechanical Systems (MEMS) inertial components, which are more economical, weigh less, and consume less power than conventional inertial components.

  4. Hybrid Batch Bayesian Optimization

    CERN Document Server

    Azimi, Javad; Fern, Xiaoli

    2012-01-01

    Bayesian Optimization aims at optimizing an unknown non-convex/concave function that is costly to evaluate. We are interested in application scenarios where concurrent function evaluations are possible. Under such a setting, BO could choose to either sequentially evaluate the function, one input at a time and wait for the output of the function before making the next selection, or evaluate the function at a batch of multiple inputs at once. These two different settings are commonly referred to as the sequential and batch settings of Bayesian Optimization. In general, the sequential setting leads to better optimization performance as each function evaluation is selected with more information, whereas the batch setting has an advantage in terms of the total experimental time (the number of iterations). In this work, our goal is to combine the strength of both settings. Specifically, we systematically analyze Bayesian optimization using Gaussian process as the posterior estimator and provide a hybrid algorithm t...

  5. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
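
    With a Gaussian noise model and a GP prior, the LSD profile has a closed-form Gaussian posterior. A toy sketch of that computation is given below; the line mask, weights, noise level, and kernel hyperparameters are all invented for illustration and are not the paper's values.

      import numpy as np

      rng = np.random.default_rng(10)

      # Toy setup: n_pix spectrum pixels, a profile of n_v velocity bins, and a
      # line pattern matrix M whose rows place weighted copies of the profile
      # at each line position (a crude stand-in for the LSD line mask).
      n_pix, n_v = 400, 21
      line_pos = rng.choice(np.arange(20, n_pix - 20), size=30, replace=False)
      weights = rng.uniform(0.2, 1.0, size=30)
      M = np.zeros((n_pix, n_v))
      for pos, w in zip(line_pos, weights):
          M[pos - 10:pos + 11, :] += w * np.eye(n_v)

      v_grid = np.linspace(-10, 10, n_v)
      z_true = -0.01 * np.exp(-0.5 * (v_grid / 2.0) ** 2)   # common line profile
      sigma = 0.002
      y = M @ z_true + rng.normal(0, sigma, n_pix)

      # Squared-exponential GP prior on the LSD profile.
      K = 1e-4 * np.exp(-0.5 * ((v_grid[:, None] - v_grid[None, :]) / 2.0) ** 2)

      # Gaussian posterior over the profile: mean and pointwise uncertainty.
      S = M @ K @ M.T + sigma ** 2 * np.eye(n_pix)
      z_mean = K @ M.T @ np.linalg.solve(S, y)
      z_cov = K - K @ M.T @ np.linalg.solve(S, M @ K)
      z_sd = np.sqrt(np.diag(z_cov))
      print("recovered profile depth:", z_mean.min(), "+/-", z_sd[n_v // 2])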

  6. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  7. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  8. Advances in Statistical Control, Algebraic Systems Theory, and Dynamic Systems Characteristics A Tribute to Michael K Sain

    CERN Document Server

    Won, Chang-Hee; Michel, Anthony N

    2008-01-01

    This volume - dedicated to Michael K. Sain on the occasion of his seventieth birthday - is a collection of chapters covering recent advances in stochastic optimal control theory and algebraic systems theory. Written by experts in their respective fields, the chapters are thematically organized into four parts: Part I focuses on statistical control theory, where the cost function is viewed as a random variable and performance is shaped through cost cumulants. In this respect, statistical control generalizes linear-quadratic-Gaussian and H-infinity control. Part II addresses algebraic systems th

  9. Statistical Methods for Quality Control of Steel Coils Manufacturing Process using Generalized Linear Models

    Science.gov (United States)

    García-Díaz, J. Carlos

    2009-11-01

    Fault detection and diagnosis is an important problem in process engineering. Process equipment is subject to malfunction during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is an important problem in continuous hot dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and bath level. The entire dataset, consisting of 48 galvanized steel coils, was divided into two sets: the first training data set comprised 25 conforming coils and the second data set comprised 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that logistic generalized linear models do provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
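
    A minimal sketch of the logistic-regression classifier described above, using simulated process variables and scikit-learn; the variable names, data, and split are assumptions for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(8)

      # Hypothetical process data for 48 coils: strip velocity, four bath
      # temperatures, and bath level, with a binary conforming/nonconforming label.
      X = rng.normal(0, 1, size=(48, 6))
      y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(0, 0.5, 48) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = LogisticRegression().fit(X_tr, y_tr)

      # Predicted probability of a nonconforming coil drives the quality decision.
      print("test accuracy:", clf.score(X_te, y_te))
      print("P(nonconforming) for first test coil:",
            clf.predict_proba(X_te)[0, 1].round(3))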

  10. Controlling Time-Dependent Confounding by Health Status and Frailty: Restriction Versus Statistical Adjustment.

    Science.gov (United States)

    McGrath, Leah J; Ellis, Alan R; Brookhart, M Alan

    2015-07-01

    Nonexperimental studies of preventive interventions are often biased because of the healthy-user effect and, in frail populations, because of confounding by functional status. Bias is evident when estimating influenza vaccine effectiveness, even after adjustment for claims-based indicators of illness. We explored bias reduction methods while estimating vaccine effectiveness in a cohort of adult hemodialysis patients. Using the United States Renal Data System and linked data from a commercial dialysis provider, we estimated vaccine effectiveness using a Cox proportional hazards marginal structural model of all-cause mortality before and during 3 influenza seasons in 2005/2006 through 2007/2008. To improve confounding control, we added frailty indicators to the model, measured time-varying confounders at different time intervals, and restricted the sample in multiple ways. Crude and baseline-adjusted marginal structural models remained strongly biased. Restricting to a healthier population removed some unmeasured confounding; however, this reduced the sample size, resulting in wide confidence intervals. We estimated an influenza vaccine effectiveness of 9% (hazard ratio = 0.91, 95% confidence interval: 0.72, 1.15) when bias was minimized through cohort restriction. In this study, the healthy-user bias could not be controlled through statistical adjustment; however, sample restriction reduced much of the bias.

  11. Probabilistic Inferences in Bayesian Networks

    OpenAIRE

    Ding, Jianguo

    2010-01-01

    This chapter summarizes the popular inference methods in Bayesian networks. The results demonstrate that evidence can be propagated across a Bayesian network along any links, whether of a forward, backward or intercausal style. The belief updating of Bayesian networks can be obtained by various available inference techniques. Theoretically, exact inference in Bayesian networks is feasible and manageable. However, the computation and inference are NP-hard. That means, in applications, in ...

  12. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  13. QUALITY IMPROVEMENT USING STATISTICAL PROCESS CONTROL TOOLS IN GLASS BOTTLES MANUFACTURING COMPANY

    Directory of Open Access Journals (Sweden)

    Yonatan Mengesha Awaj

    2013-03-01

    Full Text Available In order to survive in a competitive market, improving the quality and productivity of products and processes is a must for any company. This study applies statistical process control (SPC) tools to the production processing line and to the final product in order to reduce defects, by identifying where the highest waste occurs and giving suggestions for improvement. The approach used in this study comprises direct observation, thorough examination of the production process lines, brainstorming sessions, fishbone diagrams, and information collected from potential customers and the company's workers through interviews and questionnaires; Pareto charts/analyses and a control chart (p-chart) were constructed. It was found that the company has many problems; specifically, there is high rejection or waste in the production processing line. The highest waste occurs in the melting process line, which causes losses due to trickle, and in the forming process line, which causes losses due to defective product rejection. The vital few problems were identified: blisters, double seams, stones, pressure failures and overweight. The principal aim of the study is to make the quality team aware of how to use SPC tools in problem analysis, especially to train the quality team on how to hold an effective brainstorming session, and how to exploit these data in cause-and-effect diagram construction, Pareto analysis and control chart construction. The major causes of non-conformities and the root causes of the quality problems were specified, and possible remedies were proposed. Although the company faces many constraints in implementing all the suggestions for improvement within a short period of time, it recognized that the suggestions will provide significant productivity improvement in the long run.
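
    The p-chart mentioned above can be constructed as follows; the daily sample size and defect counts are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(9)

      # Hypothetical daily inspection: sample size and defective bottles per day.
      n = 500                                    # bottles inspected per day
      defects = rng.binomial(n, 0.04, size=25)   # daily defective counts

      p = defects / n
      pbar = p.mean()
      sigma = np.sqrt(pbar * (1 - pbar) / n)
      ucl = pbar + 3 * sigma
      lcl = max(pbar - 3 * sigma, 0.0)

      out = np.flatnonzero((p > ucl) | (p < lcl))
      print(f"p-bar={pbar:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}  out-of-control days: {out}")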

  14. Bayesian Approach to Inverse Problems

    CERN Document Server

    2008-01-01

    Many scientific, medical or engineering problems raise the issue of recovering some physical quantities from indirect measurements; for instance, detecting or quantifying flaws or cracks within a material from acoustic or electromagnetic measurements at its surface is an essential problem of non-destructive evaluation. The concept of inverse problems precisely originates from the idea of inverting the laws of physics to recover a quantity of interest from measurable data. Unfortunately, most inverse problems are ill-posed, which means that precise and stable solutions are not easy to devise. Regularization is the key concept to solve inverse problems. The goal of this book is to deal with inverse problems and regularized solutions using the Bayesian statistical tools, with a particular view to signal and image estimation

  15. Bayesian Inference in Queueing Networks

    CERN Document Server

    Sutton, Charles

    2010-01-01

    Modern Web services, such as those at Google, Yahoo!, and Amazon, handle billions of requests per day on clusters of thousands of computers. Because these services operate under strict performance requirements, a statistical understanding of their performance is of great practical interest. Such services are modeled by networks of queues, where one queue models each of the individual computers in the system. A key challenge is that the data is incomplete, because recording detailed information about every request to a heavily used system can require unacceptable overhead. In this paper we develop a Bayesian perspective on queueing models in which the arrival and departure times that are not observed are treated as latent variables. Underlying this viewpoint is the observation that a queueing model defines a deterministic transformation between the data and a set of independent variables called the service times. With this viewpoint in hand, we sample from the posterior distribution over missing data and model...

  16. Decision generation tools and Bayesian inference

    Science.gov (United States)

    Jannson, Tomasz; Wang, Wenjian; Forrester, Thomas; Kostrzewski, Andrew; Veeris, Christian; Nielsen, Thomas

    2014-05-01

    Digital Decision Generation (DDG) tools are important software sub-systems of Command and Control (C2) systems and technologies. In this paper, we present a special type of DDG based on Bayesian inference, related to adverse (hostile) networks, including such important applications as terrorism-related and organized-crime networks.

  17. Bayesian methods for hackers probabilistic programming and Bayesian inference

    CERN Document Server

    Davidson-Pilon, Cameron

    2016-01-01

    Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
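
    PyMC syntax varies across versions, so the sketch below illustrates the same small-increment workflow with a conjugate Beta-Bernoulli update in plain NumPy/SciPy; the data are made up, and the Beta(1, 1) prior is an assumption for the example.

        # Conjugate Beta-Bernoulli update: the exact posterior that a sampling run
        # in a probabilistic-programming language would approximate.
        import numpy as np
        from scipy import stats

        data = np.array([1, 0, 1, 1, 0, 1, 1, 1])     # binary outcomes (invented)
        a_post = 1 + data.sum()                       # Beta(1, 1) prior -> posterior a
        b_post = 1 + len(data) - data.sum()           # posterior b
        posterior = stats.beta(a_post, b_post)
        print(posterior.mean(), posterior.interval(0.95))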

  18. Bayesian networks in neuroscience: a survey.

    Science.gov (United States)

    Bielza, Concha; Larrañaga, Pedro

    2014-01-01

    Bayesian networks are a type of probabilistic graphical model that lies at the intersection between statistics and machine learning. They have been shown to be powerful tools to encode dependence relationships among the variables of a domain under uncertainty. Thanks to their generality, Bayesian networks can accommodate continuous and discrete variables, as well as temporal processes. In this paper we review Bayesian networks and how they can be learned automatically from data by means of structure learning algorithms. Also, we examine how a user can take advantage of these networks for reasoning by exact or approximate inference algorithms that propagate the given evidence through the graphical structure. Despite their applicability in many fields, they have been little used in neuroscience, where their use has focused on specific problems, like functional connectivity analysis from neuroimaging data. Here we survey key research in neuroscience where Bayesian networks have been used with different aims: to discover associations between variables, perform probabilistic reasoning over the model, and classify new observations with and without supervision. The networks are learned from data of any kind (morphological, electrophysiological, omics, and neuroimaging), thereby broadening the scope (molecular, cellular, structural, functional, cognitive, and medical) of the brain aspects to be studied.

  19. Varying prior information in Bayesian inversion

    Science.gov (United States)

    Walker, Matthew; Curtis, Andrew

    2014-06-01

    Bayes' rule is used to combine likelihood and prior probability distributions. The former represents knowledge derived from new data, the latter represents pre-existing knowledge; the Bayesian combination is the so-called posterior distribution, representing the resultant new state of knowledge. While varying the likelihood due to differing data observations is common, there are also situations where the prior distribution must be changed or replaced repeatedly. For example, in mixture density neural network (MDN) inversion, using current methods the neural network employed for inversion needs to be retrained every time prior information changes. We develop a method of prior replacement to vary the prior without re-training the network. Thus the efficiency of MDN inversions can be increased, typically by orders of magnitude when applied to geophysical problems. We demonstrate this for the inversion of seismic attributes in a synthetic subsurface geological reservoir model. We also present results which suggest that prior replacement can be used to control the statistical properties (such as variance) of the final estimate of the posterior in more general (e.g., Monte Carlo based) inverse problem solutions.
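
    The prior replacement idea can be imitated outside MDNs by importance reweighting: posterior samples drawn under one prior are reweighted by the ratio of the new prior to the old. The sketch below is not the authors' algorithm, only a minimal illustration of the principle with assumed Gaussian priors.

        # Reweighting posterior samples when the prior changes from p0 to p1:
        # w_i proportional to p1(m_i) / p0(m_i) turns draws from post(m | d, p0)
        # into a weighted approximation of post(m | d, p1), without re-sampling.
        import numpy as np
        from scipy import stats

        samples = np.random.default_rng(0).normal(1.0, 0.5, 10_000)  # stand-in posterior draws
        p0 = stats.norm(0.0, 2.0).pdf(samples)      # prior in force when sampling
        p1 = stats.norm(0.5, 1.0).pdf(samples)      # replacement prior
        w = p1 / p0
        w /= w.sum()
        print(np.sum(w * samples))                  # posterior mean under the new prior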

  20. Adaptive surrogate modeling for response surface approximations with application to bayesian inference

    KAUST Repository

    Prudhomme, Serge

    2015-09-17

    Parameter estimation for complex models using Bayesian inference is usually a very costly process as it requires a large number of solves of the forward problem. We show here how the construction of adaptive surrogate models using a posteriori error estimates for quantities of interest can significantly reduce the computational cost in problems of statistical inference. As surrogate models provide only approximations of the true solutions of the forward problem, it is nevertheless necessary to control these errors in order to construct an accurate reduced model with respect to the observables utilized in the identification of the model parameters. Effectiveness of the proposed approach is demonstrated on a numerical example dealing with the Spalart–Allmaras model for the simulation of turbulent channel flows. In particular, we illustrate how Bayesian model selection using the adapted surrogate model in place of solving the coupled nonlinear equations leads to the same quality of results while requiring fewer nonlinear PDE solves.

  1. Statistical methods for launch vehicle guidance, navigation, and control (GN&C) system design and analysis

    Science.gov (United States)

    Rose, Michael Benjamin

    A novel trajectory and attitude control and navigation analysis tool for powered ascent is developed. The tool is capable of rapid trade-space analysis and is designed to ultimately reduce turnaround time for launch vehicle design, mission planning, and redesign work. It is streamlined to quickly determine trajectory and attitude control dispersions, propellant dispersions, orbit insertion dispersions, and navigation errors and their sensitivities to sensor errors, actuator execution uncertainties, and random disturbances. The tool is developed by applying both Monte Carlo and linear covariance analysis techniques to a closed-loop, launch vehicle guidance, navigation, and control (GN&C) system. The nonlinear dynamics and flight GN&C software models of a closed-loop, six-degree-of-freedom (6-DOF), Monte Carlo simulation are formulated and developed. The nominal reference trajectory (NRT) for the proposed lunar ascent trajectory is defined and generated. The Monte Carlo truth models and GN&C algorithms are linearized about the NRT, the linear covariance equations are formulated, and the linear covariance simulation is developed. The performance of the launch vehicle GN&C system is evaluated using both Monte Carlo and linear covariance techniques and their trajectory and attitude control dispersion, propellant dispersion, orbit insertion dispersion, and navigation error results are validated and compared. Statistical results from linear covariance analysis are generally within 10% of Monte Carlo results, and in most cases the differences are less than 5%. This is an excellent result given the many complex nonlinearities that are embedded in the ascent GN&C problem. Moreover, the real value of this tool lies in its speed, where the linear covariance simulation is 1036.62 times faster than the Monte Carlo simulation. Although the application and results presented are for a lunar, single-stage-to-orbit (SSTO), ascent vehicle, the tools, techniques, and mathematical
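
    A scalar toy model conveys why the two techniques agree: propagating the covariance recursion P <- F P F' + Q analytically should match the dispersion of a Monte Carlo ensemble driven through the same linear dynamics. All numbers below are illustrative.

        # Toy comparison: linear covariance propagation vs. Monte Carlo dispersion
        # for x_{k+1} = F x_k + w_k (scalar, illustrative numbers only).
        import numpy as np

        F, Q, P0, steps = 0.98, 0.01, 1.0, 50
        P = P0
        for _ in range(steps):                    # covariance recursion P <- F P F' + Q
            P = F * P * F + Q

        rng = np.random.default_rng(1)
        x = rng.normal(0.0, np.sqrt(P0), 100_000)
        for _ in range(steps):                    # Monte Carlo truth-model dispersions
            x = F * x + rng.normal(0.0, np.sqrt(Q), x.size)

        print(np.sqrt(P), x.std())                # the two sigmas should agree closely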

  2. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model combines several distributions to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties which provide remarkable results. In addition, the Bayesian method also shows a consistency characteristic, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to an invalid result. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia. The results showed that there is a negative effect between rubber prices and stock market prices for all selected countries.
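
    A minimal sketch of the component-selection step, using scikit-learn's GaussianMixture and BIC on synthetic two-component data (the record's actual price data are not reproduced here):

        # Choosing the number of mixture components with BIC (synthetic data).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(-2.0, 0.5, 300),
                            rng.normal(1.5, 0.8, 300)]).reshape(-1, 1)

        bics = {k: GaussianMixture(n_components=k, random_state=0).fit(x).bic(x)
                for k in range(1, 5)}
        print(min(bics, key=bics.get), bics)      # k = 2 should score best here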

  3. Selected Remarks about Computer Processing in Terms of Flow Control and Statistical Mechanics

    Directory of Open Access Journals (Sweden)

    Dominik Strzałka

    2016-03-01

    Full Text Available Despite the fact that much has been said about processing in computer science, it seems that there is still much to do. A classical approach assumes that the computations done by computers are a kind of mathematical operation (calculations of function values) and have no special relation to energy transformation and flow. However, there is a possibility to get a new view on selected topics, and as a special case, the sorting problem is presented; we know many different sorting algorithms, including those that have complexity equal to O(n lg(n)), which means that this problem is algorithmically closed, but it is also possible to focus on the problem of sorting in terms of flow control, entropy and statistical mechanics. This is done in relation to the existing definitions of sorting, connections between sorting and ordering and some important aspects of computer processing understood as a flow that are not taken into account in many theoretical considerations in computer science. The proposed new view is an attempt to change the paradigm in the description of algorithms’ performance by computational complexity and processing, taking into account the existing references between the idea of Turing machines and their physical implementations. This proposal can be expressed as a physics of computer processing; a reference point to further analysis of algorithmic and interactive processing in computer systems.

  4. Statistical discrimination of steroid profiles in doping control with support vector machines.

    Science.gov (United States)

    Van Renterghem, Pieter; Sottas, Pierre-Edouard; Saugy, Martial; Van Eenoo, Peter

    2013-03-20

    Due to their performance enhancing properties, use of anabolic steroids (e.g. testosterone, nandrolone, etc.) is banned in elite sports. Therefore, doping control laboratories accredited by the World Anti-Doping Agency (WADA) screen among others for these prohibited substances in urine. It is particularly challenging to detect misuse with naturally occurring anabolic steroids such as testosterone (T), which is a popular ergogenic agent in sports and society. To screen for misuse with these compounds, drug testing laboratories monitor the urinary concentrations of endogenous steroid metabolites and their ratios, which constitute the steroid profile and compare them with reference ranges to detect unnaturally high values. However, the interpretation of the steroid profile is difficult due to large inter-individual variances, various confounding factors and different endogenous steroids marketed that influence the steroid profile in various ways. A support vector machine (SVM) algorithm was developed to statistically evaluate urinary steroid profiles composed of an extended range of steroid profile metabolites. This model makes the interpretation of the analytical data in the quest for deviating steroid profiles feasible and shows its versatility towards different kinds of misused endogenous steroids. The SVM model outperforms the current biomarkers with respect to detection sensitivity and accuracy, particularly when it is coupled to individual data as stored in the Athlete Biological Passport.
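
    A hedged sketch of the approach: a kernel SVM trained on steroid-profile features. The two features and all values below are invented stand-ins; a real model would use the extended metabolite panel described in the record.

        # Sketch of an SVM classifier on steroid-profile features (synthetic data;
        # real profiles use ratios such as T/E, but these numbers are made up).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        normal = rng.normal([4.0, 1.0], 0.5, (100, 2))    # [log concentration, ratio]
        doped = rng.normal([5.5, 4.0], 0.8, (100, 2))
        X = np.vstack([normal, doped])
        y = np.array([0] * 100 + [1] * 100)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)).fit(X, y)
        print(clf.predict_proba([[5.2, 3.5]]))            # suspicion score for a new sample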

  5. Use of graphical statistical process control tools to monitor and improve outcomes in cardiac surgery.

    Science.gov (United States)

    Smith, Ian R; Garlick, Bruce; Gardner, Michael A; Brighouse, Russell D; Foster, Kelley A; Rivers, John T

    2013-02-01

    Graphical Statistical Process Control (SPC) tools have been shown to promptly identify significant variations in clinical outcomes in a range of health care settings. We explored the application of these techniques to qualitatively inform the routine cardiac surgical morbidity and mortality (M&M) review process at a single site. Baseline clinical and procedural data relating to 4774 consecutive cardiac surgical procedures, performed between the 1st January 2003 and the 30th April 2011, were retrospectively evaluated. A range of appropriate performance measures and benchmarks were developed and evaluated using a combination of CUmulative SUM (CUSUM) charts, Exponentially Weighted Moving Average (EWMA) charts and Funnel Plots. Charts have been discussed at the unit's routine M&M meetings. Risk adjustment (RA) based on EuroSCORE has been incorporated into the charts to improve performance. Discrete and aggregated measures were monitored, including Blood Product/Reoperation, major acute post-procedural complications and Length of Stay/Readmission. These tools facilitate near "real-time" performance monitoring, allowing early detection of and intervention in altered performance. Careful interpretation of charts for group and individual operators has proven helpful in detecting and differentiating systemic vs. individual variation. Copyright © 2012 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
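
    For reference, a one-sided tabular CUSUM of the kind described can be implemented as below; the reference value k and decision limit h are common textbook choices, and the outcome data are synthetic.

        # Tabular CUSUM for detecting an upward shift in a monitored rate
        # (synthetic standardized outcomes; k and h are typical design choices).
        import numpy as np

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(0.0, 1.0, 60),      # in-control period
                            rng.normal(1.0, 1.0, 40)])     # shifted period
        k, h = 0.5, 4.0                                    # reference value, decision limit
        s, alarms = 0.0, []
        for i, xi in enumerate(x):
            s = max(0.0, s + xi - k)                       # one-sided upper CUSUM
            if s > h:
                alarms.append(i)
        print(alarms[:1])                                  # first signal after the shift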

  6. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling representing a parametric approach. The SL technique comprised a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.

  7. Feasibility study of using statistical process control to customized quality assurance in proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Rah, Jeong-Eun; Oh, Do Hoon [Department of Radiation Oncology, Myongji Hospital, Goyang 412-270 (Korea, Republic of); Shin, Dongho; Kim, Tae Hyun [Proton Therapy Center, National Cancer Center, Goyang 410-769 (Korea, Republic of); Kim, Gwe-Ya, E-mail: gweyakim@gmail.com [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, California 92093 (United States)

    2014-09-15

    Purpose: To evaluate and improve the reliability of proton quality assurance (QA) processes and to provide an optimal customized tolerance level using the statistical process control (SPC) methodology. Methods: The authors investigated the consistency check of dose per monitor unit (D/MU) and range in proton beams to see whether it was within the tolerance level of the daily QA process. This study analyzed the difference between the measured and calculated ranges along the central axis to improve the patient-specific QA process in proton beams by using process capability indices. Results: The authors established a customized tolerance level of ±2% for D/MU and ±0.5 mm for beam range in the daily proton QA process. In the authors’ analysis of the process capability indices, the patient-specific range measurements were capable of a specification limit of ±2% in clinical plans. Conclusions: SPC methodology is a useful tool for customizing optimal QA tolerance levels and improving the quality of proton machine maintenance, treatment delivery, and ultimately patient safety.
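
    The process capability indices referred to here can be computed directly; a minimal sketch against the ±2% specification, with synthetic D/MU deviations and an assumed normally distributed process:

        # Process capability indices against a +/-2% specification (synthetic data).
        import numpy as np

        dev = np.random.default_rng(3).normal(0.1, 0.5, 200)   # D/MU deviation in %
        usl, lsl = 2.0, -2.0
        mu, sigma = dev.mean(), dev.std(ddof=1)
        cp = (usl - lsl) / (6 * sigma)                         # potential capability
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)            # accounts for centering
        print(cp, cpk)       # values >= 1.33 are often read as "capable"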

  8. Estimation of the optimal statistical quality control sampling time intervals using a residual risk measure.

    Directory of Open Access Journals (Sweden)

    Aristides T Hatjimihail

    Full Text Available BACKGROUND: An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system and estimation of the risk caused by the measurement error. METHODOLOGY/PRINCIPAL FINDINGS: Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure, and the state mean time matrices. Optimal sampling time intervals can be defined as those minimizing a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. CONCLUSIONS/SIGNIFICANCE: It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. For the optimization, the reliability analysis of the analytical system and the risk analysis of the measurement error are needed.

  9. The role of induction and adjuvant chemotherapy in combination with concurrent chemoradiotherapy for nasopharyngeal cancer: a Bayesian network meta-analysis of published randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Yu HL

    2016-01-01

    Full Text Available Hongliang Yu,1,* Dayong Gu,1,* Xia He,1 Xianshu Gao,2 Xiuhua Bian1 1Department of Radiation Oncology, Jiangsu Cancer Hospital affiliated with Nanjing Medical University, Nanjing, 2Department of Radiation Oncology, Peking University First Hospital, Peking University, Beijing, People’s Republic of China *These authors contributed equally to this work Abstract: Whether the addition of induction chemotherapy (IC) or adjuvant chemotherapy (AC) to concurrent chemoradiotherapy (CCRT) is superior to CCRT alone for locally advanced nasopharyngeal cancer is unknown. A Bayesian network meta-analysis was performed to investigate the efficacy of CCRT, IC + CCRT, and CCRT + AC on locally advanced nasopharyngeal cancer. The overall survival (OS) with hazard ratios (HRs) and the locoregional recurrence rates (LRRs) and distant metastasis rates (DMRs) with risk ratios (RRs) were investigated. After a comprehensive database search, eleven studies involving 2,626 assigned patients were included in this network meta-analysis. Compared with CCRT alone, IC + CCRT resulted in no significant improvement in OS or LRR and a marginal improvement in DMR (OS: HR =0.67, 95% credible interval [CrI] 0.32–1.18; LRR: RR =1.79, 95% CrI 0.80–3.51; DMR: RR =1.79, 95% CrI 0.24–1.04), and CCRT + AC exhibited no beneficial effects on any of the endpoints of OS, LRR, or DMR (OS: HR =0.99, 95% CrI 0.64–1.43; LRR: RR =0.78, 95% CrI 0.43–1.32; DMR: RR =0.85, 95% CrI 0.57–1.24). In conclusion, for locally advanced nasopharyngeal cancer, no significant differences in the treatment efficacies of CCRT, IC + CCRT, and CCRT + AC were found, with the exception of a marginally significant improvement in distant control observed following IC + CCRT compared with CCRT alone. Keywords: concurrent chemotherapy, induction chemotherapy, adjuvant chemotherapy, radiotherapy, nasopharyngeal cancer, network meta-analysis

  10. Sub-thermal to super-thermal light statistics from a disordered lattice via deterministic control of excitation symmetry

    CERN Document Server

    Kondakci, H E; Abouraddy, A F; Christodoulides, D N; Saleh, B E A

    2016-01-01

    Monochromatic coherent light traversing a disordered photonic medium evolves into a random field whose statistics are dictated by the disorder level. Here we demonstrate experimentally that light statistics can be deterministically tuned in certain disordered lattices, even when the disorder level is held fixed, by controllably breaking the excitation symmetry of the lattice modes. We exploit a lattice endowed with disorder-immune chiral symmetry in which the eigenmodes come in skew-symmetric pairs. If a single lattice site is excited, a "photonic thermalization gap" emerges: the realm of sub-thermal light statistics is inaccessible regardless of the disorder level. However, by exciting two sites with a variable relative phase, as in a traditional two-path interferometer, the chiral symmetry is judiciously broken and interferometric control over the light statistics is exercised, spanning sub-thermal and super-thermal regimes. These results may help develop novel incoherent lighting sources from coherent lase...

  11. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, C.; Plant, N.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70-90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale. © 2010.

  12. Bayesian decision-theoretic group sequential clinical trial design based on a quadratic loss function: a frequentist evaluation.

    Science.gov (United States)

    Lewis, Roger J; Lipsky, Ari M; Berry, Donald A

    2007-01-01

    The decision to terminate a controlled clinical trial at the time of an interim analysis is perhaps best made by weighing the value of the likely additional information to be gained if further subjects are enrolled against the various costs of that further enrollment. The most commonly used statistical plans for interim analysis (eg, O'Brien-Fleming), however, are based on a frequentist approach that makes no such comparison. A two-armed Bayesian decision-theoretic clinical trial design is developed for a disease with two possible outcomes, incorporating a quadratic decision loss function and using backward induction to quantify the cost of future enrollment. Monte Carlo simulation is used to compare frequentist error rates and mean required sample sizes for these Bayesian designs with the two-tailed frequentist group-sequential designs of O'Brien-Fleming and Pocock. When the terminal decision loss function is chosen to yield typical frequentist error rates, the mean sample sizes required by the Bayesian designs are smaller than those of the corresponding O'Brien-Fleming frequentist designs, largely due to the more frequent interim analyses typically used with the Bayesian designs and the ability of the Bayesian designs to terminate early and conclude equivalence. Adding stochastic curtailment to the frequentist designs and using the same number of interim analyses results in largely equivalent trials. An example of a Bayesian design for the data safety monitoring of a clinical trial is given. Our design assumes independence of the probabilities of success in the two trial arms. Additionally, we have chosen non-informative priors and selected loss functions to produce trials with appealing frequentist error rates, rather than choosing priors that reflect realistic prior information and loss functions that reflect true costs. Our Bayesian designs allow interpretation of the final results along either Bayesian or frequentist lines. For the Bayesian, they minimize

  13. Bayesian analysis of deterministic and stochastic prisoner's dilemma games

    Directory of Open Access Journals (Sweden)

    Howard Kunreuther

    2009-08-01

    Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.

  14. A Bayesian approach to particle identification in ALICE

    CERN Document Server

    CERN. Geneva

    2016-01-01

    Among the LHC experiments, ALICE has unique particle identification (PID) capabilities exploiting different types of detectors. During Run 1, a Bayesian approach to PID was developed and intensively tested. It facilitates the combination of information from different sub-systems. The adopted methodology and formalism as well as the performance of the Bayesian PID approach for charged pions, kaons and protons in the central barrel of ALICE will be reviewed. Results are presented with PID performed via measurements of specific energy loss (dE/dx) and time-of-flight using information from the TPC and TOF detectors, respectively. Methods to extract priors from data and to compare PID efficiencies and misidentification probabilities in data and Monte Carlo using high-purity samples of identified particles will be presented. Bayesian PID results were found consistent with previous measurements published by ALICE. The Bayesian PID approach gives a higher signal-to-background ratio and a similar or larger statist...
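
    The core of such a Bayesian PID combination is a per-track application of Bayes' rule over species hypotheses. The sketch below uses invented priors and detector likelihoods purely to show the mechanics, not ALICE's actual response models.

        # Combining per-detector likelihoods with priors into species probabilities
        # (all numbers invented; real priors and likelihoods come from data and
        # detector response parametrizations).
        priors = {"pion": 0.80, "kaon": 0.12, "proton": 0.08}
        lik_tpc = {"pion": 0.30, "kaon": 0.90, "proton": 0.20}   # P(dE/dx | species)
        lik_tof = {"pion": 0.25, "kaon": 0.85, "proton": 0.30}   # P(time-of-flight | species)

        unnorm = {s: priors[s] * lik_tpc[s] * lik_tof[s] for s in priors}
        z = sum(unnorm.values())
        posterior = {s: p / z for s, p in unnorm.items()}
        print(max(posterior, key=posterior.get), posterior)      # kaon wins despite pion prior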

  15. Quantum-Like Representation of Non-Bayesian Inference

    Science.gov (United States)

    Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.

    2013-01-01

    This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. There are some experimental studies whose statistical data cannot be described by classical probability theory. The process of decision making generating these data cannot be reduced to the classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making were proposed. Our previous work represented in a natural way the classical Bayesian inference in the framework of quantum mechanics. By using this representation, in this paper, we discuss the non-Bayesian (irrational) inference that is biased by effects like quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of usual Bayesian inference.

  16. Robust Bayesian Regularized Estimation Based on t Regression Model

    Directory of Open Access Journals (Sweden)

    Zean Li

    2015-01-01

    Full Text Available The t distribution is a useful extension of the normal distribution that can be used for statistical modeling of data sets with heavy tails and provides robust estimation. In this paper, in view of the advantages of Bayesian analysis, we propose a new robust coefficient estimation and variable selection method based on Bayesian adaptive Lasso t regression. A Gibbs sampler is developed based on the Bayesian hierarchical model framework, where we treat the t distribution as a mixture of normal and gamma distributions and assign different penalization parameters to different regression coefficients. We also consider Bayesian t regression with adaptive group Lasso and obtain the Gibbs sampler from the posterior distributions. Both simulation studies and a real data example show that our method performs well compared with other existing methods when the error distribution has heavy tails and/or outliers.
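
    The normal-gamma mixture representation exploited by such Gibbs samplers is easy to verify numerically: drawing a gamma-distributed precision and then a normal reproduces the Student-t marginal. A small sketch (nu = 4 is an arbitrary choice):

        # The Student-t as a scale mixture of normals: lambda ~ Gamma(nu/2, rate nu/2),
        # x | lambda ~ N(0, 1/lambda)  =>  x ~ t with nu degrees of freedom.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        nu = 4.0
        lam = rng.gamma(nu / 2.0, 2.0 / nu, 100_000)       # mixing precisions (scale = 1/rate)
        x = rng.normal(0.0, 1.0 / np.sqrt(lam))
        print(stats.kstest(x, stats.t(df=nu).cdf).pvalue)  # large p: consistent with t_nu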

  17. Bayesian Case-deletion Model Complexity and Information Criterion.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Chen, Qingxia

    2014-10-01

    We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example.

  18. Bayesian multimodel inference for dose-response studies

    Science.gov (United States)

    Link, W.A.; Albers, P.H.

    2007-01-01

    Statistical inference in dose-response studies is model-based: The analyst posits a mathematical model of the relation between exposure and response, estimates parameters of the model, and reports conclusions conditional on the model. Such analyses rarely include any accounting for the uncertainties associated with model selection. The Bayesian inferential system provides a convenient framework for model selection and multimodel inference. In this paper we briefly describe the Bayesian paradigm and Bayesian multimodel inference. We then present a family of models for multinomial dose-response data and apply Bayesian multimodel inferential methods to the analysis of data on the reproductive success of American kestrels (Falco sparverius) exposed to various sublethal dietary concentrations of methylmercury.

  19. Toward evidence-based medical statistics : a Bayesian analysis of double-blind placebo-controlled antidepressant trials in the treatment of anxiety disorders

    NARCIS (Netherlands)

    Monden, Rei; de Vos, Stijn; Morey, Richard; Wagenmakers, Eric-Jan; de Jonge, Peter; Roest, Annelieke M

    2016-01-01

    The Food and Drug Administration (FDA) uses a p < 0.05 null-hypothesis significance testing framework to evaluate "substantial evidence" for drug efficacy. This framework only allows dichotomous conclusions and does not quantify the strength of evidence supporting efficacy. The efficacy of FDA-appro

  20. A Bayesian method for pulsar template generation

    CERN Document Server

    Imgrund, M; Kramer, M; Lesch, H

    2015-01-01

    Extracting Times of Arrival from pulsar radio signals depends on the knowledge of the pulsar's pulse profile and how this template is generated. We examine pulsar template generation with Bayesian methods. We contrast the classical generation mechanism of averaging intensity profiles with a new approach based on Bayesian inference. We introduce the Bayesian measurement model imposed and derive the algorithm to reconstruct a "statistical template" out of noisy data. The properties of these "statistical templates" are analysed with simulated and real measurement data from PSR B1133+16. We explain how to put this new form of template to use in analysing secondary parameters of interest and give various examples: We implement a nonlinear filter for determining ToAs of pulsars. Applying this method to data from PSR J1713+0747 we derive ToAs self consistently, meaning all epochs were timed and we used the same epochs for template generation. While the average template contains fluctuations and noise as unavoida...

  1. Probability, statistics, and computational science.

    Science.gov (United States)

    Beerenwinkel, Niko; Siebourg, Juliane

    2012-01-01

    In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.

  2. Bayesian community detection

    DEFF Research Database (Denmark)

    Mørup, Morten; Schmidt, Mikkel N

    2012-01-01

    Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not enforce this intuitive notion of communities. We formulate a nonparametric Bayesian model...... for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities...... consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled....

  3. Advances in statistical monitoring of complex multivariate processes with applications in industrial process control

    CERN Document Server

    Kruger, Uwe

    2012-01-01

    The development and application of multivariate statistical techniques in process monitoring has gained substantial interest over the past two decades in academia and industry alike.  Initially developed for monitoring and fault diagnosis in complex systems, such techniques have been refined and applied in various engineering areas, for example mechanical and manufacturing, chemical, electrical and electronic, and power engineering.  The recipe for the tremendous interest in multivariate statistical techniques lies in their simplicity and adaptability for developing monitoring applica

  4. Point and Interval Estimation on the Degree and the Angle of Polarization. A Bayesian approach

    CERN Document Server

    Maier, Daniel; Santangelo, Andrea

    2014-01-01

    Linear polarization measurements provide access to two quantities, the degree (DOP) and the angle of polarization (AOP). The aim of this work is to give a complete and concise overview of how to analyze polarimetric measurements. We review interval estimations for the DOP with a frequentist and a Bayesian approach. Point estimations for the DOP and interval estimations for the AOP are further investigated with a Bayesian approach to match observational needs. Point and interval estimations are calculated numerically for frequentist and Bayesian statistics. Monte Carlo simulations are performed to clarify the meaning of the calculations. Under observational conditions, the true DOP and AOP are unknown, so that classical statistical considerations - based on true values - are not directly usable. In contrast, Bayesian statistics handles unknown true values very well and produces point and interval estimations for DOP and AOP, directly. Using a Bayesian approach, we show how to choose DOP point estimations based...

  5. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...... in a Matlab toolbox, is demonstrated for non-negative decompositions and compared with non-negative matrix factorization....

  6. On Bayesian analysis of on–off measurements

    Energy Technology Data Exchange (ETDEWEB)

    Nosek, Dalibor, E-mail: nosek@ipnp.troja.mff.cuni.cz [Charles University, Faculty of Mathematics and Physics, Prague (Czech Republic); Nosková, Jana [Czech Technical University, Faculty of Civil Engineering, Prague (Czech Republic)

    2016-06-01

    We propose an analytical solution to the on–off problem within the framework of Bayesian statistics. Both the statistical significance for the discovery of new phenomena and credible intervals on model parameters are presented in a consistent way. We use a large enough family of prior distributions of relevant parameters. The proposed analysis is designed to provide Bayesian solutions that can be used for any number of observed on–off events, including zero. The procedure is checked using Monte Carlo simulations. The usefulness of the method is demonstrated on examples from γ-ray astronomy.
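
    For a concrete, non-analytical counterpart to this record's solution, the on-off posterior can be evaluated on a grid: on counts are Poisson in s + b, off counts Poisson in alpha*b, and the background b is marginalized. The counts, exposure ratio, and flat priors below are all assumptions for the example.

        # Grid posterior for a source rate s given on/off counts (toy numbers):
        # on ~ Poisson(s + b), off ~ Poisson(alpha * b), flat priors on s and b.
        import numpy as np
        from scipy import stats

        n_on, n_off, alpha = 12, 15, 3.0        # alpha = t_off / t_on exposure ratio
        s = np.linspace(0, 20, 401)
        b = np.linspace(0.01, 20, 400)
        S, B = np.meshgrid(s, b, indexing="ij")
        like = stats.poisson.pmf(n_on, S + B) * stats.poisson.pmf(n_off, alpha * B)
        post_s = like.sum(axis=1)               # marginalize the background
        post_s /= post_s.sum() * (s[1] - s[0])  # normalize to a density in s
        print(s[post_s.argmax()])               # posterior mode of the signal rate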

  7. On Bayesian analysis of on-off measurements

    CERN Document Server

    Nosek, Dalibor

    2016-01-01

    We propose an analytical solution to the on-off problem within the framework of Bayesian statistics. Both the statistical significance for the discovery of new phenomena and credible intervals on model parameters are presented in a consistent way. We use a large enough family of prior distributions of relevant parameters. The proposed analysis is designed to provide Bayesian solutions that can be used for any number of observed on-off events, including zero. The procedure is checked using Monte Carlo simulations. The usefulness of the method is demonstrated on examples from gamma-ray astronomy.

  8. Bayesian parameter estimation for effective field theories

    CERN Document Server

    Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A

    2015-01-01

    We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools are developed that analyze the fit and ensure that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  9. Bayesian data analysis tools for atomic physics

    CERN Document Server

    Trassinelli, Martino

    2016-01-01

    We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from basic rules of probability, we present Bayes' theorem and its applications. In particular we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to determine uniquely the spectrum modeling. For these two studies, we implement the program Nested fit to calculate the different probability distrib...

  10. Quantum-like Representation of Bayesian Updating

    Science.gov (United States)

    Asano, Masanari; Ohya, Masanori; Tanaka, Yoshiharu; Khrennikov, Andrei; Basieva, Irina

    2011-03-01

    Recently, applications of quantum mechanics to cognitive psychology have been discussed, see [1]-[11]. It was known that statistical data obtained in some experiments of cognitive psychology cannot be described by a classical probability model (Kolmogorov's model) [12]-[15]. Quantum probability is one of the most advanced mathematical models for non-classical probability. In the paper [11], we proposed a quantum-like model describing the decision-making process in a two-player game, where we used the generalized quantum formalism based on lifting of density operators [16]. In this paper, we discuss the quantum-like representation of Bayesian inference, which has been used to calculate probabilities for decision making under uncertainty. The uncertainty is described in the form of quantum superposition, and Bayesian updating is explained as a reduction of state by quantum measurement.

  11. Bayesian methods of astronomical source extraction

    CERN Document Server

    Oliver, R S S S

    2005-01-01

    We present two new source extraction methods, based on the Bayesian statistical formalism. The first is a source detection filter, able to simultaneously detect point sources and estimate the image background. The second is an advanced photometry technique, which measures the flux, position (to sub-pixel accuracy), local background and point spread function of a previously-detected source. In both cases, we use the Bayesian Information Criterion (BIC) to compare the relative likelihood of different models. We apply the source detection filter to simulated Herschel-SPIRE data and show the filter's ability to both detect point sources and also simultaneously estimate the image background. We use the photometry method to analyse a simple simulated image containing a source of unknown flux, position and point spread function; we not only accurately measure these parameters, but also determine their uncertainties (using Markov-Chain Monte Carlo sampling). We also characterise the nature of the source (for example,...

  12. BONNSAI: correlated stellar observables in Bayesian methods

    CERN Document Server

    Schneider, F R N; Fossati, L; Langer, N; de Koter, A

    2016-01-01

    In an era of large spectroscopic surveys of stars and big data, sophisticated statistical methods become more and more important in order to infer fundamental stellar parameters such as mass and age. Bayesian techniques are powerful methods because they can match all available observables simultaneously to stellar models while taking prior knowledge properly into account. However, in most cases it is assumed that observables are uncorrelated, which is generally not the case. Here, we include correlations in the Bayesian code BONNSAI by incorporating the covariance matrix in the likelihood function. We derive a parametrisation of the covariance matrix that, in addition to classical uncertainties, only requires the specification of a correlation parameter that describes how observables co-vary. Our correlation parameter depends purely on the method with which observables have been determined and can be analytically derived in some cases. This approach therefore has the advantage that correlations can be accounte...

  13. Bayesian parameter estimation for effective field theories

    Science.gov (United States)

    Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.

    2016-07-01

    We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.

  14. Bayesian analysis of cosmic structures

    CERN Document Server

    Kitaura, Francisco-Shu

    2011-01-01

    We revise the Bayesian inference steps required to analyse the cosmological large-scale structure. Here we place special emphasis on the complications which arise due to the non-Gaussian character of the galaxy and matter distribution. In particular we investigate the advantages and limitations of the Poisson-lognormal model and discuss how to extend this work. With the lognormal prior using the Hamiltonian sampling technique and on scales of about 4 h^{-1} Mpc we find that the over-dense regions are excellently reconstructed; however, under-dense regions (void statistics) are quantitatively poorly recovered. Contrary to the maximum a posteriori (MAP) solution, which was shown to over-estimate the density in the under-dense regions, we obtain lower densities than in N-body simulations. This is due to the fact that the MAP solution is conservative whereas the full posterior yields samples which are consistent with the prior statistics. The lognormal prior is not able to capture the full non-linear regime at scales ...

  15. Bayesian Methods for Analyzing Structural Equation Models with Covariates, Interaction, and Quadratic Latent Variables

    Science.gov (United States)

    Lee, Sik-Yum; Song, Xin-Yuan; Tang, Nian-Sheng

    2007-01-01

    The analysis of interaction among latent variables has received much attention. This article introduces a Bayesian approach to analyze a general structural equation model that accommodates the general nonlinear terms of latent variables and covariates. This approach produces a Bayesian estimate that has the same statistical optimal properties as a…

  16. Statistically based sustainable re-design of stormwater overflow control systems in urban catchments

    Science.gov (United States)

    Ganora, Daniele; Isacco, Silvia; Claps, Pierluigi

    2017-04-01

    Control and reduction of pollution from stormwater overflow is a major concern for municipalities in managing the quality of the receiving water bodies according to the Water Framework Directive 2000/60/EC. In this regard, assessment studies of the potential pollution load from sewer networks recognize the need for adaptation and upgrading of existing drainage systems, which can be achieved with traditional water works (detention tanks, increases in wastewater treatment plant capacity, etc.) or nature-based solutions (constructed wetlands, restored floodplains, etc.), sometimes used in combination. Nature-based solutions have recently received considerable attention as they are able to enhance urban and degraded environments while being, at the same time, more resilient and adaptable to climatic and anthropic changes than most traditional engineering works. On the other hand, restoration of the urban environment using natural absorbing surfaces requires diffuse interventions, high costs and a considerable amount of time. In this work we investigate how simple, economically sustainable and quick solutions to the problem at hand can be obtained by changing the management rules when pumping stations play a role in sewer systems. In particular, we provide a statistically based framework to be used in the calibration of the management rules, aimed at improving the quality of overflows from sewer systems. Typical pumping rules favor a massive delivery of stormwater volumes to the wastewater treatment plant, requiring large storage tanks in the sewer network and heavy pumping power and reducing the efficiency of the treatment plant due to pollutant dilution. In this study we show that it is possible to optimize the pumping rule in order to reduce pumped volumes to the plant (thus saving energy) while simultaneously keeping pollutant concentration high. On the other hand, larger low-concentration overflow volumes are released outside the sewer network with respect to the standard

  17. On an Approach to Bayesian Sample Sizing in Clinical Trials

    CERN Document Server

    Muirhead, Robb J

    2012-01-01

    This paper explores an approach to Bayesian sample size determination in clinical trials. The approach falls into the category of what is often called "proper Bayesian", in that it does not mix frequentist concepts with Bayesian ones. A criterion for a "successful trial" is defined in terms of a posterior probability, its probability is assessed using the marginal distribution of the data, and this probability forms the basis for choosing sample sizes. We illustrate with a standard problem in clinical trials, that of establishing superiority of a new drug over a control.

  18. Economic Statistical Design of integrated X-bar-S control chart with Preventive Maintenance and general failure distribution.

    Science.gov (United States)

    Caballero Morales, Santiago Omar

    2013-01-01

    The application of Preventive Maintenance (PM) and Statistical Process Control (SPC) is an important practice to achieve high product quality, a small frequency of failures, and cost reduction in a production process. However, some aspects of their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies on the design of control charts consider just the economic aspect, while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant reductions in costs when PM is performed on processes with high failure rates, as well as reductions in the sampling frequency of units for testing under SPC.
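
    For context, the X-bar and S limits that such a design optimizes are computed as below; the subgroup data are synthetic, and c4 = 0.9400 is the standard unbiasing constant for subgroups of size five.

        # X-bar and S chart limits for subgroups of size n = 5 (synthetic data),
        # using the standard c4 constant for an unbiased sigma estimate.
        import numpy as np

        rng = np.random.default_rng(5)
        sub = rng.normal(10.0, 0.2, (25, 5))           # 25 subgroups of 5 measurements
        xbar, s = sub.mean(axis=1), sub.std(axis=1, ddof=1)
        c4, n = 0.9400, 5                              # c4 for n = 5
        x_cl, s_bar = xbar.mean(), s.mean()
        x_ucl = x_cl + 3 * s_bar / (c4 * np.sqrt(n))   # X-bar chart limits
        x_lcl = x_cl - 3 * s_bar / (c4 * np.sqrt(n))
        s_ucl = s_bar + 3 * (s_bar / c4) * np.sqrt(1 - c4**2)   # S chart limits
        s_lcl = max(s_bar - 3 * (s_bar / c4) * np.sqrt(1 - c4**2), 0.0)
        print((x_lcl, x_cl, x_ucl), (s_lcl, s_ucl))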

  19. Using Statistical Process Control Charts to Identify the Steroids Era in Major League Baseball: An Educational Exercise

    Science.gov (United States)

    Hill, Stephen E.; Schvaneveldt, Shane J.

    2011-01-01

    This article presents an educational exercise in which statistical process control charts are constructed and used to identify the Steroids Era in American professional baseball. During this period (roughly 1993 until the present), numerous baseball players were alleged or proven to have used banned, performance-enhancing drugs. Also observed…

  20. Estimating the Time to Benefit for Preventive Drugs with the Statistical Process Control Method : An Example with Alendronate

    NARCIS (Netherlands)

    van de Glind, Esther M. M.; Willems, Hanna C.; Eslami, Saeid; Abu-Hanna, Ameen; Lems, Willem F.; Hooft, Lotty; de Rooij, Sophia E.; Black, Dennis M.; van Munster, Barbara C.

    2016-01-01

    For physicians dealing with patients with a limited life expectancy, knowing the time to benefit (TTB) of preventive medication is essential to support treatment decisions. The aim of this study was to investigate the usefulness of statistical process control (SPC) for determining the TTB in relatio
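
    A heavily simplified illustration of the general SPC-for-TTB idea (invented event counts and a 2-sigma rule; not the study's actual alendronate data or charting method): monitor the cumulative risk difference between arms and report the first interval at which the lower control limit excludes zero.

        # Toy time-to-benefit sketch: signal when the cumulative risk difference
        # between control and treatment exceeds its 2-sigma limit. Numbers invented.
        import numpy as np

        n_t = n_c = 1000                           # patients per arm (hypothetical)
        events_t = np.array([3, 4, 5, 4, 3, 4])    # events per interval, treatment
        events_c = np.array([4, 6, 8, 9, 10, 11])  # events per interval, control

        for t in range(1, len(events_t) + 1):
            p_t = events_t[:t].sum() / n_t
            p_c = events_c[:t].sum() / n_c
            se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
            diff = p_c - p_t
            if diff - 2 * se > 0:                  # lower 2-sigma limit above zero
                print(f"TTB signal at interval {t}: diff={diff:.4f} (+/- {2*se:.4f})")
                break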

  1. Fieldcrest Cannon, Inc. Advanced Technical Preparation. Statistical Process Control (SPC). PRE-SPC 11: SPC & Graphs. Instructor Book.

    Science.gov (United States)

    Averitt, Sallie D.

    This instructor guide, which was developed for use in a manufacturing firm's advanced technical preparation program, contains the materials required to present a learning module that is designed to prepare trainees for the program's statistical process control module by improving their basic math skills in working with line graphs and teaching…

  2. Validating an automated outcomes surveillance application using data from a terminated randomized, controlled trial (OPUS [TIMI-16]).

    Science.gov (United States)

    Matheny, Michael E; Morrow, David A; Ohno-Machado, Lucila; Cannon, Christopher P; Resnic, Frederic S

    2007-10-11

    We sought to validate an automated outcomes surveillance system (DELTA) using OPUS (TIMI-16), a multi-center randomized, controlled trial that was stopped early due to elevated mortality in one of the two intervention arms. Methodologies that were incorporated into the application (Statistical Process Control [SPC] and Bayesian Updating Statistics [BUS]) were compared with standard Data Safety Monitoring Board (DSMB) protocols.
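
    A toy sketch of what Bayesian-updating-style monitoring can look like (a plain Beta-Binomial comparison with invented counts; DELTA's actual BUS implementation may differ): after each data batch, recompute the posterior probability that mortality is higher in one arm, and stop when a boundary is crossed.

        # Beta-Binomial safety monitoring sketch with invented batch data.
        import numpy as np

        rng = np.random.default_rng(42)
        a_events = a_n = b_events = b_n = 0
        batches = [(6, 100, 2, 100), (7, 100, 3, 100), (9, 100, 2, 100)]

        for ea, na, eb, nb in batches:   # (events_A, n_A, events_B, n_B) per batch
            a_events += ea; a_n += na; b_events += eb; b_n += nb
            post_a = rng.beta(1 + a_events, 1 + a_n - a_events, 100_000)
            post_b = rng.beta(1 + b_events, 1 + b_n - b_events, 100_000)
            p_harm = (post_a > post_b).mean()
            print(f"after n={a_n}/arm: P(rate_A > rate_B | data) = {p_harm:.3f}")
            if p_harm > 0.99:
                print("stopping boundary crossed")
                break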

  3. A Bayesian formulation of seismic fragility analysis of safety related equipment

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Z-L.; Pandey, M.; Xie, W-C., E-mail: z268wang@uwaterloo.ca, E-mail: mdpandey@uwaterloo.ca, E-mail: xie@uwaterloo.ca [Univ. of Waterloo, Ontario (Canada)

    2013-07-01

    A Bayesian approach to seismic fragility analysis of safety-related equipment is formulated. Rather than treating the two sources of uncertainty in the parameter estimation separately, in two steps, as in classical statistics, this article advocates a Bayesian hierarchical model for interpreting and combining the various uncertainties more clearly. In addition, with the availability of additional earthquake experience data and shaking table test results, a Bayesian approach to updating the fragility model of safety-related equipment is formulated by incorporating acquired failure and survivor evidence. Numerical results show the significance of the Bayesian approach in fragility analysis. (author)
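
    To make the updating step concrete, here is a hedged sketch that fits a standard lognormal fragility curve P(fail | a) = Phi(ln(a/Am)/beta) to binary pass/fail evidence on a parameter grid; the evidence values and flat prior are invented, and the paper's hierarchical structure is not reproduced.

        # Grid-based Bayesian update of a lognormal fragility curve from
        # invented pass/fail evidence (illustrative only).
        import numpy as np
        from scipy.stats import norm

        evidence = [(0.2, 0), (0.4, 0), (0.5, 0), (0.6, 1), (0.7, 1), (0.8, 1)]  # (PGA in g, failed?)

        Am = np.linspace(0.2, 1.5, 200)            # median capacity grid
        beta = np.linspace(0.1, 1.0, 150)          # lognormal std-dev grid
        AM, B = np.meshgrid(Am, beta, indexing="ij")

        log_post = np.zeros_like(AM)               # flat prior over the grid
        for a, failed in evidence:
            p = np.clip(norm.cdf(np.log(a / AM) / B), 1e-12, 1 - 1e-12)
            log_post += np.log(p if failed else 1.0 - p)

        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        i, j = np.unravel_index(post.argmax(), post.shape)
        print(f"posterior mode: Am ~ {Am[i]:.2f} g, beta ~ {beta[j]:.2f}")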

  4. Applying Statistical Design to Control the Risk of Over-Design with Stochastic Simulation

    Directory of Open Access Journals (Sweden)

    Yi Wu

    2010-02-01

    Full Text Available By comparing a hard real-time system and a soft real-time system, this article identifies the risk of over-design in soft real-time system design. To deal with this risk, a novel concept of statistical design is proposed. Statistical design is the process of accurately accounting for and mitigating the effects of variation in part geometry and other environmental conditions, while at the same time optimizing a target performance factor. However, statistical design can be a very difficult and complex task when classical mathematical methods are used. Thus, a simulation methodology for optimizing the design is proposed in order to bridge the gap between real-time analysis and optimization for robust and reliable system design.

  5. Bayesian depth estimation from monocular natural images.

    Science.gov (United States)

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
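
    The overall pipeline shape can be sketched in a few lines (all dimensions and numbers below are invented; this is not the authors' trained model): a dictionary of canonical depth patterns, a Gaussian likelihood linking image features to each pattern, and a posterior-weighted depth estimate.

        # Conceptual sketch: Bayes rule over a dictionary of depth patterns.
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(0)
        K, d_feat, d_patch = 8, 5, 16                    # patterns, feature dim, patch size

        depth_dict = rng.uniform(1, 10, (K, d_patch))    # canonical depth patterns (toy)
        feat_mean = rng.normal(size=(K, d_feat))         # per-pattern feature means (toy)
        feat_cov = np.eye(d_feat)                        # shared covariance (toy)
        prior = np.full(K, 1.0 / K)                      # pattern prior

        x = rng.normal(size=d_feat)                      # NSS features of one image patch
        lik = np.array([multivariate_normal.pdf(x, feat_mean[k], feat_cov)
                        for k in range(K)])
        post = prior * lik
        post /= post.sum()                               # posterior over depth patterns
        depth_hat = post @ depth_dict                    # posterior-mean depth patch
        print(depth_hat.round(2))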

  6. Bayesian methods for the design and analysis of noninferiority trials.

    Science.gov (United States)

    Gamalo-Siebers, Margaret; Gao, Aijun; Lakshminarayanan, Mani; Liu, Guanghan; Natanegara, Fanni; Railkar, Radha; Schmidli, Heinz; Song, Guochen

    2016-01-01

    The gold standard for evaluating treatment efficacy of a medical product is a placebo-controlled trial. However, when the use of placebo is considered to be unethical or impractical, a viable alternative for evaluating treatment efficacy is through a noninferiority (NI) study where a test treatment is compared to an active control treatment. The minimal objective of such a study is to determine whether the test treatment is superior to placebo. An assumption is made that if the active control treatment remains efficacious, as was observed when it was compared against placebo, then a test treatment that has comparable efficacy with the active control, within a certain range, must also be superior to placebo. Because of this assumption, the design, implementation, and analysis of NI trials present challenges for sponsors and regulators. In designing and analyzing NI trials, substantial historical data are often required on the active control treatment and placebo. Bayesian approaches provide a natural framework for synthesizing the historical data in the form of prior distributions that can effectively be used in design and analysis of a NI clinical trial. Despite a flurry of recent research activities in the area of Bayesian approaches in medical product development, there are still substantial gaps in recognition and acceptance of Bayesian approaches in NI trial design and analysis. The Bayesian Scientific Working Group of the Drug Information Association provides a coordinated effort to target the education and implementation issues on Bayesian approaches for NI trials. In this article, we provide a review of both frequentist and Bayesian approaches in NI trials, and elaborate on the implementation of two common Bayesian methods: the hierarchical prior method and the meta-analytic-predictive approach. Simulations are conducted to investigate the properties of the Bayesian methods, and some real clinical trial examples are presented for illustration.
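
    As a minimal illustration of the basic Bayesian NI computation (normal approximation, flat prior, hypothetical rates and margin; the hierarchical and meta-analytic-predictive priors discussed in the article are not shown), one reports the posterior probability that the treatment difference stays above the negative margin.

        # Posterior probability of noninferiority, normal approximation (toy numbers).
        import numpy as np
        from scipy.stats import norm

        margin = 0.10                    # prespecified NI margin (hypothetical)
        p_t, n_t = 0.62, 300             # test arm response rate and size (hypothetical)
        p_c, n_c = 0.65, 300             # active-control arm (hypothetical)

        diff = p_t - p_c                 # flat-prior posterior for the difference
        sd = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
        p_ni = 1 - norm.cdf(-margin, loc=diff, scale=sd)
        print(f"P(noninferior | data) = {p_ni:.3f}")   # claim NI if, e.g., > 0.975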

  7. Solving flexible job-shop scheduling problems with Bayesian statistical inference-based estimation of distribution algorithm

    Institute of Scientific and Technical Information of China (English)

    何小娟; 曾建潮

    2012-01-01

    A new estimation of distribution algorithm for flexible job-shop scheduling problems, based on Bayesian statistical inference theory, is proposed. First, a continually updated prior probability model is built from the operation permutations of the elite solutions found during evolution; a conditional probability model is then built from the frequencies with which adjacent operations appear. Combining the two models through Bayes' formula yields a posterior probability model that synthesizes the continually updated elite information and the adjacency frequency information, and thus better guides the generation of new populations. Simulation results show that the proposed algorithm has good search ability.
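
    A toy sketch of the core idea, reduced to plain permutations (real FJSP chromosomes also encode machine assignments; the elite set and smoothing below are illustrative): a positional frequency model acts as the prior, an adjacency frequency model as the conditional, and their product scores candidates when sampling new sequences.

        # Estimation-of-distribution sampling for permutations, combining a
        # positional model with an adjacency model (toy stand-in for FJSP).
        import numpy as np

        rng = np.random.default_rng(3)
        n = 6
        elites = np.array([rng.permutation(n) for _ in range(20)])  # stand-in elites

        pos = np.ones((n, n))   # pos[i, j]: counts of job j at position i (+1 smoothing)
        adj = np.ones((n, n))   # adj[a, b]: counts of job b right after job a
        for e in elites:
            for i, j in enumerate(e):
                pos[i, j] += 1
            for a, b in zip(e, e[1:]):
                adj[a, b] += 1

        def sample():
            remaining, seq = set(range(n)), []
            for i in range(n):
                cand = sorted(remaining)
                w = pos[i, cand] * (adj[seq[-1], cand] if seq else 1.0)  # combined score
                j = rng.choice(cand, p=w / w.sum())
                seq.append(j); remaining.discard(j)
            return seq

        print(sample())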

  8. Prediction of lacking control power in power plants using statistical models

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Mataji, B.; Stoustrup, Jakob

    2007-01-01

    errors; the second uses operating-point-dependent statistics of prediction errors. Using these methods on the previously mentioned case, it can be concluded that the second method can be used to predict the power plant performance, while the first method has problems predicting the uncertain performance...

  9. Robust Medical Test Evaluation Using Flexible Bayesian Semiparametric Regression Models

    Directory of Open Access Journals (Sweden)

    Adam J. Branscum

    2013-01-01

    Full Text Available The application of Bayesian methods is increasing in modern epidemiology. Although parametric Bayesian analysis has penetrated the population health sciences, flexible nonparametric Bayesian methods have received less attention. A goal in nonparametric Bayesian analysis is to estimate unknown functions (e.g., density or distribution functions) rather than scalar parameters (e.g., means or proportions). For instance, ROC curves are obtained from the distribution functions corresponding to continuous biomarker data taken from healthy and diseased populations. Standard parametric approaches to Bayesian analysis involve distributions with a small number of parameters, where the prior specification is relatively straightforward. In the nonparametric Bayesian case, the prior is placed on an infinite-dimensional space of all distributions, which requires special methods. A popular approach to nonparametric Bayesian analysis that involves Polya tree prior distributions is described. We provide example code to illustrate how models that contain Polya tree priors can be fit using SAS software. The methods are used to evaluate the covariate-specific accuracy of the biomarker, soluble epidermal growth factor receptor, for discerning lung cancer cases from controls using a flexible ROC regression modeling framework. The application highlights the usefulness of flexible models over a standard parametric method for estimating ROC curves.
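
    The abstract's starting point, the link between distribution functions and ROC curves, can be shown empirically in a few lines (toy normal biomarker data; the Polya tree machinery and the SAS code the paper provides are not reproduced here).

        # Empirical ROC/AUC from biomarker samples in controls and cases.
        import numpy as np

        rng = np.random.default_rng(7)
        healthy = rng.normal(0.0, 1.0, 200)       # biomarker values, controls (toy)
        diseased = rng.normal(1.2, 1.0, 150)      # biomarker values, cases (toy)

        thresholds = np.sort(np.concatenate([healthy, diseased]))[::-1]
        tpr = np.array([(diseased > t).mean() for t in thresholds])  # sensitivity
        fpr = np.array([(healthy > t).mean() for t in thresholds])   # 1 - specificity
        auc = (diseased[:, None] > healthy[None, :]).mean()          # P(case > control)
        print(f"empirical AUC = {auc:.3f}; ROC has {len(thresholds)} operating points")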

  10. Statistical Methods for Astronomy

    CERN Document Server

    Feigelson, Eric D

    2012-01-01

    This review outlines concepts of mathematical statistics, elements of probability theory, hypothesis tests and point estimation for use in the analysis of modern astronomical data. Least squares, maximum likelihood, and Bayesian approaches to statistical inference are treated. Resampling methods, particularly the bootstrap, provide valuable procedures when the distribution functions of statistics are not known. Several approaches to model selection and goodness of fit are considered. Applied statistics relevant to astronomical research are briefly discussed: nonparametric methods for use when little is known about the behavior of the astronomical populations or processes; data smoothing with kernel density estimation and nonparametric regression; unsupervised clustering and supervised classification procedures for multivariate problems; survival analysis for astronomical datasets with nondetections; time- and frequency-domain time series analysis for light curves; and spatial statistics to interpret the spati...
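
    As a small example of one surveyed resampling method, the sketch below computes a percentile bootstrap confidence interval for a median, which is useful exactly when the statistic's sampling distribution is unknown (toy skewed data).

        # Percentile bootstrap confidence interval for a median.
        import numpy as np

        rng = np.random.default_rng(11)
        flux = rng.lognormal(mean=1.0, sigma=0.8, size=60)   # skewed toy "fluxes"

        boot = np.array([np.median(rng.choice(flux, flux.size, replace=True))
                         for _ in range(5000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"median = {np.median(flux):.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")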

  11. The significance test controversy revisited the fiducial Bayesian alternative

    CERN Document Server

    Lecoutre, Bruno

    2014-01-01

    The purpose of this book is not only to revisit the "significance test controversy," but also to provide a conceptually sounder alternative. As such, it presents a Bayesian framework for a new approach to analyzing and interpreting experimental data. It also prepares students and researchers for reporting on experimental results. Normative aspects: The main views of statistical tests are revisited and the philosophies of Fisher, Neyman-Pearson and Jeffreys are discussed in detail. Descriptive aspects: The misuses of Null Hypothesis Significance Tests are reconsidered in light of Jeffreys' Bayesian conceptions concerning the role of statistical inference in experimental investigations. Prescriptive aspects: The current effect size and confidence interval reporting practices are presented and seriously questioned. Methodological aspects are carefully discussed and fiducial Bayesian methods are proposed as a more suitable alternative for reporting on experimental results. In closing, basic routine procedures...

  12. Spatial and spatio-temporal bayesian models with R - INLA

    CERN Document Server

    Blangiardo, Marta

    2015-01-01

    Contents: Dedication; Preface; 1 Introduction (1.1 Why spatial and spatio-temporal statistics?; 1.2 Why do we use Bayesian methods for modelling spatial and spatio-temporal structures?; 1.3 Why INLA?; 1.4 Datasets); 2 Introduction to R (2.1 The R language; 2.2 R objects; 2.3 Data and session management; 2.4 Packages; 2.5 Programming in R; 2.6 Basic statistical analysis with R); 3 Introduction to Bayesian Methods (3.1 Bayesian Philosophy; 3.2 Basic Probability Elements; 3.3 Bayes Theorem; 3.4 Prior and Posterior Distributions; 3.5 Working with the Posterior Distribution; 3.6 Choosing the Prior Distr...)

  13. The subjectivity of scientists and the Bayesian approach

    CERN Document Server

    Press, James S

    2016-01-01

    "Press and Tanur argue that subjectivity has not only played a significant role in the advancement of science but that science will advance more rapidly if the modern methods of Bayesian statistical analysis replace some of the more classical twentieth-century methods." — SciTech Book News. "An insightful work." ― Choice. "Compilation of interesting popular problems … this book is fascinating." — Short Book Reviews, International Statistical Institute. Subjectivity ― including intuition, hunches, and personal beliefs ― has played a key role in scientific discovery. This intriguing book illustrates subjective influences on scientific progress with historical accounts and biographical sketches of more than a dozen luminaries, including Aristotle, Galileo, Newton, Darwin, Pasteur, Freud, Einstein, Margaret Mead, and others. The treatment also offers a detailed examination of the modern Bayesian approach to data analysis, with references to the Bayesian theoretical and applied literature. Suitable for...

  14. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  15. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  16. BNFinder2: Faster Bayesian network learning and Bayesian classification.

    Science.gov (United States)

    Dojer, Norbert; Bednarz, Pawel; Podsiadlo, Agnieszka; Wilczynski, Bartek

    2013-08-15

    Bayesian Networks (BNs) are versatile probabilistic models applicable to many different biological phenomena. In biological applications the structure of the network is usually unknown and needs to be inferred from experimental data. BNFinder is a fast software implementation of an exact algorithm for finding the optimal structure of the network given a number of experimental observations. Its second version, presented in this article, represents a major improvement over the previous version. The improvements include (i) a parallelized learning algorithm leading to an order of magnitude speed-ups in BN structure learning time; (ii) inclusion of an additional scoring function based on mutual information criteria; (iii) possibility of choosing the resulting network specificity based on statistical criteria and (iv) a new module for classification by BNs, including cross-validation scheme and classifier quality measurements with receiver operating characteristic scores. BNFinder2 is implemented in Python and freely available under the GNU general public license at the project Web site https://launchpad.net/bnfinder, together with a user's manual, introductory tutorial and supplementary methods.

  17. Bayesian Approach for Inconsistent Information.

    Science.gov (United States)

    Stein, M; Beer, M; Kreinovich, V

    2013-10-01

    In engineering situations, we usually have a large amount of prior knowledge that needs to be taken into account when processing data. Traditionally, the Bayesian approach is used to process data in the presence of prior knowledge. Sometimes, when we apply the traditional Bayesian techniques to engineering data, we get inconsistencies between the data and prior knowledge. These inconsistencies are usually caused by the fact that in the traditional approach, we assume that we know the exact sample values, that the prior distribution is exactly known, etc. In reality, the data is imprecise due to measurement errors, the prior knowledge is only approximately known, etc. So, a natural way to deal with the seemingly inconsistent information is to take this imprecision into account in the Bayesian approach - e.g., by using fuzzy techniques. In this paper, we describe several possible scenarios for fuzzifying the Bayesian approach. Particular attention is paid to the interaction between the estimated imprecise parameters. In this paper, to implement the corresponding fuzzy versions of the Bayesian formulas, we use straightforward computations of the related expression - which makes our computations reasonably time-consuming. Computations in the traditional (non-fuzzy) Bayesian approach are much faster - because they use algorithmically efficient reformulations of the Bayesian formulas. We expect that similar reformulations of the fuzzy Bayesian formulas will also drastically decrease the computation time and thus, enhance the practical use of the proposed methods.
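
    The simplest version of this idea can be shown with an interval-valued prior, corresponding to a single alpha-cut of a fuzzy prior (toy likelihood values below): because the posterior is monotone in the prior for fixed likelihoods, propagating the interval endpoints through Bayes' rule bounds the posterior.

        # Posterior bounds from an interval-valued prior (one alpha-cut).
        def posterior(prior, like_h, like_not_h):
            evid = prior * like_h + (1 - prior) * like_not_h   # total probability
            return prior * like_h / evid                        # Bayes' rule

        like_h, like_not_h = 0.9, 0.2      # P(data | H), P(data | not H) -- toy values
        prior_lo, prior_hi = 0.3, 0.5      # interval-valued prior for H

        print("posterior bounds:",
              round(posterior(prior_lo, like_h, like_not_h), 3),
              round(posterior(prior_hi, like_h, like_not_h), 3))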

  18. Classification using Bayesian neural nets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); O. van der Meer; R. Potharst (Rob)

    1995-01-01

    textabstractRecently, Bayesian methods have been proposed for neural networks to solve regression and classification problems. These methods claim to overcome some difficulties encountered in the standard approach such as overfitting. However, an implementation of the full Bayesian approach to neura

  19. Interactive Instruction in Bayesian Inference

    DEFF Research Database (Denmark)

    Khan, Azam; Breslav, Simon; Hornbæk, Kasper

    2017-01-01

    An instructional approach is presented to improve human performance in solving Bayesian inference problems. Starting from the original text of the classic Mammography Problem, the textual expression is modified and visualizations are added according to Mayer’s principles of instruction...... that an instructional approach to improving human performance in Bayesian inference is a promising direction....
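
    For reference, the worked computation behind the classic Mammography Problem, using its usual textbook numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate), is a one-step application of Bayes' rule.

        # Bayes' rule for the Mammography Problem (textbook numbers).
        prevalence, sensitivity, false_pos = 0.01, 0.80, 0.096

        p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
        p_cancer_given_pos = sensitivity * prevalence / p_positive
        print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")   # ~0.078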

  20. Bayesian Intersubjectivity and Quantum Theory

    Science.gov (United States)

    Pérez-Suárez, Marcos; Santos, David J.

    2005-02-01

    Two of the major approaches to probability, namely, frequentism and (subjectivistic) Bayesian theory, are discussed, together with the replacement of frequentist objectivity with Bayesian intersubjectivity. This discussion is then expanded to Quantum Theory, as quantum states and operations can be seen as structural elements of a subjective nature.