WorldWideScience

Sample records for Bayesian cost-effectiveness models

  1. Bayesian models for cost-effectiveness analysis in the presence of structural zero costs.

    Science.gov (United States)

    Baio, Gianluca

    2014-05-20

    Bayesian modelling for cost-effectiveness data has received much attention in recent years, in both the health economics and the statistical literature. Cost-effectiveness data are characterised by a relatively complex structure of relationships linking a suitable measure of clinical benefit (e.g. quality-adjusted life years) and the associated costs. Simplifying assumptions, such as (bivariate) normality of the underlying distributions, are usually not granted, particularly for the cost variable, which is characterised by markedly skewed distributions. In addition, individual-level data sets are often characterised by the presence of structural zeros in the cost variable. Hurdle models can be used to account for the presence of excess zeros in a distribution and have been applied in the context of cost data. We extend their application to cost-effectiveness data, defining a full Bayesian specification, which consists of a model for the individual probability of null costs, a marginal model for the costs and a conditional model for the measure of effectiveness (given the observed costs). We present the model using a working example to describe its main features. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
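
    A compact way to see the three-part specification just described is to write it out. The sketch below is illustrative only: the logit/log links and the gamma and normal distributional choices are assumptions made for concreteness, not necessarily the authors' exact model.

```latex
% Illustrative hurdle specification for individual i: a model for the
% probability of a null cost, a marginal model for positive costs, and a
% conditional model for effectiveness given the observed costs.
\begin{align*}
d_i &\sim \text{Bernoulli}(\pi_i), & \text{logit}(\pi_i) &= \mathbf{x}_i^\top\boldsymbol{\gamma},\\
c_i \mid d_i = 0 &\sim \text{Gamma}(\nu,\, \nu/\mu_i), & \log(\mu_i) &= \mathbf{x}_i^\top\boldsymbol{\beta},\\
e_i \mid c_i &\sim \text{Normal}\big(\phi_0 + \phi_1(c_i - \bar c),\ \sigma^2\big).
\end{align*}
```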

  2. Bayesian cost-effectiveness analysis with the R package BCEA

    CERN Document Server

    Baio, Gianluca; Heath, Anna

    2017-01-01

    The book provides a description of the process of health economic evaluation and modelling for cost-effectiveness analysis, particularly from the perspective of a Bayesian statistical approach. Some relevant theory and introductory concepts are presented using practical examples and two running case studies. The book also describes in detail how to perform health economic evaluations using the R package BCEA (Bayesian Cost-Effectiveness Analysis). BCEA can be used to post-process the results of a Bayesian cost-effectiveness model and perform advanced analyses producing standardised and highly customisable outputs. It presents all the features of the package, including its many functions and their practical application, as well as its user-friendly web interface. The book is a valuable resource for statisticians and practitioners working in the field of health economics wanting to simplify and standardise their workflow, for example in the preparation of dossiers in support of marketing authorisation, or acade...

  3. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.
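
    As a rough illustration of sizing a study by assurance, the sketch below Monte Carlo-averages trial "success" over a design prior. It ignores censoring and uses normal placeholders for costs and effects, so it is far simpler than the survival-based method of the paper; every number is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def assurance(n, n_sim=2000, wtp=50_000):
    """P(a trial with n patients per arm shows positive estimated incremental
    net benefit), averaged over a design prior. No censoring, normal
    costs/effects -- a deliberate simplification; all numbers hypothetical."""
    hits = 0
    for _ in range(n_sim):
        true_de = rng.normal(0.05, 0.02)        # design prior: true QALY gain
        true_dc = rng.normal(1000.0, 500.0)     # design prior: true extra cost
        de_hat = rng.normal(true_de, 0.3 / np.sqrt(n))      # trial estimates
        dc_hat = rng.normal(true_dc, 2000.0 / np.sqrt(n))
        hits += wtp * de_hat - dc_hat > 0
    return hits / n_sim

for n in (100, 250, 500):
    print(n, assurance(n))
```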

  4. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes...

  5. [Bayesian approach for the cost-effectiveness evaluation of healthcare technologies].

    Science.gov (United States)

    Berchialla, Paola; Gregori, Dario; Brunello, Franco; Veltri, Andrea; Petrinco, Michele; Pagano, Eva

    2009-01-01

    The development of Bayesian statistical methods for the assessment of the cost-effectiveness of health care technologies is reviewed. Although many studies adopt a frequentist approach, several authors have advocated the use of Bayesian methods in health economics. Emphasis has been placed on the advantages of the Bayesian approach, which include: (i) the ability to make more intuitive and meaningful inferences; (ii) the ability to tackle complex problems, such as allowing for the inclusion of patients who generate no cost, thanks to the availability of powerful computational algorithms; (iii) the importance of a full use of quantitative and structural prior information to produce realistic inferences. Much literature comparing the cost-effectiveness of two treatments is based on the incremental cost-effectiveness ratio. However, new methods are arising with the purpose of decision making. These methods are based on a net benefits approach. In the present context, the cost-effectiveness acceptability curves have been pointed out to be intrinsically Bayesian in their formulation. They plot the probability of a positive net benefit against the threshold cost of a unit increase in efficacy. A case study is presented in order to illustrate the Bayesian statistics in the cost-effectiveness analysis. Emphasis is placed on the cost-effectiveness acceptability curves. Advantages and disadvantages of the method described in this paper have been compared to frequentist methods and discussed.
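
    The cost-effectiveness acceptability curve described here is straightforward to compute once posterior draws are available; a minimal sketch, with simulated placeholder draws standing in for the output of a fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior draws of incremental effectiveness and cost -- in practice these
# come from the fitted Bayesian model; here they are simulated placeholders.
delta_e = rng.normal(0.10, 0.05, 10_000)   # QALYs gained
delta_c = rng.normal(2500, 1200, 10_000)   # extra cost

# Net benefit at threshold k, and the CEAC as P(net benefit > 0).
thresholds = np.linspace(0, 100_000, 201)
ceac = [(k * delta_e - delta_c > 0).mean() for k in thresholds]

for k, p in zip(thresholds[::50], ceac[::50]):
    print(f"k = {k:>9,.0f}  P(NB > 0) = {p:.2f}")
```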

  6. A Departmental Cost-Effectiveness Model.

    Science.gov (United States)

    Holleman, Thomas, Jr.

    In establishing a departmental cost-effectiveness model, the traditional cost-effectiveness model was discussed and equipped with a discount and deflation equation for both benefits and costs. Next, the economics of costing was examined and program costing procedures developed. Then, the model construct was described as it was structured around the…

  7. Bayesian comparison of cost-effectiveness of different clinical approaches to diagnose coronary artery disease

    International Nuclear Information System (INIS)

    Patterson, R.E.; Eng, C.; Horowitz, S.F.; Gorlin, R.; Goldstein, S.R.

    1984-01-01

    The objective of this study was to compare the cost-effectiveness of four clinical policies (policies I to IV) in the diagnosis of the presence or absence of coronary artery disease. A model based on Bayes theorem and published clinical data was constructed to make these comparisons. Effectiveness was defined as either the number of patients with coronary disease diagnosed or as the number of quality-adjusted life years extended by therapy after the diagnosis of coronary disease. The following conclusions arise strictly from analysis of the model and may not necessarily be applicable to all situations. 1) As prevalence of coronary disease in the population increased, it caused a linear increase in cost per patient tested, but a hyperbolic decrease in cost per effect, that is, increased cost-effectiveness. Thus, cost-effectiveness of all policies (I to IV) was poor in populations with a prevalence of disease below 10%. 2) At prevalences less than 80%, exercise thallium scintigraphy alone as a first test (policy II) is a more cost-effective initial test than is exercise electrocardiography alone as a first test (policy I) or exercise electrocardiography first combined with thallium imaging as a second test (policy IV). 3) Exercise electrocardiography before thallium imaging (policy IV) is more cost-effective than exercise electrocardiography alone (policy I) at prevalences less than 80%. 4) Noninvasive exercise testing before angiography (policies I, II and IV) is more cost-effective than using coronary angiography as the first and only test (policy III) at prevalences less than 80%. 5) Above a threshold value of prevalence of 80% (for example, patients with typical angina), proceeding to angiography as the first test (policy III) was more cost-effective than initial noninvasive exercise tests (policies I, II and IV).
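
    The qualitative behaviour in conclusion 1), cost per patient rising with prevalence while cost per case detected falls hyperbolically, can be reproduced with a few lines of Bayes' theorem arithmetic. Sensitivity, specificity and test cost below are illustrative values, not the study's.

```python
# Post-test probability via Bayes' theorem, and cost per case detected as a
# function of disease prevalence (illustrative parameters only).
SENS, SPEC, COST_PER_TEST = 0.85, 0.90, 500.0

def post_test_prob(prev):
    tp = SENS * prev                   # P(test positive and diseased)
    fp = (1 - SPEC) * (1 - prev)       # P(test positive and healthy)
    return tp / (tp + fp)

for prev in (0.05, 0.10, 0.50, 0.80):
    cost_per_case = COST_PER_TEST / (SENS * prev)   # hyperbolic in prevalence
    print(f"prevalence {prev:.2f}: P(CAD | test+) = {post_test_prob(prev):.2f}, "
          f"cost per case detected = {cost_per_case:,.0f}")
```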

  8. Cost effectiveness of recycling: A systems model

    Energy Technology Data Exchange (ETDEWEB)

    Tonjes, David J., E-mail: david.tonjes@stonybrook.edu [Department of Technology and Society, College of Engineering and Applied Sciences, Stony Brook University, Stony Brook, NY 11794-3560 (United States); Waste Reduction and Management Institute, School of Marine and Atmospheric Sciences, Stony Brook University, Stony Brook, NY 11794-5000 (United States); Center for Bioenergy Research and Development, Advanced Energy Research and Technology Center, Stony Brook University, 1000 Innovation Rd., Stony Brook, NY 11794-6044 (United States); Mallikarjun, Sreekanth, E-mail: sreekanth.mallikarjun@stonybrook.edu [Department of Technology and Society, College of Engineering and Applied Sciences, Stony Brook University, Stony Brook, NY 11794-3560 (United States)

    2013-11-15

    Highlights: • Curbside collection of recyclables reduces overall system costs over a range of conditions. • When avoided costs for recyclables are large, even high collection costs are supported. • When avoided costs for recyclables are not great, there are reduced opportunities for savings. • For common waste compositions, maximizing curbside recyclables collection always saves money. - Abstract: Financial analytical models of waste management systems have often found that recycling costs exceed direct benefits, and in order to economically justify recycling activities, externalities such as household expenses or environmental impacts must be invoked. Certain more empirically based studies have also found that recycling is more expensive than disposal. Other work, both through models and surveys, has found differently. Here we present an empirical systems model, largely drawn from a suburban Long Island municipality. The model accounts for changes in distribution of effort as recycling tonnages displace disposal tonnages, and the seven different cases examined all show that curbside collection programs that manage up to between 31% and 37% of the waste stream should result in overall system savings. These savings accrue partially because of assumed cost differences in tip fees for recyclables and disposed wastes, and also because recycling can result in a more efficient, cost-effective collection program. These results imply that increases in recycling are justifiable due to cost-savings alone, not on more difficult to measure factors that may not impact program budgets.

  9. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBUGS…

  10. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.

  11. Bayesian analysis of CCDM models

    Science.gov (United States)

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow one to compare models considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded from the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization in order to reduce parameter degeneracy is also developed.
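
    For reference, the three criteria named here, for a model with k free parameters, n data points, maximised likelihood L̂, likelihood L(θ) and prior π(θ); lower AIC/BIC and higher evidence favour a model:

```latex
\begin{align*}
\mathrm{AIC} &= 2k - 2\ln\hat{L} \\
\mathrm{BIC} &= k\ln n - 2\ln\hat{L} \\
\mathrm{BE}  &= \int L(\theta)\,\pi(\theta)\,\mathrm{d}\theta
\end{align*}
```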

  12. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow one to compare models considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded from the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing Bayesian evidence through reparameterization in order to reduce parameter degeneracy is also developed.

  13. A Layered Decision Model for Cost-Effective System Security

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Huaqiang; Alves-Foss, James; Soule, Terry; Pforsich, Hugh; Zhang, Du; Frincke, Deborah A.

    2008-10-01

    System security involves decisions in at least three areas: identification of well-defined security policies, selection of cost-effective defence strategies, and implementation of real-time defence tactics. Although choices made in each of these areas affect the others, existing decision models typically handle these three decision areas in isolation. There is no comprehensive tool that can integrate them to provide a single efficient model for safeguarding a network. In addition, there is no clear way to determine which particular combinations of defence decisions result in cost-effective solutions. To address these problems, this paper introduces a Layered Decision Model (LDM) for use in deciding how to address defence decisions based on their cost-effectiveness. To validate the LDM and illustrate how it is used, we used simulation to test model rationality and applied the LDM to the design of system security for an e-commerce business case.

  14. Chain Risk Model for quantifying cost effectiveness of phytosanitary measures

    NARCIS (Netherlands)

    Benninga, J.; Hennen, W.H.G.J.; Schans, van de J.

    2010-01-01

    A Chain Risk Model (CRM) was developed for cost-effectiveness assessment of phytosanitary measures. The CRM can be applied to phytosanitary assessments of all agricultural product chains. In CRM, stages are connected by product volume flows with which pest infections can be spread from one stage to the next.

  15. Bayesian modeling using WinBUGS

    CERN Document Server

    Ntzoufras, Ioannis

    2009-01-01

    A hands-on introduction to the principles of Bayesian modeling using WinBUGS. Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov chain Monte Carlo algorithms in Bayesian inference; generalized linear models; Bayesian hierarchical models; predictive distribution and model checking; and Bayesian model and variable evaluation. Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...
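
    The core MCMC idea the book builds on fits in a few lines; below is a toy random-walk Metropolis sampler for a normal mean. WinBUGS automates and generalises this kind of sampling; everything here is a self-contained illustration with made-up data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior: normal likelihood with known sigma = 1, normal prior on mu.
data = rng.normal(2.0, 1.0, 50)

def log_post(mu):
    log_lik = -0.5 * np.sum((data - mu) ** 2)   # up to an additive constant
    log_prior = -0.5 * mu ** 2 / 10.0           # mu ~ Normal(0, variance 10)
    return log_lik + log_prior

samples, mu = [], 0.0
lp = log_post(mu)
for _ in range(20_000):
    prop = mu + rng.normal(0.0, 0.3)            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
        mu, lp = prop, lp_prop
    samples.append(mu)

print("posterior mean of mu ~", np.mean(samples[5_000:]))  # burn-in discarded
```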

  16. How to practise Bayesian statistics outside the Bayesian church: What philosophy for Bayesian statistical modelling?

    NARCIS (Netherlands)

    Borsboom, D.; Haig, B.D.

    2013-01-01

    Unlike most other statistical frameworks, Bayesian statistical inference is wedded to a particular approach in the philosophy of science (see Howson & Urbach, 2006); this approach is called Bayesianism. Rather than being concerned with model fitting, this position in the philosophy of science

  17. Modelling the cost effectiveness of antidepressant treatment in primary care.

    Science.gov (United States)

    Revicki, D A; Brown, R E; Palmer, W; Bakish, D; Rosser, W W; Anton, S F; Feeny, D

    1995-12-01

    The aim of this study was to estimate the cost effectiveness of nefazodone compared with imipramine or fluoxetine in treating women with major depressive disorder. Clinical decision analysis and a Markov state-transition model were used to estimate the lifetime health outcomes and medical costs of 3 antidepressant treatments. The model, which represents ideal primary care practice, compares treatment with nefazodone to treatment with either imipramine or fluoxetine. The economic analysis was based on the healthcare system of the Canadian province of Ontario, and considered only direct medical costs. Health outcomes were expressed as quality-adjusted life years (QALYs) and costs were in 1993 Canadian dollars ($Can; $Can1 = $US0.75, September 1995). Incremental cost-utility ratios were calculated comparing the relative lifetime discounted medical costs and QALYs associated with nefazodone with those of imipramine or fluoxetine. Data for constructing the model and estimating necessary parameters were derived from the medical literature, clinical trial data, and physician judgement. Data included information on: Ontario primary care physicians' clinical management of major depression; medical resource use and costs; probabilities of recurrence of depression; suicide rates; compliance rates; and health utilities. Estimates of utilities for depression-related hypothetical health states were obtained from patients with major depression (n = 70). Medical costs and QALYs were discounted to present value using a 5% rate. Sensitivity analyses tested the assumptions of the model by varying the discount rate, depression recurrence rates, compliance rates, and the duration of the model. The base case analysis found that nefazodone treatment costs $Can1447 less per patient than imipramine treatment (discounted lifetime medical costs were $Can50,664 vs $Can52,111) and increases the number of QALYs by 0.72 (13.90 vs 13.18). Nefazodone treatment costs $Can14 less than fluoxetine
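
    A stripped-down version of such a Markov state-transition model, with three health states, annual cycles and 5% discounting as in the study, fits in a short script. All transition probabilities, costs and utilities below are invented placeholders, not the study's inputs.

```python
import numpy as np

# States: well, depressed, dead. Annual cycles, 5% discounting.
P0 = np.array([[0.80, 0.15, 0.05],    # from well
               [0.40, 0.55, 0.05],    # from depressed
               [0.00, 0.00, 1.00]])   # dead is absorbing

cost0 = np.array([500.0, 3000.0, 0.0])    # annual cost per state
utility = np.array([0.90, 0.60, 0.0])     # QALY weight per state

def run(P, cost, years=40, disc=0.05):
    """Discounted lifetime cost and QALYs for a cohort starting well."""
    state = np.array([1.0, 0.0, 0.0])
    c = q = 0.0
    for t in range(years):
        d = (1.0 + disc) ** -t
        c += d * state @ cost
        q += d * state @ utility
        state = state @ P
    return c, q

c0, q0 = run(P0, cost0)                          # comparator drug
P1 = P0.copy()
P1[1, 0] += 0.10                                 # new drug: better recovery
P1[1, 1] -= 0.10
cost1 = cost0 + np.array([900.0, 900.0, 0.0])    # plus its annual drug cost
c1, q1 = run(P1, cost1)
print(f"ICER = {(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```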

  18. Dynamic modeling of cost-effectiveness of rotavirus vaccination, Kazakhstan.

    Science.gov (United States)

    Freiesleben de Blasio, Birgitte; Flem, Elmira; Latipov, Renat; Kuatbaeva, Ajnagul; Kristiansen, Ivar Sønbø

    2014-01-01

    The government of Kazakhstan, a middle-income country in Central Asia, is considering the introduction of rotavirus vaccination into its national immunization program. We performed a cost-effectiveness analysis of rotavirus vaccination spanning 20 years by using a synthesis of dynamic transmission models accounting for herd protection. We found that a vaccination program with 90% coverage would prevent ≈880 rotavirus deaths and save an average of 54,784 life-years for children <5 years of age; at sufficiently low vaccine cost, vaccination program costs would be entirely offset. To further evaluate efficacy of a vaccine program, benefits of indirect protection conferred by vaccination warrant further study.

  19. A Cost-Effectiveness Analysis Model for Evaluating and Planning Secondary Vocational Programs

    Science.gov (United States)

    Kim, Jin Eun

    1977-01-01

    This paper conceptualizes a cost-effectiveness analysis and describes a cost-effectiveness analysis model for secondary vocational programs. It generates three kinds of cost-effectiveness measures: program effectiveness, cost efficiency, and cost-effectiveness and/or performance ratio. (Author)

  20. Dynamic Modeling of Cost-effectiveness of Rotavirus Vaccination, Kazakhstan

    Science.gov (United States)

    Flem, Elmira; Latipov, Renat; Kuatbaeva, Ajnagul; Kristiansen, Ivar Sønbø

    2014-01-01

    The government of Kazakhstan, a middle-income country in Central Asia, is considering the introduction of rotavirus vaccination into its national immunization program. We performed a cost-effectiveness analysis of rotavirus vaccination spanning 20 years by using a synthesis of dynamic transmission models accounting for herd protection. We found that a vaccination program with 90% coverage would prevent ≈880 rotavirus deaths and save an average of 54,784 life-years for children <5 years of age. Indirect protection accounted for 40% and 60% reduction in severe and mild rotavirus gastroenteritis, respectively. Cost per life year gained was US $18,044 from a societal perspective and US $23,892 from a health care perspective. Comparing the 2 key parameters of cost-effectiveness, mortality rates and vaccine cost at

  1. Bayesian models: A statistical primer for ecologists

    Science.gov (United States)

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. The book: presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.
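
    The "properly factored statistical expressions" the book teaches follow the bracket notation; for a simple hierarchical model with data y, process parameters θ and hyperparameters φ, the factorization reads:

```latex
\[
[\theta, \phi \mid y] \;\propto\; [y \mid \theta]\,[\theta \mid \phi]\,[\phi]
\]
```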

  2. A Bayesian model for binary Markov chains

    Directory of Open Access Journals (Sweden)

    Belkheir Essebbar

    2004-02-01

    This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is founded on Jeffreys' prior, which allows for transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
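
    Setting aside the heterogeneity across individuals (which is what requires MCMC in the note), a single chain's transition probabilities have a closed-form conjugate update under the Jeffreys Beta(1/2, 1/2) prior; a minimal sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(7)

# A simulated 0/1 sequence stands in for one individual's observed chain.
chain = rng.integers(0, 2, 200)

counts = np.zeros((2, 2))
for a, b in zip(chain[:-1], chain[1:]):
    counts[a, b] += 1

for s in (0, 1):
    alpha = 0.5 + counts[s, 1]   # posterior Beta(alpha, beta) for P(s -> 1)
    beta = 0.5 + counts[s, 0]
    print(f"P({s} -> 1): posterior mean {alpha / (alpha + beta):.3f}")
```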

  3. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given

  4. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using an infinite mixture model as running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models…
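
    The "infinite limit of a finite mixture" has a convenient sequential representation, the Chinese restaurant process, in which the number of clusters is not fixed in advance. A minimal draw (the concentration α = 1 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)

alpha, n = 1.0, 100                 # concentration parameter and sample size
assignments = [0]                   # first customer opens the first table
for i in range(1, n):
    sizes = np.bincount(assignments)
    # join an existing cluster with probability proportional to its size,
    # or open a new one with probability proportional to alpha
    probs = np.append(sizes, alpha) / (i + alpha)
    assignments.append(int(rng.choice(len(probs), p=probs)))

print("clusters used:", len(set(assignments)))
```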

  5. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability…

  6. Bayesian Modeling of a Human MMORPG Player

    Science.gov (United States)

    Synnaeve, Gabriel; Bessière, Pierre

    2011-03-01

    This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and which target to select in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions.

  7. Fully probabilistic design of hierarchical Bayesian models

    Czech Academy of Sciences Publication Activity Database

    Quinn, A.; Kárný, Miroslav; Guy, Tatiana Valentine

    2016-01-01

    Roč. 369, č. 1 (2016), s. 532-547 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Fully probabilistic design * Ideal distribution * Minimum cross-entropy principle * Bayesian conditioning * Kullback-Leibler divergence * Bayesian nonparametric modelling Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/karny-0463052.pdf

  8. Modeling the cost-effectiveness of health care systems for alcohol use disorders: how implementation of eHealth interventions improves cost-effectiveness

    NARCIS (Netherlands)

    Smit, Filip; Lokkerbol, Joran; Riper, Heleen; Majo, Maria Cristina; Boon, Brigitte; Blankers, Matthijs

    2011-01-01

    Informing policy decisions about the cost-effectiveness of health care systems (ie, packages of clinical interventions) is probably best done using a modeling approach. To this end, an alcohol model (ALCMOD) was developed. The aim of ALCMOD is to estimate the cost-effectiveness of competing health care systems.

  9. A Bayesian cost-effectiveness analysis of a telemedicine-based strategy for the management of sleep apnoea: a multicentre randomised controlled trial.

    Science.gov (United States)

    Isetta, Valentina; Negrín, Miguel A; Monasterio, Carmen; Masa, Juan F; Feu, Nuria; Álvarez, Ainhoa; Campos-Rodriguez, Francisco; Ruiz, Concepción; Abad, Jorge; Vázquez-Polo, Francisco J; Farré, Ramon; Galdeano, Marina; Lloberes, Patricia; Embid, Cristina; de la Peña, Mónica; Puertas, Javier; Dalmases, Mireia; Salord, Neus; Corral, Jaime; Jurado, Bernabé; León, Carmen; Egea, Carlos; Muñoz, Aida; Parra, Olga; Cambrodi, Roser; Martel-Escobar, María; Arqué, Meritxell; Montserrat, Josep M

    2015-11-01

    Compliance with continuous positive airway pressure (CPAP) therapy is essential in patients with obstructive sleep apnoea (OSA), but adequate control is not always possible. This is clinically important because CPAP can reverse the morbidity and mortality associated with OSA. Telemedicine, with support provided via a web platform and video conferences, could represent a cost-effective alternative to standard care management. To assess the telemedicine impact on treatment compliance, cost-effectiveness and improvement in quality of life (QoL) when compared with traditional face-to-face follow-up. A randomised controlled trial was performed to compare a telemedicine-based CPAP follow-up strategy with standard face-to-face management. Consecutive OSA patients requiring CPAP treatment, with sufficient internet skills and who agreed to participate, were enrolled. They were followed up at 1, 3 and 6 months and answered surveys about sleep, CPAP side effects and lifestyle. We compared CPAP compliance, cost-effectiveness and QoL between the beginning and the end of the study. A Bayesian cost-effectiveness analysis with non-informative priors was performed. We randomised 139 patients. At 6 months, we found similar levels of CPAP compliance, and improved daytime sleepiness, QoL, side effects and degree of satisfaction in both groups. Despite requiring more visits, the telemedicine group was more cost-effective: costs were lower and differences in effectiveness were not relevant. A telemedicine-based strategy for the follow-up of CPAP treatment in patients with OSA was as effective as standard hospital-based care in terms of CPAP compliance and symptom improvement, with comparable side effects and satisfaction rates. The telemedicine-based strategy had lower total costs due to savings on transport and less lost productivity (indirect costs). NCT01716676. Published by the BMJ Publishing Group Limited.

  10. Robust Bayesian analysis of an autoregressive model with ...

    African Journals Online (AJOL)

    In this work, robust Bayesian analysis of the Bayesian estimation of an autoregressive model with exponential innovations is performed. Using a Bayesian robustness methodology, we show that, using a suitable generalized quadratic loss, we obtain optimal Bayesian estimators of the parameters corresponding to the ...

  11. Can Economic Model Transparency Improve Provider Interpretation of Cost-effectiveness Analysis? Evaluating Tradeoffs Presented by the Second Panel on Cost-effectiveness in Health and Medicine.

    Science.gov (United States)

    Padula, William V; McQueen, Robert Brett; Pronovost, Peter J

    2017-11-01

    The Second Panel on Cost-Effectiveness in Health and Medicine convened on December 7, 2016, at the National Academy of Medicine to disseminate its recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses (CEAs). Following the summary, the panel proceedings included lengthy discussion of, among other issues, the field's struggle to disseminate findings efficiently through peer-reviewed literature to target audiences. With editors of several medical and outcomes research journals in attendance, there was consensus that findings of cost-effectiveness analyses do not effectively reach other researchers or health care providers. The audience members suggested several solutions, including providing additional training to clinicians in cost-effectiveness research and requiring that cost-effectiveness models be made publicly available. However, there remain the questions of whether making economic modelers' work open-access through journals is fair, given the defense that these models remain one's own intellectual property, and of whether journals can properly manage the peer-review process specifically for cost-effectiveness analyses. In this article, we elaborate on these issues and provide some suggested solutions that may increase the dissemination and application of cost-effectiveness literature to reach its intended audiences and ultimately benefit the patient. Ultimately, it is our combined view as economic modelers and clinicians that cost-effectiveness results need to reach the clinician to improve the efficiency of medical practice, but that open-access models do not improve clinician access or interpretation of the economics of medicine.

  12. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

    This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example where the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered as a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability.
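
    The first stage of such a hierarchy, stripped of the contamination and of the empirical update of the secondary parameters, is the standard conjugate gamma-Poisson update; a sketch with hypothetical prior parameters and data:

```python
from scipy import stats

a0, b0 = 2.0, 10.0            # Gamma(shape, rate) prior -- hypothetical
events, exposure = 3, 25.0    # e.g. 3 failures observed in 25 unit-years

a1, b1 = a0 + events, b0 + exposure   # conjugate posterior update
posterior = stats.gamma(a1, scale=1.0 / b1)
print("posterior mean intensity:", posterior.mean())
print("90% credible interval:", posterior.interval(0.90))
```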

  13. Bayesian uncertainty analyses of probabilistic risk models

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to the uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so called probabilistic structures is presented in order to develop a method of quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed

  14. Hierarchical Bayesian Models of Subtask Learning

    Science.gov (United States)

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  15. Cost-effectiveness analysis of diarrhoea management approaches in Nigeria: A decision analytical model.

    Directory of Open Access Journals (Sweden)

    Charles E Okafor

    2017-12-01

    Diarrhoea is a leading cause of death in Nigerian children under 5 years. Implementing the most cost-effective approach to diarrhoea management in Nigeria will help optimize health care resource allocation. This study evaluated the cost-effectiveness of various approaches to diarrhoea management, namely: the 'no treatment' approach (NT); the preventive approach with rotavirus vaccine; the integrated management of childhood illness for diarrhoea approach (IMCI); and the rotavirus vaccine plus integrated management of childhood illness for diarrhoea approach (rotavirus vaccine + IMCI). A Markov cohort model conducted from the payer's perspective was used to calculate the cost-effectiveness of the four interventions. The Markov model simulated a life cycle of 260 weeks for 33 million children under five years at risk of having diarrhoea (well state). Disability-adjusted life years (DALYs) averted were used to quantify clinical outcome. The incremental cost-effectiveness ratio (ICER) served as the measure of cost-effectiveness. Based on a cost-effectiveness threshold of $2,177.99 (the Nigerian GDP per capita), all the approaches were very cost-effective, but the rotavirus vaccine approach was dominated. While IMCI has the lowest ICER of $4.6/DALY averted, the addition of rotavirus vaccine was cost-effective with an ICER of $80.1/DALY averted. Rotavirus vaccine alone was less efficient in optimizing health care resource allocation. The rotavirus vaccine + IMCI approach was the most cost-effective approach to childhood diarrhoea management. Its awareness and practice should be promoted in Nigeria, and addition of rotavirus vaccine should be considered for inclusion in the national programme of immunization. Although our findings suggest that addition of rotavirus vaccine to IMCI for diarrhoea is cost-effective, there may be a need for further vaccine demonstration studies or real-life studies to establish the cost-effectiveness of the vaccine in Nigeria.
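
    The ICER arithmetic behind these comparisons: each strategy is compared with the previous, less effective, non-dominated one, and judged against the threshold. The per-child cost/DALY totals below are invented so that the resulting ICERs reproduce the $4.6 and $80.1 figures quoted in the abstract; they are not the study's inputs.

```python
THRESHOLD = 2177.99   # US$/DALY averted: the Nigerian GDP per capita

# (name, total cost, DALYs averted) per child, ordered by effectiveness.
strategies = [
    ("NT",               0.00, 0.0),
    ("IMCI",             4.60, 1.0),
    ("rotavirus + IMCI", 44.65, 1.5),
]

prev_cost, prev_daly = strategies[0][1], strategies[0][2]
for name, cost, daly in strategies[1:]:
    icer = (cost - prev_cost) / (daly - prev_daly)   # incremental ratio
    verdict = "cost-effective" if icer < THRESHOLD else "not cost-effective"
    print(f"{name}: ICER = ${icer:.1f}/DALY averted ({verdict})")
    prev_cost, prev_daly = cost, daly
```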

  16. Bayesian Modelling of Functional Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Røge, Rasmus

    This thesis deals with parcellation of whole-brain functional magnetic resonance imaging (fMRI) using Bayesian inference with mixture models tailored to the fMRI data. In the three included papers and manuscripts, we analyze two different approaches to modeling the fMRI signal: either we accept the prevalent strategy of standardizing the fMRI time series and model the data using directional statistics, or we model the variability in the signal across the brain and across multiple subjects. In either case, we use Bayesian nonparametric modeling to automatically learn from the fMRI data the number of functional units, i.e. parcels. We benchmark the proposed mixture models against state of the art methods of brain parcellation, both probabilistic and non-probabilistic. The time series of each voxel are most often standardized using z-scoring, which projects the time series data onto a hypersphere…

  17. Modelling dependable systems using hybrid Bayesian networks

    International Nuclear Information System (INIS)

    Neil, Martin; Tailor, Manesh; Marquez, David; Fenton, Norman; Hearty, Peter

    2008-01-01

    A hybrid Bayesian network (BN) is one that incorporates both discrete and continuous nodes. In our extensive applications of BNs for system dependability assessment, the models are invariably hybrid and the need for efficient and accurate computation is paramount. We apply a new iterative algorithm that efficiently combines dynamic discretisation with robust propagation algorithms on junction tree structures to perform inference in hybrid BNs. We illustrate its use in the field of dependability with two examples of reliability estimation. Firstly we estimate the reliability of a simple single system, and next we implement a hierarchical Bayesian model. In the hierarchical model we compute the reliability of two unknown subsystems from data collected on historically similar subsystems and then input the result into a reliability block model to compute system level reliability. We conclude that dynamic discretisation can be used as an alternative to analytical or Monte Carlo methods with high precision and can be applied to a wide range of dependability problems.

  18. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    Exploring these new developments, Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology, Second Edition provides an up-to-date, cohesive account of the full range of Bayesian disease mapping methods and applications...

  19. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  20. Bayesian methodology for reliability model acceptance

    International Nuclear Information System (INIS)

    Zhang Ruoxue; Mahadevan, Sankaran

    2003-01-01

    This paper develops a methodology to assess the reliability computation model validity using the concept of Bayesian hypothesis testing, by comparing the model prediction and experimental observation, when there is only one computational model available to evaluate system behavior. Time-independent and time-dependent problems are investigated, with consideration of both cases: with and without statistical uncertainty in the model. The case of time-independent failure probability prediction with no statistical uncertainty is a straightforward application of Bayesian hypothesis testing. However, for the life prediction (time-dependent reliability) problem, a new methodology is developed in this paper to make the same Bayesian hypothesis testing concept applicable. With the existence of statistical uncertainty in the model, in addition to the application of a predictor estimator of the Bayes factor, the uncertainty in the Bayes factor is explicitly quantified through treating it as a random variable and calculating the probability that it exceeds a specified value. The developed method provides a rational criterion to decision-makers for the acceptance or rejection of the computational model

  1. Bayesian network modelling of upper gastrointestinal bleeding

    Science.gov (United States)

    Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri

    2013-09-01

    Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree augmented naive Bayes Network (TAN) from gastrointestinal bleeding data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding into upper or lower source is obtained. The TAN achieves a high classification accuracy of 86% and an area under curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.

  2. Network structure exploration via Bayesian nonparametric models

    International Nuclear Information System (INIS)

    Chen, Y; Wang, X L; Xiang, X; Tang, B Z; Bu, J Z

    2015-01-01

    Complex networks provide a powerful mathematical representation of complex systems in nature and society. To understand complex networks, it is crucial to explore their internal structures, also called structural regularities. The task of network structure exploration is to determine how many groups there are in a complex network and how to group the nodes of the network. Most existing structure exploration methods need to specify either a group number or a certain type of structure when they are applied to a network. In the real world, however, the group number and also the certain type of structure that a network has are usually unknown in advance. To explore structural regularities in complex networks automatically, without any prior knowledge of the group number or the certain type of structure, we extend a probabilistic mixture model that can handle networks with any type of structure but needs to specify a group number using Bayesian nonparametric theory. We also propose a novel Bayesian nonparametric model, called the Bayesian nonparametric mixture (BNPM) model. Experiments conducted on a large number of networks with different structures show that the BNPM model is able to explore structural regularities in networks automatically with a stable, state-of-the-art performance.

  3. Bayesian Recurrent Neural Network for Language Modeling.

    Science.gov (United States)

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

    A language model (LM) is calculated as the probability of a word sequence that provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful to learn the large-span dynamics of a word sequence in the continuous space. However, the training of the RNN-LM is an ill-posed problem because of too many parameters from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and apply it for continuous speech recognition. We aim to penalize the too complicated RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to a Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer-products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
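
    The "regularized cross-entropy error function" has the familiar MAP form: a zero-mean Gaussian prior with precision α on the parameters θ adds a quadratic penalty to the cross-entropy. This is a sketch of that correspondence; the paper additionally estimates α by maximising the marginal likelihood and uses a rapid Hessian approximation.

```latex
\[
E(\boldsymbol{\theta}) \;=\; -\sum_{t}\ln p(w_t \mid w_{1:t-1}, \boldsymbol{\theta})
\;+\; \frac{\alpha}{2}\,\lVert\boldsymbol{\theta}\rVert^2,
\qquad \boldsymbol{\theta} \sim \mathcal{N}(\mathbf{0},\, \alpha^{-1}\mathbf{I}).
\]
```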

  4. Centralized Bayesian reliability modelling with sensor networks

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 19, č. 5 (2013), s. 471-482 ISSN 1387-3954 R&D Projects: GA MŠk 7D12004 Grant - others:GA MŠk(CZ) SVV-265315 Keywords : Bayesian modelling * Sensor network * Reliability Subject RIV: BD - Theory of Information Impact factor: 0.984, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0392551.pdf

  5. Cost Effective Community Based Dementia Screening: A Markov Model Simulation

    Directory of Open Access Journals (Sweden)

    Erin Saito

    2014-01-01

    Background. Given the dementia epidemic and the increasing cost of healthcare, there is a need to assess the economic benefit of community based dementia screening programs. Materials and Methods. Markov model simulations were generated using data obtained from a community based dementia screening program over a one-year period. The models simulated yearly costs of caring for patients based on clinical transitions beginning in pre dementia and extending for 10 years. Results. A total of 93 individuals (74 female, 19 male) were screened for dementia, and 12 meeting clinical criteria for either mild cognitive impairment (n=7) or dementia (n=5) were identified. Assuming early therapeutic intervention beginning during the year of dementia detection, Markov model simulations demonstrated a 9.8% reduction in cost of dementia care over a ten-year simulation period, primarily through increased duration in mild stages and reduced time in more costly moderate and severe stages. Discussion. Community based dementia screening can reduce healthcare costs associated with caring for demented individuals through earlier detection and treatment, resulting in proportionately reduced time in more costly advanced stages.

  6. Cost-effectiveness of seven IVF strategies: results of a Markov decision-analytic model.

    Science.gov (United States)

    Fiddelers, Audrey A A; Dirksen, Carmen D; Dumoulin, John C M; van Montfoort, Aafke P A; Land, Jolande A; Janssen, J Marij; Evers, Johannes L H; Severens, Johan L

    2009-07-01

    A selective switch to elective single embryo transfer (eSET) in IVF has been suggested to prevent complications of fertility treatment for both mother and infants. We compared seven IVF strategies concerning their cost-effectiveness using a Markov model. The model was based on a three-IVF-attempt time horizon and a societal perspective, using real-world strategies and data, and compared seven IVF strategies with respect to costs, live births and incremental cost-effectiveness ratios (ICERs). In order to increase pregnancy probability, one cycle of eSET + one cycle of standard treatment policy [STP, i.e. eSET in patients …] … IVF treatment combining several transfer policies was not cost-effective. A choice has to be made between three cycles of eSET, STP or DET. It depends, however, on society's willingness to pay which strategy is to be preferred from a cost-effectiveness point of view.

  7. Bayesian Inference of a Multivariate Regression Model

    Directory of Open Access Journals (Sweden)

    Marick S. Sinay

    2014-01-01

    We explore Bayesian inference of a multivariate linear regression model with use of a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior, multivariate normal distribution for the regression coefficients and inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such structure allows for a richer class of prior distributions for the covariance, with respect to strength of beliefs in prior location hyperparameters, as well as the added ability to model potential correlation amongst the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.

  8. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    The Bayesian predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines' locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations according to probability distributions. A Bayesian predictive model for the Rayleigh distribution, which has only a single scale parameter, has been proposed. Closed-form posterior and predictive inferences under different reasonable choices of prior distribution have also been presented in a sensitivity analysis.
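
    The Rayleigh likelihood and one "reasonable choice of prior" that yields a closed-form posterior of the kind mentioned: an inverse-gamma prior on the single scale parameter is conjugate. The specific prior is an assumption here, not necessarily the authors' choice.

```latex
\begin{align*}
p(v \mid \sigma^2) &= \frac{v}{\sigma^2}\exp\!\Bigl(-\frac{v^2}{2\sigma^2}\Bigr),
\qquad v \ge 0, \\
\sigma^2 \sim \mathrm{Inv\text{-}Gamma}(\alpha, \beta)
\;&\Rightarrow\;
\sigma^2 \mid v_{1:n} \sim \mathrm{Inv\text{-}Gamma}\Bigl(\alpha + n,\;
\beta + \tfrac{1}{2}\textstyle\sum_{i=1}^{n} v_i^2\Bigr).
\end{align*}
```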

  9. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between bias of a model and its complexity. However, in practice, the runtime of models is another relevant factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
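
    A minimal version of the trade-off described: brute-force evidence estimation by prior sampling, where a slower model affords fewer draws and therefore a larger bootstrap error on its BME estimate. The toy model and draw budgets below are illustrative, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy model: unknown-mean normal with sigma = 1, prior mu ~ Normal(0, sd 2).
data = rng.normal(0.5, 1.0, 20)

def likelihood(mu):
    return np.exp(-0.5 * np.sum((data - mu) ** 2)) / (2 * np.pi) ** (len(data) / 2)

def bme_estimate(n_draws, n_boot=500):
    """Arithmetic-mean BME estimator over prior draws, plus a bootstrap SE."""
    L = np.array([likelihood(m) for m in rng.normal(0.0, 2.0, n_draws)])
    boot = [rng.choice(L, L.size).mean() for _ in range(n_boot)]
    return L.mean(), np.std(boot)

for n in (200, 20_000):   # a slow model affords few draws, a fast one many
    est, se = bme_estimate(n)
    print(f"{n:>6} draws: BME ~ {est:.3e} (bootstrap SE {se:.1e})")
```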

  10. Distributed Bayesian Networks for User Modeling

    DEFF Research Database (Denmark)

    Tedesco, Roberto; Dolog, Peter; Nejdl, Wolfgang

    2006-01-01

    The World Wide Web is a popular platform for providing eLearning applications to a wide spectrum of users. However – as users differ in their preferences, background, requirements, and goals – applications should provide personalization mechanisms. In the Web context, user models used by such adaptive applications are often partial fragments of an overall user model. The fragments have then to be collected and merged into a global user profile. In this paper we investigate and present algorithms able to cope with distributed, fragmented user models – based on Bayesian Networks – in the context of Web-based eLearning platforms. The scenario we are tackling assumes learners who use several systems over time, which are able to create partial Bayesian Networks for user models based on the local system context. In particular, we focus on how to merge these partial user models. Our merge mechanism…

  11. Bayesian Spatial Modelling with R-INLA

    Directory of Open Access Journals (Sweden)

    Finn Lindgren

    2015-02-01

    The principles behind the interface to continuous domain spatial models in the R-INLA software package for R are described. The integrated nested Laplace approximation (INLA) approach proposed by Rue, Martino, and Chopin (2009) is a computationally effective alternative to MCMC for Bayesian inference. INLA is designed for latent Gaussian models, a very wide and flexible class of models ranging from (generalized) linear mixed to spatial and spatio-temporal models. Combined with the stochastic partial differential equation (SPDE) approach (Lindgren, Rue, and Lindström 2011), one can accommodate all kinds of geographically referenced data, including areal and geostatistical ones, as well as spatial point process data. The implementation interface covers stationary spatial models, non-stationary spatial models, and also spatio-temporal models, and is applicable in epidemiology, ecology, environmental risk assessment, as well as general geostatistics.

  12. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of … The kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function …

  13. Cost-effectiveness model for a specific mixture of prebiotics in The Netherlands

    NARCIS (Netherlands)

    Lenoir-Wijnkoop, I.; van Aalderen, W. M. C.; Boehm, G.; Klaassen, D.; Sprikkelman, A. B.; Nuijten, M. J. C.

    2012-01-01

    The objective of this study was to assess the cost-effectiveness of the use of prebiotics for the primary prevention of atopic dermatitis in The Netherlands. A model was constructed using decision analytical techniques. The model was developed to estimate the health economic impact of prebiotic

  14. Scale models: A proven cost-effective tool for outage planning

    Energy Technology Data Exchange (ETDEWEB)

    Lee, R. [Commonwealth Edison Co., Morris, IL (United States); Segroves, R. [Sargent & Lundy, Chicago, IL (United States)

    1995-03-01

    As generation costs for operating nuclear stations have risen, more nuclear utilities have initiated efforts to improve cost effectiveness. Nuclear plant owners are also being challenged with lower radiation exposure limits and newly revised radiation protection regulations (10 CFR 20), which place further stress on their budgets. As source term reduction activities continue to lower radiation fields, reducing the amount of time spent in radiation fields becomes one of the most cost-effective ways of reducing radiation exposure. An effective approach to minimizing time spent in radiation areas is to use a physical scale model for worker orientation and for planning and monitoring maintenance, modifications, and outage activities. To meet the challenge of continued reduction in annual cumulative radiation exposures, new cost-effective tools are required. One field-tested and proven tool is the physical scale model.

  15. Assessment of global guidelines for preventive chemotherapy against schistosomiasis and soil-transmitted helminthiasis: a cost-effectiveness modelling study.

    Science.gov (United States)

    Lo, Nathan C; Lai, Ying-Si; Karagiannis-Voules, Dimitrios-Alexios; Bogoch, Isaac I; Coulibaly, Jean T; Bendavid, Eran; Utzinger, Jürg; Vounatsou, Penelope; Andrews, Jason R

    2016-09-01

    WHO guidelines recommend annual treatment for schistosomiasis or soil-transmitted helminthiasis when prevalence in school-aged children is at or above a threshold of 50% and 20%, respectively. Separate treatment guidelines are used for these two helminthiases, and integrated community-wide treatment is not recommended. We assessed the cost-effectiveness of changing prevalence thresholds and treatment guidelines under an integrated delivery framework. We developed a dynamic, age-structured transmission and cost-effectiveness model that simulates integrated preventive chemotherapy programmes against schistosomiasis and soil-transmitted helminthiasis. We assessed a 5-year treatment programme with praziquantel (40 mg/kg per treatment) against schistosomiasis and albendazole (400 mg per treatment) against soil-transmitted helminthiasis at 75% coverage. We defined strategies as highly cost-effective if the incremental cost-effectiveness ratio was less than the World Bank classification for a low-income country (gross domestic product of US$1045 per capita). We calculated the prevalence thresholds for cost-effective preventive chemotherapy of various strategies, and estimated treatment needs for sub-Saharan Africa. Annual preventive chemotherapy against schistosomiasis was highly cost-effective in treatment of school-aged children at a prevalence threshold of 5% (95% uncertainty interval [UI] 1·7-5·2; current guidelines recommend treatment at 50% prevalence) and for community-wide treatment at a prevalence of 15% (7·3-18·5; current recommendation is unclear, some community treatment recommended at 50% prevalence). Annual preventive chemotherapy against soil-transmitted helminthiasis was highly cost-effective in treatment of school-aged children at a prevalence of 20% (95% UI 5·4-30·5; current guidelines recommend treatment at 20% prevalence) and the entire community at 60% (35·3-85·1; no guidelines available). When both helminthiases were co-endemic, prevalence

  16. Adversarial life testing: A Bayesian negotiation model

    International Nuclear Information System (INIS)

    Rufo, M.J.; Martín, J.; Pérez, C.J.

    2014-01-01

    Life testing is a procedure intended for facilitating the process of making decisions in the context of industrial reliability. On the other hand, negotiation is a process of making joint decisions that has one of its main foundations in decision theory. A Bayesian sequential model of negotiation in the context of adversarial life testing is proposed. This model considers a general setting for which a manufacturer offers a product batch to a consumer. It is assumed that the reliability of the product is measured in terms of its lifetime. Furthermore, both the manufacturer and the consumer have to use their own information with respect to the quality of the product. Under these assumptions, two situations can be analyzed. For both of them, the main aim is to accept or reject the product batch based on the product reliability. This topic is related to a reliability demonstration problem. The procedure is applied to a class of distributions that belong to the exponential family. Thus, a unified framework addressing the main topics in the considered Bayesian model is presented. An illustrative example shows that the proposed technique can be easily applied in practice

  17. Cost Effectiveness of Screening Colonoscopy Depends on Adequate Bowel Preparation Rates - A Modeling Study.

    Directory of Open Access Journals (Sweden)

    James Kingsley

    Inadequate bowel preparation during screening colonoscopy necessitates repeating colonoscopy. Studies suggest inadequate bowel preparation rates of 20-60%. This increases the cost of colonoscopy for our society. The aim of this study is to determine the impact of the inadequate bowel preparation rate on the cost effectiveness of colonoscopy compared to other screening strategies for colorectal cancer (CRC). A microsimulation model of CRC screening strategies for the general population at average risk for CRC. The strategies include fecal immunochemical test (FIT) every year, colonoscopy every ten years, sigmoidoscopy every five years, or stool DNA test every 3 years. The screening could be performed at private practice offices, outpatient hospitals, and ambulatory surgical centers. At the current assumed inadequate bowel preparation rate of 25%, the cost of colonoscopy as a screening strategy is above society's willingness to pay (<$50,000/QALY). Threshold analysis demonstrated that an inadequate bowel preparation rate of 13% or less is necessary before colonoscopy is considered more cost effective than FIT. At inadequate bowel preparation rates of 25%, colonoscopy is still more cost effective compared to sigmoidoscopy and stool DNA test. Sensitivity analysis of all inputs adjusted by ±10% showed incremental cost effectiveness ratio values were influenced most by the specificity, adherence, and sensitivity of FIT and colonoscopy. Screening colonoscopy is not a cost effective strategy when compared with fecal immunochemical test, as long as the inadequate bowel preparation rate is greater than 13%.
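
    To make the threshold-analysis logic concrete, here is a minimal sketch in which every cost, QALY gain and the repeat-procedure assumption are invented placeholders rather than the study's inputs; it scans the inadequate-preparation rate until colonoscopy's ICER against FIT crosses a $50,000-per-QALY willingness-to-pay line.

    ```python
    # Hypothetical threshold analysis: at what inadequate-preparation rate
    # does screening colonoscopy stop being cost effective versus FIT?
    WTP = 50_000.0  # willingness-to-pay per QALY gained

    def colonoscopy(prep_fail_rate, base_cost=1000.0, qaly_gain=0.08):
        # Placeholder assumption: each failed prep triggers one repeat procedure.
        return base_cost * (1.0 + prep_fail_rate), qaly_gain

    def fit(cost=600.0, qaly_gain=0.07):
        return cost, qaly_gain

    def icer(cost_new, eff_new, cost_ref, eff_ref):
        return (cost_new - cost_ref) / (eff_new - eff_ref)

    c_fit, e_fit = fit()
    for pct in range(0, 35, 5):
        c_col, e_col = colonoscopy(pct / 100.0)
        ratio = icer(c_col, e_col, c_fit, e_fit)
        verdict = "cost effective" if ratio <= WTP else "NOT cost effective"
        print(f"inadequate prep {pct:2d}%: ICER = ${ratio:9,.0f}/QALY -> {verdict}")
    ```

    With these invented numbers the crossover lands near a 10% inadequate-preparation rate, illustrating the kind of threshold the study reports at 13%.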

  18. Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.

    Science.gov (United States)

    Jones, Matt; Love, Bradley C

    2011-08-01

    The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology - namely, Behaviorism and evolutionary psychology - that set aside mechanistic explanations or make use of optimality assumptions. Through these comparisons, we identify a number of challenges that limit the rational program's potential contribution to psychological theory. Specifically, rational Bayesian models are significantly unconstrained, both because they are uninformed by a wide range of process-level data and because their assumptions about the environment are generally not grounded in empirical measurement. The psychological implications of most Bayesian models are also unclear. Bayesian inference itself is conceptually trivial, but strong assumptions are often embedded in the hypothesis sets and the approximation algorithms used to derive model predictions, without a clear delineation between psychological commitments and implementational details. Comparing multiple Bayesian models of the same task is rare, as is the realization that many Bayesian models recapitulate existing (mechanistic level) theories. Despite the expressive power of current Bayesian models, we argue they must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition. We lay out several means for such an integration, which take into account the representations on which Bayesian inference operates, as well as the algorithms and heuristics that carry it out. We argue this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls

  19. Cost effectiveness of ovarian reserve testing in in vitro fertilization: a Markov decision-analytic model

    NARCIS (Netherlands)

    Moolenaar, Lobke M.; Broekmans, Frank J. M.; van Disseldorp, Jeroen; Fauser, Bart C. J. M.; Eijkemans, Marinus J. C.; Hompes, Peter G. A.; van der Veen, Fulco; Mol, Ben Willem J.

    2011-01-01

    To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). A Markov decision model based on data from the literature and original patient data. Decision analytic framework. Computer-simulated cohort of subfertile women aged 20 to 45 years who are eligible for IVF.

  20. Cost effectiveness of ovarian reserve testing in in vitro fertilization : a Markov decision-analytic model

    NARCIS (Netherlands)

    Moolenaar, Lobke M.; Broekmans, Frank J. M.; van Disseldorp, Jeroen; Fauser, Bart C. J. M.; Eijkemans, Marinus J. C.; Hompes, Peter G. A.; van der Veen, Fulco; Mol, Ben Willem J.

    2011-01-01

    Objective: To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). Design: A Markov decision model based on data from the literature and original patient data. Setting: Decision analytic framework. Patient(s): Computer-simulated cohort of subfertile women aged

  1. A cost-effective model for monitoring medicine use in Namibia: Outcomes and implications

    Directory of Open Access Journals (Sweden)

    Dan Kibuule

    2017-11-01

    Conclusions: A multisectoral collaborative model is cost-effective in medicine surveys, if there are mutual benefits. Student placements provide an opportunity to build local capacity for routine MUE. Ministries of Health should utilise this innovative approach to assess service delivery.

  2. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and additional measurements are difficult or very costly to obtain; the resulting lack of data can be compensated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to any other sourced DSMs.

  3. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and additional measurements are difficult or very costly to obtain; the resulting lack of data can be compensated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to any other sourced DSMs.

  4. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize the biological processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means of carrying out Bayesian inference while making as few restrictive parametric assumptions as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  5. Learning Bayesian Dependence Model for Student Modelling

    Directory of Open Access Journals (Sweden)

    Adina COCU

    2008-12-01

    Learning a Bayesian network from a numeric data set is a challenging task because of the dual nature of the learning process: the network structure must be learned first, and then the probability distribution tables. In this paper, we propose a machine-learning algorithm based on hill-climbing search combined with a tabu list. The aim of the learning process is to discover the network that best represents the dependences between nodes. Another issue in the machine-learning procedure is handling numeric attributes, which requires an attribute discretization pre-process. This discretization operation can influence the results of learning the network structure. Therefore, we make a comparative study to find the most suitable combination of discretization method and learning algorithm for a specific data set.
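
    A minimal sketch of hill-climbing structure search with a tabu list, on a toy edge-set space (the scoring function below is a placeholder; a real implementation would score candidate DAGs on data with, e.g., BIC, and enumerate edge additions, removals and reversals):

    ```python
    import itertools

    def key(edges):
        return frozenset(edges)

    def hill_climb_tabu(initial, neighbors, score, n_iters=100, tabu_size=10):
        """Greedy structure search with a tabu list: repeatedly move to the
        best-scoring neighbor whose key is not tabu; keep the best visited."""
        current, best = initial, initial
        tabu = []
        for _ in range(n_iters):
            candidates = [s for s in neighbors(current) if key(s) not in tabu]
            if not candidates:
                break
            current = max(candidates, key=score)
            tabu = (tabu + [key(current)])[-tabu_size:]  # bounded memory
            if score(current) > score(best):
                best = current
        return best

    # Toy search space: directed edge sets over three nodes.
    NODES = ("A", "B", "C")
    ALL_EDGES = [(u, v) for u, v in itertools.permutations(NODES, 2)]

    def neighbors(edges):
        out = []
        for e in ALL_EDGES:
            if e in edges:
                out.append(tuple(x for x in edges if x != e))  # remove edge
            elif (e[1], e[0]) not in edges:
                out.append(edges + (e,))                       # add edge
        return out

    def score(edges):  # placeholder score; a real one would be BIC on data
        wanted = {("A", "B"), ("B", "C")}
        return len(wanted & set(edges)) - 0.5 * len(set(edges) - wanted)

    print(sorted(hill_climb_tabu((), neighbors, score)))
    ```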

  6. Efficient Bayesian network modeling of systems

    International Nuclear Information System (INIS)

    Bensi, Michelle; Kiureghian, Armen Der; Straub, Daniel

    2013-01-01

    The Bayesian network (BN) is a convenient tool for probabilistic modeling of system performance, particularly when it is of interest to update the reliability of the system or its components in light of observed information. In this paper, BN structures for modeling the performance of systems that are defined in terms of their minimum link or cut sets are investigated. Standard BN structures that define the system node as a child of its constituent components or its minimum link/cut sets lead to converging structures, which are computationally disadvantageous and could severely hamper application of the BN to real systems. A systematic approach to defining an alternative formulation is developed that creates chain-like BN structures that are orders of magnitude more efficient, particularly in terms of computational memory demand. The formulation uses an integer optimization algorithm to identify the most efficient BN structure. Example applications demonstrate the proposed methodology and quantify the gained computational advantage
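
    As a back-of-the-envelope illustration of the memory argument (a series system is our assumed example here, not necessarily the paper's), compare conditional probability table (CPT) sizes for n binary components: a converging structure needs one table over all components, while a chain of intermediate "system so far" nodes needs n small tables.

    ```python
    def converging_cpt_entries(n):
        # System node with all n binary components as parents:
        # 2**n parent configurations times 2 system states.
        return 2 ** (n + 1)

    def chain_cpt_entries(n):
        # Intermediate nodes S_i = S_{i-1} AND C_i: each has two binary
        # parents, so n tables of 2 * 2 * 2 entries each.
        return n * 2 ** 3

    for n in (5, 10, 20, 30):
        print(n, converging_cpt_entries(n), chain_cpt_entries(n))
    ```

    Already at n = 20 the converging structure needs over two million entries against 160 for the chain, which is the "orders of magnitude" gap the abstract refers to.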

  7. Cost-effectiveness of screening for HIV in primary care: a health economics modelling analysis.

    Science.gov (United States)

    Baggaley, Rebecca F; Irvine, Michael A; Leber, Werner; Cambiano, Valentina; Figueroa, Jose; McMullen, Heather; Anderson, Jane; Santos, Andreia C; Terris-Prestholt, Fern; Miners, Alec; Hollingsworth, T Déirdre; Griffiths, Chris J

    2017-10-01

    Early HIV diagnosis reduces morbidity, mortality, the probability of onward transmission, and their associated costs, but might increase cost because of earlier initiation of antiretroviral treatment (ART). We investigated this trade-off by estimating the cost-effectiveness of HIV screening in primary care. We modelled the effect of the four-times higher diagnosis rate observed in the intervention arm of the RHIVA2 randomised controlled trial done in Hackney, London (UK), a borough with high HIV prevalence (≥0·2% adult prevalence). We constructed a dynamic, compartmental model representing incidence of infection and the effect of screening for HIV in general practices in Hackney. We assessed cost-effectiveness of the RHIVA2 trial by fitting model diagnosis rates to the trial data, parameterising with epidemiological and behavioural data from the literature when required, using trial testing costs and projecting future costs of treatment. Over a 40 year time horizon, incremental cost-effectiveness ratios were £22 201 (95% credible interval 12 662-132 452) per quality-adjusted life-year (QALY) gained, £372 207 (268 162-1 903 385) per death averted, and £628 874 (434 902-4 740 724) per HIV transmission averted. Under this model scenario, with UK cost data, RHIVA2 would reach the upper National Institute for Health and Care Excellence cost-effectiveness threshold (about £30 000 per QALY gained) after 33 years. Scenarios using cost data from Canada (which indicate prolonged and even higher health-care costs for patients diagnosed late) suggest this threshold could be reached in as little as 13 years. Screening for HIV in primary care has important public health benefits as well as clinical benefits. We predict it to be cost-effective in the UK in the medium term. However, this intervention might be cost-effective far sooner, and even cost-saving, in settings where long-term health-care costs of late-diagnosed patients in high

  8. Model parameter updating using Bayesian networks

    International Nuclear Information System (INIS)

    Treml, C.A.; Ross, Timothy J.

    2004-01-01

    This paper outlines a model parameter updating technique for a new method of model validation using a modified model reference adaptive control (MRAC) framework with Bayesian Networks (BNs). The model parameter updating within this method is generic in the sense that the model/simulation to be validated is treated as a black box. It must have updateable parameters to which its outputs are sensitive, and those outputs must have metrics that can be compared to that of the model reference, i.e., experimental data. Furthermore, no assumptions are made about the statistics of the model parameter uncertainty, only upper and lower bounds need to be specified. This method is designed for situations where a model is not intended to predict a complete point-by-point time domain description of the item/system behavior; rather, there are specific points, features, or events of interest that need to be predicted. These specific points are compared to the model reference derived from actual experimental data. The logic for updating the model parameters to match the model reference is formed via a BN. The nodes of this BN consist of updateable model input parameters and the specific output values or features of interest. Each time the model is executed, the input/output pairs are used to adapt the conditional probabilities of the BN. Each iteration further refines the inferred model parameters to produce the desired model output. After parameter updating is complete and model inputs are inferred, reliabilities for the model output are supplied. Finally, this method is applied to a simulation of a resonance control cooling system for a prototype coupled cavity linac. The results are compared to experimental data.

  9. Cost-effectiveness of community screening for glaucoma in rural India: a decision analytical model.

    Science.gov (United States)

    John, D; Parikh, R

    2018-02-01

    Studies in several countries have demonstrated the cost-effectiveness of population-based screening for glaucoma when targeted at high-risk groups such as older adults and those with a familial history of disease. This study conducts a cost-effectiveness analysis of a hypothetical community screening and subsequent treatment programme in comparison with opportunistic case finding for glaucoma in rural India. A hypothetical screening programme for both primary open-angle glaucoma and angle-closure disease was built for a population aged between 40 and 69 years in rural areas of India. A decision analytical model was built to model events, costs and treatment pathways with and without a hypothetical screening programme for glaucoma for a rural-based population aged between 40 and 69 years in India. The treatment pathway included both primary open-angle glaucoma and angle-closure disease. The data on costs of screening and treatment were provided by an administrator of a tertiary eye hospital in Eastern India. The probabilities for the screening and treatment pathway were derived from published literature and a glaucoma specialist. The glaucoma prevalence rates were adapted from the Chennai Glaucoma Study findings. An incremental cost-effectiveness ratio value of ₹7292.30 per quality-adjusted life-year was calculated for a community-screening programme for glaucoma in rural India. The community screening for glaucoma would treat an additional 2872 cases and prevent 2190 person-years of blindness over a 10-year period. Community screening for glaucoma in rural India appears to be cost-effective when judged by a ratio of willingness-to-pay thresholds as per WHO-CHOICE guidelines. For community screening to be cost-effective, adequate resources, such as trained medical personnel and equipment, would need to be made available. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  10. Item selection via Bayesian IRT models.

    Science.gov (United States)

    Arima, Serena

    2015-02-10

    With reference to a questionnaire that aimed to assess the quality of life for dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, which is known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and as a function of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and the difficulty parameters by using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items. Items that belong to the same groups can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire is able to provide the same information that the complete questionnaire provides. The model is estimated by using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach on the basis of data that are collected for 104 dysarthric patients by local health authorities in Lecce and in Milan. Copyright © 2014 John Wiley & Sons, Ltd.
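
    For reference, a minimal statement of the graded response model mentioned above, in standard IRT notation (our notation: θ_i is the latent trait of respondent i, a_j the discrimination and b_{jc} the category-c difficulty of item j):

    ```latex
    P(Y_{ij} \ge c \mid \theta_i)
      = \frac{\exp\{a_j(\theta_i - b_{jc})\}}{1 + \exp\{a_j(\theta_i - b_{jc})\}},
    \qquad
    P(Y_{ij} = c \mid \theta_i)
      = P(Y_{ij} \ge c \mid \theta_i) - P(Y_{ij} \ge c + 1 \mid \theta_i).
    ```

    In the paper's formulation, the item parameters (a_j, b_j) are further given a k-component normal-mixture prior, so items falling in the same mixture component are treated as equivalent in difficulty and discrimination, which is what licenses dropping redundant items.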

  11. Advances in Bayesian Modeling in Educational Research

    Science.gov (United States)

    Levy, Roy

    2016-01-01

    In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…

  12. Ecological validity of cost-effectiveness models of universal HPV vaccination: A systematic literature review.

    Science.gov (United States)

    Favato, Giampiero; Easton, Tania; Vecchiato, Riccardo; Noikokyris, Emmanouil

    2017-05-09

    The protective (herd) effect of the selective vaccination of pubertal girls against human papillomavirus (HPV) implies a high probability that one of the two partners involved in intercourse is immunised, hence preventing the other from this sexually transmitted infection. The dynamic transmission models used to inform immunisation policy should include consideration of sexual behaviours and population mixing in order to demonstrate an ecological validity, whereby the scenarios modelled remain faithful to the real-life social and cultural context. The primary aim of this review is to test the ecological validity of the universal HPV vaccination cost-effectiveness modelling available in the published literature. The research protocol related to this systematic review has been registered in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42016034145). Eight published economic evaluations were reviewed. None of the studies showed due consideration of the complexities of human sexual behaviour and the impact these may have on the transmission of HPV. Our findings indicate that all the included models might be affected by a different degree of ecological bias, which implies an inability to reflect the natural demographic and behavioural trends in their outcomes and, consequently, to accurately inform public healthcare policy. In particular, ecological bias has the effect of over-estimating the preference-based outcomes of selective immunisation. A relatively small (15-20%) over-estimation of quality-adjusted life years (QALYs) gained with selective immunisation programmes could induce a significant error in the estimate of cost-effectiveness of universal immunisation, by inflating its incremental cost-effectiveness ratio (ICER) beyond the acceptability threshold. The results modelled here demonstrate the limitations of the cost-effectiveness studies for HPV vaccination, and highlight the concern that public healthcare policy might have been …

  13. Cost-effectiveness analysis of a patient-centered care model for management of psoriasis.

    Science.gov (United States)

    Parsi, Kory; Chambers, Cindy J; Armstrong, April W

    2012-04-01

    Cost-effectiveness analyses help policymakers make informed decisions regarding funding allocation of health care resources. Cost-effectiveness analysis of technology-enabled models of health care delivery is necessary to assess sustainability of novel online, patient-centered health care models. We sought to compare cost-effectiveness of conventional in-office care with a patient-centered, online model for follow-up treatment of patients with psoriasis. Cost-effectiveness analysis was performed from a societal perspective on a randomized controlled trial comparing a patient-centered online model with in-office visits for treatment of patients with psoriasis during a 24-week period. Quality-adjusted life expectancy was calculated using the life table method. Costs were generated from the original study parameters and national averages for salaries and services. No significant difference existed in the mean change in Dermatology Life Quality Index scores between the two groups (online: 3.51 ± 4.48 and in-office: 3.88 ± 6.65, P value = .79). Mean improvement in quality-adjusted life expectancy was not significantly different between the groups (P value = .93), with a gain of 0.447 ± 0.48 quality-adjusted life years for the online group and a gain of 0.463 ± 0.815 quality-adjusted life years for the in-office group. The cost of follow-up psoriasis care with online visits was 1.7 times less than the cost of in-person visits ($315 vs $576). Variations in travel time existed among patients depending on their distance from the dermatologist's office. From a societal perspective, the patient-centered online care model appears to be cost saving, while maintaining similar effectiveness to standard in-office care. Copyright © 2011 American Academy of Dermatology, Inc. Published by Mosby, Inc. All rights reserved.

  14. Cost-effectiveness analysis of countermeasures using accident consequence assessment models

    International Nuclear Information System (INIS)

    Alonso, A.; Gallego, E.

    1987-01-01

    In the event of a large release of radionuclides from a nuclear power plant, protective actions for the population potentially affected must be implemented. Cost-effectiveness analysis will be useful to define the countermeasures and the criteria needed to implement them. This paper shows the application of Accident Consequence Assessment (ACA) models to cost-effectiveness analysis of emergency and long-term countermeasures, making use of the different relationships between dose, contamination levels, affected areas and population distribution, included in such a model. The procedure is illustrated with the new Melcor Accident Consequence Code System (MACCS 1.3), developed at Sandia National Laboratories (USA), for a fixed accident scenario. Different alternative actions are evaluated with regard to their radiological and economical impact, searching for an 'optimum' strategy. (author)

  15. Cost-effective degradation test plan for a nonlinear random-coefficients model

    International Nuclear Information System (INIS)

    Kim, Seong-Joon; Bae, Suk Joo

    2013-01-01

    The determination of the requisite sample size and inspection schedule, considering both testing cost and accuracy, has been an important issue in degradation testing. This paper proposes a cost-effective degradation test plan in the context of a nonlinear random-coefficients model, while meeting precision constraints for the failure-time distribution. We introduce a precision measure to quantify the information losses incurred by reducing testing resources. The precision measure is incorporated into time-varying cost functions to reflect real circumstances. We apply a hybrid genetic algorithm to the general cost optimization problem, with reasonable constraints on the level of testing precision, in order to determine a cost-effective inspection scheme. The proposed method is applied to degradation data of plasma display panels (PDPs) following a bi-exponential degradation model. Finally, sensitivity analysis via simulation is provided to evaluate the robustness of the proposed degradation test plan.

  16. Cost-effectiveness of screening for HIV in primary care: a health economics modelling analysis

    OpenAIRE

    Baggaley, R. F.; Irvine, M. A.; Leber, W.; Cambiano, V.; Figueroa, J.; McMullen, H.; Anderson, J.; Santos, A. C.; Terris-Prestholt, F.; Miners, A.; Hollingsworth, T. D.; Griffiths, C. J.

    2017-01-01

    BACKGROUND: Early HIV diagnosis reduces morbidity, mortality, the probability of onward transmission, and their associated costs, but might increase cost because of earlier initiation of antiretroviral treatment (ART). We investigated this trade-off by estimating the cost-effectiveness of HIV screening in primary care. METHODS: We modelled the effect of the four-times higher diagnosis rate observed in the intervention arm of the RHIVA2 randomised controlled trial done in Hackney, London (UK),...

  17. Cost-effectiveness of screening for HIV in primary care: a health economics modelling analysis

    OpenAIRE

    Baggaley, Rebecca F; Irvine, Michael A; Leber, Werner; Cambiano, Valentina; Figueroa, Jose; McMullen, Heather; Anderson, Jane; Santos, Andreia C; Terris-Prestholt, Fern; Miners, Alec; Hollingsworth, T Déirdre; Griffiths, Chris J

    2017-01-01

    Background: Early HIV diagnosis reduces morbidity, mortality, the probability of onward transmission, and their associated costs, but might increase cost because of earlier initiation of antiretroviral treatment (ART). We investigated this trade-off by estimating the cost-effectiveness of HIV screening in primary care. Methods: We modelled the effect of the four-times higher diagnosis rate observed in the intervention arm of the RHIVA2 randomised controlled trial done in Hackney, London...

  18. Bayesian inference of chemical kinetic models from proposed reactions

    KAUST Repository

    Galagali, Nikhil; Marzouk, Youssef M.

    2015-01-01

    © 2014 Elsevier Ltd. Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model

  19. Effectiveness and cost-effectiveness of antidepressants in primary care: a multiple treatment comparison meta-analysis and cost-effectiveness model.

    Directory of Open Access Journals (Sweden)

    Joakim Ramsberg

    OBJECTIVE: To determine the effectiveness and cost-effectiveness over a one-year time horizon of pharmacological first-line treatment in primary care for patients with moderate to severe depression. DESIGN: A multiple treatment comparison meta-analysis was employed to determine the relative efficacy in terms of remission of 10 antidepressants (citalopram, duloxetine, escitalopram, fluoxetine, fluvoxamine, mirtazapine, paroxetine, reboxetine, sertraline and venlafaxine). The estimated remission rates were then applied in a decision-analytic model in order to estimate costs and quality of life with different treatments at one year. DATA SOURCES: Meta-analyses of remission rates from randomised controlled trials, and cost and quality-of-life data from published sources. RESULTS: The most favourable pharmacological treatment in terms of remission was escitalopram, with an 8- to 12-week probability of remission of 0.47. Despite a high acquisition cost, this clinical effectiveness translated into escitalopram being both more effective and having a lower total cost than all other comparators from a societal perspective. From a healthcare perspective, the cost per QALY of escitalopram was €3732 compared with venlafaxine. CONCLUSION: Of the investigated antidepressants, escitalopram has the highest probability of remission and is the most effective and cost-effective pharmacological treatment in a primary care setting, when evaluated over a one-year time horizon. Small differences in remission rates may be important when assessing costs and cost-effectiveness of antidepressants.

  20. Comparing the relative cost-effectiveness of diagnostic studies: a new model

    International Nuclear Information System (INIS)

    Patton, D.D.; Woolfenden, J.M.; Wellish, K.L.

    1986-01-01

    We have developed a model to compare the relative cost-effectiveness of two or more diagnostic tests. The model defines a cost-effectiveness ratio (CER) for a diagnostic test as the ratio of effective cost to base cost, with only dollar costs considered. Effective cost includes base cost, the cost of dealing with expected side effects, and wastage due to imperfect test performance. Test performance is measured by diagnostic utility (DU), a measure of test outcomes incorporating the decision-analytic variables sensitivity, specificity, equivocal fraction, disease probability, and outcome utility. Each of these factors affecting DU, and hence CER, is a local, not universal, value; these local values strongly affect CER, which in effect becomes a property of the local medical setting. When DU = +1 and there are no adverse effects, CER = 1 and the patient benefits from the test dollar for dollar. When there are adverse effects, effective cost exceeds base cost, and for an imperfect test DU < 1, so CER > 1. As DU approaches 0 (worthless test), CER approaches infinity (no effectiveness at any cost). If DU is negative, indicating that doing the test at all would be detrimental, CER also becomes negative. We conclude that the CER model is a useful preliminary method for ranking the relative cost-effectiveness of diagnostic tests, and that the comparisons would best be done using local values; different groups might well arrive at different rankings. (Author)
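
    A minimal sketch consistent with the behaviour described above, under our own assumption (not spelled out in the abstract) that CER = effective cost / (DU × base cost); this reproduces CER = 1 for a perfect, side-effect-free test, CER → ∞ as DU → 0, and a negative CER for a detrimental test:

    ```python
    def effective_cost(base, side_effect_cost=0.0, wastage=0.0):
        """Effective cost = base cost + cost of expected side effects
        + wastage due to imperfect test performance."""
        return base + side_effect_cost + wastage

    def cer(base, du, side_effect_cost=0.0, wastage=0.0):
        """Hypothesized cost-effectiveness ratio:
        effective cost relative to base cost, scaled by diagnostic utility."""
        return effective_cost(base, side_effect_cost, wastage) / (du * base)

    print(cer(base=100.0, du=1.0))                         # 1.0: dollar for dollar
    print(cer(base=100.0, du=0.5, side_effect_cost=20.0))  # 2.4: imperfect test
    print(cer(base=100.0, du=-0.2))                        # -5.0: detrimental test
    ```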

  1. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systematic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
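
    The variance segregation mentioned above is the law of total variance applied recursively down the BMA tree; schematically, for a prediction Δ and two uncertain model components M₁ and M₂ (our simplified notation, not the cited papers' exact symbols):

    ```latex
    \operatorname{Var}[\Delta]
      = \underbrace{\mathbb{E}_{M_1}\mathbb{E}_{M_2}\big[\operatorname{Var}(\Delta \mid M_1, M_2)\big]}_{\text{within-model variance}}
      + \underbrace{\mathbb{E}_{M_1}\big[\operatorname{Var}_{M_2}\big(\mathbb{E}[\Delta \mid M_1, M_2]\big)\big]}_{\text{between-model, level 2}}
      + \underbrace{\operatorname{Var}_{M_1}\big(\mathbb{E}_{M_2}\mathbb{E}[\Delta \mid M_1, M_2]\big)}_{\text{between-model, level 1}}
    ```

    The first term is the within-model variance; the remaining terms attribute between-model variance to each level of the tree, which is what allows the contribution of each uncertain component to be prioritized.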

  2. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
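
    A minimal sketch contrasting plug-in and bootstrap (bagged) prediction for a simple Gaussian model (the model, data, and bootstrap size here are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(loc=2.0, scale=1.0, size=30)  # observed sample

    def plug_in_prediction(sample, x):
        """Plug-in predictive density: a normal with estimated mean/variance."""
        mu, sigma = sample.mean(), sample.std(ddof=1)
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def bagged_prediction(sample, x, n_boot=500):
        """Bootstrap (bagged) prediction: average the plug-in predictive
        density over bootstrap resamples of the data."""
        preds = [plug_in_prediction(rng.choice(sample, size=sample.size,
                                               replace=True), x)
                 for _ in range(n_boot)]
        return np.mean(preds)

    x = 3.5
    print("plug-in:", plug_in_prediction(data, x))
    print("bagged :", bagged_prediction(data, x))
    ```

    Averaging over resamples smooths out the sampling variability of the parameter estimates, which is the sense in which bagging approximates the Bayesian predictive distribution when the assumed model holds.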

  3. Effectiveness and cost-effectiveness of an awareness campaign for colorectal cancer: a mathematical modeling study.

    Science.gov (United States)

    Whyte, Sophie; Harnan, Susan

    2014-06-01

    A campaign to increase the awareness of the signs and symptoms of colorectal cancer (CRC) and encourage self-presentation to a GP was piloted in two regions of England in 2011. Short-term data from the pilot evaluation on campaign cost and changes in GP attendances/referrals, CRC incidence, and CRC screening uptake were available. The objective was to estimate the effectiveness and cost-effectiveness of a CRC awareness campaign by using a mathematical model which extrapolates short-term outcomes to predict long-term impacts on cancer mortality, quality-adjusted life-years (QALYs), and costs. A mathematical model representing England (aged 30+) for a lifetime horizon was developed. Long-term changes to cancer incidence, cancer stage distribution, cancer mortality, and QALYs were estimated. Costs were estimated incorporating the costs associated with delivering the campaign, additional GP attendances, and changes in CRC treatment. Data from the pilot campaign suggested that the awareness campaign caused a 10% increase in presentation rates lasting one month. Based on this, the model predicted the campaign to cost £5.5 million, prevent 66 CRC deaths and gain 404 QALYs. The incremental cost-effectiveness ratio compared to "no campaign" was £13,496 per QALY. Results were sensitive to the magnitude and duration of the increase in presentation rates and to disease stage. The effectiveness and cost-effectiveness of a cancer awareness campaign can be estimated based on short-term data. Such predictions will aid policy makers in prioritizing between cancer control strategies. Future cost-effectiveness studies would benefit from campaign evaluations reporting the following: data completeness, duration of impact, impact on emergency presentations, and comparison with non-intervention regions.

  4. Cost Effectiveness of HPV Vaccination: A Systematic Review of Modelling Approaches.

    Science.gov (United States)

    Pink, Joshua; Parker, Ben; Petrou, Stavros

    2016-09-01

    A large number of economic evaluations have been published that assess alternative possible human papillomavirus (HPV) vaccination strategies. Understanding differences in the modelling methodologies used in these studies is important to assess the accuracy, comparability and generalisability of their results. The aim of this review was to identify published economic models of HPV vaccination programmes and understand how characteristics of these studies vary by geographical area, date of publication and the policy question being addressed. We performed literature searches in MEDLINE, Embase, Econlit, The Health Economic Evaluations Database (HEED) and The National Health Service Economic Evaluation Database (NHS EED). From the 1189 unique studies retrieved, 65 studies were included for data extraction based on a priori eligibility criteria. Two authors independently reviewed these articles to determine eligibility for the final review. Data were extracted from the selected studies, focussing on six key structural or methodological themes covering different aspects of the model(s) used that may influence cost-effectiveness results. More recently published studies tend to model a larger number of HPV strains, and include a larger number of HPV-associated diseases. Studies published in Europe and North America also tend to include a larger number of diseases and are more likely to incorporate the impact of herd immunity and to use more realistic assumptions around vaccine efficacy and coverage. Studies based on previous models often do not include sufficiently robust justifications as to the applicability of the adapted model to the new context. The considerable between-study heterogeneity in economic evaluations of HPV vaccination programmes makes comparisons between studies difficult, as observed differences in cost effectiveness may be driven by differences in methodology as well as by variations in funding and delivery models and estimates of model parameters

  5. Cost-effectiveness of new pneumococcal conjugate vaccines in Turkey: a decision analytical model

    Directory of Open Access Journals (Sweden)

    Bakır Mustafa

    2012-11-01

    Background: Streptococcus pneumoniae infections, which place a considerable burden on healthcare resources, can be reduced in a cost-effective manner using a 7-valent pneumococcal conjugate vaccine (PCV-7). We compare the cost effectiveness of a 13-valent PCV (PCV-13) and a 10-valent pneumococcal non-typeable Haemophilus influenzae protein D conjugate vaccine (PHiD-CV) with that of PCV-7 in Turkey. Methods: A cost-utility analysis was conducted and a decision analytical model was used to estimate the proportion of the Turkish population … Results: PCV-13 and PHiD-CV are projected to have a substantial impact on pneumococcal disease in Turkey versus PCV-7, with 2,223 and 3,156 quality-adjusted life years (QALYs) and 2,146 and 2,081 life years, respectively, being saved under a 3+1 schedule. Projections of direct medical costs showed that a PHiD-CV vaccination programme would provide the greatest cost savings, offering additional savings of US$11,718,813 versus PCV-7 and US$8,235,010 versus PCV-13. Probabilistic sensitivity analysis showed that PHiD-CV dominated PCV-13 in terms of QALYs gained and cost savings in 58.3% of simulations. Conclusion: Under the modeled conditions, PHiD-CV would provide the most cost-effective intervention for reducing pneumococcal disease in Turkish children.

  6. The cost-effectiveness of the Olweus Bullying Prevention Program: Results from a modelling study.

    Science.gov (United States)

    Beckman, Linda; Svensson, Mikael

    2015-12-01

    Exposure to bullying affects around 3-5 percent of adolescents in secondary school and is related to various mental health problems. Many different anti-bullying programmes are currently available, but economic evaluations are lacking. The aim of this study is to identify the cost effectiveness of the Olweus Bullying Prevention Program (OBPP). We constructed a decision-tree model for a Swedish secondary school, using a public payer perspective, and retrieved data on costs and effects from the published literature. Probabilistic sensitivity analysis to reflect the uncertainty in the model was conducted. The base-case analysis showed that using the OBPP to reduce the number of victims of bullying costs 131,250 Swedish kronor (€14,470) per victim spared. Compared to a relevant threshold of the societal value of bullying reduction, this indicates that the programme is cost-effective. Using a relevant willingness-to-pay threshold shows that the OBPP is a cost-effective intervention. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  7. A Cost-Effective Tracking Algorithm for Hypersonic Glide Vehicle Maneuver Based on Modified Aerodynamic Model

    Directory of Open Access Journals (Sweden)

    Yu Fan

    2016-10-01

    In order to defend against the hypersonic glide vehicle (HGV), a cost-effective single-model tracking algorithm using the Cubature Kalman filter (CKF) is proposed in this paper, based on a modified aerodynamic model (MAM) as the process equation and a radar measurement model as the measurement equation. In the existing aerodynamic model, the two control variables, attack angle and bank angle, cannot be measured by existing radar equipment, and their control laws cannot be known by defenders. To establish the process equation, the MAM for HGV tracking is proposed by using additive white noise to model the rates of change of the two control variables. For ease of comparison, several multiple-model algorithms based on CKF are presented, including the interacting multiple model (IMM) algorithm, the adaptive grid interacting multiple model (AGIMM) algorithm and the hybrid grid multiple model (HGMM) algorithm. The performances of these algorithms are compared and analyzed according to the simulation results. The simulation results indicate that the proposed tracking algorithm based on the modified aerodynamic model has the best tracking performance, with the best accuracy and least computational cost among all tracking algorithms in this paper. The proposed algorithm is cost-effective for HGV tracking.

  8. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    Science.gov (United States)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We tested the Basel 2006 fluid-induced seismic case study to prove that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
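
    A minimal sketch of the central ingredient, a nonhomogeneous Poisson process whose seismicity rate is proportional to the injection flow rate, simulated by thinning (the flow profile and the proportionality constant below are hypothetical placeholders):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def injection_rate(t):
        """Hypothetical injected-fluid flow rate (m^3/day): constant
        injection for six days, then shut-in."""
        return 1000.0 if t < 6.0 else 0.0

    def seismicity_rate(t, events_per_m3=2e-3):
        """Induced seismicity rate proportional to the injection rate."""
        return events_per_m3 * injection_rate(t)

    def simulate_nhpp(rate, t_max, rate_max):
        """Thinning (Lewis-Shedler) simulation of a nonhomogeneous
        Poisson process with rate function `rate` bounded by `rate_max`."""
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)
            if t > t_max:
                return np.array(events)
            if rng.uniform() < rate(t) / rate_max:
                events.append(t)

    events = simulate_nhpp(seismicity_rate, t_max=10.0, rate_max=2.0)
    print(f"{events.size} induced events; last at day {events.max():.2f}")
    ```

    In the hierarchical Bayesian setting of the paper, the rate parameters themselves are random variables with priors, so event catalogues like this one update the posterior rather than point estimates.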

  9. Cost effectiveness of ovarian reserve testing in in vitro fertilization: a Markov decision-analytic model.

    Science.gov (United States)

    Moolenaar, Lobke M; Broekmans, Frank J M; van Disseldorp, Jeroen; Fauser, Bart C J M; Eijkemans, Marinus J C; Hompes, Peter G A; van der Veen, Fulco; Mol, Ben Willem J

    2011-10-01

    To compare the cost effectiveness of ovarian reserve testing in in vitro fertilization (IVF). A Markov decision model based on data from the literature and original patient data. Decision analytic framework. Computer-simulated cohort of subfertile women aged 20 to 45 years who are eligible for IVF. [1] No treatment, [2] up to three cycles of IVF limited to women under 41 years and no ovarian reserve testing, [3] up to three cycles of IVF with dose individualization of gonadotropins according to ovarian reserve, and [4] up to three cycles of IVF with ovarian reserve testing and exclusion of expected poor responders after the first cycle, with no treatment scenario as the reference scenario. Cumulative live birth over 1 year, total costs, and incremental cost-effectiveness ratios. The cumulative live birth was 9.0% in the no treatment scenario, 54.8% for scenario 2, 70.6% for scenario 3 and 51.9% for scenario 4. Absolute costs per woman for these scenarios were €0, €6,917, €6,678, and €5,892 for scenarios 1, 2, 3, and 4, respectively. Incremental cost-effectiveness ratios (ICER) for scenarios 2, 3, and 4 were €15,166, €10,837, and €13,743 per additional live birth. Sensitivity analysis showed the model to be robust over a wide range of values. Individualization of the follicle-stimulating hormone dose according to ovarian reserve is likely to be cost effective in women who are eligible for IVF, but this effectiveness needs to be confirmed in randomized clinical trials. Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
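
    As a quick check, the reported incremental cost-effectiveness ratios follow (up to rounding in the published figures) from the per-scenario costs and cumulative live birth rates, each compared against the no-treatment reference scenario:

    ```python
    # Scenario data from the abstract: (cost per woman in EUR, cumulative live birth)
    scenarios = {
        "no treatment":                 (0.0,    0.090),
        "IVF, no ovarian reserve test": (6917.0, 0.548),
        "IVF, dose individualization":  (6678.0, 0.706),
        "IVF, exclude poor responders": (5892.0, 0.519),
    }

    ref_cost, ref_births = scenarios["no treatment"]
    for name, (cost, births) in scenarios.items():
        if name == "no treatment":
            continue
        icer = (cost - ref_cost) / (births - ref_births)
        print(f"{name}: ICER = EUR {icer:,.0f} per additional live birth")
    ```

    This reproduces approximately €15,100, €10,840 and €13,730 per additional live birth, matching the abstract's €15,166, €10,837 and €13,743 to within rounding of the published inputs.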

  10. Cost and cost-effectiveness of tuberculosis treatment shortening: a model-based analysis.

    Science.gov (United States)

    Gomez, G B; Dowdy, D W; Bastos, M L; Zwerling, A; Sweeney, S; Foster, N; Trajman, A; Islam, M A; Kapiga, S; Sinanovic, E; Knight, G M; White, R G; Wells, W A; Cobelens, F G; Vassall, A

    2016-12-01

    Despite improvements in treatment success rates for tuberculosis (TB), the current six-month regimen duration remains a challenge for many National TB Programmes, health systems, and patients. There is increasing investment in the development of shortened regimens, with a number of candidates in phase 3 trials. We developed an individual-based decision analytic model to assess the cost-effectiveness of a hypothetical four-month regimen for first-line treatment of TB, assuming non-inferiority to current regimens of six-month duration. The model was populated using extensive, empirically collected data to estimate the economic impact on both health systems and patients of regimen shortening for first-line TB treatment in South Africa, Brazil, Bangladesh, and Tanzania. We explicitly considered 'real world' constraints such as sub-optimal guideline adherence. From a societal perspective, a shortened regimen, priced at USD1 per day, could be a cost-saving option in South Africa, Brazil, and Tanzania, but would not be cost-effective in Bangladesh when compared to one gross domestic product (GDP) per capita. Incorporating 'real world' constraints reduces cost-effectiveness. Patient-incurred costs could be reduced in all settings. From a health service perspective, increased drug costs need to be balanced against decreased delivery costs. The new regimen would remain a cost-effective option, when compared to each country's GDP per capita, even if new drugs cost up to USD7.5 and USD53.8 per day in South Africa and Brazil; this threshold was above USD1 in Tanzania and under USD1 in Bangladesh. Reducing the duration of first-line TB treatment has the potential for substantial economic gains from a patient perspective. The potential economic gains for health services may also be important, but will be context-specific and dependent on the appropriate pricing of any new regimen.

  11. Bayesian Networks for Modeling Dredging Decisions

    Science.gov (United States)

    2011-10-01

    …years, that algorithms have been developed to solve these problems efficiently. Most modern Bayesian network software uses junction tree (a.k.a. join…) …software was used to develop the network. This is by no means an exhaustive list of Bayesian network applications, but it is representative of recent… …characteristic node (SCN), state-defining node (SDN), effect node (EFN), or value node. The five types of nodes can be described as follows: ERDC/EL TR-11

  12. Fast model updating coupling Bayesian inference and PGD model reduction

    Science.gov (United States)

    Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic

    2018-04-01

    The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainties (measurement and model errors, stochastic parameters). In order to do so at a reasonable CPU cost, the idea is to replace the direct model, which is called repeatedly during Monte Carlo sampling, with a PGD reduced model, and in some cases to compute the probability density functions directly from the resulting analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued assembly example.
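
    The computational trick generalizes: once the forward model is cheap, plain Monte Carlo posterior sampling becomes affordable. The sketch below illustrates the idea with a toy quadratic surrogate standing in for the PGD reduced model; it is not the authors' formulation, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate standing in for the PGD reduced model: a cheap analytical
# approximation of an otherwise expensive forward solve.
def reduced_model(theta):
    return theta**2 + 0.5 * theta

theta_true, sigma = 1.2, 0.05
y_obs = reduced_model(theta_true) + rng.normal(0.0, sigma)

def log_posterior(theta):
    log_prior = -0.5 * theta**2              # standard normal prior
    resid = y_obs - reduced_model(theta)     # cheap to evaluate many times
    return log_prior - 0.5 * (resid / sigma) ** 2

# Plain Metropolis sampling; each step costs almost nothing because the
# surrogate replaces the full numerical solve.
theta, samples = 0.0, []
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

print("posterior mean:", np.mean(samples[5_000:]))
```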

  13. When mechanism matters: Bayesian forecasting using models of ecological diffusion

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.; Russell, Robin E.; Walsh, Daniel P.; Powell, James A.

    2017-01-01

    Ecological diffusion is a theory that can be used to understand and forecast spatio-temporal processes such as dispersal, invasion, and the spread of disease. Hierarchical Bayesian modelling provides a framework to make statistical inference and probabilistic forecasts, using mechanistic ecological models. To illustrate, we show how hierarchical Bayesian models of ecological diffusion can be implemented for large data sets that are distributed densely across space and time. The hierarchical Bayesian approach is used to understand and forecast the growth and geographic spread in the prevalence of chronic wasting disease in white-tailed deer (Odocoileus virginianus). We compare statistical inference and forecasts from our hierarchical Bayesian model to phenomenological regression-based methods that are commonly used to analyse spatial occurrence data. The mechanistic statistical model based on ecological diffusion led to important ecological insights, obviated a commonly ignored type of collinearity, and was the most accurate method for forecasting.

  14. Vaginal microbicides save money: a model of cost-effectiveness in South Africa and the USA.

    Science.gov (United States)

    Verguet, S; Walsh, J A

    2010-06-01

    To determine the hypothetical cost-effectiveness of vaginal microbicides in preventing male-to-female HIV transmission. A mathematical epidemiological and cost-effectiveness model was built using data from South Africa and the USA. The prospective 1-year-long intervention targeted a general population of women in a city of 1,000,000 inhabitants in two very different epidemiological settings: South Africa, with a male HIV prevalence of 18.80%, and the USA, with a male HIV prevalence of 0.72%. The base case scenario assumes a microbicide with 55% effectiveness, used in 30% of sexual episodes, at a retail price for the public sector of US$0.51 per use in South Africa and US$2.23 per use in the USA. In South Africa, over 1 year, the intervention would prevent 1908 infections and save US$6712 per infection averted as compared with antiretroviral treatment. In the USA, it would be more costly: over 1 year, the intervention would prevent 21 infections, amounting to a net cost per infection averted of US$405,077. However, in the setting of Washington DC, with its higher HIV prevalence, the same intervention would prevent 93 infections and save US$91,176 per infection averted. Sensitivity analyses were conducted, and even a microbicide with a low effectiveness of 30% would still save healthcare costs in South Africa. A microbicide intervention is likely to be very cost-effective in a country undergoing a high-level generalised epidemic such as South Africa, but is unlikely to be cost-effective in a developed country presenting epidemiological features similar to the USA unless the male HIV prevalence exceeds 2.4%.

  15. Assessing the cost-effectiveness of electric vehicles in European countries using integrated modeling

    International Nuclear Information System (INIS)

    Seixas, J.; Simões, S.; Dias, L.; Kanudia, A.; Fortes, P.; Gargiulo, M.

    2015-01-01

    Electric vehicles (EVs) are considered alternatives to internal combustion engines due to their energy efficiency and contribution to CO2 mitigation. The adoption of EVs depends on consumer preferences, including cost, social status and driving habits, although it is agreed that current and expected costs play a major role. We use a partial equilibrium model that minimizes total energy system costs to assess whether EVs can be a cost-effective option for the consumers of each EU27 member state up to 2050, focusing on the impact of different vehicle investment costs and CO2 mitigation targets. We found that for an EU-wide greenhouse gas emission reduction cap of 40% and 70% by 2050 vis-à-vis 1990 emissions, battery electric vehicles (BEVs) are cost-effective in the EU only by 2030 and only if their costs are 30% lower than currently expected. At the EU level, vehicle costs and the capability to deliver both short- and long-distance mobility are the main drivers of BEV deployment. Other drivers include each state's national mobility patterns and the cost-effectiveness of alternative mitigation options, both in the transport sector, such as plug-in hybrid electric vehicles (PHEVs) or biofuels, and in other sectors, such as renewable electricity. - Highlights: • Electric vehicles were assessed through minimization of total energy system costs. • EU climate policy targets could act as a major driver of PHEV adoption. • Battery EVs are an option before 2030 only if costs drop by 30% from expected levels. • EV deployment varies per country depending on each energy system's configuration. • Incentives at the country level should consider specific cost-effectiveness factors

  16. Bayesian hierarchical modelling of North Atlantic windiness

    Science.gov (United States)

    Vanem, E.; Breivik, O. N.

    2013-03-01

    Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause of, or a contributing factor to, maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea, and there is a need for stochastic models that can describe the variability in both space and time, at various scales, of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness, and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  17. Bayesian hierarchical modelling of North Atlantic windiness

    Directory of Open Access Journals (Sweden)

    E. Vanem

    2013-03-01

    Full Text Available Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause of, or a contributing factor to, maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea, and there is a need for stochastic models that can describe the variability in both space and time, at various scales, of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness, and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  18. Tuberculosis active case finding in Cambodia: a pragmatic, cost-effectiveness comparison of three implementation models.

    Science.gov (United States)

    James, Richard; Khim, Keovathanak; Boudarene, Lydia; Yoong, Joanne; Phalla, Chea; Saint, Saly; Koeut, Pichenda; Mao, Tan Eang; Coker, Richard; Khan, Mishal Sameer

    2017-08-22

    Globally, almost 40% of tuberculosis (TB) patients remain undiagnosed, and those that are diagnosed often experience prolonged delays before initiating correct treatment, leading to ongoing transmission. While there is a push for active case finding (ACF) to improve early detection and treatment of TB, there is extremely limited evidence about the relative cost-effectiveness of different ACF implementation models. Cambodia presents a unique opportunity for addressing this gap in evidence, as ACF has been implemented using different models but no comparisons have been conducted. The objective of our study is to contribute to knowledge and methodology on comparing the cost-effectiveness of alternative ACF implementation models from the health service perspective, using programmatic data, in order to inform national policy and practice. We retrospectively compared three distinct ACF implementation models - door-to-door symptom screening in urban slums, checking contacts of TB patients, and door-to-door symptom screening focusing on rural populations aged above 55 - in terms of the number of new bacteriologically-positive pulmonary TB cases diagnosed and the cost of implementation, assuming activities are conducted by the national TB program of Cambodia. We calculated the cost per additional case detected using the alternative ACF models. Our analysis, which is the first of its kind for TB, revealed that the ACF model based on door-to-door screening in poor urban areas of Phnom Penh was the most cost-effective (249 USD per case detected, 737 cases diagnosed), followed by the model based on testing contacts of TB patients (308 USD per case detected, 807 cases diagnosed), and symptomatic screening of older rural populations (316 USD per case detected, 397 cases diagnosed). Our study provides new evidence on the relative effectiveness and economics of three implementation models for enhanced TB case finding, in line with calls for data from 'routine conditions' to be included

  19. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    Since the publication of the first edition, many new Bayesian tools and methods have been developed for space-time data analysis, the predictive modeling of health outcomes, and other spatial biostatistical areas...

  20. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei; Holbrook, Andrew; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak

    2017-01-01

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix

  1. Bayesian network modeling of operator's state recognition process

    International Nuclear Information System (INIS)

    Hatakeyama, Naoki; Furuta, Kazuo

    2000-01-01

    Nowadays we face the difficult problem of establishing a good relation between humans and machines. To address this problem, we suppose that machine systems need to have a model of human behavior. In this study we model the state recognition process of a PWR plant operator as an example. We use a Bayesian network as an inference engine. We incorporate the knowledge hierarchy in the Bayesian network and confirm its validity using the example of a PWR plant operator. (author)
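
    At its core, such a network updates a belief over the hidden operator state from observed indications via Bayes' rule. A minimal two-node illustration (hypothetical numbers; the paper's PWR network is far richer):

```python
# Hidden operator state -> observed behaviour (e.g., slow alarm response).
prior = {"normal": 0.8, "confused": 0.2}       # P(state)
likelihood = {"normal": 0.1, "confused": 0.7}  # P(slow response | state)

evidence = sum(prior[s] * likelihood[s] for s in prior)
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}
print(posterior)  # confusion becomes the more probable state (~0.64)
```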

  2. Experiences in applying Bayesian integrative models in interdisciplinary modeling: the computational and human challenges

    DEFF Research Database (Denmark)

    Kuikka, Sakari; Haapasaari, Päivi Elisabet; Helle, Inari

    2011-01-01

    We review the experience obtained in using integrative Bayesian models in interdisciplinary analysis focusing on sustainable use of marine resources and environmental management tasks. We have applied Bayesian models to both fisheries and environmental risk analysis problems. Bayesian belief...... be time consuming and research projects can be difficult to manage due to unpredictable technical problems related to parameter estimation. Biology, sociology and environmental economics have their own scientific traditions. Bayesian models are becoming traditional tools in fisheries biology, where...

  3. Cost-effectiveness of interventions to promote physical activity: a modelling study.

    Directory of Open Access Journals (Sweden)

    Linda J Cobiac

    2009-07-01

    Full Text Available BACKGROUND: Physical inactivity is a key risk factor for chronic disease, but a growing number of people are not achieving the recommended levels of physical activity necessary for good health. Australians are no exception; despite Australia's image as a sporting nation, with success at the elite level, the majority of Australians do not get enough physical activity. There are many options for intervention, from individually tailored advice, such as counselling from a general practitioner, to population-wide approaches, such as mass media campaigns, but the most cost-effective mix of interventions is unknown. In this study we evaluate the cost-effectiveness of interventions to promote physical activity. METHODS AND FINDINGS: From evidence of intervention efficacy in the physical activity literature and evaluation of the health sector costs of intervention and disease treatment, we model the cost impacts and health outcomes of six physical activity interventions, over the lifetime of the Australian population. We then determine cost-effectiveness of each intervention against current practice for physical activity intervention in Australia and derive the optimal pathway for implementation. Based on current evidence of intervention effectiveness, the intervention programs that encourage use of pedometers (Dominant) and mass media-based community campaigns (Dominant) are the most cost-effective strategies to implement and are very likely to be cost-saving. The internet-based intervention program (AUS$3,000/DALY), the GP physical activity prescription program (AUS$12,000/DALY), and the program to encourage more active transport (AUS$20,000/DALY), although less likely to be cost-saving, have a high probability of being under a AUS$50,000 per DALY threshold. GP referral to an exercise physiologist (AUS$79,000/DALY) is the least cost-effective option if high time and travel costs for patients in screening and consulting an exercise physiologist are considered

  4. Cost-effectiveness analysis in melanoma detection: A transition model applied to dermoscopy.

    Science.gov (United States)

    Tromme, Isabelle; Legrand, Catherine; Devleesschauwer, Brecht; Leiter, Ulrike; Suciu, Stefan; Eggermont, Alexander; Sacré, Laurine; Baurain, Jean-François; Thomas, Luc; Beutels, Philippe; Speybroeck, Niko

    2016-11-01

    The main aim of this study is to demonstrate how our melanoma disease model (MDM) can be used for cost-effectiveness analyses (CEAs) in the melanoma detection field. In particular, we used the data of two cohorts of Belgian melanoma patients to investigate the cost-effectiveness of dermoscopy. A MDM, previously constructed to calculate the melanoma burden, was slightly modified to be suitable for CEAs. Two cohorts of patients entered the model to calculate morbidity, mortality and costs. These cohorts consisted of melanoma patients diagnosed by dermatologists adequately, or not adequately, trained in dermoscopy. Effectiveness and costs were calculated for each cohort and compared. Effectiveness was expressed in quality-adjusted life years (QALYs), a composite measure depending on melanoma-related morbidity and mortality. Costs included costs of treatment and follow-up as well as costs of detection in non-melanoma patients and costs of excision and pathology of benign lesions excised to rule out melanoma. Our analysis concluded that melanoma diagnosis by dermatologists adequately trained in dermoscopy resulted in both a gain of QALYs (less morbidity and/or mortality) and a reduction in costs. This study demonstrates how our MDM can be used in CEAs in the melanoma detection field. The model and the methodology suggested in this paper were applied to two cohorts of Belgian melanoma patients. Their analysis concluded that adequate dermoscopy training is cost-effective. The results should be confirmed by a large-scale randomised study. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. A Comparison of Four Software Programs for Implementing Decision Analytic Cost-Effectiveness Models.

    Science.gov (United States)

    Hollman, Chase; Paulden, Mike; Pechlivanoglou, Petros; McCabe, Christopher

    2017-08-01

    The volume and technical complexity of both academic and commercial research using decision analytic modelling has increased rapidly over the last two decades. The range of software programs used for their implementation has also increased, but it remains true that a small number of programs account for the vast majority of cost-effectiveness modelling work. We report a comparison of four software programs: TreeAge Pro, Microsoft Excel, R and MATLAB. Our focus is on software commonly used for building Markov models and decision trees to conduct cohort simulations, given their predominance in the published literature around cost-effectiveness modelling. Our comparison uses three qualitative criteria as proposed by Eddy et al.: "transparency and validation", "learning curve" and "capability". In addition, we introduce the quantitative criterion of processing speed. We also consider the cost of each program to academic users and commercial users. We rank the programs based on each of these criteria. We find that, whilst Microsoft Excel and TreeAge Pro are good programs for educational purposes and for producing the types of analyses typically required by health technology assessment agencies, the efficiency and transparency advantages of programming languages such as MATLAB and R become increasingly valuable when more complex analyses are required.

  6. Combination of Bayesian Network and Overlay Model in User Modeling

    Directory of Open Access Journals (Sweden)

    Loc Nguyen

    2009-12-01

    Full Text Available The core of an adaptive system is the user model, which contains personal information such as knowledge, learning styles, goals… that is requisite for personalizing the learning process. There are many modeling approaches, for example stereotype, overlay, plan recognition…, but they do not provide a solid method for reasoning from the user model. This paper introduces a statistical method that combines Bayesian networks and overlay modeling so that it is able to infer the user's knowledge from evidence collected during the user's learning process.

  7. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
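
    A compact sketch of the two-step construction described above, using an empirical cdf in place of the authors' nonparametric Bayesian prior: map the series to normal scores, fit Gaussian AR(1) dynamics, and map simulated paths back through the marginal quantiles.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=1.0, size=2000)  # skewed stand-in series

# Step 1: cdf / inverse-cdf transform to normal scores.
u = stats.rankdata(x) / (len(x) + 1)            # empirical cdf in (0, 1)
z = stats.norm.ppf(u)

# Step 2: normal-theory dynamics on the transformed scale (AR(1) here).
phi = np.corrcoef(z[:-1], z[1:])[0, 1]

# Simulate on the Gaussian scale, then map back through the empirical
# quantiles to recover the non-Gaussian marginal.
z_sim = np.zeros(500)
for t in range(1, len(z_sim)):
    z_sim[t] = phi * z_sim[t - 1] + rng.normal(0.0, np.sqrt(1 - phi**2))
x_sim = np.quantile(x, stats.norm.cdf(z_sim))

print("fitted AR coefficient on normal scores:", round(phi, 3))
```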

  8. Bayesian graphical models for genomewide association studies.

    Science.gov (United States)

    Verzilli, Claudio J; Stallard, Nigel; Whittaker, John C

    2006-07-01

    As the extent of human genetic variation becomes more fully characterized, the research community is faced with the challenging task of using this information to dissect the heritable components of complex traits. Genomewide association studies offer great promise in this respect, but their analysis poses formidable difficulties. In this article, we describe a computationally efficient approach to mining genotype-phenotype associations that scales to the size of the data sets currently being collected in such studies. We use discrete graphical models as a data-mining tool, searching for single- or multilocus patterns of association around a causative site. The approach is fully Bayesian, allowing us to incorporate prior knowledge on the spatial dependencies around each marker due to linkage disequilibrium, which reduces considerably the number of possible graphical structures. A Markov chain Monte Carlo scheme is developed that yields samples from the posterior distribution of graphs conditional on the data, from which probabilistic statements about the strength of any genotype-phenotype association can be made. Using data simulated under scenarios that vary in marker density, genotype relative risk of a causative allele, and mode of inheritance, we show that the proposed approach has better localization properties and leads to lower false-positive rates than do single-locus analyses. Finally, we present an application of our method to a quasi-synthetic data set in which data from the CYP2D6 region are embedded within simulated data on 100K single-nucleotide polymorphisms. Analysis is quick (<5 min), and we are able to localize the causative site to a very short interval.

  9. A tutorial introduction to Bayesian models of cognitive development.

    Science.gov (United States)

    Perfors, Amy; Tenenbaum, Joshua B; Griffiths, Thomas L; Xu, Fei

    2011-09-01

    We present an introduction to Bayesian inference as it is used in probabilistic models of cognitive development. Our goal is to provide an intuitive and accessible guide to the what, the how, and the why of the Bayesian approach: what sorts of problems and data the framework is most relevant for, and how and why it may be useful for developmentalists. We emphasize a qualitative understanding of Bayesian inference, but also include information about additional resources for those interested in the cognitive science applications, mathematical foundations, or machine learning details in more depth. In addition, we discuss some important interpretation issues that often arise when evaluating Bayesian models in cognitive science. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Modelling of JET diagnostics using Bayesian Graphical Models

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, J. [IPP Greifswald, Greifswald (Germany); Ford, O. [Imperial College, London (United Kingdom); McDonald, D.; Hole, M.; Nessi, G. von; Meakins, A.; Brix, M.; Thomsen, H.; Werner, A.; Sirinelli, A.

    2011-07-01

    The mapping between physics parameters (such as densities, currents, flows, temperatures etc) defining the plasma 'state' under a given model and the raw observations of each plasma diagnostic will 1) depend on the particular physics model used, and 2) be inherently probabilistic, owing to uncertainties in both observations and instrumental aspects of the mapping, such as calibrations, instrument functions etc. A flexible and principled way of modelling such interconnected probabilistic systems is through so-called Bayesian graphical models. Being an amalgam of graph theory and probability theory, Bayesian graphical models can simulate the complex interconnections between physics models and diagnostic observations from multiple heterogeneous diagnostic systems, making it relatively easy to optimally combine the observations from multiple diagnostics for joint inference on parameters of the underlying physics model, which in itself can be represented as part of the graph. At JET about 10 diagnostic systems have to date been modelled in this way, and this has led to a number of new results, including: the reconstruction of the flux surface topology and q-profiles without any specific equilibrium assumption, using information from a number of different diagnostic systems; profile inversions taking into account the uncertainties in the flux surface positions; and a substantial increase in accuracy of JET electron density and temperature profiles, including improved pedestal resolution, through the joint analysis of three diagnostic systems. It is believed that the Bayesian graph approach could potentially be utilised for very large sets of diagnostics, providing a generic data analysis framework for nuclear fusion experiments, that would be able to optimally utilize the information from multiple diagnostics simultaneously, and where the explicit graph representation of the connections to underlying physics models could be used for sophisticated model testing. This

  11. Implementation of methadone therapy for opioid use disorder in Russia - a modeled cost-effectiveness analysis.

    Science.gov (United States)

    Idrisov, Bulat; Murphy, Sean M; Morrill, Tyler; Saadoun, Mayada; Lunze, Karsten; Shepard, Donald

    2017-01-20

    Opioid agonist therapy using methadone, an effective treatment of opioid use disorders (OUD) for people who inject drugs (PWID), is recommended by the World Health Organization as essential to curtail the growing HIV epidemic. Yet, despite the increasing prevalence of OUD and HIV, methadone therapy has not yet been implemented in Russia. The aim of this modeling study was to estimate the cost-effectiveness of methadone therapy for Russian adults with a diagnosed OUD. We modeled the projected program implementation costs and estimated disability-adjusted life years (DALYs) averted over a 10-year period, associated with the provision of methadone therapy for a hypothetical, unreplenished cohort of Russian adults with an OUD (n = 249,000), in comparison to the current therapies at existing addiction treatment facilities. Our model compared four distinct scenarios of treatment coverage in the cohort, ranging from 3.1 to 55%. Providing methadone therapy to as few as 3.1% of adults with an OUD averted an estimated almost 50,000 DALYs over 10 years, at a cost of just over USD 17 million. Further expanding service coverage to 55% averted an estimated almost 900,000 DALYs, at a cost of about USD 308 million. Our study indicated that implementing opioid agonist therapy with methadone to treat OUD at existing facilities in Russia is highly cost-effective.
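
    Dividing the quoted costs by the quoted DALYs averted (both reported only approximately in the abstract) puts the cost per DALY averted in the low hundreds of US dollars in either scenario, which is what drives the cost-effectiveness conclusion:

```python
# Back-of-envelope cost per DALY averted from the figures quoted above
# (the abstract gives both as "almost", so these are rough estimates).
scenarios = {
    "3.1% coverage": (17e6, 50_000),
    "55% coverage": (308e6, 900_000),
}
for name, (cost_usd, dalys) in scenarios.items():
    print(f"{name}: ~USD {cost_usd / dalys:.0f} per DALY averted")
```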

  12. Inventory model using bayesian dynamic linear model for demand forecasting

    Directory of Open Access Journals (Sweden)

    Marisol Valencia-Cárdenas

    2014-12-01

    Full Text Available An important factor in the manufacturing process is the inventory management of finished product. Industry is constantly looking for better alternatives to establish an adequate plan for production and stored quantities, with optimal cost, obtaining quantities over a time horizon that permits defining, in advance, the resources and logistics needed to distribute products on time. The total absence of the historical data required by many statistical forecasting models demands the search for other kinds of accurate techniques. This work presents an alternative that not only permits adjusted forecasts, but also provides optimal quantities to produce and store at an optimal cost, using Bayesian statistics. The proposal is illustrated with real data. Keywords: Bayesian statistics, optimization, inventory model, Bayesian dynamic linear model.
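
    As an illustration of the forecasting machinery, a minimal local-level dynamic linear model can be filtered with one Kalman recursion per period. The sketch below assumes known variances, whereas a fully Bayesian treatment (as in the paper) would place priors on them; the inventory-optimization layer is omitted.

```python
import numpy as np

# Local-level DLM: y_t = mu_t + v_t,  mu_t = mu_{t-1} + w_t.
def filter_local_level(y, V=4.0, W=1.0, m0=0.0, C0=100.0):
    m, C = m0, C0
    filtered = []
    for obs in y:
        R = C + W              # prior variance of mu_t
        Q = R + V              # one-step forecast variance
        K = R / Q              # Kalman gain
        m = m + K * (obs - m)  # updated level estimate
        C = (1 - K) * R        # updated level variance
        filtered.append(m)
    return np.array(filtered)

demand = np.array([102, 98, 110, 115, 109, 120, 118], dtype=float)
level = filter_local_level(demand)
print("one-step demand forecast:", round(level[-1], 1))
```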

  13. A decision model for cost effective design of biomass based green energy supply chains.

    Science.gov (United States)

    Yılmaz Balaman, Şebnem; Selim, Hasan

    2015-09-01

    The core driver of this study is to deal with the design of anaerobic digestion based biomass-to-energy supply chains in a cost-effective manner. To this end, a decision model is developed. The model is based on fuzzy multi-objective decision making in order to simultaneously optimize multiple economic objectives and tackle the inherent uncertainties in the parameters and in decision makers' aspiration levels for the goals. The viability of the decision model is explored with computational experiments on a real-world biomass-to-energy supply chain, and further analyses are performed to observe the effects of different conditions. To this aim, scenario analyses are conducted to investigate the effects of energy crop utilization and operational costs on supply chain structure and performance measures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A Bayesian alternative for multi-objective ecohydrological model specification

    Science.gov (United States)

    Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori

    2018-01-01

    Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multi-objective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior

  15. Cost-effectiveness of female human papillomavirus vaccination in 179 countries: a PRIME modelling study.

    Science.gov (United States)

    Jit, Mark; Brisson, Marc; Portnoy, Allison; Hutubessy, Raymond

    2014-07-01

    Introduction of human papillomavirus (HPV) vaccination in settings with the highest burden of HPV is not universal, partly because of the absence of quantitative estimates of country-specific effects on health and economic costs. We aimed to develop and validate a simple generic model of such effects that could be used and understood in a range of settings with little external support. We developed the Papillomavirus Rapid Interface for Modelling and Economics (PRIME) model to assess cost-effectiveness and health effects of vaccination of girls against HPV before sexual debut in terms of burden of cervical cancer and mortality. PRIME models incidence according to proposed vaccine efficacy against HPV 16/18, vaccine coverage, cervical cancer incidence and mortality, and HPV type distribution. It assumes lifelong vaccine protection and no changes to other screening programmes or vaccine uptake. We validated PRIME against existing reports of HPV vaccination cost-effectiveness, projected outcomes for 179 countries (assuming full vaccination of 12-year-old girls), and outcomes for 71 phase 2 GAVI-eligible countries (using vaccine uptake data from the GAVI Alliance). We assessed differences between countries in terms of cost-effectiveness and health effects. In validation, PRIME reproduced cost-effectiveness conclusions for 24 of 26 countries from 17 published studies, and for all 72 countries in a published study of GAVI-eligible countries. Vaccination of a cohort of 58 million 12-year-old girls in 179 countries prevented 690,000 cases of cervical cancer and 420,000 deaths during their lifetime (mostly in low-income or middle-income countries), at a net cost of US$4 billion. HPV vaccination was very cost effective (with every disability-adjusted life-year averted costing less than the gross domestic product per head) in 156 (87%) of 179 countries. Introduction of the vaccine in countries without national HPV vaccination at present would prevent substantially more cases

  16. Bayesian Estimation of the Logistic Positive Exponent IRT Model

    Science.gov (United States)

    Bolfarine, Heleno; Bazan, Jorge Luis

    2010-01-01

    A Bayesian inference approach using Markov Chain Monte Carlo (MCMC) is developed for the logistic positive exponent (LPE) model proposed by Samejima and for a new skewed Logistic Item Response Theory (IRT) model, named Reflection LPE model. Both models lead to asymmetric item characteristic curves (ICC) and can be appropriate because a symmetric…
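
    For reference, the LPE item characteristic curve raises the standard logistic curve to a positive power (the acceleration parameter); a sketch of the usual parameterization:

```latex
\[
P(X_{ij} = 1 \mid \theta_i)
  = \left[ \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}} \right]^{\xi_j},
  \qquad \xi_j > 0,
\]
```

    so an acceleration parameter of 1 recovers the symmetric two-parameter logistic model, while other values skew the ICC.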

  17. Using consensus bayesian network to model the reactive oxygen species regulatory pathway.

    Directory of Open Access Journals (Sweden)

    Liangdong Hu

    Full Text Available The Bayesian network is one of the most successful graphical models for representing the reactive oxygen species (ROS) regulatory pathway. With the increasing number of microarray measurements, it is possible to construct the Bayesian network from microarray data directly. Although a large number of Bayesian network learning algorithms have been developed, when they are applied to learn Bayesian networks from microarray data, the accuracies are low, because the databases used to learn Bayesian networks contain too few microarray data. In this paper, we propose a consensus Bayesian network which is constructed by combining Bayesian networks from the relevant literature and Bayesian networks learned from microarray data. It has a higher accuracy than Bayesian networks learned from a single database. In the experiment, we validated the Bayesian network combination algorithm on several classic machine learning databases and used the consensus Bayesian network to model the Escherichia coli ROS pathway.

  18. Quantifying Multiscale Habitat Structural Complexity: A Cost-Effective Framework for Underwater 3D Modelling

    Directory of Open Access Journals (Sweden)

    Renata Ferrari

    2016-02-01

    Full Text Available Coral reef habitat structural complexity influences key ecological processes, ecosystem biodiversity, and resilience. Measuring structural complexity underwater is not trivial, and researchers have been searching for accurate and cost-effective methods that can be applied across spatial extents for over 50 years. This study integrated a set of existing multi-view image-processing algorithms to accurately compute metrics of structural complexity (e.g., ratio of surface to planar area) underwater solely from images. This framework resulted in accurate, high-speed 3D habitat reconstructions at scales ranging from small corals to reef-scapes (10s km2). Structural complexity was accurately quantified from both contemporary and historical image datasets across three spatial scales: (i) branching coral colony (Acropora spp.); (ii) reef area (400 m2); and (iii) reef transect (2 km). At small scales, our method delivered models with <1 mm error over 90% of the surface area, while the accuracy at transect scale was 85.3% ± 6% (CI). Advantages are: no a priori requirement for image size or resolution, no invasive techniques, cost-effectiveness, and utilization of existing imagery taken from off-the-shelf cameras (both monocular and stereo). This remote sensing method can be integrated into reef monitoring and improve our knowledge of key aspects of coral reef dynamics, from reef accretion to habitat provisioning and productivity, by measuring and up-scaling estimates of structural complexity.
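
    The headline metric, the ratio of 3D surface area to planar area (rugosity), is straightforward to compute from a triangulated reconstruction. A minimal sketch (toy mesh, not the authors' pipeline; assumes no overhangs when projecting):

```python
import numpy as np

def surface_to_planar_ratio(vertices, faces):
    """Rugosity of a triangulated surface: 3D area / planar (XY) area."""
    tri = vertices[faces]                          # (n_faces, 3, xyz)
    e1, e2 = tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]
    area_3d = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()
    flat = tri.copy()
    flat[:, :, 2] = 0.0                            # project onto XY plane
    f1, f2 = flat[:, 1] - flat[:, 0], flat[:, 2] - flat[:, 0]
    area_2d = 0.5 * np.linalg.norm(np.cross(f1, f2), axis=1).sum()
    return area_3d / area_2d

# Toy patch: two triangles forming a tilted unit square.
verts = np.array([[0, 0, 0], [1, 0, 0.5], [1, 1, 1.0], [0, 1, 0.5]], float)
tris = np.array([[0, 1, 2], [0, 2, 3]])
print(round(surface_to_planar_ratio(verts, tris), 3))  # ~1.225
```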

  19. A model to estimate the cost effectiveness of the indoor environment improvements in office work

    Energy Technology Data Exchange (ETDEWEB)

    Seppanen, Olli; Fisk, William J.

    2004-06-01

    Deteriorated indoor climate is commonly related to increases in sick building syndrome symptoms, respiratory illnesses, sick leave, reduced comfort and losses in productivity. The cost of deteriorated indoor climate to society is high. Some calculations show that this cost is higher than the heating energy costs of the same buildings. Building-level calculations have also shown that many measures taken to improve indoor air quality and climate are cost-effective when the potential monetary savings resulting from an improved indoor climate are included as benefits gained. As an initial step towards systemizing these building-level calculations we have developed a conceptual model to estimate the cost-effectiveness of various measures. The model shows the links between improvements in the indoor environment and the following potential financial benefits: reduced medical care cost, reduced sick leave, better performance of work, lower turnover of employees, and lower cost of building maintenance due to fewer complaints about indoor air quality and climate. The pathways to these potential benefits from changes in building technology and practices go via several human responses to the indoor environment, such as infectious diseases, allergies and asthma, sick building syndrome symptoms, perceived air quality, and thermal environment. The model also includes the annual cost of investments, operation costs, and cost savings of improved indoor climate. The conceptual model illustrates how various factors are linked to each other. SBS symptoms are probably the most commonly assessed health responses in IEQ studies and have been linked to several characteristics of buildings and IEQ. While the available evidence indicates that SBS symptoms can affect these outcomes and suggests that such a linkage exists, at present we cannot quantify the relationships sufficiently for cost-benefit modeling. New research and analyses of existing data to quantify the financial

  20. Modelling cost-effectiveness of different vasectomy methods in India, Kenya, and Mexico

    Directory of Open Access Journals (Sweden)

    Seamans Yancy

    2007-07-01

    Full Text Available Abstract Background Vasectomy is generally considered a safe and effective method of permanent contraception. The historical effectiveness of vasectomy has been questioned by recent research results indicating that the most commonly used method of vasectomy – simple ligation and excision (L and E) – appears to have a relatively high failure rate, with reported pregnancy rates as high as 4%. Updated methods such as fascial interposition (FI) and thermal cautery can lower the rate of failure but may require additional financial investments and may not be appropriate for low-resource clinics. In order to better compare the cost-effectiveness of these different vasectomy methods, we modelled the costs of different vasectomy methods using cost data collected in India, Kenya, and Mexico and effectiveness data from the latest published research. Methods The costs associated with providing vasectomies were determined in each country through interviews with clinic staff. Costs collected were economic, direct, programme costs of fixed vasectomy services but did not include large capital expenses or general recurrent costs for the health care facility. Estimates of the time required to provide service were gained through interviews, and training costs were based on the total costs of vasectomy training programmes in each country. Effectiveness data were obtained from recent published studies and comparative cost-effectiveness was determined using cost per couple years of protection (CYP). Results In each country, the labour to provide the vasectomy and follow-up services accounts for the greatest portion of the overall cost. Because each country almost exclusively used one vasectomy method at all of the clinics included in the study, we modelled costs based on the additional material, labour, and training costs required in each country. Using a model of a robust vasectomy program, more effective methods such as FI and thermal cautery reduce the cost per

  1. Maritime piracy situation modelling with dynamic Bayesian networks

    CSIR Research Space (South Africa)

    Dabrowski, James M

    2015-05-01

    Full Text Available A generative model for modelling maritime vessel behaviour is proposed. The model is a novel variant of the dynamic Bayesian network (DBN). The proposed DBN is in the form of a switching linear dynamic system (SLDS) that has been extended into a...

  2. Bayesian inference model for fatigue life of laminated composites

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Kiureghian, Armen Der; Berggreen, Christian

    2016-01-01

    A probabilistic model for estimating the fatigue life of laminated composite plates is developed. The model is based on lamina-level input data, making it possible to predict fatigue properties for a wide range of laminate configurations. Model parameters are estimated by Bayesian inference. The ...

  3. Characterizing economic trends by Bayesian stochastic model specification search

    DEFF Research Database (Denmark)

    Grassi, Stefano; Proietti, Tommaso

    We extend a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. In particular, we focus on autoregressive models with possibly time-varying intercept and slope and decide on ...

  4. Bayesian Plackett-Luce Mixture Models for Partially Ranked Data.

    Science.gov (United States)

    Mollica, Cristina; Tardella, Luca

    2017-06-01

    The elicitation of an ordinal judgment on multiple alternatives is often required in many psychological and behavioral experiments to investigate preference/choice orientation of a specific population. The Plackett-Luce model is one of the most popular and frequently applied parametric distributions to analyze rankings of a finite set of items. The present work introduces a Bayesian finite mixture of Plackett-Luce models to account for unobserved sample heterogeneity of partially ranked data. We describe an efficient way to incorporate the latent group structure in the data augmentation approach and the derivation of existing maximum likelihood procedures as special instances of the proposed Bayesian method. Inference can be conducted with the combination of the Expectation-Maximization algorithm for maximum a posteriori estimation and the Gibbs sampling iterative procedure. We additionally investigate several Bayesian criteria for selecting the optimal mixture configuration and describe diagnostic tools for assessing the fitness of ranking distributions conditionally and unconditionally on the number of ranked items. The utility of the novel Bayesian parametric Plackett-Luce mixture for characterizing sample heterogeneity is illustrated with several applications to simulated and real preference ranked data. We compare our method with the frequentist approach and a Bayesian nonparametric mixture model both assuming the Plackett-Luce model as a mixture component. Our analysis on real datasets reveals the importance of an accurate diagnostic check for an appropriate in-depth understanding of the heterogeneous nature of the partial ranking data.
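
    Under the Plackett-Luce model, the probability of a (possibly partial) ranking is a product of sequential choice probabilities over the items still available. A minimal sketch with hypothetical worth parameters:

```python
def plackett_luce_prob(ranking, worth):
    """P(ranking) under Plackett-Luce for a possibly partial ranking.

    ranking: item indices from most to least preferred (may be a prefix).
    worth:   positive support parameter for each item in the choice set.
    """
    remaining = list(range(len(worth)))
    prob = 1.0
    for item in ranking:
        prob *= worth[item] / sum(worth[j] for j in remaining)
        remaining.remove(item)
    return prob

worth = [3.0, 1.0, 1.0, 0.5]              # hypothetical item worths
print(plackett_luce_prob([0, 2], worth))  # top-2 partial ranking, ~0.218
```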

  5. Bayesian inference of chemical kinetic models from proposed reactions

    KAUST Repository

    Galagali, Nikhil

    2015-02-01

    © 2014 Elsevier Ltd. Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model structure. Most existing applications of Bayesian model selection methods to chemical kinetics have been limited to comparisons among a small set of models, however. The significant computational cost of evaluating posterior model probabilities renders traditional Bayesian methods infeasible when the model space becomes large. We present a new framework for tractable Bayesian model inference and uncertainty quantification using a large number of systematically generated model hypotheses. The approach involves imposing point-mass mixture priors over rate constants and exploring the resulting posterior distribution using an adaptive Markov chain Monte Carlo method. The posterior samples are used to identify plausible models, to quantify rate constant uncertainties, and to extract key diagnostic information about model structure, such as the reactions and operating pathways most strongly supported by the data. We provide numerical demonstrations of the proposed framework by inferring kinetic models for catalytic steam and dry reforming of methane using available experimental data.
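
    The "point-mass mixture prior" referred to above is the spike-and-slab construction; generically, for each candidate reaction's rate constant,

```latex
\[
\pi(k_j) = (1 - w_j)\,\delta_0(k_j) + w_j\, p(k_j), \qquad 0 < w_j < 1,
\]
```

    where the point mass at zero corresponds to excluding reaction j from the mechanism and the continuous "slab" density covers plausible rate-constant values; posterior inclusion probabilities for each reaction then fall out of the MCMC exploration.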

  6. Modelling the cost-effectiveness of mass screening and treatment for reducing Plasmodium falciparum malaria burden

    Directory of Open Access Journals (Sweden)

    Crowell Valerie

    2013-01-01

    Full Text Available Abstract Background Past experience and modelling suggest that, in most cases, mass treatment strategies are not likely to succeed in interrupting Plasmodium falciparum malaria transmission. However, this does not preclude their use to reduce disease burden. Mass screening and treatment (MSAT) is preferred to mass drug administration (MDA), as the latter involves massive over-use of drugs. This paper reports simulations of the incremental cost-effectiveness of well-conducted MSAT campaigns as a strategy for P. falciparum malaria disease-burden reduction in settings with varying receptivity (ability of the combined vector population in a setting to transmit disease) and access to case management. Methods MSAT incremental cost-effectiveness ratios (ICERs) were estimated in different sub-Saharan African settings using simulation models of the dynamics of malaria and a literature-based MSAT cost estimate. Imported infections were simulated at a rate of two per 1,000 population per annum. These estimates were compared to the ICERs of scaling up case management or insecticide-treated net (ITN) coverage in each baseline health system, in the absence of MSAT. Results MSAT averted most episodes, and resulted in the lowest ICERs, in settings with a moderate level of disease burden. At a low pre-intervention entomological inoculation rate (EIR) of two infectious bites per adult per annum (IBPAPA), MSAT was never more cost-effective than scaling up ITNs or case management coverage. However, at pre-intervention entomological inoculation rates (EIRs) of 20 and 50 IBPAPA and ITN coverage levels of 40 or 60%, respectively, the ICER of MSAT was similar to that of scaling up ITN coverage further. Conclusions In all the transmission settings considered, achieving a minimal level of ITN coverage is a “best buy”. At low transmission, MSAT probably is not worth considering. Instead, MSAT may be suitable at medium to high levels of transmission and at moderate ITN coverage

  7. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  8. Modelling the cost-effectiveness of impact-absorbing flooring in Swedish residential care facilities.

    Science.gov (United States)

    Ryen, Linda; Svensson, Mikael

    2016-06-01

    Fall-related injuries among the elderly, specifically hip fractures, cause significant morbidity and mortality as well as imposing a substantial financial cost on the health care system. Impact-absorbing flooring has been advocated as an effective method for preventing hip fractures resulting from falls. This study identifies the cost-effectiveness of impact-absorbing flooring compared to standard flooring in residential care facilities for the elderly in a Swedish setting. An incremental cost-effectiveness analysis was performed comparing impact-absorbing flooring to standard flooring using a Markov decision model. A societal perspective was adopted and incremental costs were compared to incremental gains in quality-adjusted life years (QALYs). Data on costs, probability transitions and health-related quality of life measures were retrieved from the published literature and from Swedish register data. Probabilistic sensitivity analysis was performed through a Monte Carlo simulation. The base-case analysis indicates that the impact-absorbing flooring reduces costs and increases QALYs. When allowing for uncertainty we find that 60% of the simulations indicate that impact-absorbing flooring is cost-saving compared to standard flooring and an additional 20% that it has a cost per QALY below a commonly used threshold value. Using a modelling approach, we find that impact-absorbing flooring is a dominant strategy at the societal level considering that it can save resources and improve health in a vulnerable population. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
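
    A minimal annual-cycle Markov cohort trace in the spirit of this kind of model, with three states (well, post-hip-fracture, dead); all transition probabilities, utilities, and costs below are placeholders, not the study's parameters:

```python
import numpy as np

P = np.array([[0.96, 0.03, 0.01],    # from well
              [0.00, 0.90, 0.10],    # from post-fracture
              [0.00, 0.00, 1.00]])   # dead is absorbing
qaly = np.array([0.80, 0.55, 0.00])  # utility weight per state-year
cost = np.array([0.0, 8000.0, 0.0])  # annual cost per state

state = np.array([1.0, 0.0, 0.0])    # cohort starts in 'well'
total_cost = total_qaly = 0.0
for year in range(10):               # 10 one-year cycles, no discounting
    state = state @ P
    total_qaly += state @ qaly
    total_cost += state @ cost

print(round(total_cost), round(total_qaly, 2))
```

    Running the trace twice, once per flooring type, with fall-related transition probabilities reduced for impact-absorbing flooring, would yield the incremental costs and QALYs fed into the cost-effectiveness comparison.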

  9. Cost-effectiveness modeling for neuropathic pain treatments: investigating the relative importance of parameters using an open-source model.

    Science.gov (United States)

    Hirst, Matthew; Bending, Matthew W; Baio, Gianluca; Yesufu-Udechuku, Amina; Dunlop, William C N

    2018-06-08

    The study objective was to develop an open-source replicate of a cost-effectiveness model developed by the National Institute for Health and Care Excellence (NICE) in order to explore uncertainties in health economic modeling of novel pharmacological neuropathic pain treatments. The NICE model, consisting of a decision tree with branches for discrete levels of pain relief and adverse event (AE) severities, was replicated using R and used to compare a hypothetical neuropathic pain drug to pregabalin. Model parameters were sourced from NICE's clinical guidelines and associated with probability distributions to account for underlying uncertainty. A simulation-based scenario analysis was conducted to assess how uncertainty in efficacy and AEs affected the net monetary benefit (NMB) for the hypothetical treatment at a cost-effectiveness threshold of £20,000 per QALY. Relative to pregabalin, an increase in efficacy was associated with greater NMB than an improvement in tolerability. A greater NMB was observed when efficacy was marginally higher than that of pregabalin while maintaining the same level of AEs than when efficacy was equivalent to pregabalin but with a more substantial reduction in AEs. In the latter scenario, the NMB was only positive at a low cost-effectiveness threshold. The replicate model shares the limitations described in the NICE guidelines. There is a lack of support in scientific literature for the assumption that increased efficacy is associated with a greater reduction in tolerability. The replicate model also included a single comparator, unlike the NICE model. Pain relief is a stronger driver of NMB than tolerability at a cost-effectiveness threshold of £20,000 per QALY. Health technology assessment decisions which are influenced by NICE's model may reward efficacy gains even if they are associated with more severe AEs. This contrasts with recommendations from clinical guidelines for neuropathic pain which place more equal weighting on improvements in
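
    The decision quantity here is the incremental net monetary benefit, which converts QALY gains into money at the threshold and nets off incremental cost:

```latex
\[
\mathrm{iNMB} = \lambda\,\Delta\mathrm{QALY} - \Delta\mathrm{Cost},
\qquad \lambda = \text{\pounds}20{,}000\ \text{per QALY},
\]
```

    so a treatment profile is favoured over pregabalin whenever its simulated iNMB is positive at that threshold.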

  10. Research & development and growth: A Bayesian model averaging analysis

    Czech Academy of Sciences Publication Activity Database

    Horváth, Roman

    2011-01-01

    Roč. 28, č. 6 (2011), s. 2669-2673 ISSN 0264-9993. [Society for Non-linear Dynamics and Econometrics Annual Conference. Washington DC, 16.03.2011-18.03.2011] R&D Projects: GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Keywords : Research and development * Growth * Bayesian model averaging Subject RIV: AH - Economics Impact factor: 0.701, year: 2011 http://library.utia.cas.cz/separaty/2011/E/horvath-research & development and growth a bayesian model averaging analysis.pdf

  11. Bayesian inference with information content model check for Langevin equations

    DEFF Research Database (Denmark)

    Krog, Jens F. C.; Lomholt, Michael Andersen

    2017-01-01

    The Bayesian data analysis framework has been proven to be a systematic and effective method of parameter inference and model selection for stochastic processes. In this work we introduce an information content model check which may serve as a goodness-of-fit, like the chi-square procedure...

  12. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder-an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.

  13. Involving stakeholders in building integrated fisheries models using Bayesian methods

    DEFF Research Database (Denmark)

    Haapasaari, Päivi Elisabet; Mäntyniemi, Samu; Kuikka, Sakari

    2013-01-01

    the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology...

  14. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  15. Bayesian Network Models in Cyber Security: A Systematic Review

    NARCIS (Netherlands)

    Chockalingam, S.; Pieters, W.; Herdeiro Teixeira, A.M.; van Gelder, P.H.A.J.M.; Lipmaa, Helger; Mitrokotsa, Aikaterini; Matulevicius, Raimundas

    2017-01-01

    Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security especially due to their capability to overcome data limitations. This is also instantiated by the growth of BN models development in cyber security. However, a comprehensive comparison and analysis of these...

  16. A Bayesian Model of the Memory Colour Effect.

    Science.gov (United States)

    Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration. Here, we model memory colour effects using prior knowledge about typical colours as priors for the grey adjustments in a Bayesian model. This simple model does not involve any fitting of free parameters. The Bayesian model roughly captured the magnitude of the measured memory colour effect for photographs of objects. To some extent, the model predicted observed differences in memory colour effects across objects. The model could not account for the differences in memory colour effects across different levels of realism in the object images. The Bayesian model provides a particularly simple account of memory colour effects, capturing some of the multiple sources of variation of these effects.
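
    The paper's own parameter-free formulation is not reproduced here, but the basic Bayesian ingredient, a precision-weighted combination of a prior (the typical object colour) with the sensory measurement, can be sketched generically; every number below is illustrative:

    ```r
    # Gaussian prior-likelihood integration: the posterior mean is a
    # precision-weighted average of the sensory input and the prior.
    posterior_mean <- function(x, sd_lik, mu_prior, sd_prior) {
      w <- (1 / sd_lik^2) / (1 / sd_lik^2 + 1 / sd_prior^2)  # weight on the sensory input
      w * x + (1 - w) * mu_prior
    }
    # a colourimetrically grey banana (chromaticity 0) is pulled slightly
    # towards its typical yellow (+1) by the prior
    posterior_mean(x = 0, sd_lik = 0.3, mu_prior = 1, sd_prior = 0.8)
    ```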

  17. A Systematic Review of Cost-Effectiveness Models in Type 1 Diabetes Mellitus.

    Science.gov (United States)

    Henriksson, Martin; Jindal, Ramandeep; Sternhufvud, Catarina; Bergenheim, Klas; Sörstadius, Elisabeth; Willis, Michael

    2016-06-01

    Critiques of cost-effectiveness modelling in type 1 diabetes mellitus (T1DM) are scarce and are often undertaken in combination with type 2 diabetes mellitus (T2DM) models. However, T1DM is a separate disease, and it is therefore important to appraise modelling methods in T1DM. This review identified published economic models in T1DM and provided an overview of the characteristics and capabilities of available models, thus enabling a discussion of best-practice modelling approaches in T1DM. A systematic review of Embase®, MEDLINE®, MEDLINE® In-Process, and NHS EED was conducted to identify available models in T1DM. Key conferences and health technology assessment (HTA) websites were also reviewed. The characteristics of each model (e.g. model structure, simulation method, handling of uncertainty, incorporation of treatment effect, data for risk equations, and validation procedures, based on information in the primary publication) were extracted, with a focus on model capabilities. We identified 13 unique models. Overall, the included studies varied greatly in scope as well as in the quality and quantity of information reported, but six of the models (Archimedes, CDM [Core Diabetes Model], CRC DES [Cardiff Research Consortium Discrete Event Simulation], DCCT [Diabetes Control and Complications Trial], Sheffield, and EAGLE [Economic Assessment of Glycaemic control and Long-term Effects of diabetes]) were the most rigorous and thoroughly reported. Most models were Markov based, and cohort and microsimulation methods were equally common. All of the more comprehensive models employed microsimulation methods. Model structure varied widely, with the more holistic models providing a comprehensive approach to microvascular and macrovascular events, as well as including adverse events. The majority of studies reported a lifetime horizon, used a payer perspective, and had the capability for sensitivity analysis. Several models have been developed that provide useful...

  18. Evaluation on the cost-effective threshold of osteoporosis treatment on elderly women in China using discrete event simulation model.

    Science.gov (United States)

    Ni, W; Jiang, Y

    2017-02-01

    This study used a simulation model to determine the cost-effective threshold of fracture risk to treat osteoporosis among elderly Chinese women. Osteoporosis treatment is cost-effective among average-risk women who are at least 75 years old and above-average-risk women who are younger than 75 years old. Aging of the Chinese population is imposing an increasing economic burden of osteoporosis. This study evaluated the cost-effectiveness of osteoporosis treatment among the senior Chinese women population. A discrete event simulation model using age-specific probabilities of hip fracture, clinical vertebral fracture, wrist fracture, humerus fracture, and other fracture; costs (2015 US dollars); and quality-adjusted life years (QALYs) was used to assess the cost-effectiveness of osteoporosis treatment. Incremental cost-effectiveness ratio (ICER) was calculated. The willingness to pay (WTP) for a QALY in China was compared with the calculated ICER to decide the cost-effectiveness. To determine the absolute 10-year hip fracture probability at which the osteoporosis treatment became cost-effective, average age-specific probabilities for all fractures were multiplied by a relative risk (RR) that was systematically varied from 0 to 10 until the WTP threshold was observed for treatment relative to no intervention. Sensitivity analyses were also performed to evaluate the impacts from WTP and annual treatment costs. In baseline analysis, simulated ICERs were higher than the WTP threshold among Chinese women younger than 75, but much lower than the WTP among the older population. Sensitivity analyses indicated that cost-effectiveness could vary due to a higher WTP threshold or a lower annual treatment cost. A 30% increase in WTP or a 30% reduction in annual treatment costs will make osteoporosis treatment cost-effective for the Chinese women population from 55 to 85. The current study provides evidence that osteoporosis treatment is cost-effective among a subpopulation of...
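
    The threshold search described above, scaling age-specific fracture risks by a relative risk (RR) until the ICER falls to the willingness to pay, amounts to a grid search. In this sketch, icer_at() is a hypothetical stand-in for the paper's full discrete event simulation:

    ```r
    # Find the smallest RR at which treatment becomes cost-effective.
    icer_at <- function(rr) 90000 / rr   # placeholder: ICER falls as baseline risk rises
    wtp <- 20000                         # illustrative willingness to pay per QALY

    rr_grid <- seq(0.1, 10, by = 0.1)
    rr_threshold <- rr_grid[which(icer_at(rr_grid) <= wtp)[1]]
    rr_threshold                         # treatment is cost-effective at or above this RR
    ```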

  19. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
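
    The article's examples use the MCMC procedure of SAS; a comparable Bayesian growth curve with heavy-tailed errors can be sketched in R (the language used elsewhere in these records) with the brms package, on simulated data:

    ```r
    library(brms)
    set.seed(1)
    d <- data.frame(id = rep(1:50, each = 4), time = rep(0:3, times = 50))
    d$score <- 10 + rep(rnorm(50, sd = 2), each = 4) +  # subject-level intercepts
      2 * d$time + rt(nrow(d), df = 3)                  # heavy-tailed residuals
    fit <- brm(score ~ time + (1 + time | id), data = d,
               family = student(),                      # t-distributed errors, not normal
               chains = 2, iter = 2000)
    summary(fit)  # nu (degrees of freedom) is estimated alongside the growth parameters
    ```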

  20. Bayesian Network Models in Cyber Security: A Systematic Review

    OpenAIRE

    Chockalingam, S.; Pieters, W.; Herdeiro Teixeira, A.M.; van Gelder, P.H.A.J.M.; Lipmaa, Helger; Mitrokotsa, Aikaterini; Matulevicius, Raimundas

    2017-01-01

    Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security especially due to their capability to overcome data limitations. This is also instantiated by the growth of BN models development in cyber security. However, a comprehensive comparison and analysis of these models is missing. In this paper, we conduct a systematic review of the scientific literature and identify 17 standard BN models in cyber security. We analyse these models based on 9 different criteri...

  1. The cost and impact of scaling up pre-exposure prophylaxis for HIV prevention: a systematic review of cost-effectiveness modelling studies

    NARCIS (Netherlands)

    Gomez, Gabriela B.; Borquez, Annick; Case, Kelsey K.; Wheelock, Ana; Vassall, Anna; Hankins, Catherine

    2013-01-01

    Cost-effectiveness studies inform resource allocation, strategy, and policy development. However, due to their complexity, dependence on assumptions made, and inherent uncertainty, synthesising and generalising the results can be difficult. We assess cost-effectiveness models evaluating expected...

  2. Modelling and Optimal Control of Typhoid Fever Disease with Cost-Effective Strategies.

    Science.gov (United States)

    Tilahun, Getachew Teshome; Makinde, Oluwole Daniel; Malonza, David

    2017-01-01

    We propose and analyze a compartmental nonlinear deterministic mathematical model for the typhoid fever outbreak and optimal control strategies in a community with varying population. The model is studied qualitatively using stability theory of differential equations and the basic reproductive number that represents the epidemic indicator is obtained from the largest eigenvalue of the next-generation matrix. Both local and global asymptotic stability conditions for disease-free and endemic equilibria are determined. The model exhibits a forward transcritical bifurcation and the sensitivity analysis is performed. The optimal control problem is designed by applying Pontryagin maximum principle with three control strategies, namely, the prevention strategy through sanitation, proper hygiene, and vaccination; the treatment strategy through application of appropriate medicine; and the screening of the carriers. The cost functional accounts for the cost involved in prevention, screening, and treatment together with the total number of the infected persons averted. Numerical results for the typhoid outbreak dynamics and its optimal control revealed that a combination of prevention and treatment is the best cost-effective strategy to eradicate the disease.
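
    The full model with its three controls is not reproduced here, but a compartmental skeleton in its spirit (susceptible, infected, carrier, recovered) can be sketched with the deSolve package; the structure and all parameter values below are illustrative assumptions, not the paper's:

    ```r
    library(deSolve)
    typhoid <- function(t, y, p) with(as.list(c(y, p)), {
      N <- S + I + C + R
      lambda <- beta * (I + r * C) / N                 # force of infection
      dS <- mu * N - lambda * S - mu * S
      dI <- lambda * S - (gamma + mu) * I
      dC <- sigma * gamma * I - (theta + mu) * C       # fraction sigma become carriers
      dR <- (1 - sigma) * gamma * I + theta * C - mu * R
      list(c(dS, dI, dC, dR))
    })
    pars <- c(beta = 0.6, r = 0.3, gamma = 0.1, sigma = 0.1, theta = 0.02, mu = 0.0004)
    out <- ode(y = c(S = 9999, I = 1, C = 0, R = 0), times = 0:365,
               func = typhoid, parms = pars)
    head(out)
    ```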

  3. Development of dynamic Bayesian models for web application test management

    Science.gov (United States)

    Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.

    2018-03-01

    The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.

  4. Spatial and spatio-temporal bayesian models with R - INLA

    CERN Document Server

    Blangiardo, Marta

    2015-01-01

    Contents: Dedication; Preface; 1 Introduction (1.1 Why spatial and spatio-temporal statistics?; 1.2 Why do we use Bayesian methods for modelling spatial and spatio-temporal structures?; 1.3 Why INLA?; 1.4 Datasets); 2 Introduction to R (2.1 The R language; 2.2 R objects; 2.3 Data and session management; 2.4 Packages; 2.5 Programming in R; 2.6 Basic statistical analysis with R); 3 Introduction to Bayesian Methods (3.1 Bayesian Philosophy; 3.2 Basic Probability Elements; 3.3 Bayes Theorem; 3.4 Prior and Posterior Distributions; 3.5 Working with the Posterior Distribution; 3.6 Choosing the Prior Distr...

  5. Bayesian log-periodic model for financial crashes

    DEFF Research Database (Denmark)

    Rodríguez-Caballero, Carlos Vladimir; Knapik, Oskar

    2014-01-01

    This paper introduces a Bayesian approach in econophysics literature about financial bubbles in order to estimate the most probable time for a financial crash to occur. To this end, we propose using noninformative prior distributions to obtain posterior distributions. Since these distributions...... cannot be performed analytically, we develop a Markov Chain Monte Carlo algorithm to draw from posterior distributions. We consider three Bayesian models that involve normal and Student’s t-distributions in the disturbances and an AR(1)-GARCH(1,1) structure only within the first case. In the empirical...... part of the study, we analyze a well-known example of financial bubble – the S&P 500 1987 crash – to show the usefulness of the three methods under consideration and crashes of Merval-94, Bovespa-97, IPCMX-94, Hang Seng-97 using the simplest method. The novelty of this research is that the Bayesian...

  6. Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.

    Science.gov (United States)

    Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J

    2010-12-01

    Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km² hexagons), can increase the relevance of habitat models to multispecies...

  7. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  8. Advanced REACH tool: A Bayesian model for occupational exposure assessment

    NARCIS (Netherlands)

    McNally, K.; Warren, N.; Fransman, W.; Entink, R.K.; Schinkel, J.; Van Tongeren, M.; Cherrie, J.W.; Kromhout, H.; Schneider, T.; Tielemans, E.

    2014-01-01

    This paper describes a Bayesian model for the assessment of inhalation exposures in an occupational setting; the methodology underpins a freely available web-based application for exposure assessment, the Advanced REACH Tool (ART). The ART is a higher tier exposure tool that combines disparate...

  9. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    Science.gov (United States)

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.

  10. Cost-effectiveness of rotavirus vaccination in the Netherlands; the results of a consensus model

    NARCIS (Netherlands)

    Rozenbaum, M.H.; Mangen, M.J.J.; Giaquinto, C.; Wilschut, J.C.; Hak, E.; Postma, M.J.

    2011-01-01

    Background: Each year rotavirus gastroenteritis results in thousands of paediatric hospitalisations and primary care visits in the Netherlands. While two vaccines against rotavirus are registered, routine immunisation of infants has not yet been implemented. Existing cost-effectiveness studies...

  11. A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.

    Directory of Open Access Journals (Sweden)

    Guillaume Bal

    Full Text Available Providing generic and cost-effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding ecological processes underlying the impact of water temperature and of global warming on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts as it does not disentangle seasonality and long term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time varying means and amplitudes. Fitting and forecasting performances of this approach are compared with that of simple linear regression between water and air temperatures using (i) an emotive simulated example, and (ii) application to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach fits the data better and does not exhibit forecasting bias in long term trends, contrary to the linear regression. This new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around forecasting. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.
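
    The hierarchical Bayesian machinery is beyond a short example, but the building block the model rests on, a seasonal sinusoid plus a slowly varying mean, can be sketched with ordinary least squares on simulated data:

    ```r
    # Sinusoid with a one-year period plus a linear trend; fitted by OLS here
    # for brevity, whereas the paper estimates time-varying means and
    # amplitudes within a Bayesian hierarchy.
    set.seed(3)
    doy <- seq(1, 3 * 365, by = 7)   # three years of weekly observations
    water <- 12 + 0.001 * doy +
      6 * sin(2 * pi * (doy - 120) / 365.25) + rnorm(length(doy))
    fit <- lm(water ~ doy + sin(2 * pi * doy / 365.25) + cos(2 * pi * doy / 365.25))
    coef(fit)  # trend, plus amplitude and phase via the sin/cos coefficients
    ```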

  12. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

    Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
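
    The difference between the "full" and "simple" likelihood formulations above is whether simulation errors are treated as an AR(1) process or as independent. A minimal Gaussian AR(1) log-likelihood makes the contrast concrete (phi = 0 recovers the simple model; |phi| < 1 is assumed):

    ```r
    loglik_ar1 <- function(res, sigma, phi) {
      n <- length(res)
      ll1 <- dnorm(res[1], 0, sigma / sqrt(1 - phi^2), log = TRUE)  # stationary start
      ll1 + sum(dnorm(res[-1], phi * res[-n], sigma, log = TRUE))   # conditional terms
    }
    loglik_ar1(res = rnorm(100), sigma = 1, phi = 0.5)
    ```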

  13. Bayesian inference method for stochastic damage accumulation modeling

    International Nuclear Information System (INIS)

    Jiang, Xiaomo; Yuan, Yong; Liu, Xian

    2013-01-01

    Damage accumulation based reliability model plays an increasingly important role in successful realization of condition based maintenance for complicated engineering systems. This paper developed a Bayesian framework to establish stochastic damage accumulation model from historical inspection data, considering data uncertainty. Proportional hazards modeling technique is developed to model the nonlinear effect of multiple influencing factors on system reliability. Different from other hazard modeling techniques such as normal linear regression model, the approach does not require any distribution assumption for the hazard model, and can be applied for a wide variety of distribution models. A Bayesian network is created to represent the nonlinear proportional hazards models and to estimate model parameters by Bayesian inference with Markov Chain Monte Carlo simulation. Both qualitative and quantitative approaches are developed to assess the validity of the established damage accumulation model. Anderson–Darling goodness-of-fit test is employed to perform the normality test, and Box–Cox transformation approach is utilized to convert the non-normality data into normal distribution for hypothesis testing in quantitative model validation. The methodology is illustrated with the seepage data collected from real-world subway tunnels.

  14. A Bayesian Markov geostatistical model for estimation of hydrogeological properties

    International Nuclear Information System (INIS)

    Rosen, L.; Gustafson, G.

    1996-01-01

    A geostatistical methodology based on Markov-chain analysis and Bayesian statistics was developed for probability estimations of hydrogeological and geological properties in the siting process of a nuclear waste repository. The probability estimates have practical use in decision-making on issues such as siting, investigation programs, and construction design. The methodology is nonparametric which makes it possible to handle information that does not exhibit standard statistical distributions, as is often the case for classified information. Data do not need to meet the requirements on additivity and normality as with the geostatistical methods based on regionalized variable theory, e.g., kriging. The methodology also has a formal way for incorporating professional judgments through the use of Bayesian statistics, which allows for updating of prior estimates to posterior probabilities each time new information becomes available. A Bayesian Markov Geostatistical Model (BayMar) software was developed for implementation of the methodology in two and three dimensions. This paper gives (1) a theoretical description of the Bayesian Markov Geostatistical Model; (2) a short description of the BayMar software; and (3) an example of application of the model for estimating the suitability for repository establishment with respect to the three parameters of lithology, hydraulic conductivity, and rock quality designation index (RQD) at 400--500 meters below ground surface in an area around the Aespoe Hard Rock Laboratory in southeastern Sweden

  15. The Cost Effectiveness of Psychological and Pharmacological Interventions for Social Anxiety Disorder: A Model-Based Economic Analysis.

    Directory of Open Access Journals (Sweden)

    Ifigeneia Mavranezouli

    Full Text Available Social anxiety disorder is one of the most persistent and common anxiety disorders. Individually delivered psychological therapies are the most effective treatment options for adults with social anxiety disorder, but they are associated with high intervention costs. Therefore, the objective of this study was to assess the relative cost effectiveness of a variety of psychological and pharmacological interventions for adults with social anxiety disorder. A decision-analytic model was constructed to compare costs and quality adjusted life years (QALYs) of 28 interventions for social anxiety disorder from the perspective of the British National Health Service and personal social services. Efficacy data were derived from a systematic review and network meta-analysis. Other model input parameters were based on published literature and national sources, supplemented by expert opinion. Individual cognitive therapy was the most cost-effective intervention for adults with social anxiety disorder, followed by generic individual cognitive behavioural therapy (CBT), phenelzine and book-based self-help without support. Other drugs, group-based psychological interventions and other individually delivered psychological interventions were less cost-effective. Results were influenced by limited evidence suggesting superiority of psychological interventions over drugs in retaining long-term effects. The analysis did not take into account side effects of drugs. Various forms of individually delivered CBT appear to be the most cost-effective options for the treatment of adults with social anxiety disorder. Consideration of side effects of drugs would only strengthen this conclusion, as it would improve even further the cost effectiveness of individually delivered CBT relative to phenelzine, which was the next most cost-effective option, due to the serious side effects associated with phenelzine. Further research needs to determine more accurately the long...

  16. Primary therapist model for patients referred for rheumatoid arthritis rehabilitation: a cost-effectiveness analysis.

    Science.gov (United States)

    Li, Linda C; Maetzel, Andreas; Davis, Aileen M; Lineker, Sydney C; Bombardier, Claire; Coyte, Peter C

    2006-06-15

    To estimate the incremental cost-effectiveness (ICE) of services from a primary therapist compared with traditional physical therapists and/or occupational therapists for managing rheumatoid arthritis (RA), from the societal perspective. Patients with RA were randomly assigned to the primary therapist model (PTM) or traditional treatment model (TTM) for approximately 6 weeks of rehabilitation treatment. Health outcomes were expressed in terms of quality-adjusted life years (QALYs), measured with the EuroQol instrument at baseline, 6 weeks, and 6 months. Direct and indirect costs, including visits to health professionals, use of investigative tests, hospital visits, use of medications, purchases of adaptive aids, and productivity losses incurred by patients and their caregivers, were collected monthly. Of 144 consenting patients, 111 remained in the study after the baseline assessment: 63 PTM (87.3% women, mean age 54.2 years, disease duration 10.6 years) and 48 TTM (79.2% women, mean age 56.8 years, disease duration 13.2 years). From a societal perspective, PTM generated higher QALYs (mean +/- SD 0.068 +/- 0.22) and resulted in a higher mean cost ($6,848 Canadian, interquartile range [IQR] $1,984-$9,320) compared with TTM (mean +/- SD QALY -0.017 +/- 0.24; mean costs $6,266, IQR $1,938-$10,194) in 6 months, although differences were not statistically significant. The estimated ICE ratio was $13,700 per QALY gained (95% nonparametric confidence interval -$73,500, $230,000). The PTM has potential to be an alternative to traditional physical/occupational therapy, although it is premature to recommend widespread use of this model in other regions. Further research should focus on strategies to reduce costs of the model and assess the long-term economic consequences in managing RA and other rheumatologic conditions.

  17. Bayesian Dimensionality Assessment for the Multidimensional Nominal Response Model

    Directory of Open Access Journals (Sweden)

    Javier Revuelta

    2017-06-01

    Full Text Available This article introduces Bayesian estimation and evaluation procedures for the multidimensional nominal response model. The utility of this model is to perform a nominal factor analysis of items that consist of a finite number of unordered response categories. The key aspect of the model, in comparison with the traditional factorial model, is that there is a slope for each response category on the latent dimensions, instead of having slopes associated with the items. The extended parameterization of the multidimensional nominal response model requires large samples for estimation. When sample size is of a moderate or small size, some of these parameters may be weakly empirically identifiable and the estimation algorithm may run into difficulties. We propose a Bayesian MCMC inferential algorithm to estimate the parameters and the number of dimensions underlying the multidimensional nominal response model. Two Bayesian approaches to model evaluation were compared: discrepancy statistics (DIC, WAIC, and LOO) that provide an indication of the relative merit of different models, and the standardized generalized discrepancy measure that requires resampling data and is computationally more involved. A simulation study was conducted to compare these two approaches, and the results show that the standardized generalized discrepancy measure can be used to reliably estimate the dimensionality of the model whereas the discrepancy statistics are questionable. The paper also includes an example with real data in the context of learning styles, in which the model is used to conduct an exploratory factor analysis of nominal data.

  18. The Brazilian Unified National Health System: Proposal of a Cost-effectiveness Evaluation Model

    Directory of Open Access Journals (Sweden)

    Lilian Ribeiro de Oliveira

    2016-04-01

    Full Text Available The Brazilian Unified National Health System (Sistema Único de Saúde [SUS]) is in a prominent position compared to the existing social policies. One of the new tools used by SUS is known as the Performance Index of the Unified Health System (Índice de Desempenho do Sistema Único de Saúde [IDSUS]), which is intended to measure the performance of each municipality. Therefore, the aim of this study was to propose a model of cost-effectiveness to compare IDSUS performance against total revenues achieved in Homogeneous Group 2, consisting of 94 municipalities and analysed using data from IDSUS and the System Information of the Public Budget for Health Care (Sistema de Informação do Orçamento Público em Saúde [SIOPS]) for the year 2011. After structuring this data, we carried out descriptive statistical and cluster analysis in order to group similar municipalities in accordance with established variables: IDSUS performance, population and total revenue in health per capita. Even with the division of municipalities into homogeneous groups and after using variables such as population and revenue to regroup them, the results showed there are municipalities with heterogeneous characteristics. Another finding is in the use and intersection of two distinct databases (IDSUS and SIOPS), which allowed for visualizing the impact of health care revenue on the municipalities' performance.

  19. An improved cost-effective, reproducible method for evaluation of bone loss in a rodent model.

    Science.gov (United States)

    Fine, Daniel H; Schreiner, Helen; Nasri-Heir, Cibele; Greenberg, Barbara; Jiang, Shuying; Markowitz, Kenneth; Furgang, David

    2009-02-01

    This study was designed to investigate the utility of two "new" definitions for assessment of bone loss in a rodent model of periodontitis. Eighteen rats were divided into three groups. Group 1 was infected by Aggregatibacter actinomycetemcomitans (Aa), group 2 was infected with an Aa leukotoxin knock-out, and group 3 received no Aa (controls). Microbial sampling and antibody titres were determined. Initially, two examiners measured the distance from the cemento-enamel-junction to alveolar bone crest using the following three methods: (1) total area of bone loss by radiograph, (2) linear bone loss by radiograph, (3) a direct visual measurement (DVM) of horizontal bone loss. Two "new" definitions were adopted: (1) any site in infected animals showing bone loss >2 standard deviations above the mean seen at that site in control animals was recorded as bone loss, (2) any animal with two or more sites in any quadrant affected by bone loss was considered as diseased. Using the "new" definitions, both evaluators independently found that infected animals had significantly more disease than controls (DVM system; p<0.05). The DVM method provides a simple, cost-effective, and reproducible method for studying periodontal disease in rodents.

  20. Modeling the Cost Effectiveness of Neuroimaging-Based Treatment of Acute Wake-Up Stroke.

    Directory of Open Access Journals (Sweden)

    Ankur Pandya

    Full Text Available Thrombolytic treatment (tissue-type plasminogen activator [tPA]) is only recommended for acute ischemic stroke patients with stroke onset time <4.5 hours. ... 46.3% experienced a good stroke outcome. Lifetime discounted QALYs and costs were 5.312 and $88,247 for the no treatment strategy and 5.342 and $90,869 for the MRI-based strategy, resulting in an ICER of $88,000/QALY. Results were sensitive to variations in patient- and provider-specific factors such as sleep duration, hospital travel and door-to-needle times, as well as onset probability distribution, MRI specificity, and mRS utility values. Our model-based findings suggest that an MRI-based treatment strategy for this population could be cost-effective, and they quantify the impact that patient- and provider-specific factors, such as sleep duration, hospital travel and door-to-needle times, could have on the optimal decision for wake-up stroke patients.

  1. Cost-effectiveness Analysis in R Using a Multi-state Modeling Survival Analysis Framework: A Tutorial.

    Science.gov (United States)

    Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F

    2017-05-01

    This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create 2 common methods of visualizing the results, namely, cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptations to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
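
    Both visualisations named in the final sentence can be produced from any set of PSA draws of incremental costs and QALYs; the sketch below uses simulated stand-in draws rather than output from the tutorial's multi-state models:

    ```r
    set.seed(42)
    dq <- rnorm(5000, 0.15, 0.10)   # incremental QALYs (placeholder PSA draws)
    dc <- rnorm(5000, 2000, 1500)   # incremental costs

    # Cost-effectiveness plane: one point per PSA draw
    plot(dq, dc, pch = ".", xlab = "Incremental QALYs", ylab = "Incremental cost",
         main = "Cost-effectiveness plane")
    abline(h = 0, v = 0, lty = 2)

    # Cost-effectiveness acceptability curve: P(cost-effective) at each threshold
    wtp <- seq(0, 50000, by = 500)
    ceac <- sapply(wtp, function(l) mean(l * dq - dc > 0))
    plot(wtp, ceac, type = "l", ylim = c(0, 1),
         xlab = "Willingness to pay per QALY", ylab = "Probability cost-effective",
         main = "Cost-effectiveness acceptability curve")
    ```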

  2. Characterizing economic trends by Bayesian stochastic model specification search

    OpenAIRE

    Grassi, Stefano; Proietti, Tommaso

    2010-01-01

    We apply a recently proposed Bayesian model selection technique, known as stochastic model specification search, for characterising the nature of the trend in macroeconomic time series. We illustrate that the methodology can be quite successfully applied to discriminate between stochastic and deterministic trends. In particular, we formulate autoregressive models with stochastic trends components and decide on whether a specific feature of the series, i.e. the underlying level and/or the rate...

  3. Copula Based Factorization in Bayesian Multivariate Infinite Mixture Models

    OpenAIRE

    Martin Burda; Artem Prokhorov

    2012-01-01

    Bayesian nonparametric models based on infinite mixtures of density kernels have been recently gaining in popularity due to their flexibility and feasibility of implementation even in complicated modeling scenarios. In economics, they have been particularly useful in estimating nonparametric distributions of latent variables. However, these models have been rarely applied in more than one dimension. Indeed, the multivariate case suffers from the curse of dimensionality, with a rapidly increas...

  4. A Bayesian ensemble of sensitivity measures for severe accident modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoseyni, Seyed Mohsen [Department of Basic Sciences, East Tehran Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of); Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Vagnoli, Matteo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge, Fondation EDF – Electricite de France Ecole Centrale, Paris, and Supelec, Paris (France); Pourgol-Mohammad, Mohammad [Department of Mechanical Engineering, Sahand University of Technology, Tabriz (Iran, Islamic Republic of)

    2015-12-15

    Highlights: • We propose a sensitivity analysis (SA) method based on a Bayesian updating scheme. • The Bayesian updating scheme adjusts an ensemble of sensitivity measures. • Bootstrap replicates of a severe accident code output are fed to the Bayesian scheme. • The MELCOR code simulates the fission products release of LOFT LP-FP-2 experiment. • Results are compared with those of traditional SA methods. - Abstract: In this work, a sensitivity analysis framework is presented to identify the relevant input variables of a severe accident code, based on an incremental Bayesian ensemble updating method. The proposed methodology entails: (i) the propagation of the uncertainty in the input variables through the severe accident code; (ii) the collection of bootstrap replicates of the input and output of a limited number of simulations for building a set of finite mixture models (FMMs) for approximating the probability density function (pdf) of the severe accident code output of the replicates; (iii) for each FMM, the calculation of an ensemble of sensitivity measures (i.e., input saliency, Hellinger distance and Kullback–Leibler divergence) and their updating when a new piece of evidence arrives, by a Bayesian scheme based on the Bradley–Terry model for ranking the most relevant input model variables. An application is given with respect to a limited number of simulations of a MELCOR severe accident model describing the fission products release in the LP-FP-2 experiment of the loss of fluid test (LOFT) facility, which is a scaled-down facility of a pressurized water reactor (PWR).
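
    Two of the ensemble's sensitivity measures, the Hellinger distance and the Kullback-Leibler divergence, have simple discretised forms for densities evaluated on a common grid; this generic sketch is not tied to the paper's MELCOR application:

    ```r
    # p and q are density values on an equally spaced grid with bin width dx
    hellinger <- function(p, q, dx) sqrt(0.5 * sum((sqrt(p) - sqrt(q))^2) * dx)
    kl_div    <- function(p, q, dx) sum(ifelse(p > 0, p * log(p / q), 0)) * dx

    x <- seq(-6, 6, length.out = 1000); dx <- diff(x)[1]
    p <- dnorm(x, 0, 1); q <- dnorm(x, 1, 1.5)
    c(hellinger = hellinger(p, q, dx), kl = kl_div(p, q, dx))
    ```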

  5. Cost-effectiveness of total hip and knee replacements for the Australian population with osteoarthritis: discrete-event simulation model.

    Directory of Open Access Journals (Sweden)

    Hideki Higashi

    Full Text Available BACKGROUND: Osteoarthritis constitutes a major musculoskeletal burden for aged Australians. Hip and knee replacement surgeries are effective interventions once all conservative therapies to manage the symptoms have been exhausted. This study aims to evaluate the cost-effectiveness of hip and knee replacements in Australia. To the best of our knowledge, the study is the first attempt to account for the dual nature of hip and knee osteoarthritis in modelling the severities of right and left joints separately. METHODOLOGY/PRINCIPAL FINDINGS: We developed a discrete-event simulation model that follows up the individuals with osteoarthritis over their lifetimes. The model defines separate attributes for right and left joints and accounts for several repeat replacements. The Australian population with osteoarthritis who were 40 years of age or older in 2003 were followed up until extinct. Intervention effects were modelled by means of disability-adjusted life-years (DALYs) averted. Both hip and knee replacements are highly cost-effective (AUD 5,000 per DALY and AUD 12,000 per DALY respectively) under an AUD 50,000/DALY threshold level. The exclusion of cost offsets, and inclusion of future unrelated health care costs in extended years of life, did not change the findings that the interventions are cost-effective (AUD 17,000 per DALY and AUD 26,000 per DALY respectively). However, there was a substantial difference between hip and knee replacements where surgeries administered for hips were more cost-effective than for knees. CONCLUSIONS/SIGNIFICANCE: Both hip and knee replacements are cost-effective interventions to improve the quality of life of people with osteoarthritis. It was also shown that the dual nature of hip and knee OA should be taken into account to provide more accurate estimation of the cost-effectiveness of hip and knee replacements.

  6. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Directory of Open Access Journals (Sweden)

    Villemereuil Pierre de

    2012-06-01

    Full Text Available Abstract Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible...

  7. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Science.gov (United States)

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for...

  8. Bayesian non parametric modelling of Higgs pair production

    Directory of Open Access Journals (Sweden)

    Scarpa Bruno

    2017-01-01

    Full Text Available Statistical classification models are commonly used to separate a signal from a background. In this talk we face the problem of isolating the signal of Higgs pair production using the decay channel in which each boson decays into a pair of b-quarks. Typically in this context non-parametric methods are used, such as Random Forests or different types of boosting tools. We remain in the same non-parametric framework, but we propose to face the problem following a Bayesian approach. A Dirichlet process is used as prior for the random effects in a logit model which is fitted by leveraging the Polya-Gamma data augmentation. Refinements of the model include the insertion in the simple model of P-splines to relate explanatory variables with the response and the use of Bayesian trees (BART) to describe the atoms in the Dirichlet process.

  9. A hospital-level cost-effectiveness analysis model for toxigenic Clostridium difficile detection algorithms.

    Science.gov (United States)

    Verhoye, E; Vandecandelaere, P; De Beenhouwer, H; Coppens, G; Cartuyvels, R; Van den Abeele, A; Frans, J; Laffut, W

    2015-10-01

    Despite thorough analyses of the analytical performance of Clostridium difficile tests and test algorithms, the financial impact at hospital level has not been well described. Such a model should take institution-specific variables into account, such as incidence, request behaviour and infection control policies. To calculate the total hospital costs of different test algorithms, accounting for days on which infected patients with toxigenic strains were not isolated and therefore posed an infectious risk for new/secondary nosocomial infections. A mathematical algorithm was developed to gather the above parameters using data from seven Flemish hospital laboratories (Bilulu Microbiology Study Group) (number of tests, local prevalence and hospital hygiene measures). Measures of sensitivity and specificity for the evaluated tests were taken from the literature. List prices and costs of assays were provided by the manufacturer or the institutions. The calculated cost included reagent costs, personnel costs and the financial burden following due and undue isolations and antibiotic therapies. Five different test algorithms were compared. A dynamic calculation model was constructed to evaluate the cost:benefit ratio of each algorithm for a set of institution- and time-dependent inputted variables (prevalence, cost fluctuations and test performances), making it possible to choose the most advantageous algorithm for its setting. A two-step test algorithm with concomitant glutamate dehydrogenase and toxin testing, followed by a rapid molecular assay was found to be the most cost-effective algorithm. This enabled resolution of almost all cases on the day of arrival, minimizing the number of unnecessary or missing isolations. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
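
    The per-sample logic behind such a model can be sketched as an expected cost over the four test outcomes; every price and probability below is an invented placeholder, not an input from the Bilulu study:

    ```r
    # Expected cost per sample for one algorithm: reagents plus the downstream
    # cost of false negatives (unisolated infectious patients) and false
    # positives (unnecessary isolation and therapy).
    expected_cost <- function(prev, sens, spec, c_test, c_missed, c_undue) {
      c_test +
        prev * (1 - sens) * c_missed +
        (1 - prev) * (1 - spec) * c_undue
    }
    expected_cost(prev = 0.05, sens = 0.95, spec = 0.98,
                  c_test = 12, c_missed = 400, c_undue = 150)
    ```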

  10. Modelling cheetah relocation success in southern Africa using an iterative Bayesian network development cycle

    CSIR Research Space (South Africa)

    Johnson, S

    2010-02-01

    Full Text Available metapopulations was the focus of a Bayesian Network (BN) modelling workshop in South Africa. Using a new heuristic, the Iterative Bayesian Network Development Cycle (IBNDC), described in this paper, several networks were formulated to distinguish between the unique...

  11. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    Science.gov (United States)

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...

  12. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    Science.gov (United States)

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
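
    The posterior predictive distribution described above, which carries both parameter uncertainty and residual variability, can be illustrated with a toy grid approximation for a single mean parameter (the ruminant nutrition models in question are far larger, so this is conceptual only):

    ```r
    set.seed(7)
    y <- rnorm(20, mean = 3, sd = 1)                   # observed response data
    theta <- seq(0, 6, length.out = 601)               # parameter grid
    prior <- dnorm(theta, 3, 2)
    lik <- sapply(theta, function(m) prod(dnorm(y, m, 1)))
    post <- prior * lik / sum(prior * lik)             # posterior over the grid

    # posterior predictive: draw parameters, then add residual variability
    draws <- sample(theta, 5000, replace = TRUE, prob = post)
    y_rep <- rnorm(5000, mean = draws, sd = 1)
    quantile(y_rep, c(0.025, 0.5, 0.975))              # a distribution, not a point estimate
    ```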

  13. Revision Arthroscopic Repair Versus Latarjet Procedure in Patients With Recurrent Instability After Initial Repair Attempt: A Cost-Effectiveness Model.

    Science.gov (United States)

    Makhni, Eric C; Lamba, Nayan; Swart, Eric; Steinhaus, Michael E; Ahmad, Christopher S; Romeo, Anthony A; Verma, Nikhil N

    2016-09-01

    To compare the cost-effectiveness of arthroscopic revision instability repair and the Latarjet procedure in treating patients with recurrent instability after initial arthroscopic instability repair. An expected-value decision analysis was modeled comparing revision arthroscopic instability repair with the Latarjet procedure for recurrent instability after a failed repair attempt. Inputs regarding procedure cost, clinical outcomes, and health utilities were derived from the literature. Compared with revision arthroscopic repair, Latarjet was less expensive ($13,672 v $15,287) with improved clinical outcomes (43.78 v 36.76 quality-adjusted life-years). Both arthroscopic repair and Latarjet were cost-effective compared with nonoperative treatment (incremental cost-effectiveness ratios of 3,082 and 1,141, respectively). Results from sensitivity analyses indicate that under scenarios of high rates of stability postoperatively, along with improved clinical outcome scores, revision arthroscopic repair becomes increasingly cost-effective. The Latarjet procedure for failed instability repair is a cost-effective treatment option, with lower costs and improved clinical outcomes compared with revision arthroscopic instability repair. However, surgeons must still incorporate clinical judgment into treatment algorithm formation. Level IV, expected value decision analysis. Copyright © 2016. Published by Elsevier Inc.
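
    The head-to-head comparison quoted above can be reproduced from the abstract's point estimates; a minimal sketch (the helper function is illustrative, not code from the study):

```python
# Incremental comparison using the cost and QALY point estimates above.
latarjet_cost, latarjet_qaly = 13_672, 43.78
revision_cost, revision_qaly = 15_287, 36.76

d_cost = latarjet_cost - revision_cost   # negative: Latarjet is cheaper
d_qaly = latarjet_qaly - revision_qaly   # positive: and more effective
print(d_cost, d_qaly)

# An ICER is only meaningful when one option costs more and gains more:
def icer(d_cost, d_qaly):
    """Incremental cost-effectiveness ratio, $ per QALY gained."""
    return d_cost / d_qaly
```

    When one option is both cheaper and more effective it is said to dominate, which is why the abstract reports ICERs only against nonoperative treatment.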

  14. Projecting UK mortality using Bayesian generalised additive models

    OpenAIRE

    Hilton, Jason; Dodd, Erengul; Forster, Jonathan; Smith, Peter W.F.

    2018-01-01

    Forecasts of mortality provide vital information about future populations, with implications for pension and health-care policy as well as for decisions made by private companies about life insurance and annuity pricing. This paper presents a Bayesian approach to the forecasting of mortality that jointly estimates a Generalised Additive Model (GAM) for mortality for the majority of the age-range and a parametric model for older ages where the data are sparser. The GAM allows smooth components...

  15. Bayesian modeling and prediction of solar particles flux

    International Nuclear Information System (INIS)

    Dedecius, Kamil; Kalova, Jana

    2010-01-01

    An autoregressive model was developed within a Bayesian framework. To account for the non-homogeneity of the solar wind, the pure autoregressive properties of the model were combined with expert knowledge of the similar behaviour of various phenomena related to the flux properties. Examples of such situations include the hardening of the X-ray spectrum, which is often followed by coronal mass ejection and a significant increase in particle flux intensity

  16. Cost-Effectiveness of Interventions for Chronic Obstructive Pulmonary Disease (COPD) Using an Ontario Policy Model

    Science.gov (United States)

    Chandra, K; Blackhouse, G; McCurdy, BR; Bornstein, M; Campbell, K; Costa, V; Franek, J; Kaulback, K; Levin, L; Sehatzadeh, S; Sikich, N; Thabane, M; Goeree, R

    2012-01-01

    ...Pulmonary Disease (COPD): An Evidence-Based Analysis; Pulmonary Rehabilitation for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis; Long-Term Oxygen Therapy for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis; Noninvasive Positive Pressure Ventilation for Acute Respiratory Failure Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis; Noninvasive Positive Pressure Ventilation for Chronic Respiratory Failure Patients With Stable Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis; Hospital-at-Home Programs for Patients With Acute Exacerbations of Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis; Home Telehealth for Patients With Chronic Obstructive Pulmonary Disease (COPD): An Evidence-Based Analysis; Cost-Effectiveness of Interventions for Chronic Obstructive Pulmonary Disease Using an Ontario Policy Model; Experiences of Living and Dying With COPD: A Systematic Review and Synthesis of the Qualitative Empirical Literature. For more information on the qualitative review, please contact Mita Giacomini at: http://fhs.mcmaster.ca/ceb/faculty_member_giacomini.htm. For more information on the economic analysis, please visit the PATH website: http://www.path-hta.ca/About-Us/Contact-Us.aspx. The Toronto Health Economics and Technology Assessment (THETA) collaborative has produced an associated report on patient preference for mechanical ventilation; for more information, please visit the THETA website: http://theta.utoronto.ca/static/contact. Background: Chronic obstructive pulmonary disease (COPD) is characterized by chronic inflammation throughout the airways, parenchyma, and pulmonary vasculature. The inflammation causes repeated cycles of injury and repair in the airway wall: inflammatory cells release a variety of chemicals, leading to cellular damage. The inflammation process also contributes to the loss of elastic recoil pressure

  17. The Albufera Initiative for Biodiversity: a cost effective model for integrating science and volunteer participation in coastal protected area management

    NARCIS (Netherlands)

    Riddiford, N.J.; Veraart, J.A.; Férriz, I.; Owens, N.W.; Royo, L.; Honey, M.R.

    2014-01-01

    This paper puts forward a multi-disciplinary field project, set up in 1989 at the Parc Natural de s’Albufera in Mallorca, Balearic Islands, Spain, as an example of a cost effective model for integrating science and volunteer participation in a coastal protected area. Outcomes include the provision

  18. Cost-effectiveness model of long-acting risperidone in schizophrenia in the US.

    Science.gov (United States)

    Edwards, Natalie C; Rupnow, Marcia F T; Pashos, Chris L; Botteman, Marc F; Diamond, Ronald J

    2005-01-01

    Schizophrenia is a devastating and costly illness that affects 1% of the population in the US. Effective pharmacological therapies are available but suboptimal patient adherence to either acute or long-term therapeutic regimens reduces their effectiveness. The availability of a long-acting injection (LAI) formulation of risperidone may increase adherence and improve clinical and economic outcomes for people with schizophrenia. To assess the cost effectiveness of risperidone LAI compared with oral risperidone, oral olanzapine and haloperidol decanoate LAI over a 1-year time period in outpatients with schizophrenia who had previously suffered a relapse requiring hospitalisation. US healthcare system. Published medical literature, unpublished data from clinical trials and a consumer health database, and a clinical expert panel were used to populate a decision-analysis model comparing the four treatment alternatives. The model captured: rates of patient compliance; rates, frequency and duration of relapse; incidence of adverse events (bodyweight gain and extrapyramidal effects); and healthcare resource utilisation and associated costs. Primary outcomes were: the proportion of patients with relapse; the frequency of relapse per patient; the number of relapse days per patient; and total direct medical cost per patient per year. Costs are in year 2002 US dollars. Based on model projections, the proportions of patients experiencing a relapse requiring hospitalisation after 1 year of treatment were 66% for haloperidol decanoate LAI, 41% for oral risperidone and oral olanzapine and 26% for risperidone LAI, while the proportions of patients with a relapse not requiring hospitalisation were 60%, 37%, 37% and 24%, respectively. The mean number of days of relapse requiring hospitalisation per patient per year was 28 for haloperidol decanoate LAI, 18 for oral risperidone and oral olanzapine and 11 for risperidone LAI, while the mean number of days of relapse not requiring

  19. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.

  1. Systematic screening for Chlamydia trachomatis: estimating cost-effectiveness using dynamic modeling and Dutch data

    NARCIS (Netherlands)

    de Vries, Robin; van Bergen, Jan E. A. M.; de Jong-van den Berg, Lolkje T. W.; Postma, Maarten J.

    2006-01-01

    To estimate the cost-effectiveness of a systematic one-off Chlamydia trachomatis (CT) screening program including partner treatment for Dutch young adults. Data on infection prevalence, participation rates, and sexual behavior were obtained from a large pilot study conducted in The Netherlands.

  2. Cost-effectiveness of primary prevention of paediatric asthma: a decision-analytic model

    NARCIS (Netherlands)

    Ramos, G. Feljandro P.; van Asselt, Antoinette D. I.; Kuiper, Sandra; Severens, Johan L.; Maas, Tanja; Dompeling, Edward; Knottnerus, J. André; van Schayck, Onno C. P.

    2013-01-01

    Background: Many children stand to benefit from being asthma-free for life with primary (i.e., prenatally started) prevention addressing one environmental exposure in a unifaceted (UF) approach or at least two in a multifaceted (MF) approach. We assessed the cost-effectiveness of primary prevention

  3. From intermediate to final behavioral endpoints : Modeling cognitions in (cost-)effectiveness analyses in health promotion

    NARCIS (Netherlands)

    Prenger, Hendrikje Cornelia

    2012-01-01

    Cost-effectiveness analyses (CEAs) are considered an increasingly important tool in health promotion and psychology. In health promotion, adequate effectiveness data on innovative interventions are often lacking. For many promising interventions, the available data are inadequate for CEAs due

  4. Cost-effectiveness of a multidisciplinary intervention model for community-dwelling frail older people.

    NARCIS (Netherlands)

    Melis, R.J.F.; Adang, E.M.M.; Teerenstra, S.; Eijken, M.I.J. van; Wimo, A.; Achterberg, T. van; Lisdonk, E.H. van de; Olde Rikkert, M.G.M.

    2008-01-01

    BACKGROUND: There is growing interest in geriatric care for community-dwelling older people. There are, however, relatively few reports on the economics of this type of care. This article reports on the cost-effectiveness of the Dutch Geriatric Intervention Program (DGIP) compared to usual care

  5. Cost-effectiveness of face-to-face smoking cessation interventions: A dynamic modeling study

    NARCIS (Netherlands)

    T.L. Feenstra (Talitha); H.H. Hamberg-Van Reenen (Heleen); R.T. Hoogenveen (Rudolf); M.P.M.H. Rutten-van Mölken (Maureen)

    2005-01-01

    Objectives: To estimate the cost-effectiveness of five face-to-face smoking cessation interventions (i.e., minimal counseling by a general practitioner (GP) with or without nicotine replacement therapy (NRT), intensive counseling with NRT or with bupropion, and telephone counseling) in

  6. Impact of censoring on learning Bayesian networks in survival modelling.

    Science.gov (United States)

    Stajduhar, Ivan; Dalbelo-Basić, Bojana; Bogunović, Nikola

    2009-11-01

    Bayesian networks are commonly used for presenting uncertainty and covariate interactions in an easily interpretable way. Because of their efficient inference and ability to represent causal relationships, they are an excellent choice for medical decision support systems in diagnosis, treatment, and prognosis. Although good procedures for learning Bayesian networks from data have been defined, their performance in learning from censored survival data has not been widely studied. In this paper, we explore how to use these procedures to learn about possible interactions between prognostic factors and their influence on the variate of interest. We study how censoring affects the probability of learning correct Bayesian network structures. Additionally, we analyse the potential usefulness of the learnt models for predicting the time-independent probability of an event of interest. We analysed the influence of censoring with a simulation on synthetic data sampled from randomly generated Bayesian networks. We used two well-known methods for learning Bayesian networks from data: a constraint-based method and a score-based method. We compared the performance of each method under different levels of censoring to those of the naive Bayes classifier and the proportional hazards model. We did additional experiments on several datasets from real-world medical domains. The machine-learning methods treated censored cases in the data as event-free. We report and compare results for several commonly used model evaluation metrics. On average, the proportional hazards method outperformed other methods in most censoring setups. As part of the simulation study, we also analysed structural similarities of the learnt networks. Heavy censoring, as opposed to no censoring, produces up to a 5% surplus and up to 10% missing total arcs. It also produces up to 50% missing arcs that should originally be connected to the variate of interest. Presented methods for learning Bayesian networks from

  7. Comparison of two dose and three dose human papillomavirus vaccine schedules: cost effectiveness analysis based on transmission model.

    Science.gov (United States)

    Jit, Mark; Brisson, Marc; Laprise, Jean-François; Choi, Yoon Hong

    2015-01-06

    To investigate the incremental cost effectiveness of two-dose human papillomavirus vaccination and of additionally giving a third dose. Cost effectiveness study based on a transmission dynamic model of human papillomavirus vaccination. Two-dose schedules for bivalent or quadrivalent human papillomavirus vaccines were assumed to provide 10, 20, or 30 years' vaccine-type protection and cross-protection, or lifelong vaccine-type protection without cross-protection. Three-dose schedules were assumed to give lifelong vaccine-type and cross-protection. United Kingdom. Males and females aged 12-74 years. No, two, or three doses of human papillomavirus vaccine given routinely to 12-year-old girls, with an initial catch-up campaign up to age 18. Costs (from the healthcare provider's perspective), health-related utilities, and incremental cost effectiveness ratios. Giving at least two doses of vaccine seems to be highly cost effective across the entire range of scenarios considered at the quadrivalent vaccine list price of £86.50 (€109.23; $136.00) per dose. If two doses give only 10 years' protection but adding a third dose extends this to lifetime protection, then the third dose also seems to be cost effective at £86.50 per dose (median incremental cost effectiveness ratio £17,000, interquartile range £11,700-£25,800). If two doses protect for more than 20 years, then the third dose will have to be priced substantially lower (median threshold price £31, interquartile range £28-£35) to be cost effective. Results are similar for a bivalent vaccine priced at £80.50 per dose and when the same scenarios are explored by parameterising a Canadian model (HPV-ADVISE) with economic data from the United Kingdom. Two-dose human papillomavirus vaccine schedules are likely to be the most cost effective option provided protection lasts for at least 20 years. As the precise duration of two-dose schedules may not be known for decades, cohorts given two doses should be closely
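
    A minimal sketch of the threshold-price logic reported above: the largest per-dose price at which the third dose's incremental cost per QALY stays at or below a willingness-to-pay ceiling. The function and all inputs are illustrative assumptions, not outputs of the transmission model.

```python
# Hypothetical threshold-price calculation for the third dose.
def threshold_price(extra_qalys, nonprice_extra_cost, doses, wtp):
    """Largest price per dose keeping the ICER at or below wtp."""
    return (wtp * extra_qalys - nonprice_extra_cost) / doses

# Illustrative inputs: a third dose adds 700 QALYs and GBP 100,000 of
# delivery cost across 450,000 doses, judged against GBP 20,000/QALY.
print(round(threshold_price(700, 100_000, 450_000, 20_000), 2))  # ~30.89
```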

  8. Bayesian Inference of High-Dimensional Dynamical Ocean Models

    Science.gov (United States)

    Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.

    2015-12-01

    This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: (i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); (ii) assimilate data using Bayes' law with these pdfs; (iii) predict the future data that optimally reduce uncertainties; and (iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.

  9. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

    The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model at the first stage. Random walks of first and second order, with or without an additional unstructured component, are available as smoothing priors for the age, period and cohort parameters. Unstructured heterogeneity can also be included in the model. To evaluate the model fit, the posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
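
    A minimal sketch of the model structure described, assuming second-order random-walk (RW2) smoothing priors combined on a log-rate scale; this simulates from the prior only and is not the BAMP package itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def rw2(n, tau):
    """Simulate a second-order random walk of length n (innovation sd tau)."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = 2 * x[t - 1] - x[t - 2] + rng.normal(0, tau)
    return x - x.mean()  # centre for identifiability

ages, periods = 18, 10
alpha = rw2(ages, 0.05)                  # age effects
beta = rw2(periods, 0.05)                # period effects
gamma = rw2(ages + periods - 1, 0.05)    # cohort effects (cohort = period - age)

mu = -6.0                                # baseline log rate
log_rate = np.empty((ages, periods))
for a in range(ages):
    for p in range(periods):
        log_rate[a, p] = mu + alpha[a] + beta[p] + gamma[p - a + ages - 1]

print(np.exp(log_rate).shape)            # rate surface on the Lexis grid
```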

  10. Introduction to Hierarchical Bayesian Modeling for Ecological Data

    CERN Document Server

    Parent, Eric

    2012-01-01

    Making statistical modeling and inference more accessible to ecologists and related scientists, Introduction to Hierarchical Bayesian Modeling for Ecological Data gives readers a flexible and effective framework to learn about complex ecological processes from various sources of data. It also helps readers get started on building their own statistical models. The text begins with simple models that progressively become more complex and realistic through explanatory covariates and intermediate hidden state variables. When fitting the models to data, the authors gradually present the concepts a

  11. Comparison of Strategies and Incidence Thresholds for Vi Conjugate Vaccines Against Typhoid Fever: A Cost-effectiveness Modeling Study.

    Science.gov (United States)

    Lo, Nathan C; Gupta, Ribhav; Stanaway, Jeffrey D; Garrett, Denise O; Bogoch, Isaac I; Luby, Stephen P; Andrews, Jason R

    2018-02-12

    Typhoid fever remains a major public health problem globally. While new Vi conjugate vaccines hold promise for averting disease, the optimal programmatic delivery remains unclear. We aimed to identify the strategies and associated epidemiologic conditions under which Vi conjugate vaccines would be cost-effective. We developed a dynamic, age-structured transmission and cost-effectiveness model that simulated multiple vaccination strategies with a typhoid Vi conjugate vaccine from a societal perspective. We simulated 10-year vaccination programs with (1) routine immunization of infants (aged <1 year) through the Expanded Programme on Immunization (EPI) and (2) routine infant immunization through the EPI plus a catch-up campaign. We defined strategies as highly cost-effective against typhoid fever by using the definition of a low-income country (a country with a gross domestic product of <$1045 per capita). We defined incidence as the true number of clinically symptomatic people in the population per year. Vi conjugate typhoid vaccines were highly cost-effective when administered by routine immunization activities through the EPI in settings with an annual incidence of >50 cases/100000 (95% uncertainty interval, 40-75 cases) and when administered through the EPI plus a catch-up campaign in settings with an annual incidence of >130 cases/100000 (95% uncertainty interval, 50-395 cases). The incidence threshold was sensitive to the typhoid-related case-fatality rate, carrier contribution to transmission, vaccine characteristics, and country-specific economic threshold for cost-effectiveness. Typhoid Vi conjugate vaccines would be highly cost-effective in low-income countries in settings of moderate typhoid incidence (>50 cases/100000 annually). These results were sensitive to case-fatality rates, underscoring the need to consider factors contributing to typhoid mortality (eg, healthcare access and antimicrobial resistance) in the global vaccination strategy. © The Author(s) 2018. Published by Oxford University Press for the Infectious Diseases Society of America.
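
    A toy, non-age-structured stand-in for the dynamic transmission component: an S-I-R model with a vaccinated fraction of newborns, showing how coverage shifts cumulative incidence over ten years. All rates are illustrative assumptions (per day), not the study's parameters.

```python
# Minimal S-I-R sketch with newborn vaccination coverage v.
def simulate(v, days=3650, beta=0.30, recovery=0.10, birth=1 / 25_550):
    S, I, R = 0.99, 0.01, 0.0            # proportions of the population
    cases = 0.0
    for _ in range(days):
        new_inf = beta * S * I           # daily incidence
        recov = recovery * I
        S += birth * (1 - v) - new_inf - birth * S
        I += new_inf - recov - birth * I
        R += birth * v + recov - birth * R
        cases += new_inf
    return cases

for v in (0.0, 0.5, 0.9):
    print(f"newborn coverage {v:.0%}: cumulative infections = {simulate(v):.3f}")
```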

  12. Modeling the cost-effectiveness of ilaprazole versus omeprazole for the treatment of newly diagnosed duodenal ulcer patients in China.

    Science.gov (United States)

    Xuan, J W; Song, R L; Xu, G X; Lu, W Q; Lu, Y J; Liu, Z

    2016-11-01

    To evaluate the cost-effectiveness of 10 mg ilaprazole once daily vs 20 mg omeprazole once daily to treat newly diagnosed duodenal ulcer patients in China. A decision tree model was constructed and the treatment impact was projected up to 1 year. The CYP2C19 polymorphism distribution in the Chinese population, the respective cure rates in the CYP2C19 genotype sub-groups, the impact of duodenal ulcer (DU) on utility value and drug-related side-effect data were obtained from the literature. The total costs of medications were calculated to estimate the treatment costs based on current drug retail prices in China. Expert surveys were conducted when published data were not available. Probabilistic sensitivity analysis was performed to gauge the robustness of the results. Ilaprazole, when compared with omeprazole, achieved better overall clinical efficacy. For the overall population, ilaprazole achieved an incremental cost effectiveness ratio (ICER) of ¥132 056 per QALY gained. This is less than the WHO-recommended threshold of 3 times the average GDP per capita in China (2014). Furthermore, sub-group analysis showed that ilaprazole is cost-effective in every province in CYP2C19 hetEM patients and in the most developed provinces in CYP2C19 homEM patients. Probabilistic sensitivity analysis suggests that the results are robust, with a 97% probability that ilaprazole is considered cost-effective when a threshold of 3 times China's average GDP per capita is applied. This study did not include data on ilaprazole combined with Helicobacter pylori (Hp) eradication therapy; caution should be taken when extrapolating these findings to DU patients receiving Hp eradication therapy. The cost-effectiveness analysis results demonstrated that ilaprazole would be considered a cost-effective therapy, compared with omeprazole, in Chinese DU patients based on the efficacy projections in various CYP2C19 polymorphism types.
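
    A minimal sketch of the genotype-weighted input at the root of such a decision tree: the expected cure rate is a CYP2C19-frequency-weighted average of genotype-specific cure rates. Frequencies and cure rates below are hypothetical placeholders, not the study's data.

```python
# Hypothetical CYP2C19 genotype frequencies and per-genotype cure rates.
genotype_freq = {"homEM": 0.40, "hetEM": 0.45, "PM": 0.15}
cure = {
    "ilaprazole": {"homEM": 0.92, "hetEM": 0.93, "PM": 0.94},  # less CYP2C19-dependent
    "omeprazole": {"homEM": 0.82, "hetEM": 0.88, "PM": 0.93},
}
for drug, rates in cure.items():
    expected = sum(genotype_freq[g] * rates[g] for g in genotype_freq)
    print(f"{drug}: expected cure rate = {expected:.3f}")
```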

  13. Bayesian Option Pricing using Mixed Normal Heteroskedasticity Models

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars

    2014-01-01

    Option pricing using mixed normal heteroscedasticity models is considered. It is explained how to perform inference and price options in a Bayesian framework. The approach makes it easy to compute risk-neutral predictive price densities that take parameter uncertainty into account. In an application to the S&P 500 index, classical and Bayesian inference is performed on the mixture model using the available return data. Comparing the ML estimates and posterior moments, small differences are found. When pricing a rich sample of options on the index, both methods yield similar pricing errors measured in dollar and implied standard deviation losses, and it turns out that the impact of parameter uncertainty is minor. Therefore, when it comes to option pricing where large amounts of data are available, the choice of the inference method is unimportant. The results are robust to different...

  14. Operational modal analysis modeling, Bayesian inference, uncertainty laws

    CERN Document Server

    Au, Siu-Kui

    2017-01-01

    This book presents operational modal analysis (OMA), employing a coherent and comprehensive Bayesian framework for modal identification and covering stochastic modeling, theoretical formulations, computational algorithms, and practical applications. Mathematical similarities and philosophical differences between Bayesian and classical statistical approaches to system identification are discussed, allowing their mathematical tools to be shared and their results correctly interpreted. Many chapters can be used as lecture notes for the general topic they cover beyond the OMA context. After an introductory chapter (1), Chapters 2–7 present the general theory of stochastic modeling and analysis of ambient vibrations. Readers are first introduced to the spectral analysis of deterministic time series (2) and structural dynamics (3), which do not require the use of probability concepts. The concepts and techniques in these chapters are subsequently extended to a probabilistic context in Chapter 4 (on stochastic pro...

  15. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to overfitting. We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
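
    The paper's hierarchical model is not reproduced here; as a hedged illustration of the underlying idea, a minimal conjugate (inverse-Wishart) sketch that shrinks a singular sample covariance toward a diagonal target:

```python
import numpy as np

rng = np.random.default_rng(42)

p, n = 50, 25                       # more variables than samples
true_cov = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = X.T @ X                         # scatter matrix (mean assumed known, zero)
nu = p + 4                          # prior degrees of freedom
scale = np.trace(S) / (n * p)       # average sample variance
Psi = np.eye(p) * scale * (nu - p - 1)  # prior scale: prior mean is scale * I

post_mean = (Psi + S) / (nu + n - p - 1)   # posterior mean of the covariance
sample_cov = S / n

print("condition number, sample:   ", np.linalg.cond(sample_cov))
print("condition number, posterior:", np.linalg.cond(post_mean))
```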

  16. Bayesian Modelling of fMRI Time Series

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward

    2000-01-01

    We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single-trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short-time learning effects between repeated trials is possible, since inference is based only on single-trial experiments.
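
    A minimal sketch of the filtering computation inside such a model: the normalized forward recursion for a two-state HMM with Gaussian emissions. All numbers are illustrative; the paper's inference additionally uses analytical and MCMC machinery.

```python
import numpy as np

A = np.array([[0.95, 0.05],          # state transition probabilities
              [0.10, 0.90]])
pi = np.array([0.5, 0.5])            # initial state distribution
mus, sds = np.array([0.0, 1.0]), np.array([0.5, 0.5])  # emission models

def gauss(y, mu, sd):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def forward(y):
    """Filtered state probabilities P(state_t | y_1..t)."""
    alpha = pi * gauss(y[0], mus, sds)
    alpha /= alpha.sum()
    out = [alpha]
    for t in range(1, len(y)):
        alpha = (alpha @ A) * gauss(y[t], mus, sds)
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

y = np.array([0.1, -0.2, 0.9, 1.2, 1.1, 0.0])
print(forward(y))                    # probability of the "task" state over time
```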

  17. Nonparametric Bayesian models through probit stick-breaking processes.

    Science.gov (United States)

    Rodríguez, Abel; Dunson, David B

    2011-03-01

    We describe a novel class of Bayesian nonparametric priors based on stick-breaking constructions where the weights of the process are constructed as probit transformations of normal random variables. We show that these priors are extremely flexible, allowing us to generate a great variety of models while preserving computational simplicity. Particular emphasis is placed on the construction of rich temporal and spatial processes, which are applied to two problems in finance and ecology.
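
    A minimal sketch of the construction itself: truncated stick-breaking weights built from probit transformations of normal random variables.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)

def Phi(z):
    """Standard normal CDF (the probit transformation)."""
    return 0.5 * (1 + erf(z / sqrt(2)))

K = 10                                   # truncation level
z = rng.normal(0.0, 1.0, size=K)
v = np.array([Phi(zk) for zk in z])      # stick fractions in (0, 1)
w = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))
w[-1] = 1 - w[:-1].sum()                 # close the truncated stick

print(w, w.sum())                        # weights of the mixture components
```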

  18. A Bayesian Model of the Memory Colour Effect

    OpenAIRE

    Witzel, Christoph; Olkkonen, Maria; Gegenfurtner, Karl R.

    2018-01-01

    According to the memory colour effect, the colour of a colour-diagnostic object is not perceived independently of the object itself. Instead, it has been shown through an achromatic adjustment method that colour-diagnostic objects still appear slightly in their typical colour, even when they are colourimetrically grey. Bayesian models provide a promising approach to capture the effect of prior knowledge on colour perception and to link these effects to more general effects of cue integration....

  20. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    Science.gov (United States)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, existing applications do not treat the uncertainty in extreme flows of hydrological simulations well. This study proposes a Bayesian modularization approach to uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
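
    A minimal random-walk Metropolis-Hastings sketch of the sampler named above, targeting a toy one-dimensional posterior rather than WASMOD's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(theta):
    """Unnormalized log posterior (toy Gaussian target)."""
    return -0.5 * ((theta - 2.0) / 0.7) ** 2

theta, lp, chain = 0.0, log_post(0.0), []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.5)          # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta)

burned = np.array(chain[5_000:])               # discard burn-in
print(burned.mean(), burned.std())             # approx. 2.0 and 0.7
```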

  1. Cost-effectiveness of HIV and syphilis antenatal screening: a modelling study.

    Science.gov (United States)

    Bristow, Claire C; Larson, Elysia; Anderson, Laura J; Klausner, Jeffrey D

    2016-08-01

    The WHO called for the elimination of maternal-to-child transmission (MTCT) of HIV and syphilis, a harmonised approach for the improvement of health outcomes for mothers and children. Testing early in pregnancy, treating seropositive pregnant women and preventing syphilis reinfection can prevent MTCT of HIV and syphilis. We assessed the health and economic outcomes of a dual testing strategy in a simulated cohort of 100 000 antenatal care patients in Malawi. We compared four screening algorithms: (1) HIV rapid test only, (2) dual HIV and syphilis rapid tests, (3) single rapid tests for HIV and syphilis and (4) HIV rapid and syphilis laboratory tests. We calculated the expected number of adverse pregnancy outcomes, the expected costs and the expected newborn disability-adjusted life years (DALYs) for each screening algorithm. The estimated costs and DALYs for each screening algorithm were assessed from a societal perspective using Markov progression models. Additionally, we conducted a Monte Carlo multiway sensitivity analysis, allowing for ranges of inputs. Our cohort decision model predicted the lowest number of adverse pregnancy outcomes for the dual HIV and syphilis rapid test strategy. Additionally, from the societal perspective, the dual HIV and syphilis rapid testing strategy was both the least costly ($226.92 per pregnancy) and the one resulting in the fewest DALYs (116 639 per 100 000 pregnancies). In the Monte Carlo simulation the dual HIV and syphilis algorithm was always cost saving and almost always reduced DALYs compared with HIV testing alone. The results of the cost-effectiveness analysis showed that a dual HIV and syphilis test was cost saving compared with all other screening strategies. Updating existing prevention of mother-to-child HIV transmission programmes in Malawi and similar countries to include dual rapid testing for HIV and syphilis is likely to be advantageous. Published by the BMJ Publishing Group

  2. Population cost-effectiveness of the Triple P parenting programme for the treatment of conduct disorder: an economic modelling study.

    Science.gov (United States)

    Sampaio, Filipa; Barendregt, Jan J; Feldman, Inna; Lee, Yong Yi; Sawyer, Michael G; Dadds, Mark R; Scott, James G; Mihalopoulos, Cathrine

    2017-12-29

    Parenting programmes are the recommended treatments of conduct disorders (CD) in children, but little is known about their longer term cost-effectiveness. This study aimed to evaluate the population cost-effectiveness of one of the most researched evidence-based parenting programmes, the Triple P-Positive Parenting Programme, delivered in a group and individual format, for the treatment of CD in children. A population-based multiple cohort decision analytic model was developed to estimate the cost per disability-adjusted life year (DALY) averted of Triple P compared with a 'no intervention' scenario, using a health sector perspective. The model targeted a cohort of 5-9-year-old children with CD in Australia currently seeking treatment, and followed them until they reached adulthood (i.e., 18 years). Multivariate probabilistic and univariate sensitivity analyses were conducted to incorporate uncertainty in the model parameters. Triple P was cost-effective compared to no intervention at a threshold of AU$50,000 per DALY averted when delivered in a group format [incremental cost-effectiveness ratio (ICER) = $1013 per DALY averted; 95% uncertainty interval (UI) 471-1956] and in an individual format (ICER = $20,498 per DALY averted; 95% UI 11,146-39,470). Evidence-based parenting programmes, such as the Triple P, for the treatment of CD among children appear to represent good value for money, when delivered in a group or an individual face-to-face format, with the group format being the most cost-effective option. The current model can be used for economic evaluations of other interventions targeting CD and in other settings.

  3. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream
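
    A hedged sketch of the partial-pooling idea behind hierarchical regional coefficients, using simulated data and a simple empirical-Bayes shrinkage rule; this is an illustration of the principle, not the SPARROW model.

```python
import numpy as np

rng = np.random.default_rng(11)

n_regions = 18
true_mu, true_tau = 1.0, 0.3                 # national mean, regional spread
regional_truth = rng.normal(true_mu, true_tau, n_regions)
se = rng.uniform(0.05, 0.5, n_regions)       # per-region standard errors
estimate = rng.normal(regional_truth, se)    # noisy per-region estimates

mu_hat = np.average(estimate, weights=1 / se**2)         # pooled mean
tau2_hat = max(np.var(estimate) - np.mean(se**2), 1e-6)  # crude spread estimate

w = tau2_hat / (tau2_hat + se**2)            # noisier regions shrink more
pooled = w * estimate + (1 - w) * mu_hat

print("mean abs error, unpooled:", np.abs(estimate - regional_truth).mean())
print("mean abs error, pooled:  ", np.abs(pooled - regional_truth).mean())
```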

  4. A model-based cost-effectiveness analysis of osteoporosis screening and treatment strategy for postmenopausal Japanese women.

    Science.gov (United States)

    Yoshimura, M; Moriwaki, K; Noto, S; Takiguchi, T

    2017-02-01

    Although an osteoporosis screening program has been implemented as a health promotion project in Japan, its cost-effectiveness has yet to be elucidated fully. We performed a cost-effectiveness analysis and found that osteoporosis screening and treatment would be cost-effective for Japanese women over 60 years. The purpose of this study was to estimate the cost-effectiveness of osteoporosis screening and drug therapy in the Japanese healthcare system for postmenopausal women with no history of fracture. A patient-level state transition model was developed to predict the outcomes of Japanese women with no previous fracture. Lifetime costs and quality-adjusted life years (QALYs) were estimated for women who receive osteoporosis screening and alendronate therapy for 5 years and those who do not receive the screening and treatments. The incremental cost-effectiveness ratio (ICER) of the screening option compared with the no-screening option was estimated. Sensitivity analyses were performed to examine the influence of parameter uncertainty on the base case results. The ICERs of osteoporosis screening and treatments for Japanese women aged 50-54, 55-59, 60-64, 65-69, 70-74, and 75-79 years were estimated to be $89,242, $64,010, $40,596, $27,697, $17,027, and $9771 per QALY gained, respectively. Deterministic sensitivity analyses showed that several parameters, such as the disutility due to vertebral fracture, had a significant influence on the base case results. Applying a willingness-to-pay threshold of $50,000 per QALY gained, the probability that the screening option was cost-effective was estimated at 50.9%, 56.3%, 59.1%, and 64.7% for women aged 60-64, 65-69, 70-74, and 75-79 years, respectively. Scenario analyses showed that the ICER for women aged 55-59 years with at least one clinical risk factor was below $50,000 per QALY. In conclusion, dual energy X-ray absorptiometry (DXA) screening and alendronate therapy for osteoporosis would be cost-effective for
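
    A minimal cohort-level sketch of a state transition model of the kind described (well / fracture / dead), with illustrative transition probabilities, costs and utilities; the study's model is patient-level and far richer.

```python
import numpy as np

def run(p_fracture, screen_cost=0.0, cycles=30, disc=0.03):
    """Discounted cost and QALYs for a three-state Markov cohort."""
    cost_state = np.array([0.0, 2_000.0, 0.0])   # annual cost by state
    utility = np.array([0.85, 0.65, 0.0])        # annual utility by state
    p_death = (0.01, 0.03)                       # mortality: well, fracture
    dist = np.array([1.0, 0.0, 0.0])             # everyone starts well
    cost = qaly = 0.0
    for t in range(cycles):
        d = 1 / (1 + disc) ** t
        cost += d * (dist @ cost_state + dist[0] * screen_cost)
        qaly += d * (dist @ utility)
        P = np.array([[1 - p_fracture - p_death[0], p_fracture, p_death[0]],
                      [0.0, 1 - p_death[1], p_death[1]],
                      [0.0, 0.0, 1.0]])
        dist = dist @ P
    return cost, qaly

c0, q0 = run(p_fracture=0.020)                     # no screening
c1, q1 = run(p_fracture=0.012, screen_cost=250.0)  # screening + treatment
print(f"ICER = ${(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```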

  5. A Bayesian Nonparametric Meta-Analysis Model

    Science.gov (United States)

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  6. Cost-effective conservation of an endangered frog under uncertainty.

    Science.gov (United States)

    Rose, Lucy E; Heard, Geoffrey W; Chee, Yung En; Wintle, Brendan A

    2016-04-01

    How should managers choose among conservation options when resources are scarce and there is uncertainty regarding the effectiveness of actions? Well-developed tools exist for prioritizing areas for one-time and binary actions (e.g., protect vs. not protect), but methods for prioritizing incremental or ongoing actions (such as habitat creation and maintenance) remain uncommon. We devised an approach that combines metapopulation viability and cost-effectiveness analyses to select among alternative conservation actions while accounting for uncertainty. In our study, cost-effectiveness is the ratio between the benefit of an action and its economic cost, where benefit is the change in metapopulation viability. We applied the approach to the case of the endangered growling grass frog (Litoria raniformis), which is threatened by urban development. We extended a Bayesian model to predict metapopulation viability under 9 urbanization and management scenarios and incorporated the full probability distribution of possible outcomes for each scenario into the cost-effectiveness analysis. This allowed us to discern between cost-effective alternatives that were robust to uncertainty and those with a relatively high risk of failure. We found a relatively high risk of extinction following urbanization if the only action was reservation of core habitat; habitat creation actions performed better than enhancement actions; and cost-effectiveness ranking changed depending on the consideration of uncertainty. Our results suggest that creation and maintenance of wetlands dedicated to L. raniformis is the only cost-effective action likely to result in a sufficiently low risk of extinction. To our knowledge, this is the first study to use Bayesian metapopulation viability analysis to explicitly incorporate parametric and demographic uncertainty into a cost-effective evaluation of conservation actions. The approach offers guidance to decision makers aiming to achieve cost-effective

  7. AIC, BIC, Bayesian evidence against the interacting dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Szydlowski, Marek [Jagiellonian University, Astronomical Observatory, Krakow (Poland); Jagiellonian University, Mark Kac Complex Systems Research Centre, Krakow (Poland); Krawiec, Adam [Jagiellonian University, Institute of Economics, Finance and Management, Krakow (Poland); Jagiellonian University, Mark Kac Complex Systems Research Centre, Krakow (Poland); Kurek, Aleksandra [Jagiellonian University, Astronomical Observatory, Krakow (Poland); Kamionka, Michal [University of Wroclaw, Astronomical Institute, Wroclaw (Poland)

    2015-01-01

    Recent astronomical observations have indicated that the Universe is in a phase of accelerated expansion. While there are many cosmological models which try to explain this phenomenon, we focus on the interacting ΛCDM model, where an interaction between the dark energy and dark matter sectors takes place. This model is compared to its simpler alternative, the ΛCDM model. To choose between these models, the likelihood ratio test was applied as well as the model comparison methods (employing Occam's principle): the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the Bayesian evidence. Using the current astronomical data: type Ia supernova (Union2.1), h(z), baryon acoustic oscillation, the Alcock-Paczynski test, and the cosmic microwave background data, we evaluated both models. The analyses based on the AIC indicated that there is less support for the interacting ΛCDM model when compared to the ΛCDM model, while those based on the BIC indicated that there is strong evidence against it in favor of the ΛCDM model. Given the weak or almost non-existent support for the interacting ΛCDM model and bearing in mind Occam's razor, we are inclined to reject this model. (orig.)
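
    The criteria named above are simple functions of the maximized log-likelihood; a minimal sketch with made-up fit values (not the paper's numbers):

```python
from math import log

def aic(lnL, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * lnL

def bic(lnL, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * log(n) - 2 * lnL

# Illustrative: the interacting model gains little likelihood for its
# extra parameter (values are invented for the example).
lnL_lcdm, k_lcdm = -136.2, 2
lnL_int, k_int = -135.9, 3
n = 580                               # e.g. number of supernovae

print("dAIC =", aic(lnL_int, k_int) - aic(lnL_lcdm, k_lcdm))        # > 0: less support
print("dBIC =", bic(lnL_int, k_int, n) - bic(lnL_lcdm, k_lcdm, n))  # > 2: evidence against
```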

  10. Cost-effectiveness modeling of colorectal cancer: Computed tomography colonography vs colonoscopy or fecal occult blood tests

    International Nuclear Information System (INIS)

    Lucidarme, Olivier; Cadi, Mehdi; Berger, Genevieve; Taieb, Julien; Poynard, Thierry; Grenier, Philippe; Beresniak, Ariel

    2012-01-01

    Objectives: To assess the cost-effectiveness of three colorectal cancer (CRC) screening strategies in France: fecal occult blood tests (FOBT), computed tomography colonography (CTC) and optical colonoscopy (OC). Methods: Ten-year simulation modeling was used to assess a virtual asymptomatic, average-risk population 50-74 years old. Negative OC was repeated 10 years later, and OC positive for advanced or non-advanced adenoma 3 or 5 years later, respectively. FOBT was repeated biennially. Negative CTC was repeated 5 years later. Positive CTC and FOBT led to triennial OC. Total cost and CRC rate after 10 years were computed for each screening strategy and for adherence rates of 0-100% in 10% increments. Transition probabilities were programmed using distribution ranges to account for parameter uncertainty. Direct medical costs were estimated using the French national health insurance prices. Probabilistic sensitivity analyses used 5000 Monte Carlo simulations generating model outcomes and standard deviations. Results: For a given adherence rate, CTC screening was always the most effective but not the most cost-effective. FOBT was the least effective but most cost-effective strategy. OC was of intermediate efficacy and the least cost-effective strategy. Without screening, treatment of 123 CRC per 10,000 individuals would cost €3,444,000. For 60% adherence, the costs of preventing and treating, respectively, 49 and 74 FOBT-detected, 73 and 50 CTC-detected, and 63 and 60 OC-detected CRCs would be €2,810,000, €6,450,000 and €9,340,000. Conclusion: Simulation modeling helped to identify what would be the most effective (CTC) and most cost-effective (FOBT) strategy in the setting of mass CRC screening in France.

  11. Cost and cost effectiveness of long-lasting insecticide-treated bed nets - a model-based analysis

    Directory of Open Access Journals (Sweden)

    Pulkki-Brännström Anni-Maria

    2012-04-01

    Background: The World Health Organization recommends that national malaria programmes universally distribute long-lasting insecticide-treated bed nets (LLINs). LLINs provide effective insecticide protection for at least three years, while conventional nets must be retreated every 6-12 months. LLINs may also promise longer physical durability (lifespan), but at a higher unit price. No prospective data currently available are sufficient to calculate the comparative cost effectiveness of different net types. We thus constructed a model to explore the cost effectiveness of LLINs, asking how a longer lifespan affects the relative cost effectiveness of nets, and if, when and why LLINs might be preferred to conventional insecticide-treated nets. An innovation of our model is that we also considered the replenishment need, i.e. the loss of nets over time. Methods: We modelled the choice of net over a 10-year period to facilitate the comparison of nets with different lifespans (and/or prices) and replenishment needs over time. Our base case represents a large-scale programme which achieves high coverage and usage throughout the population by distributing either LLINs or conventional nets through existing health services, and retreats a large proportion of conventional nets regularly at low cost. We identified the determinants of bed net programme cost effectiveness and parameter values for usage rate, delivery and retreatment cost from the literature. One-way sensitivity analysis was conducted to explicitly compare the differential effect of changing parameters such as price, lifespan, usage and replenishment need. Results: If conventional and long-lasting bed nets have the same physical lifespan (3 years), LLINs are more cost effective unless they are priced at more than USD 1.5 above the price of conventional nets. Because a longer lifespan brings delivery cost savings, each one-year increase in lifespan can be accompanied by a USD 1 or more increase in price
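
    A minimal sketch of the lifespan/price/replenishment trade-off described above, annualizing hypothetical per-net costs over the 10-year horizon; every figure is an assumption, not one of the paper's parameters.

```python
# Cost per year of protection for one net slot kept filled for `horizon` years.
def cost_per_protected_year(price, lifespan, delivery=1.5,
                            retreat_per_year=0.0, annual_loss=0.08,
                            horizon=10):
    replacements = horizon / lifespan        # scheduled net purchases
    replenishment = annual_loss * horizon    # extra nets to cover losses
    total = ((replacements + replenishment) * (price + delivery)
             + retreat_per_year * horizon)
    return total / horizon

llin = cost_per_protected_year(price=5.5, lifespan=3)
conventional = cost_per_protected_year(price=4.0, lifespan=3, retreat_per_year=0.6)
print(f"LLIN: USD {llin:.2f}/yr  conventional ITN: USD {conventional:.2f}/yr")
```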

  12. Towards port sustainability through probabilistic models: Bayesian networks

    Directory of Open Access Journals (Sweden)

    B. Molina

    2018-04-01

    An infrastructure manager needs to know the relations between variables. Using Bayesian networks, variables can be classified, predicted and diagnosed, and the posterior probability of unknown variables can be estimated from known ones. The proposed methodology generated a database of port variables, classified as economic, social, environmental and institutional, as addressed in the smart-port studies carried out across the whole Spanish port system. The network was developed as a directed acyclic graph, which reveals relationships in terms of parents and children. In probabilistic terms, it can be concluded from the constructed network that the most decisive variables for port sustainability are those belonging to the institutional dimension. Bayesian networks allow uncertainty to be modelled probabilistically, even when the number of variables is high, as occurs in port planning and exploitation.

  13. Bayesian geostatistical modeling of leishmaniasis incidence in Brazil.

    Directory of Open Access Journals (Sweden)

    Dimitrios-Alexios Karagiannis-Voules

    BACKGROUND: Leishmaniasis is endemic in 98 countries, with an estimated 350 million people at risk and approximately 2 million cases annually. Brazil is one of the most severely affected countries. METHODOLOGY: We applied Bayesian geostatistical negative binomial models to analyze reported incidence data of cutaneous and visceral leishmaniasis in Brazil covering a 10-year period (2001-2010). Particular emphasis was placed on spatial and temporal patterns. The models were fitted using integrated nested Laplace approximations to perform fast approximate Bayesian inference. Bayesian variable selection was employed to determine the most important climatic, environmental, and socioeconomic predictors of cutaneous and visceral leishmaniasis. PRINCIPAL FINDINGS: For both types of leishmaniasis, precipitation and socioeconomic proxies were identified as important risk factors. The predicted numbers of cases in 2010 were 30,189 (standard deviation [SD]: 7,676) for cutaneous leishmaniasis and 4,889 (SD: 288) for visceral leishmaniasis. Our risk maps predicted the highest numbers of infected people in the states of Minas Gerais and Pará for visceral and cutaneous leishmaniasis, respectively. CONCLUSIONS/SIGNIFICANCE: Our spatially explicit, high-resolution incidence maps identified priority areas where leishmaniasis control efforts should be targeted, with the ultimate goal to reduce disease incidence.

  15. Cost-effectiveness of a school-based health promotion program in Canada: A life-course modeling approach.

    Science.gov (United States)

    Ekwaru, John Paul; Ohinmaa, Arto; Tran, Bach Xuan; Setayeshgar, Solmaz; Johnson, Jeffrey A; Veugelers, Paul J

    2017-01-01

    The Alberta Project Promoting active Living and healthy Eating in Schools (APPLE Schools) has been recognized as a "best practice" in preventing childhood obesity. To inform decision making on the economic implications of APPLE Schools and to justify investment, we evaluated the project's cost-effectiveness following a life-course approach. We developed a state transition model for the lifetime progression of body weight status comparing elementary school students attending APPLE Schools and control schools. This model quantified the lifetime impact of APPLE Schools in terms of prevention of excess body weight, chronic disease and improved quality-adjusted life years (QALY), from a school system's cost perspective. Both costs and health outcomes were discounted to their present value using a 3% discount rate. The incremental cost-effectiveness ratio (ICER) of APPLE Schools was CA$33,421 per QALY gained, and CA$1,555, CA$1,709 and CA$14,218 per prevented person-year of excess weight, obesity and chronic disease, respectively. These estimates show that APPLE Schools is cost effective at a threshold of ICER < CA$50,000. In probabilistic sensitivity analysis, APPLE Schools was cost effective more than 64% of the time per QALY gained, when using a threshold of ICER < CA$50,000. School-based health promotion, such as APPLE Schools, is a cost-effective intervention for obesity prevention and reduction of chronic disease risk over the lifetime. Expanding the coverage and allocating resources towards school-based programs like the APPLE Schools program is likely to reduce the public health burden of obesity and chronic diseases.
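
    Both records report ICERs computed from discounted costs and QALYs. A minimal sketch of that arithmetic, with invented per-person cost and QALY streams and the 3% discount rate mentioned in the abstract:

    ```python
    import numpy as np

    def present_value(stream, rate=0.03):
        """Discount a yearly stream of costs or QALYs to present value."""
        t = np.arange(len(stream))
        return np.sum(stream / (1 + rate) ** t)

    # Hypothetical 30-year per-person streams for intervention vs. control
    years = 30
    cost_int, cost_ctl = np.full(years, 120.0), np.full(years, 40.0)   # CA$/year
    qaly_int, qaly_ctl = np.full(years, 0.87), np.full(years, 0.85)    # QALYs/year

    d_cost = present_value(cost_int) - present_value(cost_ctl)
    d_qaly = present_value(qaly_int) - present_value(qaly_ctl)
    icer = d_cost / d_qaly
    print(f"ICER = CA${icer:,.0f} per QALY gained")  # compare to a CA$50,000 threshold
    ```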

  16. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga

    2015-01-01

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of dynamic indicator model for energy system development scenarios. • Expert judgement involvement using Bayesian method. - Abstract: This article presents a methodology for constructing a dynamic indicator model and forecasting indicators for the assessment of energy security level. An indicator is a special index that provides numerical values for factors important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other. Thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicator variation, taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of the changes brought by the new system are not exactly known, information about their influence on the indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model capturing the influence of the new factors on the indicators using the Bayesian method.
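
    The adjustment step described here, where a historically estimated forecast is updated with uncertain information about a new system, can be illustrated with a minimal conjugate normal-normal Bayesian update; all numbers below are invented for illustration.

    ```python
    # Normal-normal conjugate update: prior from the historical indicator model,
    # likelihood from an expert-informed estimate for the new system.
    prior_mean, prior_var = 0.72, 0.04 ** 2    # indicator forecast from history
    obs, obs_var = 0.65, 0.06 ** 2             # expert-adjusted estimate

    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    print(post_mean, post_var ** 0.5)          # updated indicator and uncertainty
    ```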

  17. Bayesian model discrimination for glucose-insulin homeostasis

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Brooks, Stephen P.; Højbjerre, Malene

    In this paper we analyse a set of experimental data on a number of healthy and diabetic patients and discuss a variety of models for describing the physiological processes involved in glucose absorption and insulin secretion within the human body. We adopt a Bayesian approach which facilitates...... as parameter uncertainty. Markov chain Monte Carlo methods are used, combining Metropolis Hastings, reversible jump and simulated tempering updates to provide rapidly mixing chains so as to provide robust inference. We demonstrate the methodology for both healthy and type II diabetic populations concluding...... that whilst both populations are well modelled by a common insulin model, their glucose dynamics differ considerably....

  18. Bayesian modeling to paired comparison data via the Pareto distribution

    Directory of Open Access Journals (Sweden)

    Nasir Abbas

    2017-12-01

    Full Text Available A probabilistic approach to building models for paired comparison experiments based on the comparison of two Pareto variables is considered. Analysis of the proposed model is carried out in classical as well as Bayesian frameworks. Informative and uninformative priors are employed to accommodate the prior information. A simulation study is conducted to assess the suitability and performance of the model under theoretical conditions. The appropriateness of fit of the model is also assessed. The entire inferential procedure is illustrated by comparing certain cricket teams using a real dataset.

  19. Modelling Common Agricultural Policy-Water Framework Directive interactions and cost-effectiveness of measures to reduce nitrogen pollution.

    Science.gov (United States)

    Mouratiadou, Ioanna; Russell, Graham; Topp, Cairistiona; Louhichi, Kamel; Moran, Dominic

    2010-01-01

    Selecting cost-effective measures to regulate agricultural water pollution to conform to the Water Framework Directive presents multiple challenges. A bio-economic modelling approach is presented that has been used to explore the water quality and economic effects of the 2003 Common Agricultural Policy Reform and to assess the cost-effectiveness of input quotas and emission standards against nitrate leaching, in a representative case study catchment in Scotland. The approach combines a biophysical model (NDICEA) with a mathematical programming model (FSSIM-MP). The results indicate only small changes due to the Reform, with the main changes in farmers' decision making and the associated economic and water quality indicators depending on crop price changes, and suggest the use of target fertilisation in relation to crop and soil requirements, as opposed to measures targeting farm total or average nitrogen use.

  20. Bayesian inference and model comparison for metallic fatigue data

    KAUST Repository

    Babuška, Ivo

    2016-02-23

    In this work, we present a statistical treatment of stress-life (S-N) data drawn from a collection of records of fatigue experiments that were performed on 75S-T6 aluminum alloys. Our main objective is to predict the fatigue life of materials by providing a systematic approach to model calibration, model selection and model ranking with reference to S-N data. To this purpose, we consider fatigue-limit models and random fatigue-limit models that are specially designed to allow the treatment of the run-outs (right-censored data). We first fit the models to the data by maximum likelihood methods and estimate the quantiles of the life distribution of the alloy specimen. To assess the robustness of the estimation of the quantile functions, we obtain bootstrap confidence bands by stratified resampling with respect to the cycle ratio. We then compare and rank the models by classical measures of fit based on information criteria. We also consider a Bayesian approach that provides, under the prior distribution of the model parameters selected by the user, their simulation-based posterior distributions. We implement and apply Bayesian model comparison methods, such as Bayes factor ranking and predictive information criteria based on cross-validation techniques under various a priori scenarios.
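
    The record's maximum likelihood step treats run-outs as right-censored observations. A minimal sketch of that likelihood with synthetic fatigue-life data: a lognormal life model whose log-likelihood combines the density for failures with the survival function for run-outs, summarized by AIC/BIC for model ranking. Data, censoring limit, and starting values are invented.

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(0)
    life = rng.lognormal(mean=12.0, sigma=0.4, size=60)   # synthetic cycles to failure
    limit = 2.0e5                                         # run-out (censoring) limit
    cens = life > limit
    obs = np.where(cens, limit, life)

    def neg_loglik(theta):
        mu, log_sig = theta
        sig = np.exp(log_sig)                             # keep sigma positive
        ll = stats.lognorm.logpdf(obs[~cens], s=sig, scale=np.exp(mu)).sum()
        ll += stats.lognorm.logsf(obs[cens], s=sig, scale=np.exp(mu)).sum()
        return -ll

    res = optimize.minimize(neg_loglik, x0=[11.0, 0.0], method="Nelder-Mead")
    k, n = 2, len(obs)
    aic = 2 * k + 2 * res.fun
    bic = k * np.log(n) + 2 * res.fun
    print(res.x, aic, bic)    # rank competing life models by AIC/BIC
    ```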

  1. Bayesian inference and model comparison for metallic fatigue data

    KAUST Repository

    Babuška, Ivo; Sawlan, Zaid A; Scavino, Marco; Szabó, Barna; Tempone, Raul

    2016-01-01

    In this work, we present a statistical treatment of stress-life (S-N) data drawn from a collection of records of fatigue experiments that were performed on 75S-T6 aluminum alloys. Our main objective is to predict the fatigue life of materials by providing a systematic approach to model calibration, model selection and model ranking with reference to S-N data. To this purpose, we consider fatigue-limit models and random fatigue-limit models that are specially designed to allow the treatment of the run-outs (right-censored data). We first fit the models to the data by maximum likelihood methods and estimate the quantiles of the life distribution of the alloy specimen. To assess the robustness of the estimation of the quantile functions, we obtain bootstrap confidence bands by stratified resampling with respect to the cycle ratio. We then compare and rank the models by classical measures of fit based on information criteria. We also consider a Bayesian approach that provides, under the prior distribution of the model parameters selected by the user, their simulation-based posterior distributions. We implement and apply Bayesian model comparison methods, such as Bayes factor ranking and predictive information criteria based on cross-validation techniques under various a priori scenarios.

  2. Predicting coastal cliff erosion using a Bayesian probabilistic model

    Science.gov (United States)

    Hapke, Cheryl J.; Plant, Nathaniel G.

    2010-01-01

    Regional coastal cliff retreat is difficult to model due to the episodic nature of failures and the along-shore variability of retreat events. There is a growing demand, however, for predictive models that can be used to forecast areas vulnerable to coastal erosion hazards. Increasingly, probabilistic models are being employed that require data sets of high temporal density to define the joint probability density function that relates forcing variables (e.g. wave conditions) and initial conditions (e.g. cliff geometry) to erosion events. In this study we use a multi-parameter Bayesian network to investigate correlations between key variables that control and influence variations in cliff retreat processes. The network uses Bayesian statistical methods to estimate event probabilities using existing observations. Within this framework, we forecast the spatial distribution of cliff retreat along two stretches of cliffed coast in Southern California. The input parameters are the height and slope of the cliff, a descriptor of material strength based on the dominant cliff-forming lithology, and the long-term cliff erosion rate that represents prior behavior. The model is forced using predicted wave impact hours. Results demonstrate that the Bayesian approach is well-suited to the forward modeling of coastal cliff retreat, with the correct outcomes forecast in 70–90% of the modeled transects. The model also performs well in identifying specific locations of high cliff erosion, thus providing a foundation for hazard mapping. This approach can be employed to predict cliff erosion at time-scales ranging from storm events to the impacts of sea-level rise at the century-scale.
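
    As an illustration of the kind of Bayesian network the record describes, the sketch below enumerates a toy two-parent network (cliff material strength and wave impact conditioning a retreat event); the structure and all probabilities are invented, not taken from the study.

    ```python
    import itertools

    # P(strength), P(waves), and P(retreat | strength, waves) -- invented numbers
    p_strength = {"weak": 0.4, "strong": 0.6}
    p_waves = {"high": 0.3, "low": 0.7}
    p_retreat = {("weak", "high"): 0.8, ("weak", "low"): 0.3,
                 ("strong", "high"): 0.4, ("strong", "low"): 0.05}

    # P(strength = weak | retreat observed) by full enumeration
    num = den = 0.0
    for s, w in itertools.product(p_strength, p_waves):
        joint = p_strength[s] * p_waves[w] * p_retreat[(s, w)]
        den += joint
        if s == "weak":
            num += joint
    print(num / den)   # posterior probability the cliff material is weak
    ```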

  3. DPpackage: Bayesian Semi- and Nonparametric Modeling in R

    Directory of Open Access Journals (Sweden)

    Alejandro Jara

    2011-04-01

    Full Text Available Data analysis sometimes requires the relaxation of parametric assumptions in order to gain modeling flexibility and robustness against mis-specification of the probability model. In the Bayesian context, this is accomplished by placing a prior distribution on a function space, such as the space of all probability distributions or the space of all regression functions. Unfortunately, posterior distributions ranging over function spaces are highly complex and hence sampling methods play a key role. This paper provides an introduction to a simple, yet comprehensive, set of programs for the implementation of some Bayesian nonparametric and semiparametric models in R, DPpackage. Currently, DPpackage includes models for marginal and conditional density estimation, receiver operating characteristic curve analysis, interval-censored data, binary regression data, item response data, longitudinal and clustered data using generalized linear mixed models, and regression data using generalized additive models. The package also contains functions to compute pseudo-Bayes factors for model comparison and for eliciting the precision parameter of the Dirichlet process prior, and a general purpose Metropolis sampling algorithm. To maximize computational efficiency, the actual sampling for each model is carried out using compiled C, C++ or Fortran code.

  4. Cost-effectiveness of enhanced syphilis screening among HIV-positive men who have sex with men: a microsimulation model.

    Directory of Open Access Journals (Sweden)

    Ashleigh R Tuite

    Full Text Available Syphilis co-infection risk has increased substantially among HIV-infected men who have sex with men (MSM). Frequent screening for syphilis and treatment of men who test positive might be a practical means of controlling the risk of infection and disease sequelae in this population. We evaluated the cost-effectiveness of strategies that increased the frequency and population coverage of syphilis screening in HIV-infected MSM receiving HIV care, relative to current standard of care. We developed a state-transition microsimulation model of syphilis natural history and medical care in HIV-infected MSM receiving care for HIV. We performed Monte Carlo simulations using input data derived from a large observational cohort in Ontario, Canada, and from published biomedical literature. Simulations compared usual care (57% of the population screened annually) to different combinations of more frequent (3- or 6-monthly) screening and higher coverage (100% screened). We estimated expected disease-specific outcomes, quality-adjusted survival, costs, and cost-effectiveness associated with each strategy from the perspective of a public health care payer. Usual care was more costly and less effective than strategies with more frequent or higher coverage screening. Higher coverage strategies (with screening frequency of 3 or 6 months) were expected to be cost-effective based on usually cited willingness-to-pay thresholds. These findings were robust in the face of probabilistic sensitivity analyses, alternate cost-effectiveness thresholds, and alternate assumptions about duration of risk, program characteristics, and management of underlying HIV. We project that higher coverage and more frequent syphilis screening of HIV-infected MSM would be a highly cost-effective health intervention, with many potentially viable screening strategies projected to both save costs and improve health when compared to usual care. The baseline requirement for regular blood testing in this

  5. Cost-Effectiveness of Enhanced Syphilis Screening among HIV-Positive Men Who Have Sex with Men: A Microsimulation Model

    Science.gov (United States)

    Tuite, Ashleigh R.; Burchell, Ann N.; Fisman, David N.

    2014-01-01

    Background Syphilis co-infection risk has increased substantially among HIV-infected men who have sex with men (MSM). Frequent screening for syphilis and treatment of men who test positive might be a practical means of controlling the risk of infection and disease sequelae in this population. Purpose We evaluated the cost-effectiveness of strategies that increased the frequency and population coverage of syphilis screening in HIV-infected MSM receiving HIV care, relative to current standard of care. Methods We developed a state-transition microsimulation model of syphilis natural history and medical care in HIV-infected MSM receiving care for HIV. We performed Monte Carlo simulations using input data derived from a large observational cohort in Ontario, Canada, and from published biomedical literature. Simulations compared usual care (57% of the population screened annually) to different combinations of more frequent (3- or 6-monthly) screening and higher coverage (100% screened). We estimated expected disease-specific outcomes, quality-adjusted survival, costs, and cost-effectiveness associated with each strategy from the perspective of a public health care payer. Results Usual care was more costly and less effective than strategies with more frequent or higher coverage screening. Higher coverage strategies (with screening frequency of 3 or 6 months) were expected to be cost-effective based on usually cited willingness-to-pay thresholds. These findings were robust in the face of probabilistic sensitivity analyses, alternate cost-effectiveness thresholds, and alternate assumptions about duration of risk, program characteristics, and management of underlying HIV. Conclusions We project that higher coverage and more frequent syphilis screening of HIV-infected MSM would be a highly cost-effective health intervention, with many potentially viable screening strategies projected to both save costs and improve health when compared to usual care. The baseline requirement
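
    A minimal sketch of the state-transition microsimulation logic described in both records: individuals move yearly between susceptible and infected states, screening detects infections with some coverage probability, and per-person costs and QALYs accumulate. All transition probabilities, costs, and utilities below are invented stand-ins, not the study's inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n, years = 10_000, 10
    p_infect, p_screen = 0.05, 0.57          # yearly infection risk; screening coverage
    cost_screen, cost_treat = 25.0, 400.0
    u_healthy, u_infected = 1.0, 0.9

    infected = np.zeros(n, dtype=bool)
    costs = np.zeros(n)
    qalys = np.zeros(n)
    for _ in range(years):
        infected |= rng.random(n) < p_infect                 # new infections
        screened = rng.random(n) < p_screen                  # who gets screened
        treated = infected & screened
        costs += cost_screen * screened + cost_treat * treated
        infected[treated] = False                            # cure after treatment
        qalys += np.where(infected, u_infected, u_healthy)

    print(costs.mean(), qalys.mean())   # inputs to an ICER vs. a comparator strategy
    ```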

  6. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into the prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied to assess the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of the model prediction.

  7. Nonparametric Bayesian models for a spatial covariance.

    Science.gov (United States)

    Reich, Brian J; Fuentes, Montserrat

    2012-01-01

    A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.

  8. Two Bayesian tests of the GLOMOsys Model.

    Science.gov (United States)

    Field, Sarahanne M; Wagenmakers, Eric-Jan; Newell, Ben R; Zeelenberg, René; van Ravenzwaaij, Don

    2016-12-01

    Priming is arguably one of the key phenomena in contemporary social psychology. Recent retractions and failed replication attempts have led to a division in the field between proponents and skeptics and have reinforced the importance of confirming certain priming effects through replication. In this study, we describe the results of 2 preregistered replication attempts of 1 experiment by Förster and Denzler (2012). In both experiments, participants first processed letters either globally or locally, then were tested using a typicality rating task. Bayes factor hypothesis tests were conducted for both experiments: Experiment 1 (N = 100) yielded an indecisive Bayes factor of 1.38, indicating that the in-lab data are 1.38 times more likely to have occurred under the null hypothesis than under the alternative. Experiment 2 (N = 908) yielded a Bayes factor of 10.84, indicating strong support for the null hypothesis that global priming does not affect participants' mean typicality ratings. The failure to replicate this priming effect challenges existing support for the GLOMOsys model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
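
    The record reports Bayes factors comparing a null against an alternative hypothesis. One common rough device for such comparisons (not the preregistered analysis the authors used) is the BIC approximation BF01 ≈ exp((BIC1 − BIC0) / 2); a sketch with simulated two-group ratings in which the null is true:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    a = rng.normal(5.0, 1.0, 100)            # typicality ratings, global condition
    b = rng.normal(5.0, 1.0, 100)            # local condition (null is true here)
    y = np.concatenate([a, b])

    def bic(ll, k, n):
        """Bayesian Information Criterion from a maximized log-likelihood."""
        return k * np.log(n) - 2 * ll

    # H0: one common mean/sd; H1: separate means and sds per group
    ll0 = stats.norm.logpdf(y, y.mean(), y.std()).sum()
    ll1 = (stats.norm.logpdf(a, a.mean(), a.std()).sum()
           + stats.norm.logpdf(b, b.mean(), b.std()).sum())
    bf01 = np.exp((bic(ll1, 4, len(y)) - bic(ll0, 2, len(y))) / 2)
    print(bf01)   # values > 1 favor the null, as in the reported Experiment 2
    ```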

  9. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    Science.gov (United States)

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
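
    The model described in this record summarizes 10,000 simulated cohort pairs into cost-effectiveness measures. A hedged sketch of one standard summary of such output, the cost-effectiveness acceptability curve, computed from hypothetical draws of incremental costs and QALYs:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_sim = 10_000
    d_cost = rng.normal(3_000, 1_500, n_sim)      # incremental cost per patient
    d_qaly = rng.normal(0.10, 0.06, n_sim)        # incremental QALYs per patient

    thresholds = np.arange(0, 200_001, 25_000)    # willingness to pay per QALY
    # The program counts as "cost-effective" when net monetary benefit is positive
    for t in thresholds:
        p = (t * d_qaly - d_cost > 0).mean()
        print(f"WTP ${t:>7,}: P(cost-effective) = {p:.2f}")
    ```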

  10. Modeling Women's Menstrual Cycles using PICI Gates in Bayesian Network.

    Science.gov (United States)

    Zagorecki, Adam; Łupińska-Dubicka, Anna; Voortman, Mark; Druzdzel, Marek J

    2016-03-01

    A major difficulty in building Bayesian network (BN) models is the size of conditional probability tables, which grow exponentially in the number of parents. One way of dealing with this problem is through parametric conditional probability distributions that usually require only a number of parameters that is linear in the number of parents. In this paper, we introduce a new class of parametric models, the Probabilistic Independence of Causal Influences (PICI) models, that aim at lowering the number of parameters required to specify local probability distributions, but are still capable of efficiently modeling a variety of interactions. A subset of PICI models is decomposable and this leads to significantly faster inference as compared to models that cannot be decomposed. We present an application of the proposed method to learning dynamic BNs for modeling a woman's menstrual cycle. We show that PICI models are especially useful for parameter learning from small data sets and lead to higher parameter accuracy than when learning CPTs.
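
    A noisy-OR gate is one classic member of the family of models with independence of causal influences: the child's conditional probability table is specified with one parameter per parent plus a leak term, rather than a table exponential in the number of parents. A minimal sketch with invented parent names and parameters:

    ```python
    import itertools
    import numpy as np

    # P(effect | each parent active alone) and a leak probability -- all invented
    p_single = {"stress": 0.7, "hormone_shift": 0.5, "illness": 0.3}
    leak = 0.05   # probability of the effect with no active parent

    def noisy_or(active):
        """P(effect = true | set of active parent causes) under noisy-OR."""
        q = (1 - leak) * np.prod([1 - p_single[a] for a in active])
        return 1 - q

    # Full CPT recovered from 3 + 1 parameters instead of 2**3 free entries
    for combo in itertools.product([False, True], repeat=3):
        active = [p for p, on in zip(p_single, combo) if on]
        print(combo, round(noisy_or(active), 3))
    ```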

  11. Modelling the cost-effectiveness of public awareness campaigns for the early detection of non-small-cell lung cancer.

    Science.gov (United States)

    Hinde, S; McKenna, C; Whyte, S; Peake, M D; Callister, M E J; Rogers, T; Sculpher, M

    2015-06-30

    Survival rates in lung cancer in England are significantly lower than in many similar countries. A range of Be Clear on Cancer (BCOC) campaigns have been conducted targeting lung cancer and found to improve the proportion of diagnoses at the early stage of disease. This paper considers the cost-effectiveness of such campaigns, evaluating the effect of both the regional and national BCOC campaigns on the stage distribution of non-small-cell lung cancer (NSCLC) at diagnosis. A natural history model of NSCLC was developed using incidence data, data elicited from clinical experts and model calibration techniques. This structure is used to consider the lifetime cost and quality-adjusted survival implications of the early awareness campaigns. Incremental cost-effectiveness ratios (ICERs) in terms of additional costs per quality-adjusted life-years (QALYs) gained are presented. Two scenario analyses were conducted to investigate the role of changes in the 'worried-well' population and the route of diagnosis that might occur as a result of the campaigns. The base-case theoretical model found the regional and national early awareness campaigns to be associated with QALY gains of 289 and 178 QALYs and ICERs of £13 660 and £18 173 per QALY gained, respectively. The scenarios found that increases in the 'worried-well' population may impact the cost-effectiveness conclusions. Subject to the available evidence, the analysis suggests that early awareness campaigns in lung cancer have the potential to be cost-effective. However, significant additional research is required to address many of the limitations of this study. In addition, the estimated natural history model presents previously unavailable estimates of the prevalence and rate of disease progression in the undiagnosed population.

  12. Computed tomographic colonography to screen for colorectal cancer, extracolonic cancer, and aortic aneurysm: model simulation with cost-effectiveness analysis.

    Science.gov (United States)

    Hassan, Cesare; Pickhardt, Perry J; Laghi, Andrea; Kim, Daniel H; Zullo, Angelo; Iafrate, Franco; Di Giulio, Lorenzo; Morini, Sergio

    2008-04-14

    In addition to detecting colorectal neoplasia, abdominal computed tomography (CT) with colonography technique (CTC) can also detect unsuspected extracolonic cancers and abdominal aortic aneurysms (AAA). The efficacy and cost-effectiveness of this combined abdominal CT screening strategy are unknown. A computerized Markov model was constructed to simulate the occurrence of colorectal neoplasia, extracolonic malignant neoplasm, and AAA in a hypothetical cohort of 100,000 subjects from the United States who were 50 years of age. Simulated screening with CTC, using a 6-mm polyp size threshold for reporting, was compared with a competing model of optical colonoscopy (OC), both without and with abdominal ultrasonography for AAA detection (OC-US strategy). In the simulated population, CTC was the dominant screening strategy, gaining an additional 1458 and 462 life-years compared with the OC and OC-US strategies and being less costly, with a savings of $266 and $449 per person, respectively. The additional gains for CTC were largely due to a decrease in AAA-related deaths, whereas the modeled benefit from extracolonic cancer downstaging was a relatively minor factor. At sensitivity analysis, OC-US became more cost-effective only when the CTC sensitivity for large polyps dropped to 61% or when broad variations of costs were simulated, such as an increase in CTC cost from $814 to $1300 or a decrease in OC cost from $1100 to $500. With the OC-US approach, suboptimal compliance had a strong negative influence on efficacy and cost-effectiveness. The estimated mortality from CT-induced cancer was less than estimated colonoscopy-related mortality (8 vs 22 deaths), both of which were minor compared with the positive benefit from screening. When detection of extracolonic findings such as AAA and extracolonic cancer are considered in addition to colorectal neoplasia in our model simulation, CT colonography is a dominant screening strategy (ie, more clinically effective and more cost-effective).
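
    A hedged sketch of the Markov cohort machinery underlying such a simulation: a cohort is pushed through a transition matrix over annual cycles while discounted costs and life-years accumulate. The states, probabilities, and costs below are invented, not taken from the study.

    ```python
    import numpy as np

    # States: 0 healthy, 1 neoplasia, 2 cancer, 3 dead (invented values)
    P = np.array([[0.95, 0.03, 0.01, 0.01],
                  [0.00, 0.90, 0.07, 0.03],
                  [0.00, 0.00, 0.85, 0.15],
                  [0.00, 0.00, 0.00, 1.00]])
    state_cost = np.array([100.0, 800.0, 25_000.0, 0.0])   # $/year in each state
    alive = np.array([1.0, 1.0, 1.0, 0.0])                 # life-year credit

    cohort = np.array([100_000.0, 0.0, 0.0, 0.0])
    total_cost = total_ly = 0.0
    for year in range(30):
        disc = 1.03 ** -year                               # 3% discounting
        total_cost += disc * cohort @ state_cost
        total_ly += disc * cohort @ alive
        cohort = cohort @ P                                # advance one cycle
    print(total_cost / 100_000, total_ly / 100_000)        # per-person averages
    ```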

  13. Robust Bayesian Experimental Design for Conceptual Model Discrimination

    Science.gov (United States)

    Pham, H. V.; Tsai, F. T. C.

    2015-12-01

    A robust Bayesian optimal experimental design under uncertainty is presented to provide firm information for model discrimination, given the least number of pumping wells and observation wells. Firm information is the maximum information about a system that can be guaranteed from an experimental design. The design is based on the Box-Hill expected entropy decrease (EED) between before and after the experiment and on the Bayesian model averaging (BMA) framework. A max-min programming problem is introduced to choose the robust design that maximizes the minimal Box-Hill EED, subject to the constraint that the highest expected posterior model probability satisfies a desired probability threshold. The EED is calculated by Gauss-Hermite quadrature. The BMA method is used to predict future observations and to quantify future observation uncertainty arising from conceptual and parametric uncertainties when calculating the EED. A Monte Carlo approach is adopted to quantify the uncertainty in the posterior model probabilities. The optimal experimental design is tested on a synthetic 5-layer anisotropic confined aquifer. Nine conceptual groundwater models are constructed due to uncertain geological architecture and boundary conditions. High-performance computing is used to enumerate all possible design solutions in order to identify the most plausible groundwater model. Results highlight the impacts of scedasticity in future observation data as well as of uncertainty sources on potential pumping and observation locations.

  14. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and the consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables forecasting the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.

  15. Modeling the cost-effectiveness of infant vaccination with pneumococcal conjugate vaccines in Germany.

    Science.gov (United States)

    Kuhlmann, Alexander; von der Schulenburg, J-Matthias Graf

    2017-04-01

    In 2009, the European Medicines Agency granted approval for two higher-valent pneumococcal conjugate vaccines. This study aims to evaluate the cost-effectiveness of universal infant vaccination with PCV13 compared with PCV10, taking into account the historical vaccination scheme in infants as well as indirect herd effects and replacement disease. We used German epidemiological data to calculate episodes of IPD, PNE, and AOM, as well as direct and indirect effects of the vaccination. Parameter uncertainty was tested in univariate and probabilistic sensitivity analyses. In the base-case analysis, the ICER of PCV13 versus PCV10 infant vaccination was EUR 9826 per quality-adjusted life-year (QALY) gained or EUR 5490 per life-year (LY) gained from the societal perspective, and EUR 3368 per QALY gained or EUR 1882 per LY gained from the perspective of the German statutory health insurance. The results were particularly sensitive to the magnitude of indirect effects of both vaccines. Universal infant vaccination with PCV13 is likely to be a cost-effective intervention compared with PCV10 within the German health care system, if additional net indirect effects of PCV13 vaccination are significant.

  16. A predictive ligand-based Bayesian model for human drug-induced liver injury.

    Science.gov (United States)

    Ekins, Sean; Williams, Antony J; Xu, Jinghai J

    2010-12-01

    Drug-induced liver injury (DILI) is one of the most important reasons for drug development failure at both preapproval and postapproval stages. There has been increased interest in developing predictive in vivo, in vitro, and in silico models to identify compounds that cause idiosyncratic hepatotoxicity. In the current study, we applied machine learning, namely a Bayesian modeling method with extended connectivity fingerprints and other interpretable descriptors. The model that was developed and internally validated (using a training set of 295 compounds) was then applied to a test set that was large relative to the training set (237 compounds) for external validation. The resulting concordance of 60%, sensitivity of 56%, and specificity of 67% were comparable to results for internal validation. The Bayesian model with extended connectivity functional class fingerprints of maximum diameter 6 (ECFC_6) and interpretable descriptors suggested several substructures that are chemically reactive and may also be important for DILI-causing compounds, e.g., ketones, diols, and α-methyl styrene type structures. Using Smiles Arbitrary Target Specification (SMARTS) filters published by several pharmaceutical companies, we evaluated whether such reactive substructures could be readily detected by any of the published filters. It was apparent that the most stringent filters used in this study, such as the Abbott alerts, which capture thiol traps and other compounds, may be of use in identifying DILI-causing compounds (sensitivity 67%). A significant outcome of the present study is that we provide predictions for many compounds that cause DILI by using the knowledge we have available from previous studies. These computational models may represent cost-effective selection criteria before in vitro or in vivo experimental studies.
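
    The record's classifier is a Bayesian model over binary structural fingerprints. A minimal stand-in with the same flavor, using scikit-learn's Bernoulli naive Bayes on random binary "fingerprint" bits and synthetic labels (the real model used ECFC_6 descriptors and curated DILI annotations); only the 295/237 split sizes are taken from the abstract.

    ```python
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(532, 256))        # stand-in fingerprint bits
    w = rng.normal(size=256)
    score = X @ w + rng.normal(0, 4, 532)
    y = (score > np.median(score)).astype(int)     # stand-in DILI labels

    # Mirror the study's split sizes: 295 training, 237 external test compounds
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=295, random_state=1)
    clf = BernoulliNB(alpha=1.0).fit(X_tr, y_tr)   # alpha=1.0 is Laplace smoothing
    pred = clf.predict(X_te)

    sens = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()
    spec = ((pred == 0) & (y_te == 0)).sum() / (y_te == 0).sum()
    print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```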

  17. Modeling operational risks of the nuclear industry with Bayesian networks

    International Nuclear Information System (INIS)

    Wieland, Patricia; Lustosa, Leonardo J.

    2009-01-01

    Basically, planning a new industrial plant requires information on the industrial management, regulations, site selection, definition of initial and planned capacity, and on the estimation of the potential demand. However, this is far from enough to assure the success of an industrial enterprise. Unexpected and extremely damaging events may occur that deviate from the original plan. The so-called operational risks are not only in the system, equipment, process or human (technical or managerial) failures. They are also in intentional events such as frauds and sabotage, or extreme events like terrorist attacks or radiological accidents, and even in public reaction to perceived environmental or future-generation impacts. For the nuclear industry, it is a challenge to identify and to assess the operational risks and their various sources. Early identification of operational risks can help in preparing contingency plans, to delay the decision to invest, or to approve a project that can, at an extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of internal data, which are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example, fuzzy logic, neural networks, system dynamics or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility to include expert opinions and variables of interest, to structure the model via causal dependencies among these variables, and to specify subjective prior and conditional probability distributions at each step or network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian networks approach to model those risks. (author)

  18. Modeling operational risks of the nuclear industry with Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Wieland, Patricia [Pontificia Univ. Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Industrial; Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)], e-mail: pwieland@cnen.gov.br; Lustosa, Leonardo J. [Pontificia Univ. Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Industrial], e-mail: ljl@puc-rio.br

    2009-07-01

    Basically, planning a new industrial plant requires information on the industrial management, regulations, site selection, definition of initial and planned capacity, and on the estimation of the potential demand. However, this is far from enough to assure the success of an industrial enterprise. Unexpected and extremely damaging events may occur that deviate from the original plan. The so-called operational risks are not only in the system, equipment, process or human (technical or managerial) failures. They are also in intentional events such as frauds and sabotage, or extreme events like terrorist attacks or radiological accidents, and even in public reaction to perceived environmental or future-generation impacts. For the nuclear industry, it is a challenge to identify and to assess the operational risks and their various sources. Early identification of operational risks can help in preparing contingency plans, to delay the decision to invest, or to approve a project that can, at an extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of internal data, which are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example, fuzzy logic, neural networks, system dynamics or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility to include expert opinions and variables of interest, to structure the model via causal dependencies among these variables, and to specify subjective prior and conditional probability distributions at each step or network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian networks approach to model those risks. (author)

  19. Bayesian modeling of the mass and density of asteroids

    Science.gov (United States)

    Dotson, Jessie L.; Mathias, Donovan

    2017-10-01

    Mass and density are two of the fundamental properties of any object. In the case of near earth asteroids, knowledge about the mass of an asteroid is essential for estimating the risk due to (potential) impact and planning possible mitigation options. The density of an asteroid can illuminate the structure of the asteroid. A low density can be indicative of a rubble pile structure whereas a higher density can imply a monolith and/or higher metal content. The damage resulting from an impact of an asteroid with Earth depends on its interior structure in addition to its total mass, and as a result, density is a key parameter to understanding the risk of asteroid impact. Unfortunately, measuring the mass and density of asteroids is challenging and often results in measurements with large uncertainties. In the absence of mass / density measurements for a specific object, understanding the range and distribution of likely values can facilitate probabilistic assessments of structure and impact risk. Hierarchical Bayesian models have recently been developed to investigate the mass - radius relationship of exoplanets (Wolfgang, Rogers & Ford 2016) and to probabilistically forecast the mass of bodies large enough to establish hydrostatic equilibrium over a range of 9 orders of magnitude in mass (from planemos to main sequence stars; Chen & Kipping 2017). Here, we extend this approach to investigate the mass and densities of asteroids. Several candidate Bayesian models are presented, and their performance is assessed relative to a synthetic asteroid population. In addition, a preliminary Bayesian model for probablistically forecasting masses and densities of asteroids is presented. The forecasting model is conditioned on existing asteroid data and includes observational errors, hyper-parameter uncertainties and intrinsic scatter.
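
    A hedged sketch of the hierarchical idea in this record: individual asteroid densities are drawn from a population distribution, each is measured with its own (often large) error, and the population parameters are recovered by sampling. All numbers below are synthetic, not real asteroid data, and the population spread is assumed known to keep the Gibbs sampler short.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 40
    mu_true, tau = 2.5, 0.6                     # population mean/sd (g/cm^3)
    rho = rng.normal(mu_true, tau, n)           # true densities
    err = rng.uniform(0.2, 1.0, n)              # per-object measurement sd
    d = rng.normal(rho, err)                    # noisy measurements

    # Gibbs sampler for (true densities, population mean); tau assumed known
    mu = d.mean()
    mus = []
    for _ in range(4000):
        v = 1 / (1 / tau**2 + 1 / err**2)       # conditional var of each density
        m = v * (mu / tau**2 + d / err**2)
        rho_s = rng.normal(m, np.sqrt(v))       # draw true densities
        mu = rng.normal(rho_s.mean(), tau / np.sqrt(n))   # draw population mean
        mus.append(mu)
    print(np.mean(mus[1000:]))                  # close to mu_true despite noise
    ```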

  20. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.
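
    The record extends a deterministic transpiration model with a normally distributed error term and samples the posterior by MCMC. For a one-parameter toy version of a Jarvis-type conductance response, the same posterior can even be evaluated on a grid, which makes the role of the error term explicit; the data, model form, and noise level below are invented.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    vpd = rng.uniform(0.5, 3.0, 120)                  # vapor pressure deficit, kPa
    g_true = 8.0 * np.exp(-0.6 * vpd)                 # 'true' canopy conductance
    g_obs = g_true + rng.normal(0, 0.5, 120)          # normally distributed error

    # Posterior over the Jarvis-type sensitivity k on a grid (flat prior)
    k_grid = np.linspace(0.2, 1.0, 400)
    loglik = np.array([stats.norm.logpdf(g_obs, 8.0 * np.exp(-k * vpd), 0.5).sum()
                       for k in k_grid])
    post = np.exp(loglik - loglik.max())
    post /= np.trapz(post, k_grid)                    # normalize the posterior
    print(k_grid[np.argmax(post)])                    # posterior mode near 0.6
    ```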

  1. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
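
    A minimal sketch of the averaging step the record describes, using the common BIC approximation to posterior model probabilities: each candidate regression receives weight proportional to exp(-BIC/2), and coefficients or predictions are averaged under those weights. Data and candidate models are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x1, x2 = rng.normal(size=(2, n))
    y = 1.0 + 0.8 * x1 + rng.normal(0, 1, n)        # x2 is irrelevant

    def fit_ols(X):
        """OLS fit returning coefficients and the model's BIC."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        rss = np.sum((y - X @ beta) ** 2)
        k = X.shape[1] + 1                           # + error variance
        return beta, n * np.log(rss / n) + k * np.log(n)

    ones = np.ones(n)
    models = {"1": np.column_stack([ones]),
              "1+x1": np.column_stack([ones, x1]),
              "1+x1+x2": np.column_stack([ones, x1, x2])}
    fits = {name: fit_ols(X) for name, X in models.items()}
    bics = np.array([b for _, b in fits.values()])
    pmp = np.exp(-(bics - bics.min()) / 2)
    pmp /= pmp.sum()                                 # posterior model probabilities
    print(dict(zip(fits, pmp.round(3))))             # weight concentrates on 1+x1
    ```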

  2. Cost-effectiveness of investing in sidewalks as a means of increasing physical activity: a RESIDE modelling study.

    Science.gov (United States)

    Veerman, J Lennert; Zapata-Diomedi, Belen; Gunn, Lucy; McCormack, Gavin R; Cobiac, Linda J; Mantilla Herrera, Ana Maria; Giles-Corti, Billie; Shiell, Alan

    2016-09-20

    Studies consistently find that supportive neighbourhood built environments increase physical activity by encouraging walking and cycling. However, evidence on the cost-effectiveness of investing in built environment interventions as a means of promoting physical activity is lacking. In this study, we assess the cost-effectiveness of increasing sidewalk availability as one means of encouraging walking. Using data from the RESIDE study in Perth, Australia, we modelled the cost impact and change in health-adjusted life years (HALYs) of installing additional sidewalks in established neighbourhoods. Estimates of the relationship between sidewalk availability and walking were taken from a previous study. Multistate life table models were used to estimate HALYs associated with changes in walking frequency and duration. Sensitivity analyses were used to explore the impact of variations in population density, discount rates, sidewalk costs and the inclusion of unrelated healthcare costs in added life years. Installing and maintaining an additional 10 km of sidewalk in an average neighbourhood with 19 000 adult residents was estimated to cost A$4.2 million over 30 years and gain 24 HALYs over the lifetime of an average neighbourhood adult resident population. The incremental cost-effectiveness ratio was A$176 000/HALY. However, sensitivity results indicated that increasing population densities improves cost-effectiveness. In low-density cities such as in Australia, installing sidewalks in established neighbourhoods as a single intervention is unlikely to cost-effectively improve health. Sidewalks must be considered alongside other complementary elements of walkability, such as density, land use mix and street connectivity. Population density is particularly important because at higher densities, more residents are exposed and this improves the cost-effectiveness. Health gain is one of many benefits of enhancing neighbourhood walkability and future studies might

  3. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using the Bayesian method. A finite mixture model represents a statistical distribution as a mixture of component distributions, while the Bayesian method is the statistical approach used to fit the mixture model. The Bayesian method is widely used because it has attractive asymptotic properties that provide remarkable results. In addition, the Bayesian method shows a consistency characteristic, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is selected using the Bayesian Information Criterion. Identifying the number of components is important because a misspecified number may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. Lastly, the results showed a negative relationship between rubber prices and stock market prices for all selected countries.
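
    A minimal sketch of the model-selection step described here: fit k-component Gaussian mixtures and pick k by the Bayesian Information Criterion, using scikit-learn on synthetic two-regime "returns" rather than the rubber/stock series used in the paper.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(9)
    # Synthetic two-regime 'returns': a calm and a volatile component
    x = np.concatenate([rng.normal(0.001, 0.01, 700),
                        rng.normal(-0.002, 0.04, 300)]).reshape(-1, 1)

    for k in range(1, 5):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x)
        print(k, round(gm.bic(x), 1))     # the smallest BIC identifies k = 2
    ```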

  4. Airline Sustainability Modeling: A New Framework with Application of Bayesian Structural Equation Modeling

    Directory of Open Access Journals (Sweden)

    Hashem Salarzadeh Jenatabadi

    2016-11-01

    Full Text Available There are many factors which could influence the sustainability of airlines. The main purpose of this study is to introduce a framework for a financial sustainability index and model it based on structural equation modeling (SEM) with maximum likelihood and Bayesian predictors. The introduced framework includes economic performance, operational performance, cost performance, and financial performance. Based on both Bayesian SEM (Bayesian-SEM) and Classical SEM (Classical-SEM), it was found that economic performance, together with operational performance and cost performance, is significantly related to the financial performance index. Four mathematical indices, namely root mean square error, coefficient of determination, mean absolute error, and mean absolute percentage error, are employed to compare the efficiency of Bayesian-SEM and Classical-SEM in predicting airline financial performance. The outputs confirmed that the framework with Bayesian prediction delivered a good fit with the data, whereas the framework predicted with the Classical-SEM approach did not produce a well-fitting model. The reasons for this discrepancy between Classical and Bayesian predictions, as well as the potential advantages and caveats of applying the Bayesian approach in airline sustainability studies, are discussed.

  5. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    Science.gov (United States)

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix, evaluating model components and methodology. Conclusion When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes, test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases, (HCV and HIV at the same time) might prove important for decision makers. PMID:26689908

  6. From qualitative reasoning models to Bayesian-based learner modeling

    NARCIS (Netherlands)

    Milošević, U.; Bredeweg, B.; de Kleer, J.; Forbus, K.D.

    2010-01-01

    Assessing the knowledge of a student is a fundamental part of intelligent learning environments. We present a Bayesian network based approach to dealing with uncertainty when estimating a learner’s state of knowledge in the context of Qualitative Reasoning (QR). A proposal for a global architecture

  7. Development of a cyber security risk model using Bayesian networks

    International Nuclear Information System (INIS)

    Shin, Jinsoo; Son, Hanseong; Khalil ur, Rahman; Heo, Gyunyoung

    2015-01-01

    Cyber security is an emerging safety issue in the nuclear industry, especially in the instrumentation and control (I and C) field. To address the cyber security issue systematically, a model that can be used for cyber security evaluation is required. In this work, a cyber security risk model based on a Bayesian network is suggested for evaluating cyber security for nuclear facilities in an integrated manner. The suggested model enables the evaluation of both the procedural and technical aspects of cyber security, which are related to compliance with regulatory guides and system architectures, respectively. The activity-quality analysis model was developed to evaluate how well people and/or organizations comply with the regulatory guidance associated with cyber security. The architecture analysis model was created to evaluate vulnerabilities and mitigation measures with respect to their effect on cyber security. The two models are integrated into a single model, which is called the cyber security risk model, so that cyber security can be evaluated from procedural and technical viewpoints at the same time. The model was applied to evaluate the cyber security risk of the reactor protection system (RPS) of a research reactor and to demonstrate its usefulness and feasibility. - Highlights: • We developed a cyber security risk model that can locate the weak points of cyber security by integrating two analysis models in a Bayesian network. • The activity-quality model signifies how well people and/or organizations comply with the cyber security regulatory guide. • The architecture model represents the probability of a cyber-attack on the RPS architecture. • The cyber security risk model can provide evidence for determining the key elements of cyber security for the RPS of a research reactor

  8. Quantum-Like Bayesian Networks for Modeling Decision Making

    Directory of Open Access Journals (Sweden)

    Catarina eMoreira

    2016-01-01

    Full Text Available In this work, we explore an alternative quantum structure to perform quantum probabilistic inferences to accommodate the paradoxical findings of the Sure Thing Principle. We propose a Quantum-Like Bayesian Network, which consists of replacing classical probabilities by quantum probability amplitudes. However, since this approach suffers from the problem of exponential growth of quantum parameters, we also propose a similarity heuristic that automatically fits quantum parameters through vector similarities. This makes the proposed model general and predictive, in contrast to the current state-of-the-art models, which cannot be generalized to more complex decision scenarios and only provide an explanatory account of the observed paradoxes. In the end, the model that we propose consists of a nonparametric method for estimating inference effects from a statistical point of view. It is a statistical model that is simpler than the previous quantum dynamic and quantum-like models proposed in the literature. We tested the proposed network with several empirical data from the literature, mainly from the Prisoner's Dilemma game and the Two Stage Gambling game. The results obtained show that the proposed quantum Bayesian Network is a general method that can accommodate violations of the laws of classical probability theory and make accurate predictions regarding human decision-making in these scenarios.

  9. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners to conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Bayesian semiparametric regression models to characterize molecular evolution

    Directory of Open Access Journals (Sweden)

    Datta Saheli

    2012-10-01

    Full Text Available Abstract Background Statistical models and methods that associate changes in the physicochemical properties of amino acids with natural selection at the molecular level typically do not take into account the correlations between such properties. We propose a Bayesian hierarchical regression model with a generalization of the Dirichlet process prior on the distribution of the regression coefficients that describes the relationship between the changes in amino acid distances and natural selection in protein-coding DNA sequence alignments. Results The Bayesian semiparametric approach is illustrated with simulated data and the abalone sperm lysin data. Our method identifies groups of properties which, for this particular dataset, have a similar effect on evolution. The model also provides nonparametric site-specific estimates for the strength of conservation of these properties. Conclusions The model described here is distinguished by its ability to handle a large number of amino acid properties simultaneously, while taking into account that such data can be correlated. The multi-level clustering ability of the model allows for appealing interpretations of the results in terms of properties that are roughly equivalent from the standpoint of molecular evolution.

  11. Uncertainty in decision models analyzing cost-effectiveness : The joint distribution of incremental costs and effectiveness evaluated with a nonparametric bootstrap method

    NARCIS (Netherlands)

    Hunink, Maria; Bult, J.R.; De Vries, J; Weinstein, MC

    1998-01-01

    Purpose. To illustrate the use of a nonparametric bootstrap method in the evaluation of uncertainty in decision models analyzing cost-effectiveness. Methods. The authors reevaluated a previously published cost-effectiveness analysis that used a Markov model comparing initial percutaneous
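
    The abstract is truncated, but the method it names is standard: resample patients with replacement and recompute incremental costs and effects to trace out their joint distribution on the cost-effectiveness plane. A minimal sketch with simulated data (all distributions and the willingness-to-pay threshold are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-patient (cost, effect) data for two strategies; in practice
# these would come from the trial or model underlying the analysis.
n = 200
cost_new = rng.gamma(shape=2.0, scale=5000.0, size=n)   # skewed costs
cost_old = rng.gamma(shape=2.0, scale=4000.0, size=n)
eff_new = rng.normal(8.2, 1.5, size=n)                  # e.g., QALYs
eff_old = rng.normal(8.0, 1.5, size=n)

# Nonparametric bootstrap of the joint distribution of incremental costs
# and effects: resample each arm with replacement and recompute the means.
boot = []
for _ in range(5000):
    i = rng.integers(0, n, size=n)
    j = rng.integers(0, n, size=n)
    d_cost = cost_new[i].mean() - cost_old[j].mean()
    d_eff = eff_new[i].mean() - eff_old[j].mean()
    boot.append((d_cost, d_eff))
boot = np.array(boot)

# The bootstrap cloud summarizes joint uncertainty; one useful summary is the
# probability the new strategy is cost-effective at a given threshold.
threshold = 50_000.0  # willingness to pay per unit of effect (assumed)
p_ce = np.mean(boot[:, 0] < threshold * boot[:, 1])
print(f"P(cost-effective at {threshold:,.0f}/unit) = {p_ce:.3f}")
```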

  12. Cervical cancer treatment costs and cost-effectiveness analysis of human papillomavirus vaccination in Vietnam: a PRIME modeling study.

    Science.gov (United States)

    Van Minh, Hoang; My, Nguyen Thi Tuyet; Jit, Mark

    2017-05-15

    Cervical cancer is currently the leading cause of cancer mortality among women in South Vietnam and the second leading cause of cancer mortality in North Vietnam. Human papillomavirus (HPV) vaccination has the potential to substantially decrease this burden. The World Health Organization (WHO) recommends that a cost-effectiveness analysis of HPV vaccination is conducted before nationwide introduction. The Papillomavirus Rapid Interface for Modeling and Economics (PRIME) model was used to evaluate the cost-effectiveness of HPV vaccine introduction. A costing study based on expert panel discussions, interviews and hospital case note reviews was conducted to explore the cost of cervical cancer care. The cost of cervical cancer treatment ranged from US$368 to US$11,400 depending on the type of hospital and treatment involved. Under the Gavi-negotiated price of US$4.55, HPV vaccination is likely to be very cost-effective, with an incremental cost per disability-adjusted life year (DALY) averted in the range US$780-1,120. However, under list prices for Cervarix and Gardasil in Vietnam, the incremental cost per DALY averted for HPV vaccination can exceed US$8000. HPV vaccine introduction appears to be economically attractive only if Vietnam is able to procure the vaccine at Gavi prices. This highlights the importance of initiating a nationwide vaccination programme while such prices are still available.
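
    The headline numbers follow from simple arithmetic: the incremental cost of the programme divided by the DALYs it averts. A sketch with hypothetical cohort totals chosen to land inside the range reported above:

```python
def cost_per_daly_averted(incremental_cost: float, dalys_averted: float) -> float:
    """Incremental cost-effectiveness ratio of a vaccination programme."""
    return incremental_cost / dalys_averted

# Hypothetical totals: a cohort programme costing US$9.5m that averts
# 10,000 DALYs gives US$950 per DALY averted, inside the reported
# US$780-1,120 range at Gavi prices. The vaccine price enters the numerator
# roughly linearly, which is how list prices push the ratio past US$8000.
print(cost_per_daly_averted(incremental_cost=9.5e6, dalys_averted=10_000))
```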

  13. Experimental validation of a Bayesian model of visual acuity.

    LENUS (Irish Health Repository)

    Dalimier, Eugénie

    2009-01-01

    Based on standard procedures used in optometry clinics, we compare measurements of visual acuity for 10 subjects (11 eyes tested) in the presence of natural ocular aberrations and different degrees of induced defocus, with the predictions given by a Bayesian model customized with aberrometric data of the eye. The absolute predictions of the model, without any adjustment, show good agreement with the experimental data, in terms of correlation and absolute error. The efficiency of the model is discussed in comparison with image quality metrics and other customized visual process models. An analysis of the importance and customization of each stage of the model is also given; it stresses the potential high predictive power from precise modeling of ocular and neural transfer functions.

  14. Assessing global vegetation activity using spatio-temporal Bayesian modelling

    Science.gov (United States)

    Mulder, Vera L.; van Eck, Christel M.; Friedlingstein, Pierre; Regnier, Pierre A. G.

    2016-04-01

    This work demonstrates the potential of modelling vegetation activity using a hierarchical Bayesian spatio-temporal model. This approach allows modelling changes in vegetation and climate simultaneously in space and time. Changes in vegetation activity, such as phenology, are modelled as a dynamic process depending on climate variability in both space and time. Additionally, differences in observed vegetation status can be attributed to other abiotic ecosystem properties, e.g. soil and terrain properties. Although these properties do not change in time, they do change in space and may provide valuable information in addition to the climate dynamics. The spatio-temporal Bayesian models were calibrated at a regional scale because local trends in space and time can be better captured by the model. The regional subsets were defined according to the SREX segmentation, as defined by the IPCC. Each region is considered to be relatively homogeneous in terms of large-scale climate and biomes, while still capturing small-scale (grid-cell level) variability. Modelling within these regions is hence expected to be less uncertain than a global approach, due to the absence of these large-scale patterns. This overall modelling approach allows the comparison of model behaviour across the different regions and may provide insights into the main dynamic processes driving the interaction between vegetation and climate within different regions. The data employed in this study encompass the global datasets for soil properties (SoilGrids), terrain properties (Global Relief Model based on SRTM DEM and ETOPO), monthly time series of satellite-derived vegetation indices (GIMMS NDVI3g) and climate variables (Princeton Meteorological Forcing Dataset). The findings proved the potential of a spatio-temporal Bayesian modelling approach for assessing vegetation dynamics at a regional scale. The observed interrelationships of the employed data and the different spatial and temporal trends support

  15. Theory-based Bayesian models of inductive learning and reasoning.

    Science.gov (United States)

    Tenenbaum, Joshua B; Griffiths, Thomas L; Kemp, Charles

    2006-07-01

    Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.

  16. Factors affecting GEBV accuracy with single-step Bayesian models.

    Science.gov (United States)

    Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng

    2018-01-01

    A single-step approach to obtaining genomic prediction was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (GBLUP; SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the scenarios with 5 and 50 QTL. The SS-BayesB model obtained the lowest accuracy in the 500 QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.

  17. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  18. Approximate Bayesian computation for forward modeling in cosmology

    International Nuclear Information System (INIS)

    Akeret, Joël; Refregier, Alexandre; Amara, Adam; Seehars, Sebastian; Hasner, Caspar

    2015-01-01

    Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, the likelihood function may however be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be done through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte-Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release
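
    A minimal rejection-ABC sketch of the idea (the cited work refines this with a Population Monte-Carlo scheme; here the toy model is Gaussian, the summaries are the sample mean and standard deviation, and the diagonal distance weights and tolerance are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data and their summary statistics
obs = rng.normal(2.0, 1.0, size=500)
s_obs = np.array([obs.mean(), obs.std()])

def simulate(mu, sigma):
    """Forward model: mock data set reduced to its summary statistics."""
    x = rng.normal(mu, sigma, size=500)
    return np.array([x.mean(), x.std()])

w = np.array([0.05, 0.05])   # diagonal Mahalanobis-style weights (assumed)
eps = 2.0                    # tolerance (assumed)

accepted = []
for _ in range(50_000):
    mu = rng.uniform(-5, 5)          # prior draws
    sigma = rng.uniform(0.1, 3)
    d = np.sqrt(np.sum(((simulate(mu, sigma) - s_obs) / w) ** 2))
    if d < eps:                      # keep draws whose mock data match
        accepted.append((mu, sigma))

post = np.array(accepted)
print(f"{len(post)} accepted; mu ~ {post[:, 0].mean():.2f}, "
      f"sigma ~ {post[:, 1].mean():.2f}")
```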

  19. On-line Bayesian model updating for structural health monitoring

    Science.gov (United States)

    Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo

    2018-03-01

    Fatigue-induced cracking is a dangerous failure mechanism which affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail to identify cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows accounting for relevant sources of uncertainty. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly Finite Element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecisions in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.

  20. Bayesian statistic methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern progress in information technology, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
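
    A plain-Python sketch of the kind of probabilistic Markov cohort model the article builds in WinBUGS: transition probabilities are drawn from Beta distributions, standing in for posterior draws from an upstream Bayesian inference step, and costs and QALYs are accumulated over cycles. States, rates, costs, and the discount rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Three-state Markov cohort model (Well, Sick, Dead); all numbers assumed.
n_sims, n_cycles = 2000, 40
costs = np.array([100.0, 2500.0, 0.0])     # annual cost per state
utils = np.array([0.95, 0.60, 0.0])        # annual utility per state

total_cost = np.empty(n_sims)
total_qaly = np.empty(n_sims)
for s in range(n_sims):
    # Probabilistic simulation: each run uses fresh parameter draws,
    # mimicking posterior samples from a Bayesian evidence synthesis.
    p_ws = rng.beta(20, 180)               # Well -> Sick
    p_sd = rng.beta(30, 170)               # Sick -> Dead
    p_wd = rng.beta(5, 495)                # Well -> Dead
    P = np.array([
        [1 - p_ws - p_wd, p_ws, p_wd],
        [0.0, 1 - p_sd, p_sd],
        [0.0, 0.0, 1.0],
    ])
    state = np.array([1.0, 0.0, 0.0])      # cohort starts Well
    c = q = 0.0
    for t in range(n_cycles):
        disc = 1.035 ** -t                 # 3.5% annual discounting
        c += disc * state @ costs
        q += disc * state @ utils
        state = state @ P                  # advance the cohort one cycle
    total_cost[s], total_qaly[s] = c, q

print(f"mean cost {total_cost.mean():,.0f}, mean QALYs {total_qaly.mean():.2f}")
```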

  1. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate at part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions

  2. Cost-effectiveness of Security Measures: A model-based Framework

    DEFF Research Database (Denmark)

    Pieters, Wolter; Probst, Christian W.; Lukszo, Zofia

    2014-01-01

    Recently, cyber security has become an important topic on the agenda of many organisations. It is already widely acknowledged that attacks do happen, and decision makers face the problem of how to respond. As it is almost impossible to secure a complex system completely, it is important to have an adequate estimate of the effectiveness of security measures when making investment decisions. Risk concepts are known in principle, but estimating the effectiveness of countermeasures proves to be difficult and cannot be achieved by qualitative approaches only. In this chapter, the authors consider the question of how to guarantee cost-effectiveness of security measures. They investigate the possibility of using existing frameworks and tools, the challenges in a security context as opposed to a safety context, and directions for future research.

  3. Cost effectiveness of a radiation therapy simulator: a model for the determination of need

    International Nuclear Information System (INIS)

    Dritschilo, A.; Sherman, D.; Emami, B.; Piro, A.J.; Hellman, S.

    1979-01-01

    The requirement for a certificate-of-need for capital expenditures of $100,000 or more has placed a major constraint on purchases of new medical equipment. Consideration of a first principles argument has not proven compelling to the planning agencies in justifying the purchase of a radiation therapy simulator. Thus a strategy based on cost-effectiveness and the consequences of survival in successfully treated patients is proposed for equipment justification. We have reviewed the records of 18-month survivors among patients with lung cancer that were treated by irradiation; we observed 3 spinal cord injuries in non-simulated patients, whereas none were observed in patients who had the benefit of simulation. Considering the societal costs of spinal cord injury, a cost-benefit analysis of a simulator justifies the expense of this equipment

  4. Model Selection in Historical Research Using Approximate Bayesian Computation

    Science.gov (United States)

    Rubio-Campillo, Xavier

    2016-01-01

    Formal Models and History Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester’s laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
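
    Under rejection-ABC with equal model priors, a Bayes factor can be approximated by the ratio of acceptance rates of the competing simulators. The sketch below uses a toy attrition curve instead of the battle data, with a linear and a fatigue-style exponential candidate; the curves, priors, and tolerance are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed attrition curve (generated here from the fatigue-style model)
obs = 100 * np.exp(-0.3 * np.arange(10))

def linear(a):
    return 100 - a * np.arange(10)

def fatigue(k):
    return 100 * np.exp(-k * np.arange(10))

def abc_accepts(simulator, prior_draw, eps=20.0, n=20000):
    """Count prior draws whose simulated curve lands within eps of the data."""
    return sum(
        np.linalg.norm(simulator(prior_draw()) - obs) < eps for _ in range(n)
    )

acc_lin = abc_accepts(linear, lambda: rng.uniform(0, 10))
acc_fat = abc_accepts(fatigue, lambda: rng.uniform(0, 1))

# With equal model priors, BF ~ ratio of ABC acceptance rates
bf = acc_fat / max(acc_lin, 1)
print(f"approximate Bayes factor (fatigue vs. linear) = {bf:.0f}")
```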

  5. An Active Lattice Model in a Bayesian Framework

    DEFF Research Database (Denmark)

    Carstensen, Jens Michael

    1996-01-01

    A Markov Random Field is used as a structural model of a deformable rectangular lattice. When used as a template prior in a Bayesian framework this model is powerful for making inferences about lattice structures in images. The model assigns maximum probability to the perfect regular lattice by penalizing deviations in alignment and lattice node distance. The Markov random field represents prior knowledge about the lattice structure, and through an observation model that incorporates the visual appearance of the nodes, we can simulate realizations from the posterior distribution. A maximum a posteriori (MAP) estimate, found by simulated annealing, is used as the reconstructed lattice. The model was developed as a central part of an algorithm for automatic analysis of genetic experiments, positioned in a lattice structure by a robot. The algorithm has been successfully applied to many images.

  6. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, leads to misleading statistical inference and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable given the observed mismeasured surrogate, by applying the Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.

  7. Uncovering Transcriptional Regulatory Networks by Sparse Bayesian Factor Model

    Directory of Open Access Journals (Sweden)

    Qi Yuan (Alan)

    2010-01-01

    Full Text Available Abstract The problem of uncovering transcriptional regulation by transcription factors (TFs) based on microarray data is considered. A novel Bayesian sparse correlated rectified factor model (BSCRFM) is proposed that models the unknown TF protein level activity, the correlated regulations between TFs, and the sparse nature of TF-regulated genes. The model admits prior knowledge from existing databases regarding TF-regulated target genes based on a sparse prior, and through a developed Gibbs sampling algorithm, a context-specific transcriptional regulatory network specific to the experimental condition of the microarray data can be obtained. The proposed model and the Gibbs sampling algorithm were evaluated on simulated systems, and results demonstrated the validity and effectiveness of the proposed approach. The proposed model was then applied to the breast cancer microarray data of patients with Estrogen Receptor positive (ER+) status and Estrogen Receptor negative (ER-) status, respectively.

  8. MODELING INFORMATION SYSTEM AVAILABILITY BY USING BAYESIAN BELIEF NETWORK APPROACH

    Directory of Open Access Journals (Sweden)

    Semir Ibrahimović

    2016-03-01

    Full Text Available Modern information systems are expected to be always-on, providing services to end-users regardless of time and location. This is particularly important for organizations and industries where information systems support real-time operations and mission-critical applications that need to be available on a 24/7/365 basis. Examples of such entities include process industries, telecommunications, healthcare, energy, banking, electronic commerce and a variety of cloud services. This article presents a modified Bayesian Belief Network model for predicting information system availability, initially introduced by Franke, U. and Johnson, P. in the article “Availability of enterprise IT systems – an expert based Bayesian model” (Software Quality Journal 20(2), 369-394, 2012). Based on a thorough review of several dimensions of information system availability, we proposed a modified set of determinants. The model is parameterized using a probability elicitation process with the participation of experts from the financial sector of Bosnia and Herzegovina. The model validation was performed using Monte Carlo simulation.

  9. Cost-effectiveness of multidisciplinary care in mild to moderate chronic kidney disease in the United States: A modeling study

    Science.gov (United States)

    Malcolm, Elizabeth; Goldhaber-Fiebert, Jeremy D.

    2018-01-01

    Background Multidisciplinary care (MDC) programs have been proposed as a way to alleviate the cost and morbidity associated with chronic kidney disease (CKD) in the US. Methods and findings We assessed the cost-effectiveness of a theoretical Medicare-based MDC program for CKD compared to usual CKD care in Medicare beneficiaries with stage 3 and 4 CKD between 45 and 84 years old in the US. The program used nephrologists, advanced practitioners, educators, dieticians, and social workers. From Medicare claims and published literature, we developed a novel deterministic Markov model for CKD progression and calibrated it to long-term risks of mortality and progression to end-stage renal disease. We then used the model to project accrued discounted costs and quality-adjusted life years (QALYs) over patients’ remaining lifetime. We estimated the incremental cost-effectiveness ratio (ICER) of MDC, or the cost of the intervention per QALY gained. MDC added 0.23 (95% CI: 0.08, 0.42) QALYs over usual care, costing $51,285 per QALY gained (net monetary benefit of $23,100 at a threshold of $150,000 per QALY gained; 95% CI: $6,252, $44,323). In all subpopulations analyzed, ICERs ranged from $42,663 to $72,432 per QALY gained. MDC was generally more cost-effective in patients with higher urine albumin excretion. Although ICERs were higher in younger patients, MDC could yield greater improvements in health in younger than older patients. MDC remained cost-effective when we decreased its effectiveness to 25% of the base case or increased the cost 5-fold. The program cost less than $70,000 per QALY in 95% of probabilistic sensitivity analyses and less than $87,500 per QALY in 99% of analyses. Limitations of our study include its theoretical nature and being less generalizable to populations at low risk for progression to ESRD. We did not study the potential impact of MDC on hospitalization (cardiovascular or other). Conclusions Our model estimates that a Medicare-funded MDC
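
    The net monetary benefit reported above can be reconstructed, up to rounding of the published figures, from the incremental QALYs, the ICER, and the willingness-to-pay threshold:

```python
# Net monetary benefit (NMB) from the figures in the abstract:
# MDC adds 0.23 QALYs at an ICER of $51,285 per QALY gained.
d_qaly = 0.23
icer = 51_285.0                 # $ per QALY gained
d_cost = icer * d_qaly          # implied incremental cost, ~ $11,796
wtp = 150_000.0                 # willingness-to-pay threshold

nmb = wtp * d_qaly - d_cost     # ~ $22,704; abstract reports ~ $23,100,
                                # the gap reflecting rounding of the inputs
print(f"incremental cost ~ ${d_cost:,.0f}, NMB ~ ${nmb:,.0f}")
```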

  10. Cost-effectiveness of multidisciplinary care in mild to moderate chronic kidney disease in the United States: A modeling study.

    Directory of Open Access Journals (Sweden)

    Eugene Lin

    2018-03-01

    Full Text Available Multidisciplinary care (MDC) programs have been proposed as a way to alleviate the cost and morbidity associated with chronic kidney disease (CKD) in the US. We assessed the cost-effectiveness of a theoretical Medicare-based MDC program for CKD compared to usual CKD care in Medicare beneficiaries with stage 3 and 4 CKD between 45 and 84 years old in the US. The program used nephrologists, advanced practitioners, educators, dieticians, and social workers. From Medicare claims and published literature, we developed a novel deterministic Markov model for CKD progression and calibrated it to long-term risks of mortality and progression to end-stage renal disease. We then used the model to project accrued discounted costs and quality-adjusted life years (QALYs) over patients' remaining lifetime. We estimated the incremental cost-effectiveness ratio (ICER) of MDC, or the cost of the intervention per QALY gained. MDC added 0.23 (95% CI: 0.08, 0.42) QALYs over usual care, costing $51,285 per QALY gained (net monetary benefit of $23,100 at a threshold of $150,000 per QALY gained; 95% CI: $6,252, $44,323). In all subpopulations analyzed, ICERs ranged from $42,663 to $72,432 per QALY gained. MDC was generally more cost-effective in patients with higher urine albumin excretion. Although ICERs were higher in younger patients, MDC could yield greater improvements in health in younger than older patients. MDC remained cost-effective when we decreased its effectiveness to 25% of the base case or increased the cost 5-fold. The program cost less than $70,000 per QALY in 95% of probabilistic sensitivity analyses and less than $87,500 per QALY in 99% of analyses. Limitations of our study include its theoretical nature and being less generalizable to populations at low risk for progression to ESRD. We did not study the potential impact of MDC on hospitalization (cardiovascular or other). Our model estimates that a Medicare-funded MDC program could reduce the need for

  11. The cost-effectiveness of neonatal screening for Cystic Fibrosis: an analysis of alternative scenarios using a decision model

    Directory of Open Access Journals (Sweden)

    Tu Karen

    2005-08-01

    Full Text Available Abstract Background The use of neonatal screening for cystic fibrosis is widely debated in the United Kingdom and elsewhere, but the evidence available to inform policy is limited. This paper explores the cost-effectiveness of adding screening for cystic fibrosis to an existing routine neonatal screening programme for congenital hypothyroidism and phenylketonuria, under alternative scenarios and assumptions. Methods The study is based on a decision model comparing screening to no screening in terms of a number of outcome measures, including diagnosis of cystic fibrosis, life-time treatment costs, life years and QALYs gained. The setting is a hypothetical UK health region without an existing neonatal screening programme for cystic fibrosis. Results Under initial assumptions, neonatal screening (using an immunoreactive trypsin/DNA two-stage screening protocol) costs £5,387 per infant diagnosed, or £1.83 per infant screened (1998 costs). Neonatal screening for cystic fibrosis produces an incremental cost-effectiveness of £6,864 per QALY gained in our base case scenario (an assumed benefit of a 6-month delay in the emergence of symptoms). A difference of 11 months or more in the emergence of symptoms (and mean survival) means neonatal screening is both less costly and produces better outcomes than no screening. Conclusion Neonatal screening is expensive as a method of diagnosis. Neonatal screening may be a cost-effective intervention if the hypothesised delays in the onset of symptoms are confirmed. Implementing both antenatal and neonatal screening would undermine potential economic benefits, since a reduction in the birth incidence of cystic fibrosis would reduce the cost-effectiveness of neonatal screening.

  12. Cost Effectiveness of Childhood Cochlear Implantation and Deaf Education in Nicaragua: A Disability Adjusted Life Year Model.

    Science.gov (United States)

    Saunders, James E; Barrs, David M; Gong, Wenfeng; Wilson, Blake S; Mojica, Karen; Tucci, Debara L

    2015-09-01

    Cochlear implantation (CI) is a common intervention for severe-to-profound hearing loss in high-income countries, but is not commonly available to children in low resource environments. Owing in part to the device costs, CI has been assumed to be less economical than deaf education for low resource countries. The purpose of this study is to compare the cost effectiveness of the two interventions for children with severe-to-profound sensorineural hearing loss (SNHL) in a model using disability adjusted life years (DALYs). Cost estimates were derived from published data, expert opinion, and known costs of services in Nicaragua. Individual costs and lifetime DALY estimates with a 3% discounting rate were applied to both interventions. Sensitivity analysis was implemented to evaluate the effect on the discounted cost of five key components: implant cost, audiology salary, speech therapy salary, number of children implanted per year, and device failure probability. The costs per DALY averted are $5,898 and $5,529 for CI and deaf education, respectively. Using standards set by the WHO, both interventions are cost effective. Sensitivity analysis shows that when all costs are set to their maximum estimates, CI is still cost effective. Using a conservative DALY analysis, both CI and deaf education are cost-effective treatment alternatives for severe-to-profound SNHL. CI intervention costs are influenced not only by the initial surgery and device costs but also by rehabilitation costs and lifetime maintenance, device replacement, and battery costs. The major CI cost differences in this low resource setting were increased initial training and infrastructure costs, but lower medical personnel and surgery costs.
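
    The 3% discounting rate enters through the present value of future healthy years. A sketch of the standard continuous-discounting term, with an assumed disability weight and duration (the study's actual inputs are not given in the abstract):

```python
import math

def discounted_years(duration: float, rate: float = 0.03) -> float:
    """Present value of one healthy year per year over `duration` years,
    using continuous discounting, a common convention in DALY analyses."""
    return (1 - math.exp(-rate * duration)) / rate

# Illustrative only: a disability weight of 0.23 applied over a remaining
# 70 years of life; both numbers are assumptions, not the study's inputs.
dw = 0.23
dalys_per_child = dw * discounted_years(70)
print(f"~{dalys_per_child:.1f} DALYs averted per child if fully remediated")
```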

  13. A global economic model to assess the cost-effectiveness of new treatments for advanced breast cancer in Canada.

    Science.gov (United States)

    Beauchemin, C; Letarte, N; Mathurin, K; Yelle, L; Lachaine, J

    2016-06-01

    Objective Considering the increasing number of treatment options for metastatic breast cancer (MBC), it is important to develop high-quality methods to assess the cost-effectiveness of new anti-cancer drugs. This study aims to develop a global economic model that could be used as a benchmark for the economic evaluation of new therapies for MBC. Methods The Global Pharmacoeconomics of Metastatic Breast Cancer (GPMBC) model is a Markov model that was constructed to estimate the incremental cost per quality-adjusted life years (QALY) of new treatments for MBC from a Canadian healthcare system perspective over a lifetime horizon. Specific parameters included in the model are cost of drug treatment, survival outcomes, and incidence of treatment-related adverse events (AEs). Global parameters are patient characteristics, health states utilities, disutilities, and costs associated with treatment-related AEs, as well as costs associated with drug administration, medical follow-up, and end-of-life care. The GPMBC model was tested and validated in a specific context, by assessing the cost-effectiveness of lapatinib plus letrozole compared with other widely used first-line therapies for post-menopausal women with hormone receptor-positive (HR+) and epidermal growth factor receptor 2-positive (HER2+) MBC. Results When tested, the GPMBC model led to incremental cost-utility ratios of CA$131 811 per QALY, CA$56 211 per QALY, and CA$102 477 per QALY for the comparison of lapatinib plus letrozole vs letrozole alone, trastuzumab plus anastrozole, and anastrozole alone, respectively. Results of the model testing were quite similar to those obtained by Delea et al., who also assessed the cost-effectiveness of lapatinib in combination with letrozole in HR+/HER2 + MBC in Canada, thus suggesting that the GPMBC model can replicate results of well-conducted economic evaluations. Conclusions The GPMBC model can be very valuable as it allows a quick and valid assessment of the cost-effectiveness

  14. Multimethod, multistate Bayesian hierarchical modeling approach for use in regional monitoring of wolves.

    Science.gov (United States)

    Jiménez, José; García, Emilio J; Llaneza, Luis; Palacios, Vicente; González, Luis Mariano; García-Domínguez, Francisco; Múñoz-Igualada, Jaime; López-Bao, José Vicente

    2016-08-01

    In many cases, the first step in large-carnivore management is to obtain objective, reliable, and cost-effective estimates of population parameters through procedures that are reproducible over time. However, monitoring predators over large areas is difficult, and the data have a high level of uncertainty. We devised a practical multimethod and multistate modeling approach based on Bayesian hierarchical site-occupancy models that combined multiple survey methods to estimate different population states for use in monitoring large predators at a regional scale. We used wolves (Canis lupus) as our model species and generated reliable estimates of the number of sites with wolf reproduction (presence of pups). We used 2 wolf data sets from Spain (Western Galicia in 2013 and Asturias in 2004) to test the approach. Based on howling surveys, the naïve estimation (i.e., estimate based only on observations) of the number of sites with reproduction was 9 and 25 sites in Western Galicia and Asturias, respectively. Our model showed 33.4 (SD 9.6) and 34.4 (3.9) sites with wolf reproduction, respectively. The estimated probability of occupancy for sites with wolf reproduction was 0.67 (SD 0.19) and 0.76 (0.11), respectively. This approach can be used to design more cost-effective monitoring programs (i.e., to define the sampling effort needed per site). Our approach should inspire well-coordinated surveys across multiple administrative borders and populations and lead to improved decision making for management of large carnivores on a landscape level. The use of this Bayesian framework provides a simple way to visualize the degree of uncertainty around population-parameter estimates and thus provides managers and stakeholders an intuitive approach to interpreting monitoring results. Our approach can be widely applied to large spatial scales in wildlife monitoring where detection probabilities differ between population states and where several methods are being used to estimate different population

  15. Bayesian Model Comparison With the g-Prior

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Cemgil, Ali Taylan

    2014-01-01

    Djuric's asymptotic MAP rule was an improvement, and in this paper we extend the work by Djuric in several ways. Specifically, we consider the elicitation of proper prior distributions, treat the case of real- and complex-valued data simultaneously in a Bayesian framework similar to that considered by Djuric, and develop new model selection rules for a regression model containing both linear and non-linear parameters. Moreover, we use this framework to give a new interpretation of the popular information criteria and relate their performance to the signal-to-noise ratio of the data. By use of simulations, we also demonstrate that our proposed model comparison and selection rules outperform the traditional information criteria both in terms of detecting the true model and in terms of predicting unobserved data. The simulation code is available online.

  16. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  17. Bayesian Dose-Response Modeling in Sparse Data

    Science.gov (United States)

    Kim, Steven B.

    This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they have disagreeing prior opinions. In this regard, we consider ways of compromising between the disagreeing opinions and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics, which reflects the perspective of trial participants. The second level is population-level ethics, which reflects the perspective of future patients. We extensively compare two existing statistical methods which focus on each perspective and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potential non-monotonic dose-response relationship which is known as hormesis. Briefly, hormesis is a phenomenon which can be characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessments, the estimation of a parameter, which is known as a benchmark dose, can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as possibilities. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory. Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a

  18. Costs and cost effectiveness of different strategies for chlamydia screening and partner notification: an economic and mathematical modelling study.

    Science.gov (United States)

    Turner, Katy; Adams, Elisabeth; Grant, Arabella; Macleod, John; Bell, Gill; Clarke, Jan; Horner, Paddy

    2011-01-04

    To compare the cost, cost effectiveness, and sex equity of different intervention strategies within the English National Chlamydia Screening Programme. To develop a tool for calculating cost effectiveness of chlamydia control programmes at a local, national, or international level. An economic and mathematical modelling study with cost effectiveness analysis. Costs were restricted to those of screening and partner notification from the perspective of the NHS and excluded patient costs, the costs of reinfection, and costs of complications arising from initial infection. England. Population: individuals eligible for the National Chlamydia Screening Programme. Cost effectiveness of National Chlamydia Screening Programme in 2008-9 (as cost per individual tested, cost per positive diagnosis, total cost of screening, number screened, number infected, sex ratio of those tested and treated). Comparison of baseline programme with two different interventions: (i) increased coverage of primary screening in men and (ii) increased efficacy of partner notification. In 2008-9 screening was estimated to cost about £46.3m in total and £506 per infection treated. Provision for partner notification within the screening programme cost between £9 and £27 per index case, excluding treatment and testing. The model results suggest that increasing male screening coverage from 8% (baseline value) to 24% (to match female coverage) would cost an extra £22.9m and increase the cost per infection treated to £528. In contrast, increasing partner notification efficacy from 0.4 (baseline value) to 0.8 partners per index case would cost an extra £3.3m and would reduce the cost per infection diagnosed to £449. Increasing screening coverage to 24% in men would cost over six times as much as increasing partner notification to 0.8 but only treat twice as many additional infections. In the English National Chlamydia Screening Programme increasing the effectiveness of partner notification is likely

  19. Bayesian uncertainty analysis with applications to turbulence modeling

    International Nuclear Information System (INIS)

    Cheung, Sai Hung; Oliver, Todd A.; Prudencio, Ernesto E.; Prudhomme, Serge; Moser, Robert D.

    2011-01-01

    In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoI's) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI for a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible, boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue a prediction with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their prediction of QoI's. The model posterior probability represents the relative plausibility of a model class given the data. Thus, it incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of decision makers that use the results of the model. We show that by using both the model plausibility and predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges.
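
    The comparison step reduces to normalizing model evidences into posterior model probabilities. A minimal sketch with assumed log evidences standing in for the three calibrated model classes, under equal prior model probabilities:

```python
import numpy as np

# Posterior model probabilities from log model evidences under equal model
# priors: p(M_k | D) is proportional to p(D | M_k). The log-evidence values
# below are assumed numbers, not results from the cited study.
log_ev = np.array([-1052.3, -1047.8, -1049.1])
log_ev -= log_ev.max()                      # stabilize the exponentials
post = np.exp(log_ev) / np.exp(log_ev).sum()
print(dict(zip(["class A", "class B", "class C"], post.round(3))))
```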

  20. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    Science.gov (United States)

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  1. Bayesian energy landscape tilting: towards concordant models of molecular ensembles.

    Science.gov (United States)

    Beauchamp, Kyle A; Pande, Vijay S; Das, Rhiju

    2014-03-18

    Predicting biological structure has remained challenging for systems such as disordered proteins that take on myriad conformations. Hybrid simulation/experiment strategies have been undermined by difficulties in evaluating errors from computational model inaccuracies and data uncertainties. Building on recent proposals from maximum entropy theory and nonequilibrium thermodynamics, we address these issues through a Bayesian energy landscape tilting (BELT) scheme for computing Bayesian hyperensembles over conformational ensembles. BELT uses Markov chain Monte Carlo to directly sample maximum-entropy conformational ensembles consistent with a set of input experimental observables. To test this framework, we apply BELT to model trialanine, starting from disagreeing simulations with the force fields ff96, ff99, ff99sbnmr-ildn, CHARMM27, and OPLS-AA. BELT incorporation of limited chemical shift and (3)J measurements gives convergent values of the peptide's α, β, and PPII conformational populations in all cases. As a test of predictive power, all five BELT hyperensembles recover set-aside measurements not used in the fitting and report accurate errors, even when starting from highly inaccurate simulations. BELT's principled framework thus enables practical predictions for complex biomolecular systems from discordant simulations and sparse data. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
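
    The core BELT move, exponentially reweighting ("tilting") a simulated ensemble so that a reweighted predicted observable matches experiment, can be sketched in a few lines. The ensemble, target value, and single-observable setup below are toy assumptions; BELT itself samples a hyperensemble over such weightings by Markov chain Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ensemble: one observable predicted per conformational snapshot. A real
# application would use chemical shifts or 3J couplings per conformation.
pred = rng.normal(5.0, 2.0, size=10_000)
target = 5.8                                # "experimental" value (assumed)

def tilted_mean(lam):
    """Ensemble average under maximum-entropy exponential reweighting."""
    w = np.exp(lam * (pred - pred.mean()))
    w /= w.sum()
    return np.sum(w * pred)

# Solve for lambda by bisection so the tilted ensemble reproduces the target
# (tilted_mean is monotonically increasing in lambda).
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tilted_mean(mid) < target else (lo, mid)
lam = 0.5 * (lo + hi)
print(f"lambda = {lam:.3f}, tilted mean = {tilted_mean(lam):.3f}")
```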

  2. Sparse linear models: Variational approximate inference and Bayesian experimental design

    International Nuclear Information System (INIS)

    Seeger, Matthias W

    2009-01-01

    A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.

  4. MATHEMATICAL RISK ANALYSIS: VIA NICHOLAS RISK MODEL AND BAYESIAN ANALYSIS

    Directory of Open Access Journals (Sweden)

    Anass BAYAGA

    2010-07-01

    Full Text Available The objective of this second part of a two-phased study was to explore the predictive power of the quantitative risk analysis (QRA) method and process within a Higher Education Institution (HEI). The method and process investigated used impact analysis via the Nicholas risk model and Bayesian analysis, with a sample of one hundred (100) risk analysts in a historically black South African university in the greater Eastern Cape Province. The first finding supported and confirmed previous literature (King III report, 2009; Nicholas and Steyn, 2008; Stoney, 2007; COSA, 2004) that there was a direct relationship between a risk factor, its likelihood, and its impact, ceteris paribus. The second finding, relating to controlling either the likelihood or the impact of occurrence of risk (Nicholas risk model), was that to obtain a better risk reward it was more important to control the likelihood of occurrence of risks than their impact, so as to have a direct effect on the entire university. On the Bayesian analysis, the third finding was that the impact of risk should be predicted along three aspects: the human impact (decisions made), the property impact (student- and infrastructure-based), and the business impact. Lastly, the study revealed that although business cycles vary considerably depending on the industry and/or institution, most impacts in an HEI (university) fell within the period of one academic year. The recommendation was that the application of quantitative risk analysis should be related to the current legislative framework that affects HEI.

  5. Bayesian Age-Period-Cohort Model of Lung Cancer Mortality

    Directory of Open Access Journals (Sweden)

    Bhikhari P. Tharu

    2015-09-01

    Full Text Available Background The objective of this study was to analyze the time trend for lung cancer mortality in the population of the USA in 5-year intervals, based on the most recent available data, namely up to 2010. Knowledge of the temporal trends in mortality rates is necessary to understand the cancer burden. Methods A Bayesian Age-Period-Cohort model was fitted using Poisson regression with a histogram smoothing prior to decompose mortality rates based on age at death, period of death, and birth cohort. Results Mortality rates from lung cancer increased more rapidly from age 52 years, reaching up to 325 deaths annually at 82 years of age on average. The mortality of younger cohorts was lower than that of older cohorts. The risk of lung cancer declined from the 1993 period onwards. Conclusions The fitted Bayesian Age-Period-Cohort model with histogram smoothing prior is capable of explaining the mortality rate of lung cancer. The reduction in carcinogens in cigarettes and the increase in smoking cessation from around 1960 might have led to the decreasing trend in lung cancer mortality after the 1993 calendar period.

  6. Bayesian modeling of recombination events in bacterial populations

    Directory of Open Access Journals (Sweden)

    Dowson Chris

    2008-10-01

    Full Text Available Abstract Background We consider the discovery of recombinant segments jointly with their origins within multilocus DNA sequences from bacteria representing heterogeneous populations of fairly closely related species. The currently available methods for recombination detection capable of probabilistic characterization of uncertainty have limited applicability in practice as the number of strains in a data set increases. Results We introduce a Bayesian spatial structural model representing the continuum of origins over sites within the observed sequences, including a probabilistic characterization of uncertainty related to the origin of any particular site. To enable a statistically accurate and practically feasible approach to the analysis of large-scale data sets representing a single genus, we have developed a novel software tool (BRAT, Bayesian Recombination Tracker) implementing the model and the corresponding learning algorithm, which is capable of identifying the posterior-optimal structure and of estimating the marginal posterior probabilities of putative origins over the sites. Conclusion A multitude of challenging simulation scenarios and an analysis of real data from seven housekeeping genes of 120 strains of the genus Burkholderia are used to illustrate the possibilities offered by our approach. The software is freely available for download at URL http://web.abo.fi/fak/mnf//mate/jc/software/brat.html.

  7. Cost Effectiveness of the Instrumentalism in Occupational Therapy (IOT) Conceptual Model as a Guide for Intervention with Adolescents with Emotional and Behavioral Disorders (EBD)

    Science.gov (United States)

    Ikiugu, Moses N.; Anderson, Lynne

    2007-01-01

    The purpose of this paper was to demonstrate the cost-effectiveness of using the Instrumentalism in Occupational Therapy (IOT) conceptual practice model as a guide for intervention to assist teenagers with emotional and behavioral disorders (EBD) in transitioning successfully into adulthood. The cost-effectiveness analysis was based on a project…

  8. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    NARCIS (Netherlands)

    Postma, Maarten J.; Jit, Mark; Rozenbaum, Mark H.; Standaert, Baudouin; Tu, Hong-Anh; Hutubessy, Raymond C. W.

    2011-01-01

    Background: This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods: We identified

  9. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    Science.gov (United States)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

    Summary With respect to model calibration, parameter estimation and analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporates different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches treats the impact of high flows in hydrological modeling well. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. This study includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization method and standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: the AR(1) plus Normal model independent of time (Model 1), the AR(1) plus Normal model dependent on time (Model 2) and the AR(1) plus Multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates, as measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows compared with the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
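
    The 'standard Bayesian method' referred to here is random-walk Metropolis-Hastings. The toy Python sketch below samples the posterior of a mean and log standard deviation for Gaussian 'streamflow residuals'; the likelihood, flat priors and step size are placeholders, not WASMOD or the paper's Models 1-3:

        import numpy as np

        rng = np.random.default_rng(2)
        data = rng.normal(3.0, 1.5, size=200)            # stand-in for model residuals

        def log_post(theta):
            mu, log_sigma = theta
            # flat priors; Gaussian likelihood
            return -len(data) * log_sigma - 0.5 * np.sum((data - mu) ** 2) / np.exp(log_sigma) ** 2

        theta = np.array([0.0, 0.0])
        lp, chain = log_post(theta), []
        for _ in range(5000):
            prop = theta + 0.1 * rng.normal(size=2)      # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta)
        print(np.mean(chain[1000:], axis=0))             # posterior means after burn-in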

  10. The Cost-Effectiveness of Low-Cost Essential Antihypertensive Medicines for Hypertension Control in China: A Modelling Study.

    Directory of Open Access Journals (Sweden)

    Dongfeng Gu

    2015-08-01

    Full Text Available Hypertension is China's leading cardiovascular disease risk factor. Improved hypertension control in China would result in enormous health gains in the world's largest population. A computer simulation model projected the cost-effectiveness of hypertension treatment in Chinese adults, assuming a range of essential medicines list drug costs. The Cardiovascular Disease Policy Model-China, a Markov-style computer simulation model, simulated hypertension screening, essential medicines program implementation, hypertension control program administration, drug treatment and monitoring costs, disease-related costs, and quality-adjusted life years (QALYs) gained by preventing cardiovascular disease or lost because of drug side effects in untreated hypertensive adults aged 35-84 y over 2015-2025. Cost-effectiveness was assessed in cardiovascular disease patients (secondary prevention) and for two blood pressure ranges in primary prevention (stage one, 140-159/90-99 mm Hg; stage two, ≥160/≥100 mm Hg). Treatment of isolated systolic hypertension and combined systolic and diastolic hypertension were modeled as a reduction in systolic blood pressure; treatment of isolated diastolic hypertension was modeled as a reduction in diastolic blood pressure. One-way and probabilistic sensitivity analyses explored ranges of antihypertensive drug effectiveness and costs, monitoring frequency, medication adherence, side effect severity, background hypertension prevalence, antihypertensive medication treatment, case fatality, incidence and prevalence, and cardiovascular disease treatment costs. Median antihypertensive costs from Shanghai and Yunnan province were entered into the model in order to estimate the effects of very low and high drug prices. Incremental cost-effectiveness ratios less than the per capita gross domestic product of China (11,900 international dollars [Int$] in 2015) were considered cost-effective. Treating hypertensive adults with prior
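
    The Cardiovascular Disease Policy Model is far richer than can be shown here, but the core Markov-style bookkeeping, propagating a cohort through health states, accumulating costs and QALYs, and forming an incremental cost-effectiveness ratio (ICER), looks roughly like this Python sketch; every state, probability and cost in it is invented for illustration:

        import numpy as np

        # states: well, CVD, dead -- hypothetical annual transition matrices (rows sum to 1)
        untreated = np.array([[0.96, 0.03, 0.01],
                              [0.00, 0.90, 0.10],
                              [0.00, 0.00, 1.00]])
        treated = np.array([[0.975, 0.016, 0.009],
                            [0.000, 0.920, 0.080],
                            [0.000, 0.000, 1.000]])
        cost_untreated = np.array([0.0, 2000.0, 0.0])     # disease costs per state-year
        cost_treated = np.array([150.0, 2000.0, 0.0])     # adds drug and monitoring costs
        qaly = np.array([1.0, 0.7, 0.0])

        def run(P, c, years=10):
            dist, tot_c, tot_q = np.array([1.0, 0.0, 0.0]), 0.0, 0.0
            for _ in range(years):
                dist = dist @ P                           # one Markov cycle
                tot_c, tot_q = tot_c + dist @ c, tot_q + dist @ qaly
            return tot_c, tot_q

        c0, q0 = run(untreated, cost_untreated)
        c1, q1 = run(treated, cost_treated)
        print("ICER:", (c1 - c0) / (q1 - q0), "per QALY gained")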

  11. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including the comparison of constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

  12. Aggregated Residential Load Modeling Using Dynamic Bayesian Networks

    Energy Technology Data Exchange (ETDEWEB)

    Vlachopoulou, Maria; Chin, George; Fuller, Jason C.; Lu, Shuai

    2014-09-28

    Abstract—It is already obvious that the future power grid will have to address higher demand for power and energy, and to incorporate renewable resources with different energy generation patterns. Demand response (DR) schemes could successfully be used to manage and balance power supply and demand under operating conditions of the future power grid. To achieve that, more advanced tools are necessary for DR management of operations and planning that can estimate the available capacity from DR resources. In this research, a Dynamic Bayesian Network (DBN) is derived, trained, and tested that can model the aggregated load of Heating, Ventilation, and Air Conditioning (HVAC) systems. DBNs can provide flexible and powerful tools for both operations and planning, due to their unique analytical capabilities. The DBN model's accuracy and flexibility of use are demonstrated by testing the model under different operational scenarios.
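
    A dynamic Bayesian network over discrete states supports the forward (filtering) recursion sketched below; this toy Python version tracks a coarse 'HVAC duty level' state from noisy aggregate-load readings and illustrates only the inference step, not the trained model of the paper:

        import numpy as np

        # hidden state: aggregate HVAC duty level (low/medium/high), all numbers invented
        T = np.array([[0.8, 0.2, 0.0],      # P(state_t | state_{t-1}), rows = from-state
                      [0.1, 0.8, 0.1],
                      [0.0, 0.3, 0.7]])
        E = np.array([[0.7, 0.2, 0.1],      # P(observed load bucket | state)
                      [0.2, 0.6, 0.2],
                      [0.1, 0.2, 0.7]])

        belief = np.full(3, 1 / 3)
        for obs in [0, 1, 1, 2, 2]:         # observed load buckets over five time slices
            belief = belief @ T             # predict step across the time slice
            belief *= E[:, obs]             # condition on the new evidence
            belief /= belief.sum()
        print(belief)                       # filtered distribution over duty levels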

  13. A Bayesian Analysis of Unobserved Component Models Using Ox

    Directory of Open Access Journals (Sweden)

    Charles S. Bos

    2011-05-01

    Full Text Available This article details a Bayesian analysis of the Nile river flow data, using a state space model similar to that in other articles in this volume. For this data set, Metropolis-Hastings and Gibbs sampling algorithms are implemented in the programming language Ox. These Markov chain Monte Carlo methods only provide output conditioned upon the full data set. For filtered output, conditioning only on past observations, the particle filter is introduced. The sampling methods are flexible, and this advantage is used to extend the model to incorporate a stochastic volatility process. Volatility changes in both the Nile data and daily S&P 500 return data are investigated. The posterior density of parameters and states is found to provide information on which elements of the model are easily identifiable and which elements are estimated with less precision.
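
    The particle filter introduced for filtered output can be sketched in a few lines for a local-level model such as the Nile series: propagate particles through the state equation, weight them by the observation density, and resample. The Python sketch below uses made-up parameters and simulated data in place of the actual series:

        import numpy as np

        rng = np.random.default_rng(3)
        sig_state, sig_obs, n_part = 15.0, 120.0, 1000
        y = 1000 + 50 * rng.normal(size=100)        # stand-in for the flow series

        particles = rng.normal(1000, 200, size=n_part)
        filtered = []
        for yt in y:
            particles = particles + sig_state * rng.normal(size=n_part)   # state transition
            w = np.exp(-0.5 * ((yt - particles) / sig_obs) ** 2)           # observation weights
            w /= w.sum()
            particles = particles[rng.choice(n_part, size=n_part, p=w)]    # resample
            filtered.append(particles.mean())                              # filtered mean
        print(filtered[-1])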

  14. Fast Bayesian Inference in Dirichlet Process Mixture Models.

    Science.gov (United States)

    Wang, Lianming; Dunson, David B

    2011-01-01

    There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditional on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets, including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.

  15. Model-based dispersive wave processing: A recursive Bayesian solution

    International Nuclear Information System (INIS)

    Candy, J.V.; Chambers, D.H.

    1999-01-01

    Wave propagation through dispersive media represents a significant problem in many acoustic applications, especially in ocean acoustics, seismology, and nondestructive evaluation. In this paper we propose a propagation model that can easily represent many classes of dispersive waves and proceed to develop the model-based solution to the wave processing problem. It is shown that the underlying wave system is nonlinear and time-variable, requiring a recursive processor. Thus the general solution to the model-based dispersive wave enhancement problem is developed using a Bayesian maximum a posteriori (MAP) approach and shown to lead to the recursive, nonlinear extended Kalman filter (EKF) processor. The problem of internal wave estimation is cast within this framework. The specific processor is developed and applied to data synthesized by a sophisticated simulator, demonstrating the feasibility of this approach. copyright 1999 Acoustical Society of America.
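
    For readers who have not met the extended Kalman filter, one predict-update cycle for a scalar nonlinear state-space model is sketched below in Python; the dispersive-wave state equations of the paper are replaced by an arbitrary toy nonlinearity:

        import numpy as np

        def ekf_step(x, P, y, q=0.01, r=0.1):
            """One EKF cycle for x' = f(x) + noise, y = h(x) + noise (toy f and h)."""
            f = lambda x: x + 0.1 * np.sin(x)        # toy state transition
            h = lambda x: x ** 2                     # toy measurement model
            F = 1 + 0.1 * np.cos(x)                  # df/dx: local linearisation
            x_pred = f(x)
            P_pred = F * P * F + q
            H = 2 * x_pred                           # dh/dx at the prediction
            K = P_pred * H / (H * P_pred * H + r)    # Kalman gain
            x_new = x_pred + K * (y - h(x_pred))     # innovation update
            P_new = (1 - K * H) * P_pred
            return x_new, P_new

        x, P = 1.0, 1.0
        for y in [1.2, 1.4, 1.3]:
            x, P = ekf_step(x, P, y)
        print(x, P)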

  16. Advances in Bayesian Model Based Clustering Using Particle Learning

    Energy Technology Data Exchange (ETDEWEB)

    Merl, D M

    2009-11-19

    Recent work by Carvalho, Johannes, Lopes and Polson and Carvalho, Lopes, Polson and Taddy introduced a sequential Monte Carlo (SMC) alternative to traditional iterative Monte Carlo strategies (e.g. MCMC and EM) for Bayesian inference for a large class of dynamic models. The basis of SMC techniques involves representing the underlying inference problem as one of state space estimation, thus giving way to inference via particle filtering. The key insight of Carvalho et al. was to construct the sequence of filtering distributions so as to make use of the posterior predictive distribution of the observable, a distribution usually only accessible in certain Bayesian settings. Access to this distribution allows a reversal of the usual propagate and resample steps characteristic of many SMC methods, thereby alleviating to a large extent many problems associated with particle degeneration. Furthermore, Carvalho et al. point out that for many conjugate models the posterior distribution of the static variables can be parametrized in terms of [recursively defined] sufficient statistics of the previously observed data. For models where such sufficient statistics exist, particle learning, as it is being called, is especially well suited for the analysis of streaming data due to the relative invariance of its algorithmic complexity with the number of data observations. Through a particle learning approach, a statistical model can be fit to data as the data are arriving, allowing at any instant during the observation process direct quantification of the uncertainty surrounding the underlying model parameters. Here we describe the use of a particle learning approach for fitting a standard Bayesian semiparametric mixture model as described in Carvalho, Lopes, Polson and Taddy. In Section 2 we briefly review the previously presented particle learning algorithm for the case of a Dirichlet process mixture of multivariate normals. In Section 3 we describe several novel extensions to the original

  17. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency, whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at the macro level which are generally used for long-term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Estimating the Cost-Effectiveness of HIV Prevention Programmes in Vietnam, 2006-2010: A Modelling Study.

    Directory of Open Access Journals (Sweden)

    Quang Duy Pham

    Full Text Available Vietnam has been largely reliant on international support in its HIV response. Over 2006-2010, a total of US$480 million was invested in its HIV programmes, more than 70% of which came from international sources. This study investigates the potential epidemiological impacts of these programmes and their cost-effectiveness. We conducted a data synthesis of HIV programming, spending, epidemiological, and clinical outcomes. Counterfactual scenarios were defined based on assumed programme coverage and behaviours had the programmes not been implemented. An epidemiological model, calibrated to reflect the actual epidemiological trends, was used to estimate plausible ranges of programme impacts. The model was then used to estimate the costs per averted infection, death, and disability-adjusted life-year (DALY). Based on observed prevalence reductions amongst most population groups, and plausible counterfactuals, modelling suggested that antiretroviral therapy (ART) and prevention programmes over 2006-2010 have averted an estimated 50,600 [95% uncertainty bound: 36,300-68,900] new infections and 42,600 [36,100-54,100] deaths, resulting in 401,600 [312,200-496,300] fewer DALYs across all population groups. HIV programmes in Vietnam have cost an estimated US$1,972 [1,447-2,747], US$2,344 [1,843-2,765], and US$248 [201-319] for each averted infection, death, and DALY, respectively. Our evaluation suggests that HIV programmes in Vietnam have most likely had benefits that are cost-effective. ART and direct HIV prevention were the most cost-effective interventions in reducing HIV disease burden.

  19. Estimating the Cost-Effectiveness of HIV Prevention Programmes in Vietnam, 2006-2010: A Modelling Study

    Science.gov (United States)

    Pham, Quang Duy; Wilson, David P.; Kerr, Cliff C.; Shattock, Andrew J.; Do, Hoa Mai; Duong, Anh Thuy; Nguyen, Long Thanh; Zhang, Lei

    2015-01-01

    Introduction Vietnam has been largely reliant on international support in its HIV response. Over 2006-2010, a total of US$480 million was invested in its HIV programmes, more than 70% of which came from international sources. This study investigates the potential epidemiological impacts of these programmes and their cost-effectiveness. Methods We conducted a data synthesis of HIV programming, spending, epidemiological, and clinical outcomes. Counterfactual scenarios were defined based on assumed programme coverage and behaviours had the programmes not been implemented. An epidemiological model, calibrated to reflect the actual epidemiological trends, was used to estimate plausible ranges of programme impacts. The model was then used to estimate the costs per averted infection, death, and disability adjusted life-year (DALY). Results Based on observed prevalence reductions amongst most population groups, and plausible counterfactuals, modelling suggested that antiretroviral therapy (ART) and prevention programmes over 2006-2010 have averted an estimated 50,600 [95% uncertainty bound: 36,300–68,900] new infections and 42,600 [36,100–54,100] deaths, resulting in 401,600 [312,200–496,300] fewer DALYs across all population groups. HIV programmes in Vietnam have cost an estimated US$1,972 [1,447–2,747], US$2,344 [1,843–2,765], and US$248 [201–319] for each averted infection, death, and DALY, respectively. Conclusions Our evaluation suggests that HIV programmes in Vietnam have most likely had benefits that are cost-effective. ART and direct HIV prevention were the most cost-effective interventions in reducing HIV disease burden. PMID:26196290

  20. Bayesian Geostatistical Modeling of Malaria Indicator Survey Data in Angola

    Science.gov (United States)

    Gosoniu, Laura; Veta, Andre Mia; Vounatsou, Penelope

    2010-01-01

    The 2006–2007 Angola Malaria Indicator Survey (AMIS) is the first nationally representative household survey in the country assessing coverage of the key malaria control interventions and measuring malaria-related burden among children under 5 years of age. In this paper, the Angolan MIS data were analyzed to produce the first smooth map of parasitaemia prevalence based on contemporary nationwide empirical data in the country. Bayesian geostatistical models were fitted to assess the effect of interventions after adjusting for environmental, climatic and socio-economic factors. Non-linear relationships between parasitaemia risk and environmental predictors were modeled by categorizing the covariates and by employing two non-parametric approaches, the B-splines and the P-splines. The results of the model validation showed that the categorical model was able to better capture the relationship between parasitaemia prevalence and the environmental factors. Model fit and prediction were handled within a Bayesian framework using Markov chain Monte Carlo (MCMC) simulations. Combining estimates of parasitaemia prevalence with the number of children under 5, we obtained estimates of the number of infected children in the country. The population-adjusted prevalence ranges from in Namibe province to in Malanje province. The odds of parasitaemia in children living in a household with at least ITNs per person was 41% lower (CI: 14%, 60%) than in those with fewer ITNs. The estimates of the number of parasitaemic children produced in this paper are important for planning and implementing malaria control interventions and for monitoring the impact of prevention and control activities. PMID:20351775

  1. The impact and cost-effectiveness of nonavalent HPV vaccination in the United States: Estimates from a simplified transmission model

    OpenAIRE

    Chesson, Harrell W.; Markowitz, Lauri E.; Hariri, Susan; Ekwueme, Donatus U.; Saraiya, Mona

    2016-01-01

    Introduction: The objective of this study was to assess the incremental costs and benefits of the 9-valent HPV vaccine (9vHPV) compared with the quadrivalent HPV vaccine (4vHPV). Like 4vHPV, 9vHPV protects against HPV types 6, 11, 16, and 18; 9vHPV also protects against 5 additional HPV types: 31, 33, 45, 52, and 58. Methods: We adapted a previously published model of the impact and cost-effectiveness of 4vHPV to include the 5 additional HPV types in 9vHPV. The vaccine strategies we examined w...

  2. Randomized evaluation and cost-effectiveness of HIV and sexual and reproductive health service referral and linkage models in Zambia

    Directory of Open Access Journals (Sweden)

    Paul C. Hewett

    2016-08-01

    Full Text Available Abstract Background Provision of HIV prevention and sexual and reproductive health services in Zambia is largely characterized by discrete service provision with weak client referral and linkage. The literature reveals gaps in the continuity of care for HIV and sexual and reproductive health. This study assessed whether improved service delivery models increased the uptake and cost-effectiveness of HIV and sexual and reproductive health services. Methods Adult clients 18+ years of age accessing family planning (females), HIV testing and counseling (females and males), and male circumcision services (males) were recruited, enrolled and individually randomized to one of three study arms: 1) the standard model of service provision at the entry point (N = 1319); 2) enhanced counseling and referral to an add-on service, with follow-up (N = 1323); and 3) the components of study arm two, with the additional offer of an escort (N = 1321). Interviews were conducted with the same clients at baseline, six weeks and six months. Uptake of services for HIV, family planning, male circumcision, and cervical cancer screening at six weeks and six months were the primary endpoints. Pairwise chi-square and multivariable logistic regression tests assessed differences across study arms, which were also assessed for incremental cost-efficiency and cost-effectiveness. Results A total of 3963 clients, 1920 males and 2043 females, were enrolled; 82% of participants were tracked at six weeks and 81% at six months; follow-up rates did not vary significantly by study arm. The odds of clients accessing HIV testing and counseling, cervical cancer screening services among females, and circumcision services among males varied significantly by study arm at six weeks and six months; less consistent findings were observed for HIV care and treatment. Client uptake of family planning services did not vary significantly by study arm. Integrated services were found

  3. Modeling Land-Use Decision Behavior with Bayesian Belief Networks

    Directory of Open Access Journals (Sweden)

    Inge Aalders

    2008-06-01

    Full Text Available The ability to incorporate and manage the different drivers of land-use change in a modeling process is one of the key challenges because they are complex and are both quantitative and qualitative in nature. This paper uses Bayesian belief networks (BBN to incorporate characteristics of land managers in the modeling process and to enhance our understanding of land-use change based on the limited and disparate sources of information. One of the two models based on spatial data represented land managers in the form of a quantitative variable, the area of individual holdings, whereas the other model included qualitative data from a survey of land managers. Random samples from the spatial data provided evidence of the relationship between the different variables, which I used to develop the BBN structure. The model was tested for four different posterior probability distributions, and results showed that the trained and learned models are better at predicting land use than the uniform and random models. The inference from the model demonstrated the constraints that biophysical characteristics impose on land managers; for older land managers without heirs, there is a higher probability of the land use being arable agriculture. The results show the benefits of incorporating a more complex notion of land managers in land-use models, and of using different empirical data sources in the modeling process. Future research should focus on incorporating more complex social processes into the modeling structure, as well as incorporating spatio-temporal dynamics in a BBN.

  4. Evaluating Flight Crew Performance by a Bayesian Network Model

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2018-03-01

    Full Text Available Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, quantitative detailed behavior information may not be available. The present paper introduces the Bayesian network to perform flight crew performance evaluation, which permits the utilization of multidisciplinary sources of objective and subjective information, despite sparse behavioral data. In this paper, the causal factors are selected based on an analysis of 484 aviation accidents caused by human factors. Then, a network termed the Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports. The conditional probabilities are elicited by the leaky noisy MAX model. Two ways of inference for the BN, probability prediction and probabilistic diagnosis, are used, and some interesting conclusions are drawn, which could provide data support for interventions for human error management in aviation safety.
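
    The two directions of inference mentioned, probability prediction and probabilistic diagnosis, can be seen in a miniature two-node network: prediction runs cause-to-effect, diagnosis applies Bayes' rule effect-to-cause. The Python sketch below uses invented numbers; the paper's network has many more nodes and leaky noisy-MAX conditional probability tables:

        # miniature Bayesian network: crew_error -> incident (all numbers invented)
        p_error = 0.05
        p_incident_given = {True: 0.30, False: 0.01}

        # prediction (cause to effect): P(incident)
        p_incident = (p_incident_given[True] * p_error
                      + p_incident_given[False] * (1 - p_error))

        # diagnosis (effect to cause): P(crew_error | incident) by Bayes' rule
        p_error_given_incident = p_incident_given[True] * p_error / p_incident
        print(round(p_incident, 4), round(p_error_given_incident, 4))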

  5. GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    The realized stochastic volatility (RSV) model, which utilizes the realized volatility as additional information, has been proposed to infer the volatility of financial time series. We consider Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on a GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time of the HMC algorithm on a GPU (GTX 760) and a CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a speedup similar to that of CUDA Fortran.
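
    The HMC algorithm at the heart of this record alternates momentum refreshment with leapfrog integration of Hamilton's equations, followed by a Metropolis accept/reject test. The sketch below targets a simple one-dimensional Gaussian in plain Python, a CPU analogue of the kernel the paper parallelises on the GPU:

        import numpy as np

        rng = np.random.default_rng(4)
        U = lambda q: 0.5 * q ** 2          # negative log target (standard normal)
        dU = lambda q: q

        def hmc_step(q, eps=0.1, L=20):
            p = rng.normal()                # refresh momentum
            q_new, p_new = q, p
            for _ in range(L):              # leapfrog integration of Hamilton's equations
                p_new -= 0.5 * eps * dU(q_new)
                q_new += eps * p_new
                p_new -= 0.5 * eps * dU(q_new)
            dH = U(q_new) + 0.5 * p_new ** 2 - U(q) - 0.5 * p ** 2
            return q_new if np.log(rng.uniform()) < -dH else q   # Metropolis test

        q, samples = 0.0, []
        for _ in range(5000):
            q = hmc_step(q)
            samples.append(q)
        print(np.mean(samples), np.var(samples))    # should be near 0 and 1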

  6. A cost-effectiveness analysis of a proactive management strategy for the Sprint Fidelis recall: a probabilistic decision analysis model.

    Science.gov (United States)

    Bashir, Jamil; Cowan, Simone; Raymakers, Adam; Yamashita, Michael; Danter, Matthew; Krahn, Andrew; Lynd, Larry D

    2013-12-01

    The management of the recall is complicated by the competing risks of lead failure and complications that can occur with lead revision. Many of these patients are currently undergoing an elective generator change, an ideal time to consider lead revision. To determine the cost-effectiveness of a proactive management strategy for the Sprint Fidelis recall, we obtained detailed clinical outcomes and costing data from a retrospective analysis of 341 patients who received the Sprint Fidelis lead in British Columbia, where patients younger than 60 years were offered lead extraction when undergoing generator replacement. These population-based data were used to construct and populate a probabilistic Markov model in which a proactive management strategy was compared to a conservative strategy to determine the incremental cost per lead failure avoided. In our population, elective lead revisions were half the cost of emergent revisions and had a lower complication rate. In the model, the incremental cost-effectiveness ratio of proactive lead revision versus the recommended monitoring strategy was $12,779 per lead failure avoided. The proactive strategy resulted in 21 fewer failures per 100 patients treated and reduced the chance of an additional complication from unexpected surgery. Cost-effectiveness analysis suggests that prospective lead revision should be considered when patients with a Sprint Fidelis lead present for pulse generator change. Elective revision of the lead is justified even when 25% of the population is operated on per year, and in some scenarios it is both less costly and provides a better outcome. © 2013 Heart Rhythm Society. Published by Heart Rhythm Society. All rights reserved.

  7. Modelling of population dynamics of red king crab using Bayesian approach

    Directory of Open Access Journals (Sweden)

    Bakanev Sergey

    2012-10-01

    Modeling population dynamics based on the Bayesian approach makes it possible to resolve the issues above successfully. The integration of data from various studies into a unified model, based on the Bayesian parameter estimation method, provides a much more detailed description of the processes occurring in the population.

  8. Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It

    NARCIS (Netherlands)

    Grünwald, P.; van Ommen, T.

    2017-01-01

    We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data are

  9. Dynamic Bayesian Network Modeling of Game Based Diagnostic Assessments. CRESST Report 837

    Science.gov (United States)

    Levy, Roy

    2014-01-01

    Digital games offer an appealing environment for assessing student proficiencies, including skills and misconceptions in a diagnostic setting. This paper proposes a dynamic Bayesian network modeling approach for observations of student performance from an educational video game. A Bayesian approach to model construction, calibration, and use in…

  10. Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it

    NARCIS (Netherlands)

    P.D. Grünwald (Peter); T. van Ommen (Thijs)

    2017-01-01

    We empirically show that Bayesian inference can be inconsistent under misspecification in simple linear regression problems, both in a model averaging/selection and in a Bayesian ridge regression setting. We use the standard linear model, which assumes homoskedasticity, whereas the data

  11. BDgraph: An R Package for Bayesian Structure Learning in Graphical Models

    NARCIS (Netherlands)

    Mohammadi, A.; Wit, E.C.

    2017-01-01

    Graphical models provide powerful tools to uncover complicated patterns in multivariate data and are commonly used in Bayesian statistics and machine learning. In this paper, we introduce an R package BDgraph which performs Bayesian structure learning for general undirected graphical models with

  12. [Threshold value for reimbursement of costs of new drugs: cost-effectiveness research and modelling are essential links].

    Science.gov (United States)

    Frederix, Geert W J; Hövels, Anke M; Severens, Johan L; Raaijmakers, Jan A M; Schellens, Jan H M

    2015-01-01

    There is increasing discussion in the Netherlands about the introduction of a threshold value for the cost per extra year of life when reimbursing the costs of new drugs. The Medicines Committee ('Commissie Geneesmiddelen'), a division of the Netherlands National Healthcare Institute ('Zorginstituut Nederland'), advises on the reimbursement of the costs of new drugs. This advice is based upon a determination of the therapeutic value of the drug and the results of economic evaluations. Mathematical models that predict future costs and effectiveness are often used in economic evaluations; these models can vary greatly in transparency and quality owing to author assumptions. Standardisation of cost-effectiveness models is one solution to overcome this unwanted variation in quality. Discussions about the introduction of a threshold value can only be meaningful if all involved are adequately informed and if cost-effectiveness research, and economic evaluations in particular, are of high quality. Collaboration and discussion between medical specialists, patients or patient organisations, health economists and policy makers, both in the development of methods and in standardisation, are essential to improve the quality of decision making.

  13. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.

  14. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, when one considers other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism be available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum-likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
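
    One concrete rendering of 'learn structure and parameters together' is to fit HMMs of increasing size and score them with a Bayesian-flavoured criterion such as BIC. The Python sketch below does this with the hmmlearn package (which also initialises means by K-means clustering); the parameter count assumes a diagonal-covariance Gaussian HMM and the data are synthetic:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(5)
        X = np.concatenate([rng.normal(0, 1, 300),
                            rng.normal(5, 1, 300)]).reshape(-1, 1)   # two-regime toy data

        def bic(model, X):
            k, d = model.n_components, X.shape[1]
            n_params = (k - 1) + k * (k - 1) + 2 * k * d   # start, transitions, means, variances
            return -2 * model.score(X) + n_params * np.log(len(X))

        scores = {}
        for k in range(1, 5):
            m = GaussianHMM(n_components=k, covariance_type="diag",
                            n_iter=100, random_state=0).fit(X)
            scores[k] = bic(m, X)
        print(min(scores, key=scores.get), scores)     # expect two states to win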

  15. Bayesian network models for error detection in radiotherapy plans

    International Nuclear Information System (INIS)

    Kalet, Alan M; Ford, Eric C; Phillips, Mark H; Gennari, John H

    2015-01-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network's conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts' performance (AUC of 0.90 ± 0.01) shows the Bayesian network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.

  16. 'Traffic-light' nutrition labelling and 'junk-food' tax: a modelled comparison of cost-effectiveness for obesity prevention.

    Science.gov (United States)

    Sacks, G; Veerman, J L; Moodie, M; Swinburn, B

    2011-07-01

    Cost-effectiveness analyses are important tools in efforts to prioritise interventions for obesity prevention. Modelling facilitates evaluation of multiple scenarios with varying assumptions. This study compares the cost-effectiveness of conservative scenarios for two commonly proposed policy-based interventions: front-of-pack 'traffic-light' nutrition labelling (traffic-light labelling) and a tax on unhealthy foods ('junk-food' tax). For traffic-light labelling, estimates of changes in energy intake were based on an assumed 10% shift in consumption towards healthier options in four food categories (breakfast cereals, pastries, sausages and preprepared meals) in 10% of adults. For the 'junk-food' tax, price elasticities were used to estimate a change in energy intake in response to a 10% price increase in seven food categories (including soft drinks, confectionery and snack foods). Changes in population weight and body mass index by sex were then estimated based on these changes in population energy intake, along with subsequent impacts on disability-adjusted life years (DALYs). Associated resource use was measured and costed using pathway analysis, based on a health sector perspective (with some industry costs included). Costs and health outcomes were discounted at 3%. The cost-effectiveness of each intervention was modelled for the 2003 Australian adult population. Both interventions resulted in reduced mean weight (traffic-light labelling: 1.3 kg (95% uncertainty interval (UI): 1.2; 1.4); 'junk-food' tax: 1.6 kg (95% UI: 1.5; 1.7)); and DALYs averted (traffic-light labelling: 45,100 (95% UI: 37,700; 60,100); 'junk-food' tax: 559,000 (95% UI: 459,500; 676,000)). Cost outlays were AUD81 million (95% UI: 44.7; 108.0) for traffic-light labelling and AUD18 million (95% UI: 14.4; 21.6) for 'junk-food' tax. Cost-effectiveness analysis showed both interventions were 'dominant' (effective and cost-saving). Policy-based population-wide interventions such as traffic

  17. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin

    2015-01-01

    In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models ... error, and robustness in low and medium signal-to-noise ratio regimes.

  18. Markov modeling for the neurosurgeon: a review of the literature and an introduction to cost-effectiveness research.

    Science.gov (United States)

    Wali, Arvin R; Brandel, Michael G; Santiago-Dieppa, David R; Rennert, Robert C; Steinberg, Jeffrey A; Hirshman, Brian R; Murphy, James D; Khalessi, Alexander A

    2018-05-01

    OBJECTIVE Markov modeling is a clinical research technique that allows competing medical strategies to be mathematically assessed in order to identify the optimal allocation of health care resources. The authors present a review of the recently published neurosurgical literature that employs Markov modeling and provide a conceptual framework with which to evaluate, critique, and apply the findings generated from health economics research. METHODS The PubMed online database was searched to identify neurosurgical literature published from January 2010 to December 2017 that had utilized Markov modeling for neurosurgical cost-effectiveness studies. Included articles were then assessed with regard to year of publication, subspecialty of neurosurgery, decision analytical techniques utilized, and source information for model inputs. RESULTS A total of 55 articles utilizing Markov models were identified across a broad range of neurosurgical subspecialties. Sixty-five percent of the papers were published within the past 3 years alone. The majority of models derived health transition probabilities, health utilities, and cost information from previously published studies or publicly available information. Only 62% of the studies incorporated indirect costs. Ninety-three percent of the studies performed a 1-way or 2-way sensitivity analysis, and 67% performed a probabilistic sensitivity analysis. A review of the conceptual framework of Markov modeling and an explanation of the different terminology and methodology are provided. CONCLUSIONS As neurosurgeons continue to innovate and identify novel treatment strategies for patients, Markov modeling will allow for better characterization of the impact of these interventions on a patient and societal level. The aim of this work is to equip the neurosurgical readership with the tools to better understand, critique, and apply findings produced from cost-effectiveness research.
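
    As a pointer for readers new to the probabilistic sensitivity analyses the review tallies, the Python sketch below draws model inputs from assumed distributions, recomputes incremental costs and effects, and reports the probability that an intervention is cost-effective at a willingness-to-pay threshold; the distributions and threshold are placeholders:

        import numpy as np

        rng = np.random.default_rng(6)
        n_draws, wtp = 10_000, 50_000                 # simulations; $ per QALY threshold (assumed)

        # parameter uncertainty: incremental cost ~ gamma, incremental QALYs ~ normal
        d_cost = rng.gamma(shape=4.0, scale=2_500.0, size=n_draws)
        d_qaly = rng.normal(0.30, 0.10, size=n_draws)

        net_benefit = wtp * d_qaly - d_cost           # incremental net monetary benefit
        print("P(cost-effective):", np.mean(net_benefit > 0))
        print("ICER at the means:", d_cost.mean() / d_qaly.mean())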

  19. Large scale Bayesian nuclear data evaluation with consistent model defects

    International Nuclear Information System (INIS)

    Schnabel, G

    2015-01-01

    The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities are dependent on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, which is the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, neglecting this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows one to explicitly estimate the magnitude of the model deficiency. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
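
    The 'random function' model defect described here is, in essence, Gaussian-process regression applied to the model-minus-experiment discrepancy. The Python sketch below computes the closed-form GP posterior mean of the defect under a squared-exponential kernel; the toy 'model', the kernel hyperparameters and the noise level are all assumptions:

        import numpy as np

        def sq_exp(a, b, ell=0.5, s2=1.0):
            return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

        rng = np.random.default_rng(7)
        x_obs = np.linspace(0, 2, 15)
        model = lambda x: np.exp(-x)                        # stand-in for a nuclear model
        truth = lambda x: np.exp(-x) + 0.1 * np.sin(3 * x)  # model plus a systematic defect
        y_obs = truth(x_obs) + 0.02 * rng.normal(size=15)

        # GP posterior mean of the defect given the residuals y - model(x)
        resid = y_obs - model(x_obs)
        K = sq_exp(x_obs, x_obs) + 0.02 ** 2 * np.eye(15)   # kernel plus noise variance
        x_new = np.linspace(0, 2, 100)
        defect_mean = sq_exp(x_new, x_obs) @ np.linalg.solve(K, resid)
        corrected = model(x_new) + defect_mean              # defect-corrected prediction
        print(np.max(np.abs(corrected - truth(x_new))))     # small residual error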

  20. Bayesian inference and model comparison for metallic fatigue data

    KAUST Repository

    Babuska, Ivo

    2016-01-06

    In this work, we present a statistical treatment of stress-life (S-N) data drawn from a collection of records of fatigue experiments that were performed on 75S-T6 aluminum alloys. Our main objective is to predict the fatigue life of materials by providing a systematic approach to model calibration, model selection and model ranking with reference to S-N data. To this purpose, we consider fatigue-limit models and random fatigue-limit models that are specially designed to allow the treatment of the run-outs (right-censored data). We first fit the models to the data by maximum likelihood methods and estimate the quantiles of the life distribution of the alloy specimen. We then compare and rank the models by classical measures of fit based on information criteria. We also consider a Bayesian approach that provides, under the prior distribution of the model parameters selected by the user, their simulation-based posterior distributions.
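
    The run-outs mentioned here enter the likelihood through survival terms: observed failures contribute the log-density, censored tests the log survival function. The Python sketch below fits lognormal and Weibull life models to synthetic censored data by maximum likelihood and ranks them by AIC, in the spirit of, but far simpler than, the paper's comparison:

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        rng = np.random.default_rng(8)
        life = rng.lognormal(mean=12.0, sigma=0.5, size=60)   # synthetic fatigue lives (cycles)
        censor = 400_000.0                                    # tests stopped here (run-outs)
        t, failed = np.minimum(life, censor), life <= censor

        def nll(params, dist):
            shape, scale = np.exp(params)                     # keep both parameters positive
            return -(dist.logpdf(t[failed], shape, scale=scale).sum()
                     + dist.logsf(t[~failed], shape, scale=scale).sum())

        aic = {}
        for name, dist in [("lognormal", stats.lognorm), ("weibull", stats.weibull_min)]:
            fit = minimize(nll, np.log([0.5, 150_000.0]), args=(dist,), method="Nelder-Mead")
            aic[name] = 2 * fit.fun + 2 * 2                   # AIC = 2*NLL + 2*k, with k = 2
        print(aic)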

  1. iSEDfit: Bayesian spectral energy distribution modeling of galaxies

    Science.gov (United States)

    Moustakas, John

    2017-08-01

    iSEDfit uses Bayesian inference to extract the physical properties of galaxies from their observed broadband photometric spectral energy distribution (SED). In its default mode, the inputs to iSEDfit are the measured photometry (fluxes and corresponding inverse variances) and a measurement of the galaxy redshift. Alternatively, iSEDfit can be used to estimate photometric redshifts from the input photometry alone. After the priors have been specified, iSEDfit calculates the marginalized posterior probability distributions for the physical parameters of interest, including the stellar mass, star-formation rate, dust content, star formation history, and stellar metallicity. iSEDfit also optionally computes K-corrections and produces multiple "quality assurance" (QA) plots at each stage of the modeling procedure to aid in the interpretation of the prior parameter choices and subsequent fitting results. The software is distributed as part of the impro IDL suite.

  2. A Bayesian modelling framework for tornado occurrences in North America.

    Science.gov (United States)

    Cheng, Vincent Y S; Arhonditsis, George B; Sills, David M L; Gough, William A; Auld, Heather

    2015-03-25

    Tornadoes represent one of nature's most hazardous phenomena and have been responsible for significant destruction and devastating fatalities. Here we present a Bayesian modelling approach for elucidating the spatiotemporal patterns of tornado activity in North America. Our analysis shows a significant increase in tornado activity in the Canadian Prairies and the Northern Great Plains during the summer, indicating a clear transition of tornado activity from the United States to Canada. The linkage between monthly-averaged atmospheric variables and the likelihood of tornado events is characterized by distinct seasonality: the convective available potential energy is the predominant factor in the summer; vertical wind shear appears to have a strong signature primarily in the winter and secondarily in the summer; and storm relative environmental helicity is most influential in the spring. The present probabilistic mapping can be used to draw inference on the likelihood of tornado occurrence at any location in North America within a selected time period of the year.

  3. Designing and testing inflationary models with Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Price, Layne C. [Carnegie Mellon Univ., Pittsburgh, PA (United States). Dept. of Physics; Auckland Univ. (New Zealand). Dept. of Physics; Peiris, Hiranya V. [Univ. College London (United Kingdom). Dept. of Physics and Astronomy; Frazer, Jonathan [DESY Hamburg (Germany). Theory Group; Univ. of the Basque Country, Bilbao (Spain). Dept. of Theoretical Physics; Basque Foundation for Science, Bilbao (Spain). IKERBASQUE; Easther, Richard [Auckland Univ. (New Zealand). Dept. of Physics

    2015-11-15

    Even simple inflationary scenarios have many free parameters. Beyond the variables appearing in the inflationary action, these include dynamical initial conditions, the number of fields, and couplings to other sectors. These quantities are often ignored but cosmological observables can depend on the unknown parameters. We use Bayesian networks to account for a large set of inflationary parameters, deriving generative models for the primordial spectra that are conditioned on a hierarchical set of prior probabilities describing the initial conditions, reheating physics, and other free parameters. We use N{sub f}-quadratic inflation as an illustrative example, finding that the number of e-folds N{sub *} between horizon exit for the pivot scale and the end of inflation is typically the most important parameter, even when the number of fields, their masses and initial conditions are unknown, along with possible conditional dependencies between these parameters.

  4. Designing and testing inflationary models with Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Price, Layne C. [McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 (United States); Peiris, Hiranya V. [Department of Physics and Astronomy, University College London, London WC1E 6BT (United Kingdom); Frazer, Jonathan [Deutsches Elektronen-Synchrotron DESY, Theory Group, 22603 Hamburg (Germany); Easther, Richard, E-mail: laynep@andrew.cmu.edu, E-mail: h.peiris@ucl.ac.uk, E-mail: jonathan.frazer@desy.de, E-mail: r.easther@auckland.ac.nz [Department of Physics, University of Auckland, Private Bag 92019, Auckland (New Zealand)

    2016-02-01

    Even simple inflationary scenarios have many free parameters. Beyond the variables appearing in the inflationary action, these include dynamical initial conditions, the number of fields, and couplings to other sectors. These quantities are often ignored but cosmological observables can depend on the unknown parameters. We use Bayesian networks to account for a large set of inflationary parameters, deriving generative models for the primordial spectra that are conditioned on a hierarchical set of prior probabilities describing the initial conditions, reheating physics, and other free parameters. We use N{sub f}-quadratic inflation as an illustrative example, finding that the number of e-folds N{sub *} between horizon exit for the pivot scale and the end of inflation is typically the most important parameter, even when the number of fields, their masses and initial conditions are unknown, along with possible conditional dependencies between these parameters.

  5. Designing and testing inflationary models with Bayesian networks

    International Nuclear Information System (INIS)

    Price, Layne C.; Auckland Univ.; Peiris, Hiranya V.; Frazer, Jonathan; Univ. of the Basque Country, Bilbao; Basque Foundation for Science, Bilbao; Easther, Richard

    2015-11-01

    Even simple inflationary scenarios have many free parameters. Beyond the variables appearing in the inflationary action, these include dynamical initial conditions, the number of fields, and couplings to other sectors. These quantities are often ignored but cosmological observables can depend on the unknown parameters. We use Bayesian networks to account for a large set of inflationary parameters, deriving generative models for the primordial spectra that are conditioned on a hierarchical set of prior probabilities describing the initial conditions, reheating physics, and other free parameters. We use N_f-quadratic inflation as an illustrative example, finding that the number of e-folds N_* between horizon exit for the pivot scale and the end of inflation is typically the most important parameter, even when the number of fields, their masses and initial conditions are unknown, along with possible conditional dependencies between these parameters.

  6. Forecast Accuracy and Economic Gains from Bayesian Model Averaging using Time Varying Weights

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); R.H. Kleijn (Richard); H.K. van Dijk (Herman); M.J.C.M. Verbeek (Marno)

    2009-01-01

    Several Bayesian model combination schemes, including some novel approaches that simultaneously allow for parameter uncertainty, model uncertainty and robust time-varying model weights, are compared in terms of forecast accuracy and economic gains using financial and macroeconomic time

  7. Evidence on Features of a DSGE Business Cycle Model from Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2012-01-01

    The empirical support for features of a Dynamic Stochastic General Equilibrium model with two technology shocks is evaluated using Bayesian model averaging over vector autoregressions. The model features include equilibria, restrictions on long-run responses, a structural break of unknown

  8. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically dispersed locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlights the performance of PTMs in the presence of nonnormality of effect measures in the source population.
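
    As a point of reference, the parametric baseline that the PTM relaxes can be sketched in a few lines: a normal random-effects meta-analysis fitted by Gibbs sampling. The data, hyperprior settings, and names below are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical study-level summaries: effect estimates y_i (e.g. log odds
# ratios) with known standard errors s_i from k studies.
y = np.array([0.30, 0.15, 0.42, -0.05, 0.25, 0.38])
s = np.array([0.12, 0.20, 0.15, 0.25, 0.10, 0.18])
k = len(y)

# Normal random-effects model: y_i ~ N(theta_i, s_i^2), theta_i ~ N(mu, tau^2),
# with assumed hyperpriors mu ~ N(0, 100) and tau^2 ~ Inverse-Gamma(1, 0.1).
a0, b0, m0, v0 = 1.0, 0.1, 0.0, 100.0
mu, tau2 = 0.0, 0.1
draws = []
for it in range(6000):
    # theta_i | rest: precision-weighted compromise between y_i and mu
    prec = 1.0 / s**2 + 1.0 / tau2
    theta = rng.normal((y / s**2 + mu / tau2) / prec, np.sqrt(1.0 / prec))
    # mu | rest
    pm = k / tau2 + 1.0 / v0
    mu = rng.normal((theta.sum() / tau2 + m0 / v0) / pm, np.sqrt(1.0 / pm))
    # tau^2 | rest: conjugate inverse-gamma update
    tau2 = 1.0 / rng.gamma(a0 + k / 2, 1.0 / (b0 + 0.5 * ((theta - mu)**2).sum()))
    if it >= 1000:
        draws.append((mu, tau2))

mu_s, tau2_s = np.array(draws).T
print(f"pooled effect: {mu_s.mean():.3f}, between-study sd: {np.sqrt(tau2_s).mean():.3f}")
```

    A PTM replaces the normal distribution on the latent effects theta_i with a Polya tree mixture, so the sensitivity analysis described in the abstract amounts to comparing inferences for mu under the two random-effects priors.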

  9. Is computer aided detection (CAD) cost effective in screening mammography? A model based on the CADET II study

    Science.gov (United States)

    2011-01-01

    Background Single reading with computer aided detection (CAD) is an alternative to double reading for detecting cancer in screening mammograms. The aim of this study is to investigate whether the use of a single reader with CAD is more cost-effective than double reading. Methods Based on data from the CADET II study, the cost-effectiveness of single reading with CAD versus double reading was measured in terms of cost per cancer detected. Costs (£, year 2007/08) of single reading with CAD versus double reading were estimated assuming a health and social service perspective and a 7 year time horizon. As the equipment cost varies with unit size, a separate analysis was conducted for high, average and low volume screening units. One-way sensitivity analyses were performed by varying the reading time, equipment and assessment cost, recall rate and reader qualification. Results The introduction of CAD is cost-increasing for all sizes of screening unit, because the cost of CAD equipment, staff training and the higher assessment cost associated with CAD exceed the savings in reading costs. The introduction of single reading with CAD, in place of double reading, would produce an additional cost of £227 and £253 per 1,000 women screened in high and average volume units respectively. In low volume screening units, the high cost of purchasing the equipment will result in an additional cost of £590 per 1,000 women screened. One-way sensitivity analysis showed that the factors having the greatest effect on the cost-effectiveness of CAD with single reading compared with double reading were the reading time and the reader's professional qualification (radiologist versus advanced practitioner). Conclusions Without improvements in CAD effectiveness (e.g. a decrease in the recall rate) CAD is unlikely to be a cost-effective alternative to double reading for mammography screening in the UK. This study
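
    The cost comparison reported above reduces to simple arithmetic once unit costs are fixed. The sketch below illustrates the calculation with invented placeholder costs (not the CADET II values); it reproduces the qualitative finding that a fixed equipment cost weighs most heavily on low-volume units.

```python
# All unit costs are hypothetical placeholders, not the CADET II inputs.

def extra_cost_per_1000_women(cost_read_double, cost_read_single_cad,
                              cad_equipment_annualised, women_per_year,
                              extra_recalls_per_1000, cost_per_assessment):
    """Incremental cost of single reading with CAD vs double reading,
    per 1,000 women screened."""
    reading_saving = cost_read_double - cost_read_single_cad      # per woman
    equipment = cad_equipment_annualised / women_per_year         # per woman
    assessment = extra_recalls_per_1000 * cost_per_assessment / 1000.0
    return 1000.0 * (equipment + assessment - reading_saving)

for label, volume in [("high volume", 30000), ("low volume", 8000)]:
    extra = extra_cost_per_1000_women(
        cost_read_double=5.0, cost_read_single_cad=3.5,
        cad_equipment_annualised=40000.0, women_per_year=volume,
        extra_recalls_per_1000=2.0, cost_per_assessment=120.0)
    print(f"{label}: additional cost of about £{extra:.0f} per 1,000 women")
```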

  10. Bayesian modeling of ChIP-chip data using latent variables.

    KAUST Repository

    Wu, Mingqi

    2009-10-26

    BACKGROUND: The ChIP-chip technology has been used in a wide range of biomedical studies, such as identification of human transcription factor binding sites, investigation of DNA methylation, and investigation of histone modifications in animals and plants. Various methods have been proposed in the literature for analyzing the ChIP-chip data, such as the sliding window methods, the hidden Markov model-based methods, and Bayesian methods. Although Bayesian methods can potentially work better than the other two classes of methods, owing to their integrated treatment of uncertainty in the models and model parameters, the existing Bayesian methods do not perform satisfactorily. They usually require multiple replicates or some extra experimental information to parametrize the model, and long CPU times due to the MCMC simulations involved. RESULTS: In this paper, we propose a Bayesian latent model for the ChIP-chip data. The new model mainly differs from the existing Bayesian models, such as the joint deconvolution model, the hierarchical gamma mixture model, and the Bayesian hierarchical model, in two respects. Firstly, it works on the difference between the averaged treatment and control samples. This enables the use of a simple model for the data, which avoids the probe-specific effect and the sample (control/treatment) effect. As a consequence, this enables an efficient MCMC simulation of the posterior distribution of the model, and also makes the model more robust to outliers. Secondly, it models the neighboring dependence of probes by introducing a latent indicator vector. A truncated Poisson prior distribution is assumed for the latent indicator variable, with the rationale being justified at length. CONCLUSION: The Bayesian latent method is successfully applied to real and ten simulated datasets, with comparisons with some of the existing Bayesian methods, hidden Markov model methods, and sliding window methods. The numerical results indicate that the

  11. A novel Bayesian hierarchical model for road safety hotspot prediction.

    Science.gov (United States)

    Fawcett, Lee; Thorpe, Neil; Matthews, Joseph; Kremer, Karsten

    2017-02-01

    In this paper, we propose a Bayesian hierarchical model for predicting accident counts in future years at sites within a pool of potential road safety hotspots. The aim is to inform road safety practitioners of the location of likely future hotspots to enable a proactive, rather than reactive, approach to road safety scheme implementation. A feature of our model is the ability to rank sites according to their potential to exceed, in some future time period, a threshold accident count which may be used as a criterion for scheme implementation. Our model specification enables the classical empirical Bayes formulation - commonly used in before-and-after studies, wherein accident counts from a single before period are used to estimate counterfactual counts in the after period - to be extended to incorporate counts from multiple time periods. This allows site-specific variations in historical accident counts (e.g. locally-observed trends) to offset estimates of safety generated by a global accident prediction model (APM), which itself is used to help account for the effects of global trend and regression-to-mean (RTM). The Bayesian posterior predictive distribution is exploited to formulate predictions and to properly quantify our uncertainty in these predictions. The main contributions of our model include (i) the ability to allow accident counts from multiple time-points to inform predictions, with counts in more recent years lending more weight to predictions than counts from time-points further in the past; (ii) where appropriate, the ability to offset global estimates of trend by variations in accident counts observed locally, at a site-specific level; and (iii) the ability to account for unknown/unobserved site-specific factors which may affect accident counts. We illustrate our model with an application to accident counts at 734 potential hotspots in the German city of Halle; we also propose some simple diagnostics to validate the predictive capability of our model.
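
    A minimal version of the posterior predictive ranking idea can be written down with a Poisson-gamma model. The decay weighting, prior settings, and threshold below are illustrative assumptions standing in for the paper's full hierarchical specification.

```python
import numpy as np
from scipy.stats import nbinom

# Hypothetical annual accident counts for three sites, oldest year first.
sites = {"A": [3, 5, 4, 7, 9], "B": [6, 4, 5, 3, 2], "C": [1, 2, 2, 4, 6]}

# Gamma(a0, b0) prior standing in for the global accident prediction model
# (APM): a0 / b0 is the APM's expected annual count for sites of this type.
a0, b0 = 4.0, 1.0

def exceedance_prob(counts, threshold, decay=0.8):
    """P(next-year count >= threshold) under a Poisson-gamma model in which
    older years are geometrically down-weighted -- a crude heuristic for the
    paper's feature that recent counts lend more weight to predictions."""
    counts = np.asarray(counts, dtype=float)
    w = decay ** np.arange(len(counts) - 1, -1, -1)   # most recent weight = 1
    a_post = a0 + np.sum(w * counts)
    b_post = b0 + np.sum(w)
    # Posterior predictive is negative binomial: r = a_post,
    # p = b_post / (b_post + 1)
    return nbinom.sf(threshold - 1, a_post, b_post / (b_post + 1.0))

for site in sorted(sites, key=lambda s: -exceedance_prob(sites[s], 8)):
    print(site, round(float(exceedance_prob(sites[site], 8)), 3))
```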

  12. Bayesian Network Webserver: a comprehensive tool for biological network modeling.

    Science.gov (United States)

    Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan

    2013-11-01

    The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by current versions of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW). ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.

  13. Bayesian model ensembling using meta-trained recurrent neural networks

    NARCIS (Netherlands)

    Ambrogioni, L.; Berezutskaya, Y.; Güçlü, U.; Borne, E.W.P. van den; Güçlütürk, Y.; Gerven, M.A.J. van; Maris, E.G.G.

    2017-01-01

    In this paper we demonstrate that a recurrent neural network meta-trained on an ensemble of arbitrary classification tasks can be used as an approximation of the Bayes optimal classifier. This result is obtained by relying on the framework of ε-free approximate Bayesian inference, where the Bayesian

  14. A 'cost-effective' probabilistic model to select the dominant factors affecting the variation of the component failure rate

    International Nuclear Information System (INIS)

    Kirchsteiger, C.

    1992-11-01

    Within the framework of a Probabilistic Safety Assessment (PSA), the component failure rate λ is a key parameter in the sense that the study of its behavior gives the essential information for estimating the current values as well as the trends in the failure probabilities of interest. Since there is an infinite variety of possible underlying factors which might cause changes in λ (e.g. operating time, maintenance practices, component environment, etc.), an 'importance ranking' of these factors is most desirable to prioritize research efforts. To be 'cost-effective', the modeling effort must be small, i.e. it must involve essentially no estimation of additional parameters other than λ itself. In this paper, using a multivariate data analysis technique and various statistical measures, such a 'cost-effective' screening process has been developed. Dominant factors affecting the failure rate of any component of interest can easily be identified, and the appropriateness of current research plans (e.g. on the necessity of performing aging studies) can be validated. (author)

  15. Incorporating Parameter Uncertainty in Bayesian Segmentation Models: Application to Hippocampal Subfield Volumetry

    DEFF Research Database (Denmark)

    Iglesias, J. E.; Sabuncu, M. R.; Van Leemput, Koen

    2012-01-01

    Many successful segmentation algorithms are based on Bayesian models in which prior anatomical knowledge is combined with the available image information. However, these methods typically have many free parameters that are estimated to obtain point estimates only, whereas a faithful Bayesian analysis...

  16. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    Science.gov (United States)

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  17. Bayesian linear regression : different conjugate models and their (in)sensitivity to prior-data conflict

    NARCIS (Netherlands)

    Walter, G.M.; Augustin, Th.; Kneib, Thomas; Tutz, Gerhard

    2010-01-01

    The paper is concerned with Bayesian analysis under prior-data conflict, i.e. the situation when observed data are rather unexpected under the prior (and the sample size is not large enough to eliminate the influence of the prior). Two approaches for Bayesian linear regression modeling based on

  18. Rational Irrationality: Modeling Climate Change Belief Polarization Using Bayesian Networks.

    Science.gov (United States)

    Cook, John; Lewandowsky, Stephan

    2016-01-01

    Belief polarization is said to occur when two people respond to the same evidence by updating their beliefs in opposite directions. This response is considered to be "irrational" because it involves contrary updating, a form of belief updating that appears to violate normatively optimal responding, as for example dictated by Bayes' theorem. In light of much evidence that people are capable of normatively optimal behavior, belief polarization presents a puzzling exception. We show that Bayesian networks, or Bayes nets, can simulate rational belief updating. When fit to experimental data, Bayes nets can help identify the factors that contribute to polarization. We present a study into belief updating concerning the reality of climate change in response to information about the scientific consensus on anthropogenic global warming (AGW). The study used representative samples of Australian and U.S. participants. Among Australians, consensus information partially neutralized the influence of worldview, with free-market supporters showing a greater increase in acceptance of human-caused global warming relative to free-market opponents. In contrast, while consensus information overall had a positive effect on perceived consensus among U.S. participants, there was a reduction in perceived consensus and acceptance of human-caused global warming for strong supporters of unregulated free markets. Fitting a Bayes net model to the data indicated that under a Bayesian framework, free-market support is a significant driver of beliefs about climate change and trust in climate scientists. Further, active distrust of climate scientists among a small number of U.S. conservatives drives contrary updating in response to consensus information among this particular group. Copyright © 2016 Cognitive Science Society, Inc.
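
    The "contrary updating" mechanism lends itself to a small worked example. The sketch below conditions a two-hypothesis Bayes net on trust in climate scientists; the structure and all probabilities are invented for illustration and are not the fitted model from the study.

```python
# Minimal illustration of contrary updating, with invented numbers.
# P(message | AGW, trust): a trusting agent treats the consensus message as
# evidence for AGW; a distrusting agent treats it as evidence of bias.
p_agw = 0.5                                        # shared prior belief in AGW
p_msg = {(True, True): 0.9, (False, True): 0.2,    # keys: (AGW, trust)
         (True, False): 0.3, (False, False): 0.6}

def posterior_agw(trust):
    """P(AGW | consensus message), by direct enumeration of the tiny net."""
    num = p_agw * p_msg[(True, trust)]
    return num / (num + (1 - p_agw) * p_msg[(False, trust)])

for trust in (True, False):
    print(f"trust={trust}: belief moves {p_agw:.2f} -> {posterior_agw(trust):.2f}")
```

    Both agents see the same message, yet the trusting agent's belief rises while the distrusting agent's falls: polarization emerges without any violation of Bayes' theorem.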

  19. Model-based Bayesian signal extraction algorithm for peripheral nerves

    Science.gov (United States)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, limiting their utility to binary classification. In this work a new algorithm is proposed which combines previous source localization approaches to create a model-based method which operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to three-fold, as well as increasing the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources. These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of

  20. A Bayesian Model of Category-Specific Emotional Brain Responses

    Science.gov (United States)

    Wager, Tor D.; Kang, Jian; Johnson, Timothy D.; Nichols, Thomas E.; Satpute, Ajay B.; Barrett, Lisa Feldman

    2015-01-01

    Understanding emotion is critical for a science of healthy and disordered brain function, but the neurophysiological basis of emotional experience is still poorly understood. We analyzed human brain activity patterns from 148 studies of emotion categories (2159 total participants) using a novel hierarchical Bayesian model. The model allowed us to classify which of five categories—fear, anger, disgust, sadness, or happiness—is engaged by a study with 66% accuracy (43-86% across categories). Analyses of the activity patterns encoded in the model revealed that each emotion category is associated with unique, prototypical patterns of activity across multiple brain systems including the cortex, thalamus, amygdala, and other structures. The results indicate that emotion categories are not contained within any one region or system, but are represented as configurations across multiple brain networks. The model provides a precise summary of the prototypical patterns for each emotion category, and demonstrates that a sufficient characterization of emotion categories relies on (a) differential patterns of involvement in neocortical systems that differ between humans and other species, and (b) distinctive patterns of cortical-subcortical interactions. Thus, these findings are incompatible with several contemporary theories of emotion, including those that emphasize emotion-dedicated brain systems and those that propose emotion is localized primarily in subcortical activity. They are consistent with componential and constructionist views, which propose that emotions are differentiated by a combination of perceptual, mnemonic, prospective, and motivational elements. Such brain-based models of emotion provide a foundation for new translational and clinical approaches. PMID:25853490

  1. Bayesian calibration of the Community Land Model using surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Hou, Zhangshuan; Huang, Maoyi; Swiler, Laura Painton

    2014-02-01

    We present results from the Bayesian calibration of hydrological parameters of the Community Land Model (CLM), which is often used in climate simulations and Earth system models. A statistical inverse problem is formulated for three hydrological parameters, conditional on observations of latent heat surface fluxes over 48 months. Our calibration method uses polynomial and Gaussian process surrogates of the CLM, and solves the parameter estimation problem using a Markov chain Monte Carlo sampler. Posterior probability densities for the parameters are developed for two sites with different soil and vegetation covers. Our method also allows us to examine the structural error in CLM under two error models. We find that surrogate models can be created for CLM in most cases. The posterior distributions are more predictive than the default parameter values in CLM. Climatologically averaging the observations does not modify the parameters' distributions significantly. The structural error model reveals a correlation time-scale which can be used to identify the physical process that could be contributing to it. While the calibrated CLM has a higher predictive skill, the calibration is under-dispersive.
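
    The surrogate-plus-MCMC workflow generalizes well beyond CLM, and its logic fits in a short script: run the expensive model at a few design points, fit a cheap emulator, and sample the posterior against the emulator. The one-parameter "simulator", design, and priors below are toy stand-ins, not CLM itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta):
    """Toy stand-in for an expensive land-model run, returning a latent-heat-
    flux-like quantity as a function of one hydrological parameter."""
    return 40.0 + 25.0 * np.tanh(theta - 1.0)

theta_true = 1.4
y_obs = simulator(theta_true) + rng.normal(0.0, 2.0, size=48)  # 48 "months"

# Step 1: polynomial surrogate fitted to a handful of model runs
design = np.linspace(0.0, 3.0, 12)
surrogate = np.polynomial.Polynomial.fit(design, simulator(design), deg=4)

# Step 2: random-walk Metropolis using the surrogate in the likelihood
def log_post(theta, sigma=2.0):
    if not 0.0 <= theta <= 3.0:            # uniform prior on [0, 3]
        return -np.inf
    return -0.5 * np.sum((y_obs - surrogate(theta)) ** 2) / sigma**2

theta, lp, samples = 1.0, log_post(1.0), []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])
print(f"posterior mean {post.mean():.2f} vs truth {theta_true}")
```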

  2. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
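
    For concreteness, a minimal EM iteration for the BMA weights and a common predictive variance looks as follows; the ensemble data are synthetic, and the simplifications (no bias correction, one shared variance) are assumptions of this sketch rather than the full recipe compared in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Synthetic training data: K ensemble members forecasting T verification
# times, with member-specific error standard deviations 1, 2 and 4.
K, T = 3, 500
truth = rng.normal(15.0, 5.0, size=T)
forecasts = truth + rng.normal(0.0, [[1.0], [2.0], [4.0]], size=(K, T))

w = np.full(K, 1.0 / K)              # BMA weights
sigma2 = np.var(forecasts - truth)   # common mixture-component variance

for _ in range(200):
    # E-step: responsibility of member k for observation t
    dens = norm.pdf(truth, loc=forecasts, scale=np.sqrt(sigma2))
    z = w[:, None] * dens
    z /= z.sum(axis=0, keepdims=True)
    # M-step: re-estimate weights and variance
    w = z.mean(axis=1)
    sigma2 = np.sum(z * (truth - forecasts) ** 2) / T

print("weights:", np.round(w, 3))   # the low-error member should dominate
```

    An MCMC treatment such as DREAM would instead draw the weights and variance from their posterior, which is what yields the uncertainty statements about the BMA weights mentioned above.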

  3. Bayesian Belief Networks Approach for Modeling Irrigation Behavior

    Science.gov (United States)

    Andriyas, S.; McKee, M.

    2012-12-01

    Canal operators need information to manage water deliveries to irrigators. Short-term irrigation demand forecasts can provide potentially valuable information for a canal operator who must manage an on-demand system. Such forecasts could be generated by using information about the decision-making processes of irrigators. Bayesian models of irrigation behavior can provide insight into the likely criteria which farmers use to make irrigation decisions. This paper develops a Bayesian belief network (BBN) to learn irrigation decision-making behavior of farmers and utilizes the resulting model to make forecasts of future irrigation decisions based on factor interaction and posterior probabilities. Models for studying irrigation behavior have rarely been explored in the past. The model discussed here was built from a combination of data about biotic, climatic, and edaphic conditions under which observed irrigation decisions were made. The paper includes a case study using data collected from the Canal B region of the Sevier River, near Delta, Utah. Alfalfa, barley and corn are the main crops of the location. The model has been tested with a portion of the data to assess its predictive capabilities. Irrigation rules were deduced in the process of learning and verified in the testing phase. It was found that most of the farmers used consistent rules throughout all years and across different types of crops. Soil moisture stress, which indicates the level of water available to the plant in the soil profile, was found to be one of the most significant likely driving forces for irrigation. Irrigations appeared to be triggered by a farmer's perception of soil stress, or by a perception of combined factors such as information about a neighbor irrigating or an apparent preference to irrigate on a weekend. Soil stress resulted in irrigation probabilities of 94.4% for alfalfa. With additional factors like weekend and irrigating when a neighbor irrigates, alfalfa irrigation

  4. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo

    2016-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have prediction ability superior to that of single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970

  5. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Directory of Open Access Journals (Sweden)

    Jaime Cuevas

    2017-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have prediction ability superior to that of single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u.

  6. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo

    2017-01-05

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have prediction ability superior to that of single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.
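
    The covariance construction in the first model can be exhibited directly: build a genomic kernel from markers (linear GBLUP kernel or Gaussian kernel), choose a between-environment genetic correlation matrix, and combine them with a Kronecker product. The marker matrix and correlation values below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

n_lines, n_markers, n_env = 50, 500, 3
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 codes
X -= X.mean(axis=0)                        # centre marker columns

# Linear (GBLUP) kernel and Gaussian kernel from the same markers
K_lin = X @ X.T / n_markers
D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)   # squared distances
K_gauss = np.exp(-D / np.median(D[np.triu_indices(n_lines, 1)]))

# Assumed genetic correlations between the three environments
E = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Covariance of the stacked genetic effects u (environment-major ordering):
# cov(u) = sigma_u^2 * (E kron K)
sigma_u2 = 1.0
cov_u = sigma_u2 * np.kron(E, K_lin)
print(cov_u.shape)    # (n_env * n_lines, n_env * n_lines)
```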

  7. Modeling of Control Costs, Emissions, and Control Retrofits for Cost Effectiveness and Feasibility Analyses

    Science.gov (United States)

    Learn about EPA’s use of the Integrated Planning Model (IPM) to develop estimates of SO2 and NOx emission control costs, projections of future emissions, and projections of the capacity of future control retrofits, assuming emission controls on electric generating units (EGUs).

  8. Bayesian model averaging using particle filtering and Gaussian mixture modeling : Theory, concepts, and simulation experiments

    NARCIS (Netherlands)

    Rings, J.; Vrugt, J.A.; Schoups, G.; Huisman, J.A.; Vereecken, H.

    2012-01-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive

  9. Airline Sustainability Modeling: A New Framework with Application of Bayesian Structural Equation Modeling

    DEFF Research Database (Denmark)

    Salarzadeh Jenatabadi, Hashem; Babashamsi, Peyman; Khajeheian, Datis

    2016-01-01

    There are many factors which could influence the sustainability of airlines. The main purpose of this study is to introduce a framework for a financial sustainability index and model it based on structural equation modeling (SEM) with maximum likelihood and Bayesian predictors. The introduced...

  10. Evaluation of a Stratified National Breast Screening Program in the United Kingdom: An Early Model-Based Cost-Effectiveness Analysis.

    Science.gov (United States)

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D Gareth; Astley, Sue; Payne, Katherine

    2017-09-01

    To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1, risk 2, masking [supplemental screening for women with higher breast density], and masking and risk 1) compared with the current UK NBSP and no screening. The model assumed a lifetime horizon, the health service perspective to identify costs (£, 2015), and measured consequences in quality-adjusted life-years (QALYs). Multiple data sources were used: systematic reviews of effectiveness and utility, published studies reporting costs, and cohort studies embedded in existing NBSPs. Model parameter uncertainty was assessed using probabilistic sensitivity analysis and one-way sensitivity analysis. The base-case analysis, supported by probabilistic sensitivity analysis, suggested that the risk-stratified NBSPs (risk 1 and risk 2) were relatively cost-effective when compared with the current UK NBSP, with incremental cost-effectiveness ratios of £16,689 per QALY and £23,924 per QALY, respectively. Stratified NBSPs including masking approaches (supplemental screening for women with higher breast density) were not cost-effective alternatives, with incremental cost-effectiveness ratios of £212,947 per QALY (masking) and £75,254 per QALY (risk 1 and masking). When compared with no screening, all stratified NBSPs could be considered cost-effective. Key drivers of cost-effectiveness were the discount rate, natural history model parameters, mammographic sensitivity, and biopsy rates for recalled cases. A key assumption was that the risk model used in the stratification process was perfectly calibrated to the population. This early model-based cost-effectiveness analysis provides indicative evidence for decision makers to understand the key drivers of costs and QALYs for exemplar stratified NBSPs.
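
    The decision rule behind these figures is the incremental cost-effectiveness ratio checked against a willingness-to-pay threshold. The sketch below uses invented cost and QALY draws (not the study's outputs) to show how an ICER and a cost-effectiveness acceptability probability are computed from probabilistic sensitivity analysis output.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical PSA draws: per-woman discounted lifetime costs and QALYs for
# the current NBSP (0) versus a risk-stratified NBSP (1). All placeholders.
n = 10000
cost0, qaly0 = rng.normal(300, 30, n), rng.normal(19.500, 0.01, n)
cost1, qaly1 = rng.normal(380, 40, n), rng.normal(19.505, 0.01, n)

d_cost, d_qaly = cost1 - cost0, qaly1 - qaly0
icer = d_cost.mean() / d_qaly.mean()
print(f"ICER: about £{icer:,.0f} per QALY")

# Probability of cost-effectiveness via net monetary benefit,
# NMB = threshold * dQALY - dCost
for threshold in (20000, 30000):
    p_ce = np.mean(threshold * d_qaly - d_cost > 0)
    print(f"P(cost-effective at £{threshold:,}/QALY) = {p_ce:.2f}")
```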

  11. Bayesian inference for hybrid discrete-continuous stochastic kinetic models

    International Nuclear Information System (INIS)

    Sherlock, Chris; Golightly, Andrew; Gillespie, Colin S

    2014-01-01

    We consider the problem of efficiently performing simulation and inference for stochastic kinetic models. Whilst it is possible to work directly with the resulting Markov jump process (MJP), computational cost can be prohibitive for networks of realistic size and complexity. In this paper, we consider an inference scheme based on a novel hybrid simulator that classifies reactions as either ‘fast’ or ‘slow’ with fast reactions evolving as a continuous Markov process whilst the remaining slow reaction occurrences are modelled through a MJP with time-dependent hazards. A linear noise approximation (LNA) of fast reaction dynamics is employed and slow reaction events are captured by exploiting the ability to solve the stochastic differential equation driving the LNA. This simulation procedure is used as a proposal mechanism inside a particle MCMC scheme, thus allowing Bayesian inference for the model parameters. We apply the scheme to a simple application and compare the output with an existing hybrid approach and also a scheme for performing inference for the underlying discrete stochastic model. (paper)

  12. Bayesian model calibration of ramp compression experiments on Z

    Science.gov (United States)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  13. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.

  14. Bayesian Hierarchical Random Effects Models in Forensic Science

    Directory of Open Access Journals (Sweden)

    Colin G. G. Aitken

    2018-04-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  15. Bayesian Hierarchical Random Effects Models in Forensic Science.

    Science.gov (United States)

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  16. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.

    2014-09-16

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.

  17. Assessing fit in Bayesian models for spatial processes

    KAUST Repository

    Jun, M.; Katzfuss, M.; Hu, J.; Johnson, V. E.

    2014-01-01

    © 2014 John Wiley & Sons, Ltd. Gaussian random fields are frequently used to model spatial and spatial-temporal data, particularly in geostatistical settings. As much of the attention of the statistics community has been focused on defining and estimating the mean and covariance functions of these processes, little effort has been devoted to developing goodness-of-fit tests to allow users to assess the models' adequacy. We describe a general goodness-of-fit test and related graphical diagnostics for assessing the fit of Bayesian Gaussian process models using pivotal discrepancy measures. Our method is applicable for both regularly and irregularly spaced observation locations on planar and spherical domains. The essential idea behind our method is to evaluate pivotal quantities defined for a realization of a Gaussian random field at parameter values drawn from the posterior distribution. Because the nominal distribution of the resulting pivotal discrepancy measures is known, it is possible to quantitatively assess model fit directly from the output of Markov chain Monte Carlo algorithms used to sample from the posterior distribution on the parameter space. We illustrate our method in a simulation study and in two applications.
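
    The essential computation behind the pivotal discrepancy check is small: for each posterior draw of the covariance parameters, whiten the observed field and compare the resulting statistic with its known reference distribution. The sketch below does this for an exponential covariance on a line; the data, covariance family, and "posterior draws" are all stand-ins, and this simple z'z statistic is mainly sensitive to variance misfit (the paper develops richer discrepancy measures).

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.stats import chi2

rng = np.random.default_rng(6)

# One realization of a zero-mean Gaussian field with exponential covariance
# C(d) = s2 * exp(-d / rho) at irregular locations on a line.
x = np.sort(rng.uniform(0.0, 10.0, 60))
d = np.abs(x[:, None] - x[None, :])
n, s2_true, rho_true = len(x), 2.0, 1.5
y = np.linalg.cholesky(s2_true * np.exp(-d / rho_true)
                       + 1e-10 * np.eye(n)) @ rng.normal(size=n)

def pivotal_pvalue(s2, rho):
    """Whiten y under (s2, rho); z'z is exactly chi-square(n) if the
    parameters are correct, so extreme tail probabilities flag misfit."""
    L = np.linalg.cholesky(s2 * np.exp(-d / rho) + 1e-10 * np.eye(n))
    z = solve_triangular(L, y, lower=True)
    return chi2.sf(z @ z, df=n)

# Stand-ins for posterior draws: near the truth vs a model with wrong variance
good = [pivotal_pvalue(rng.normal(2.0, 0.1), rng.normal(1.5, 0.1))
        for _ in range(200)]
bad = [pivotal_pvalue(0.8, rng.normal(1.5, 0.1)) for _ in range(200)]
print("median p, adequate model:", round(float(np.median(good)), 3))
print("median p, wrong variance:", round(float(np.median(bad)), 3))
```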

  18. Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example

    KAUST Repository

    Allmaras, Moritz; Bangerth, Wolfgang; Linhart, Jean Marie; Polanco, Javier; Wang, Fang; Wang, Kainan; Webster, Jennifer; Zedler, Sarah

    2013-01-01

    All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework

  19. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using

  20. Taxes and Subsidies for Improving Diet and Population Health in Australia: A Cost-Effectiveness Modelling Study.

    Science.gov (United States)

    Cobiac, Linda J; Tam, King; Veerman, Lennert; Blakely, Tony

    2017-02-01

    An increasing number of countries are implementing taxes on unhealthy foods and drinks to address the growing burden of dietary-related disease, but the cost-effectiveness of combining taxes on unhealthy foods and subsidies on healthy foods is not well understood. Using a population model of dietary-related diseases and health care costs and food price elasticities, we simulated the effect of taxes on saturated fat, salt, sugar, and sugar-sweetened beverages and a subsidy on fruits and vegetables, over the lifetime of the Australian population. The sizes of the taxes and subsidy were set such that, when combined as a package, there would be a negligible effect on average weekly expenditure on food; the sugar-sweetened beverage tax was estimated to avert 12,000 [95% UI: 2,100 to 21,000] DALYs. The fruit and vegetable subsidy (-13,000 [95% UI: -44,000 to 18,000] DALYs) was a cost-effective addition to the package of taxes. However, it did not necessarily lead to a net health benefit for the population when modelled as an intervention on its own, because of the possible adverse cross-price elasticity effects on consumption of other foods (e.g., foods high in saturated fat and salt). The study suggests that taxes and subsidies on foods and beverages can potentially be combined to achieve substantial improvements in population health and cost-savings to the health sector. However, the magnitude of health benefits is sensitive to measures of price elasticity, and further work is needed to incorporate potential benefits or harms associated with changes in other foods and nutrients that are not currently modelled, such as red and processed meats and fibre. With potentially large health benefits for the Australian population and large benefits in reducing health sector spending on the treatment of non-communicable diseases, the formulation of a tax and subsidy package should be given a more prominent role in Australia's public health nutrition strategy.
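
    The consumption responses that drive these results come from a matrix of own- and cross-price elasticities: the proportional change in demand for food group i is approximately the sum over groups j of elasticity(i, j) times the proportional price change of group j. The sketch below uses invented elasticities to show how a subsidy can raise intake of an untargeted food group through a cross-price effect.

```python
import numpy as np

foods = ["sugar-sweetened beverages", "fruit & veg", "high-saturated-fat foods"]
# Hypothetical elasticity matrix: own-price on the diagonal, cross-price
# off-diagonal. The negative entry in the last row makes fatty foods behave
# as complements of fruit & veg -- the adverse effect noted above.
eps = np.array([[-0.9,  0.0,  0.1],
                [ 0.0, -0.5,  0.2],
                [ 0.1, -0.3, -0.4]])
# Policy package: +20% tax on beverages, -10% price via fruit & veg subsidy
dp = np.array([0.20, -0.10, 0.0])

dq = eps @ dp    # approximate proportional change in consumption
for food, change in zip(foods, dq):
    print(f"{food}: {change:+.1%}")
```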

  1. Comparative review of three cost-effectiveness models for rotavirus vaccines in national immunization programs; a generic approach applied to various regions in the world

    Directory of Open Access Journals (Sweden)

    Tu Hong-Anh

    2011-07-01

    Background This study aims to critically review available cost-effectiveness models for rotavirus vaccination, compare their designs using a standardized approach and compare similarities and differences in cost-effectiveness outcomes using a uniform set of input parameters. Methods We identified various models used to estimate the cost-effectiveness of rotavirus vaccination. From these, results using a standardized dataset for four regions in the world could be obtained for three specific applications. Results Despite differences in the approaches and individual constituting elements, including costs, quality-adjusted life years (QALYs) and deaths, cost-effectiveness results of the models were quite similar. Differences between the models on the individual components of cost-effectiveness could be related to some specific features of the respective models. Sensitivity analysis revealed that the cost-effectiveness of rotavirus vaccination is highly sensitive to vaccine prices, rotavirus-associated mortality and discount rates, in particular that for QALYs. Conclusions The comparative approach followed here is helpful in understanding the various models selected and will thus benefit (low-income) countries in designing their own cost-effectiveness analyses using new or adapted existing models. Potential users of the models in low- and middle-income countries need to consider results from existing studies and reviews. There will be a need for contextualization, including the use of country-specific data inputs. However, given that the underlying biological and epidemiological mechanisms do not change between countries, users are likely to be able to adapt existing model designs rather than developing completely new approaches. Also, the communication established between the individual researchers involved in the three models is helpful in the further development of these individual models. Therefore, we recommend that this kind of comparative study

  2. The cost-effectiveness of influenza vaccination for people aged 50 to 64 years: an international model.

    Science.gov (United States)

    Aballéa, Samuel; Chancellor, Jeremy; Martin, Monique; Wutzler, Peter; Carrat, Fabrice; Gasparini, Roberto; Toniolo-Neto, Joao; Drummond, Michael; Weinstein, Milton

    2007-01-01

    Routine influenza vaccination is currently recommended in several countries for people aged more than 60 or 65 years or with high risk of complications. A lower age threshold of 50 years has been recommended in the United States since 1999. To help policymakers consider whether such a policy should be adopted more widely, we conducted an economic evaluation of lowering the age limit for routine influenza vaccination to 50 years in Brazil, France, Germany, and Italy. The probabilistic model was designed to compare in a single season the costs and clinical outcomes associated with two alternative vaccination policies for persons aged 50 to 64 years: reimbursement only for people at high risk of complications (current policy), and reimbursement for all individuals in this age group (proposed policy). Two perspectives were considered: third-party payer (TPP) and societal. Model inputs were obtained primarily from the published literature and validated through expert opinion. The historical distribution of annual influenza-like illness (ILI) incidence was used to simulate the uncertain incidence in any given season. We estimated gains in unadjusted and quality-adjusted life expectancy, and the cost per quality-adjusted life-year (QALY) gained. Deterministic and probabilistic sensitivity analyses were conducted. Comparing the proposed to the current policy, the estimated mean costs per QALY gained were R$4,100, €13,200, €31,400 and €15,700 for Brazil, France, Germany, and Italy, respectively, from a TPP perspective. From the societal perspective, the age-based policy is predicted to yield net cost savings in Germany and Italy, whereas the cost per QALY decreased to R$2,800 for Brazil and €8,000 for France. The results were particularly sensitive to the ILI incidence rate, vaccine uptake, influenza fatality rate, and the costs of administering vaccination. Assuming a cost-effectiveness threshold of €50,000 per QALY gained, the probabilities of the

  3. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed, whereas in many real situations failures may arise from many causes, depending on the age and environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulated data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that changing the true value of one parameter relative to another changes the value of the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimates are better than those of maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
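
    For two independent Weibull causes, the likelihood contribution of a unit failing at time t from cause c is the density of cause c at t times the survival of the other cause at t. The sketch below simulates such data and runs a short random-walk Metropolis over the log-parameters under flat priors; it is a minimal illustration, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(7)

# True (shape k, scale lam) for two competing failure causes
true = np.array([1.5, 10.0, 0.8, 15.0])   # k1, lam1, k2, lam2
n = 300
t1 = true[1] * rng.weibull(true[0], n)
t2 = true[3] * rng.weibull(true[2], n)
t = np.minimum(t1, t2)                    # observed failure time
cause = (t2 < t1).astype(int)             # 0 = cause 1 failed first

def loglik(params):
    k1, l1, k2, l2 = params
    def logf(tt, k, l):   # Weibull log-density
        return np.log(k / l) + (k - 1) * np.log(tt / l) - (tt / l) ** k
    def logS(tt, k, l):   # Weibull log-survival
        return -((tt / l) ** k)
    # density of the failing cause plus survival of the competing cause
    ll = np.where(cause == 0,
                  logf(t, k1, l1) + logS(t, k2, l2),
                  logf(t, k2, l2) + logS(t, k1, l1))
    return ll.sum()

# Random-walk Metropolis on log-parameters (flat prior on the log scale)
x = np.log([1.0, 8.0, 1.0, 8.0])
lp, draws = loglik(np.exp(x)), []
for _ in range(20000):
    prop = x + rng.normal(0.0, 0.05, size=4)
    lp_prop = loglik(np.exp(prop))
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    draws.append(np.exp(x))

post = np.array(draws[5000:])
print("posterior means:", np.round(post.mean(axis=0), 2), "truth:", true)
```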

  4. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei

    2017-11-08

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of a Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
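
    The decomposition at the heart of the framework is easy to exhibit: write the covariance as D R D with D a diagonal matrix of standard deviations, and build the correlation matrix R from unit vectors so that positive semi-definiteness and a unit diagonal hold by construction. The dimensions and distributions below are arbitrary choices, not the paper's priors.

```python
import numpy as np

rng = np.random.default_rng(8)

p = 5
# Random points on the unit sphere in R^p (rows of U); the paper places
# structured priors, such as the squared-Dirichlet, on these vectors.
U = rng.normal(size=(p, p))
U /= np.linalg.norm(U, axis=1, keepdims=True)

R = U @ U.T                      # correlations R_ij = <u_i, u_j>; diag is 1
sd = rng.gamma(2.0, 0.5, p)      # separate prior on the standard deviations
Sigma = np.diag(sd) @ R @ np.diag(sd)

print("diagonal of R:", np.round(np.diag(R), 6))
print("smallest eigenvalue of Sigma:", np.linalg.eigvalsh(Sigma).min())
```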

  5. Context-dependent decision-making: a simple Bayesian model.

    Science.gov (United States)

    Lloyd, Kevin; Leslie, David S

    2013-05-06

    Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or 'contexts' allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects.
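
    The action-selection component, Thompson sampling, is compact enough to show in full for a two-alternative Bernoulli task with a mid-session reversal. The context-inference machinery (the Chinese restaurant process with inertia) is omitted, so the sketch below is a simplified illustration under stated assumptions rather than the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(9)

T = 2000
p_reward = np.array([0.8, 0.2])   # arm reward probabilities; reversed mid-run
alpha = np.ones(2)                # Beta posterior: successes + 1 per arm
beta = np.ones(2)                 # Beta posterior: failures + 1 per arm
choices = np.empty(T, dtype=int)

for step in range(T):
    if step == T // 2:
        p_reward = p_reward[::-1]        # reversal of the schedule
    sampled = rng.beta(alpha, beta)      # one posterior draw per arm
    a = int(np.argmax(sampled))          # Thompson rule: act greedily on draw
    r = float(rng.uniform() < p_reward[a])
    alpha[a] += r
    beta[a] += 1.0 - r
    choices[step] = a

print("pre-reversal choice of better arm: ", (choices[:T // 2] == 0).mean())
print("post-reversal choice of better arm:", (choices[T // 2:] == 1).mean())
```

    Without some mechanism for discounting old evidence, adaptation after the reversal is slow, which is precisely the problem the context-inference component of the model is designed to address.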

  6. A Bayesian approach to the modelling of α Cen A

    Science.gov (United States)

    Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.

    2012-12-01

    Determining the physical characteristics of a star is an inverse problem consisting of estimating the parameters of stellar structure and evolution models from certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model, both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become efficient with respect to more classical grid-based strategies when the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates for the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ~40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.

  7. Parameter Estimation of Structural Equation Modeling Using Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Dewi Kurnia Sari

    2016-05-01

    Full Text Available Leadership is the process of influencing, directing, or setting an example for employees in order to achieve the objectives of the organization, and it is a key element in organizational effectiveness. Besides leadership style, the success of an organization or company in achieving its objectives can also be influenced by organizational commitment, the commitment made by each individual for the betterment of the organization. The purpose of this research is to obtain a model of the effect of leadership style and organizational commitment on job satisfaction and employee performance, and to determine the factors that influence job satisfaction and employee performance, using SEM with a Bayesian approach. The research was conducted on 15 employees of Statistics FNI in Malang. The results showed that, in the measurement model, all indicators significantly measured their respective latent variables. In the structural model, Leadership Style and Organizational Commitment had a significant direct effect on Job Satisfaction, and Job Satisfaction had a significant direct effect on Employee Performance. The direct effects of Leadership Style and Organizational Commitment on Employee Performance were not significant.

  8. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza.

    Science.gov (United States)

    Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A

    2018-01-30

    Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden. To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clustering of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Bayesian averaging over Decision Tree models for trauma severity scoring.

    Science.gov (United States)

    Schetinin, V; Jakaite, L; Krzanowski, W

    2018-01-01

    Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in predictions. We have examined the "gold" standard of screening a patient's condition for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about data and uncertainties. Models induced within such an approach have exhibited a number of problems, including unexplained fluctuations in predicted survival and low accuracy in estimating the uncertainty intervals within which predictions are made. The Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, was adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Bayesian network model of crowd emotion and negative behavior

    Science.gov (United States)

    Ramli, Nurulhuda; Ghani, Noraida Abdul; Hatta, Zulkarnain Ahmad; Hashim, Intan Hashimah Mohd; Sulong, Jasni; Mahudin, Nor Diana Mohd; Rahman, Shukran Abd; Saad, Zarina Mat

    2014-12-01

    The effects of overcrowding have become a major concern for event organizers. One aspect of this concern has been the idea that overcrowding can increase the occurrence of serious incidents during events. As one of the largest Muslim religious gatherings, attended by pilgrims from all over the world, Hajj has become extremely overcrowded, with many incidents being reported. The purpose of this study is to analyze the nature of human emotion and negative behavior resulting from overcrowding during Hajj events, using data gathered in the Malaysian Hajj Experience Survey in 2013. The sample comprised 147 Malaysian pilgrims (70 males and 77 females). Utilizing a probabilistic model called a Bayesian network, this paper models the dependence structure between different emotions and negative behaviors of pilgrims in the crowd. The model included five emotion variables (negative, negative comfortable, positive, positive comfortable and positive spiritual) and two negative-behavior variables (aggressive and hazardous acts). The study demonstrated that the negative, negative comfortable, positive spiritual and positive emotions have a direct influence on aggressive behavior, whereas the negative comfortable, positive spiritual and positive emotions have a direct influence on hazardous acts. The sensitivity analysis showed that low levels of negative and negative comfortable emotions lead to lower levels of aggressive and hazardous behavior. The findings can be built upon to identify the exact causes and risk factors of crowd-related incidents and to help prevent crowd disasters during mass gathering events.

  11. Bayesian models for astrophysical data using R, JAGS, Python, and Stan

    CERN Document Server

    Hilbe, Joseph M; Ishida, Emille E O

    2017-01-01

    This comprehensive guide to Bayesian methods in astronomy enables hands-on work by supplying complete R, JAGS, Python, and Stan code, to use directly or to adapt. It begins by examining the normal model from both frequentist and Bayesian perspectives and then progresses to a full range of Bayesian generalized linear and mixed or hierarchical models, as well as additional types of models such as ABC and INLA. The book provides code that is largely unavailable elsewhere and includes details on interpreting and evaluating Bayesian models. Initial discussions offer models in synthetic form so that readers can easily adapt them to their own data; later the models are applied to real astronomical data. The consistent focus is on hands-on modeling, analysis of data, and interpretations that address scientific questions. A must-have for astronomers, its concrete approach will also be attractive to researchers in the sciences more generally.

  12. BayesCLUMPY: BAYESIAN INFERENCE WITH CLUMPY DUSTY TORUS MODELS

    International Nuclear Information System (INIS)

    Asensio Ramos, A.; Ramos Almeida, C.

    2009-01-01

    Our aim is to present a fast and general Bayesian inference framework based on the synergy between machine learning techniques and standard sampling methods, and to apply it to infer the physical properties of clumpy dusty tori using infrared photometric high spatial resolution observations of active galactic nuclei. We make use of the Metropolis-Hastings Markov Chain Monte Carlo algorithm for sampling the posterior distribution function. This distribution results from combining all a priori knowledge about the parameters of the model with the information introduced by the observations. The main difficulty resides in the fact that the model used to explain the observations is computationally demanding and the sampling is very time consuming. For this reason, we apply a set of artificial neural networks that are used to approximate and interpolate a database of models. As a consequence, models not present in the original database can be computed, ensuring continuity. We focus on the application of this solution scheme to the recently developed public database of clumpy dusty torus models. The machine learning scheme used in this paper allows us to generate any model from the database using only a factor of 10⁻⁴ of the original size of the database and a factor of 10⁻³ in computing time. The posterior distribution obtained for each model parameter allows us to investigate how the observations constrain the parameters and which ones remain partially or completely undetermined, providing statistically relevant confidence intervals. As an example, the application to the nuclear region of Centaurus A shows that the optical depth of the clouds, the total number of clouds, and the radial extent of the cloud distribution zone are well constrained using only six filters. The code is freely available from the authors.
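
    The emulate-then-sample idea generalises beyond torus models. The sketch below trains a small neural-network regressor on a grid of evaluations of a stand-in "expensive" forward model, then runs Metropolis-Hastings against the cheap emulator; the one-parameter toy model, prior box and noise level are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
expensive_model = lambda theta: np.sin(3 * theta) + 0.5 * theta  # stand-in

grid = np.linspace(-2, 2, 200).reshape(-1, 1)            # the "model database"
emulator = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                        random_state=0).fit(grid, expensive_model(grid).ravel())

obs = expensive_model(0.7) + rng.normal(0, 0.05)         # one noisy observation

def log_post(theta):
    if abs(theta) > 2:                                   # uniform prior box
        return -np.inf
    pred = emulator.predict(np.array([[theta]]))[0]      # cheap evaluation
    return -0.5 * ((obs - pred) / 0.05) ** 2

theta, lp, chain = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)                    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:              # Metropolis accept
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean theta ~", round(np.mean(chain[1000:]), 3))
```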

  13. Bayesian uncertainty quantification in linear models for diffusion MRI.

    Science.gov (United States)

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
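
    For a linear model with Gaussian noise and a Gaussian prior on the coefficients, the posterior, and hence the distribution of any affine-derived quantity, is available in closed form, which is the property exploited above. A minimal sketch, with a generic random design matrix standing in for a dMRI basis:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 4
X = rng.normal(size=(n, p))             # basis functions at measurement points
w_true = np.array([1.0, -0.5, 0.3, 0.0])
sigma2 = 0.25                           # known noise variance (assumption)
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

tau2 = 10.0                             # Gaussian prior variance on coefficients
# posterior N(mu, Sigma) with Sigma = (X'X / sigma2 + I / tau2)^(-1)
Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
mu = Sigma @ X.T @ y / sigma2

a = np.array([1.0, 1.0, 0.0, 0.0])      # an affine quantity of interest, a'w
mean_q, sd_q = a @ mu, np.sqrt(a @ Sigma @ a)
print(f"a'w = {mean_q:.3f} +/- {sd_q:.3f}")   # closed-form posterior summary
```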

  14. Bayesian Regression of Thermodynamic Models of Redox Active Materials

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, Katherine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    Finding a suitable functional redox material is a critical challenge to achieving scalable, economically viable technologies for storing concentrated solar energy in the form of a defected oxide. Demonstrating effectiveness for thermal storage or solar fuel is largely accomplished by using a thermodynamic model derived from experimental data. The purpose of this project is to test the accuracy of our regression model on representative data sets. Determining the accuracy of the model includes fitting the model parameters to the data, comparing models using different numbers of parameters, and analyzing the entropy and enthalpy calculated from the model. Three data sets were considered in this project: two demonstrating materials for solar fuels by water splitting and one of a material for thermal storage. Using Bayesian inference and Markov chain Monte Carlo (MCMC), parameter estimation was performed on the three data sets. Good results were achieved, except for some deviations at the edges of the data input ranges. The evidence values were then calculated in a variety of ways and used to compare models with different numbers of parameters. It was believed that at least one of the parameters was unnecessary; comparing evidence values demonstrated that the parameter was needed on one data set and not significantly helpful on another. The entropy was calculated by taking the derivative in one variable and integrating over another, and its uncertainty was calculated by evaluating the entropy over multiple MCMC samples. Afterwards, all the parts were written up as a tutorial for the Uncertainty Quantification Toolkit (UQTk).

  15. A budget-impact and cost-effectiveness model for second-line treatment of major depression.

    Science.gov (United States)

    Malone, Daniel C

    2007-07-01

    Depressed patients who initially fail to achieve remission when placed on a selective serotonin reuptake inhibitor (SSRI) may require a second treatment. The purpose of this study was to evaluate the effectiveness, cost, cost-effectiveness, and budget impact of second-line pharmacologic treatment for major depressive disorder (MDD). A cost-effectiveness analysis was conducted to evaluate second-line therapies (citalopram, escitalopram, fluoxetine, paroxetine, paroxetine controlled release [CR], sertraline, and venlafaxine extended release [XR]) for the treatment of depression. Effectiveness data were obtained from published clinical studies. The primary outcome was remission, defined as a score of 7 or less on the Hamilton Rating Scale for Depression (HAM-D) or a score of 10 or less on the Montgomery–Åsberg Depression Rating Scale (MADRS). The wholesale acquisition cost (WAC) for medications and medical treatment costs for depression were included. The perspective was that of a managed care organization (MCO) with 500,000 members, a 1.9% annual incidence of depression, and a treatment duration of 6 months. Assumptions included: second-line treatment is not as effective as first-line treatment, WAC price reflects MCO costs, and side effects were identical. Sensitivity analyses were conducted to determine variables that influenced the results. Second-line remission rates were 20.4% for venlafaxine XR, 16.9% for sertraline, 16.4% for escitalopram, 15.1% for generic SSRIs (weighted average), and 13.6% for paroxetine CR. Pharmacy costs ranged from $163 for generic SSRIs to $319 for venlafaxine XR. Total cost per patient achieving remission was $14,275 for venlafaxine XR, followed by $16,100 for escitalopram. The incremental cost-effectiveness ratio (ICER) for venlafaxine XR compared with generic SSRIs was $2,073 per patient achieving remission, followed by escitalopram with an ICER of $3,566. The model was most sensitive to other therapies
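
    The ICER arithmetic behind figures like these is the difference in expected costs divided by the difference in remission probabilities. The sketch below reuses the remission rates quoted above but with hypothetical per-patient total costs, chosen only so that the output lands near the reported $2,073; the published model's cost inputs are more detailed.

```python
# remission rates quoted in the abstract above
remission_venlafaxine_xr = 0.204
remission_generic_ssri = 0.151
# hypothetical per-patient total costs (placeholders, not the model's inputs)
cost_venlafaxine_xr = 600.0
cost_generic_ssri = 490.0

delta_cost = cost_venlafaxine_xr - cost_generic_ssri
delta_effect = remission_venlafaxine_xr - remission_generic_ssri
icer = delta_cost / delta_effect        # $ per additional patient in remission
print(f"ICER = ${icer:,.0f} per additional remission")
```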

  16. The Cost and Cost-Effectiveness of Scaling up Screening and Treatment of Syphilis in Pregnancy: A Model

    Science.gov (United States)

    Kahn, James G.; Jiwani, Aliya; Gomez, Gabriela B.; Hawkes, Sarah J.; Chesson, Harrell W.; Broutet, Nathalie; Kamb, Mary L.; Newman, Lori M.

    2014-01-01

    Background Syphilis in pregnancy imposes a significant global health and economic burden. More than half of cases result in serious adverse events, including infant mortality and infection. The annual global burden from mother-to-child transmission (MTCT) of syphilis is estimated at 3.6 million disability-adjusted life years (DALYs) and $309 million in medical costs. Syphilis screening and treatment is simple, effective, and affordable, yet, worldwide, most pregnant women do not receive these services. We assessed cost-effectiveness of scaling-up syphilis screening and treatment in existing antenatal care (ANC) programs in various programmatic, epidemiologic, and economic contexts. Methods and Findings We modeled the cost, health impact, and cost-effectiveness of expanded syphilis screening and treatment in ANC, compared to current services, for 1,000,000 pregnancies per year over four years. We defined eight generic country scenarios by systematically varying three factors: current maternal syphilis testing and treatment coverage, syphilis prevalence in pregnant women, and the cost of healthcare. We calculated program and net costs, DALYs averted, and net costs per DALY averted over four years in each scenario. Program costs are estimated at $4,142,287 – $8,235,796 per million pregnant women (2010 USD). Net costs, adjusted for averted medical care and current services, range from net savings of $12,261,250 to net costs of $1,736,807. The program averts an estimated 5,754 – 93,484 DALYs, yielding net savings in four scenarios, and a cost per DALY averted of $24 – $111 in the four scenarios with net costs. Results were robust in sensitivity analyses. Conclusions Eliminating MTCT of syphilis through expanded screening and treatment in ANC is likely to be highly cost-effective by WHO-defined thresholds in a wide range of settings. Countries with high prevalence, low current service coverage, and high healthcare cost would benefit most. Future analyses can be

  17. Macroscopic Models of Clique Tree Growth for Bayesian Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — In clique tree clustering, inference consists of propagation in a clique tree compiled from a Bayesian network. In this paper, we develop an analytical approach to...

  18. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    Science.gov (United States)

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposes a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were considered as fixed distribution functions with specific parameter values, whereas, in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied for sampling from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts' opinion-based priors, whereas two coefficients are not statistically significant when fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit to the work injury data, with a high coefficient of determination (0.91) and a lower mean squared error compared with traditional SEM.

  19. Nitrate source apportionment in a subtropical watershed using Bayesian model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Shi, Jiachun, E-mail: jcshi@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Wu, Laosheng, E-mail: laowu@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Jiang, Yonghai [State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing, 100012 (China)

    2013-10-01

    Nitrate (NO₃⁻) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO₃⁻ concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L⁻¹) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L⁻¹). Nevertheless, no water sample in the study area exceeds the WHO drinking water limit of 50 mg L⁻¹ NO₃⁻. Four sources of NO₃⁻ (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl⁻, NO₃⁻, HCO₃⁻, SO₄²⁻, Ca²⁺, K⁺, Mg²⁺, Na⁺, dissolved oxygen (DO)] and a dual isotope approach (δ¹⁵N–NO₃⁻ and δ¹⁸O–NO₃⁻). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet season: AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO₃⁻ to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO₃⁻, better

  20. Nitrate source apportionment in a subtropical watershed using Bayesian model

    International Nuclear Information System (INIS)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao; Shi, Jiachun; Wu, Laosheng; Jiang, Yonghai

    2013-01-01

    Nitrate (NO₃⁻) pollution in aquatic systems is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO₃⁻ concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L⁻¹) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L⁻¹). Nevertheless, no water sample in the study area exceeds the WHO drinking water limit of 50 mg L⁻¹ NO₃⁻. Four sources of NO₃⁻ (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl⁻, NO₃⁻, HCO₃⁻, SO₄²⁻, Ca²⁺, K⁺, Mg²⁺, Na⁺, dissolved oxygen (DO)] and a dual isotope approach (δ¹⁵N–NO₃⁻ and δ¹⁸O–NO₃⁻). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet season: AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO₃⁻ to water in May than in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO₃⁻, better agricultural management practices and sewage disposal programs can be implemented to sustain water quality in subtropical watersheds
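
    A SIAR-style mixing model can be sketched compactly: observed dual-isotope signatures are modelled as a proportion-weighted mixture of assumed source signatures, and a random-walk Metropolis sampler explores the proportions on the simplex. All signatures, noise levels and proportions below are invented; SIAR itself additionally models source variability and fractionation.

```python
import numpy as np

rng = np.random.default_rng(3)
# assumed (d15N, d18O) source signatures: AD, SN, SF, M&S (illustrative)
src = np.array([[ 2.0, 55.0],
                [ 5.0,  3.0],
                [ 0.0, -5.0],
                [12.0,  2.0]])
true_p = np.array([0.1, 0.3, 0.3, 0.3])
obs = true_p @ src + rng.normal(scale=1.0, size=(30, 2))  # 30 water samples

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def log_post(z, sigma=1.0):
    # Gaussian likelihood of the mixture mean; the prior on proportions is
    # the one implied by the softmax parametrisation (SIAR uses an explicit
    # Dirichlet prior instead)
    resid = obs - softmax(z) @ src
    return -0.5 * (resid ** 2).sum() / sigma ** 2

z, lp, samples = np.zeros(4), log_post(np.zeros(4)), []
for it in range(20000):
    z_new = z + rng.normal(scale=0.2, size=4)
    lp_new = log_post(z_new)
    if np.log(rng.random()) < lp_new - lp:    # Metropolis accept/reject
        z, lp = z_new, lp_new
    if it > 5000 and it % 10 == 0:
        samples.append(softmax(z))

# with four sources but only two isotopes the proportions are only partially
# identified, so the posterior stays wide, one of the caveats noted above
print("posterior mean proportions:", np.mean(samples, axis=0).round(2))
```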

  1. A Review of Generic Preference-Based Measures for Use in Cost-Effectiveness Models.

    Science.gov (United States)

    Brazier, John; Ara, Roberta; Rowen, Donna; Chevrou-Severac, Helene

    2017-12-01

    Generic preference-based measures (GPBMs) of health are used to obtain the quality adjustment weight required to calculate the quality-adjusted life year in health economic models. GPBMs have been developed to use across different interventions and medical conditions and typically consist of a self-complete patient questionnaire, a health state classification system, and preference weights for all states defined by the classification system. Of the six main GPBMs, the three most frequently used are the Health Utilities Index version 3, the EuroQol 5 dimensions (3 and 5 levels), and the Short Form 6 dimensions. There are considerable differences in GPBMs in terms of the content and size of descriptive systems (i.e. the numbers of dimensions of health and levels of severity within these), the methods of valuation [e.g. time trade-off (TTO), standard gamble (SG)], and the populations (e.g. general population, patients) used to value the health states within the descriptive systems. Although GPBMs are anchored at 1 (full health) and 0 (dead), they produce different health state utility values when completed by the same patient. Considerations when selecting a measure for use in a clinical trial include practicality, reliability, validity and responsiveness. Requirements of reimbursement agencies may impose additional restrictions on suitable measures for use in economic evaluations, such as the valuation technique (TTO, SG) or the source of values (general public vs. patients).

  2. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)

  3. The Potential Cost Effectiveness of Different Dengue Vaccination Programmes in Malaysia: A Value-Based Pricing Assessment Using Dynamic Transmission Mathematical Modelling.

    Science.gov (United States)

    Shafie, Asrul Akmal; Yeo, Hui Yee; Coudeville, Laurent; Steinberg, Lucas; Gill, Balvinder Singh; Jahis, Rohani; Amar-Singh Hss

    2017-05-01

    Dengue disease poses a great economic burden in Malaysia. This study evaluated the cost effectiveness and impact of dengue vaccination in Malaysia from both provider and societal perspectives using a dynamic transmission mathematical model. The model incorporated sensitivity analyses, Malaysia-specific data, evidence from recent phase III studies and pooled efficacy and long-term safety data to refine the estimates from previous published studies. Unit costs were valued in $US, year 2013 values. Six vaccination programmes employing a three-dose schedule were identified as the most likely programmes to be implemented. In all programmes, vaccination produced positive benefits expressed as reductions in dengue cases, dengue-related deaths, life-years lost, disability-adjusted life-years and dengue treatment costs. Instead of incremental cost-effectiveness ratios (ICERs), we evaluated the cost effectiveness of the programmes by calculating threshold vaccine prices: from the provider perspective, vaccination was cost effective up to a price of $US32.39 for programme 6 (highly cost effective up to $US14.15) and up to a price of $US100.59 for programme 1 (highly cost effective up to $US47.96). The cost-effectiveness analysis is sensitive to under-reporting, vaccine protection duration and model time horizon. Routine vaccination for a population aged 13 years with a catch-up cohort aged 14-30 years in targeted hotspot areas appears to be the best-value strategy among those investigated. Dengue vaccination is a potentially good investment if the purchaser can negotiate a price at or below the cost-effective threshold price.

  4. Effectiveness and cost-effectiveness of serum B-type natriuretic peptide testing and monitoring in patients with heart failure in primary and secondary care: an evidence synthesis, cohort study and cost-effectiveness model.

    Science.gov (United States)

    Pufulete, Maria; Maishman, Rachel; Dabner, Lucy; Mohiuddin, Syed; Hollingworth, William; Rogers, Chris A; Higgins, Julian; Dayer, Mark; Macleod, John; Purdy, Sarah; McDonagh, Theresa; Nightingale, Angus; Williams, Rachael; Reeves, Barnaby C

    2017-08-01

    Heart failure (HF) affects around 500,000 people in the UK. HF medications are frequently underprescribed and B-type natriuretic peptide (BNP)-guided therapy may help to optimise treatment. To evaluate the clinical effectiveness and cost-effectiveness of BNP-guided therapy compared with symptom-guided therapy in HF patients. Systematic review, cohort study and cost-effectiveness model. A literature review and usual care in the NHS. (a) HF patients in randomised controlled trials (RCTs) of BNP-guided therapy; and (b) patients having usual care for HF in the NHS. Systematic review: BNP-guided therapy or symptom-guided therapy in primary or secondary care. Cohort study: BNP monitored (≥ 6 months' follow-up, three or more BNP tests and two or more tests per year), BNP tested (≥ 1 test but not BNP monitored) or never tested. Cost-effectiveness model: BNP-guided therapy in specialist clinics. Outcomes were mortality, hospital admission (all cause and HF related) and adverse events, plus quality-adjusted life-years (QALYs) for the cost-effectiveness model. Systematic review: individual participant or aggregate data from eligible RCTs. Cohort study: the Clinical Practice Research Datalink, Hospital Episode Statistics and National Heart Failure Audit (NHFA). A systematic literature search (five databases, trial registries, grey literature and reference lists of publications) identified published and unpublished RCTs. Five RCTs contributed individual participant data (IPD) and eight RCTs contributed aggregate data (1536 participants were randomised to BNP-guided therapy and 1538 participants were randomised to symptom-guided therapy). For all-cause mortality, the hazard ratio (HR) for BNP-guided therapy was 0.87 [95% confidence interval (CI) 0.73 to 1.04].

  5. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Science.gov (United States)

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.

  6. Bayesian inference for partially identified models exploring the limits of limited data

    CERN Document Server

    Gustafson, Paul

    2015-01-01

    Introduction; Identification; What Is against Us?; What Is for Us?; Some Simple Examples of Partially Identified Models; The Road Ahead; The Structure of Inference in Partially Identified Models; Bayesian Inference; The Structure of Posterior Distributions in PIMs; Computational Strategies; Strength of Bayesian Updating, Revisited; Posterior Moments; Credible Intervals; Evaluating the Worth of Inference; Partial Identification versus Model Misspecification; The Siren Call of Identification; Comp

  7. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems

  8. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
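
    The nested-sampling core is small enough to sketch. The toy below estimates the evidence for a one-dimensional Gaussian likelihood under a uniform prior on [-5, 5] (analytic answer log Z ≈ log 0.1), performing the constrained step by naive rejection from the prior, which is exactly the step the HNS algorithm replaces with HMC for efficiency.

```python
import numpy as np

rng = np.random.default_rng(4)
loglike = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

N = 200                                  # live points
live = rng.uniform(-5, 5, N)
logL = loglike(live)
logZ, logX = -np.inf, 0.0                # evidence accumulator, log prior volume

for _ in range(1200):
    worst = int(np.argmin(logL))
    logX_new = logX - 1.0 / N            # expected log-volume shrinkage per step
    logw = np.log(np.exp(logX) - np.exp(logX_new))   # width of this shell
    logZ = np.logaddexp(logZ, logw + logL[worst])
    # constrained step: draw from the prior above the current likelihood floor
    while True:
        x = rng.uniform(-5, 5)
        if loglike(x) > logL[worst]:
            break
    live[worst], logL[worst] = x, loglike(x)
    logX = logX_new

logZ = np.logaddexp(logZ, logX + np.log(np.exp(logL).mean()))  # remaining mass
print(f"estimated log Z = {logZ:.3f}  (analytic: {np.log(0.1):.3f})")
```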

  9. A Bayesian, generalized frailty model for comet assays.

    Science.gov (United States)

    Ghebretinsae, Aklilu Habteab; Faes, Christel; Molenberghs, Geert; De Boeck, Marlies; Geys, Helena

    2013-05-01

    This paper proposes a flexible modeling approach for so-called comet assay data regularly encountered in preclinical research. While such data consist of non-Gaussian outcomes in a multilevel hierarchical structure, traditional analyses typically completely or partly ignore this hierarchical nature by summarizing measurements within a cluster. Non-Gaussian outcomes are often modeled using exponential family models. This is true not only for binary and count data, but also, for example, for time-to-event outcomes. Two important reasons for extending this family are (1) the possible occurrence of overdispersion, meaning that the variability in the data may not be adequately described by the models, which often exhibit a prescribed mean-variance link, and (2) the accommodation of a hierarchical structure in the data, owing to clustering in the data. The first issue is dealt with through so-called overdispersion models. Clustering is often accommodated through the inclusion of random subject-specific effects. Conventionally, though not always, such random effects are assumed to be normally distributed. In the case of time-to-event data, one encounters, for example, the gamma frailty model (Duchateau and Janssen, 2007). While both of these issues may occur simultaneously, models combining both are uncommon. Molenberghs et al. (2010) proposed a broad class of generalized linear models accommodating overdispersion and clustering through two separate sets of random effects. Here, we use this method to model data from a comet assay with a three-level hierarchical structure. Although a conjugate gamma random effect is used for the overdispersion random effect, both gamma and normal random effects are considered for the hierarchical random effect. Apart from model formulation, we place emphasis on Bayesian estimation. Our proposed method has an upper hand over the traditional analysis in that it (1) uses the appropriate distribution stipulated in the literature; (2) deals

  10. Reliability assessment using degradation models: bayesian and classical approaches

    Directory of Open Access Journals (Sweden)

    Marta Afonso Freitas

    2010-04-01

    Full Text Available Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work, for which there are characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed by using "classical" and Bayesian approaches. Four methods of data analysis based on classical inference are presented. Next we show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set regarding train wheel degradation.

  11. Bayesian selection of misspecified models is overconfident and may cause spurious posterior probabilities for phylogenetic trees.

    Science.gov (United States)

    Yang, Ziheng; Zhu, Tianqi

    2018-02-20

    The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
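
    The polarized behaviour is straightforward to reproduce in the simplest possible setting: fair-coin data confronted with two equally wrong point hypotheses. A sketch with illustrative probabilities:

```python
import numpy as np

rng = np.random.default_rng(5)
for n in (100, 1000, 10000, 100000):
    heads = rng.binomial(n, 0.5)        # fair-coin data
    # log marginal likelihoods of two equally wrong point hypotheses
    logm1 = heads * np.log(0.4) + (n - heads) * np.log(0.6)
    logm2 = heads * np.log(0.6) + (n - heads) * np.log(0.4)
    diff = np.clip(logm2 - logm1, -700, 700)   # avoid overflow in exp
    post1 = 1.0 / (1.0 + np.exp(diff))         # P(model 1 | data), equal priors
    print(f"n={n:6d}  P(model 1 | data) = {post1:.4f}")
```

    Because the log Bayes factor here drifts like a random walk whose scale grows with the square root of n, the posterior model probability is pushed towards 0 or 1 as the data accumulate, rather than settling near 1/2.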

  12. Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory.

    Science.gov (United States)

    Tauber, Sean; Navarro, Daniel J; Perfors, Amy; Steyvers, Mark

    2017-07-01

    Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models

    DEFF Research Database (Denmark)

    Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville

    2016-01-01

    The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study the properties of several Bayesian leave-one-out (LOO) cross-validation approximations that in most cases can be computed with a small additional cost after forming the posterior approximation given the full data. Our main objective is to assess the accuracy of the approximative LOO cross-validation estimators...

  14. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    Usage of automatic systems in airliners has increased fuel efficiency, added extra capabilities, and enhanced safety and reliability, as well as provided improved passenger comfort, since their introduction in the late 1980s. However, the original benefits of automation, including reduced flight crew workload, human errors and training requirements, were not achieved as originally expected. Instead, automation introduced new failure modes, redistributed and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of the increased complexity and reliance on automation baseline model, named FLAP for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework comprised of high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on flight crew are incorporated into the model, including flight skill degradation, increased cognitive demand and training requirements along with their interactions. Besides flight crew deficiencies, automation system

  15. Bayesian joint modelling of benefit and risk in drug development.

    Science.gov (United States)

    Costa, Maria J; Drury, Thomas

    2018-05-01

    To gain regulatory approval, a new medicine must demonstrate that its benefits outweigh any potential risks, ie, that the benefit-risk balance is favourable towards the new medicine. For transparency and clarity of the decision, a structured and consistent approach to benefit-risk assessment that quantifies uncertainties and accounts for underlying dependencies is desirable. This paper proposes two approaches to benefit-risk evaluation, both based on the idea of joint modelling of mixed outcomes that are potentially dependent at the subject level. Using Bayesian inference, the two approaches offer interpretability and efficiency to enhance qualitative frameworks. Simulation studies show that accounting for correlation leads to a more accurate assessment of the strength of evidence to support benefit-risk profiles of interest. Several graphical approaches are proposed that can be used to communicate the benefit-risk balance to project teams. Finally, the two approaches are illustrated in a case study using real clinical trial data. Copyright © 2018 John Wiley & Sons, Ltd.

  16. Cost-effectiveness of a nurse practitioner-family physician model of care in a nursing home: controlled before and after study.

    Science.gov (United States)

    Lacny, Sarah; Zarrabi, Mahmood; Martin-Misener, Ruth; Donald, Faith; Sketris, Ingrid; Murphy, Andrea L; DiCenso, Alba; Marshall, Deborah A

    2016-09-01

    To examine the cost-effectiveness of a nurse practitioner-family physician model of care compared with family physician-only care in a Canadian nursing home. As demand for long-term care increases, alternative care models including nurse practitioners are being explored. Cost-effectiveness analysis using a controlled before-after design. The study included an 18-month 'before' period (2005-2006) and a 21-month 'after' time period (2007-2009). Data were abstracted from charts from 2008-2010. We calculated incremental cost-effectiveness ratios comparing the intervention (nurse practitioner-family physician model; n = 45) to internal (n = 65), external (n = 70) and combined internal/external family physician-only control groups, measured as the change in healthcare costs divided by the change in emergency department transfers/person-month. We assessed joint uncertainty around costs and effects using non-parametric bootstrapping and cost-effectiveness acceptability curves. Point estimates of the incremental cost-effectiveness ratio demonstrated the nurse practitioner-family physician model dominated the internal and combined control groups (i.e. was associated with smaller increases in costs and emergency department transfers/person-month). Compared with the external control, the intervention resulted in a smaller increase in costs and larger increase in emergency department transfers. Using a willingness-to-pay threshold of $1000 CAD/emergency department transfer, the probability the intervention was cost-effective compared with the internal, external and combined control groups was 26%, 21% and 25%. Due to uncertainty around the distribution of costs and effects, we were unable to make a definitive conclusion regarding the cost-effectiveness of the nurse practitioner-family physician model; however, these results suggest benefits that could be confirmed in a larger study. © 2016 John Wiley & Sons Ltd.
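
    The bootstrap assessment of joint uncertainty described above can be sketched generically: resample patient-level cost and effect differences, then trace the probability that the intervention is cost-effective across willingness-to-pay values (the acceptability curve). The data below are simulated stand-ins, not the study's records.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 45
# simulated per-person changes (intervention minus control differences)
d_cost = rng.normal(-150, 800, n)       # change in costs / person-month
d_effect = rng.normal(-0.02, 0.10, n)   # change in ED transfers / person-month

B = 5000
idx = rng.integers(0, n, size=(B, n))   # non-parametric bootstrap resamples
dc = d_cost[idx].mean(axis=1)
de = d_effect[idx].mean(axis=1)

for wtp in (0, 500, 1000, 2000):        # willingness to pay per transfer averted
    # net monetary benefit; transfers are undesirable, so averted = -de
    nmb = -wtp * de - dc
    print(f"WTP=${wtp:5d}: P(cost-effective) = {(nmb > 0).mean():.2f}")
```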

  17. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the two main quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial quality parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. The HBM is applied to provide an overall assessment of the degradation of railway track geometry for the main Portuguese railway line Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures leads to a better model fit
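
    A Gibbs sampler for a stripped-down hierarchical model conveys the estimation strategy: section-level deterioration rates drawn from a population distribution, with conjugate updates for each parameter block. The spatially correlated (CAR) components used in the paper are omitted, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
I, J, s = 20, 8, 0.3                       # sections, inspections, noise SD
true_rates = rng.normal(1.0, 0.2, I)
y = true_rates[:, None] + rng.normal(0, s, (I, J))   # observed degradation

mu, tau2 = 0.0, 1.0                        # initial population parameters
for sweep in range(2000):
    # 1. section rates | mu, tau2 (conjugate normal update)
    prec = J / s**2 + 1 / tau2
    mean = (y.sum(axis=1) / s**2 + mu / tau2) / prec
    rates = rng.normal(mean, np.sqrt(1 / prec))
    # 2. population mean | rates (flat prior)
    mu = rng.normal(rates.mean(), np.sqrt(tau2 / I))
    # 3. population variance | rates (inverse-gamma(1, 1) prior)
    tau2 = 1 / rng.gamma(1 + I / 2, 1 / (1 + 0.5 * ((rates - mu) ** 2).sum()))

print(f"posterior draw: mu = {mu:.2f}, tau = {np.sqrt(tau2):.2f}")
```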

  18. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    DEFF Research Database (Denmark)

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

    The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
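
    The starting point for such derivations is the standard Gaussian-process predictive equations at a known test input; the contribution described above is propagating an uncertain (Gaussian) input through them. A minimal sketch of the baseline computation, with an assumed squared-exponential kernel:

```python
import numpy as np

def k(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel between 1-D input vectors."""
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(8)
X = np.linspace(0, 6, 30)
y = np.sin(X) + rng.normal(0, 0.1, X.size)

K = k(X, X) + 0.1**2 * np.eye(X.size)      # kernel matrix plus noise
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

x_star = np.array([2.5])                   # a certain, point-valued test input
ks = k(X, x_star)                          # cross-covariances, shape (30, 1)
mean = ks.T @ alpha                        # predictive mean
v = np.linalg.solve(L, ks)
var = k(x_star, x_star) - v.T @ v          # predictive variance
print(f"f(2.5) ~ {mean.item():.3f} +/- {np.sqrt(var.item()):.3f}")
```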

  19. Digitized Onondaga Lake Dissolved Oxygen Concentrations and Model Simulated Values using Bayesian Monte Carlo Methods

    Data.gov (United States)

    U.S. Environmental Protection Agency — The dataset is lake dissolved oxygen concentrations obtained form plots published by Gelda et al. (1996) and lake reaeration model simulated values using Bayesian...

  20. Results from evaluations of models and cost-effectiveness tools to support introduction decisions for new vaccines need critical appraisal

    Directory of Open Access Journals (Sweden)

    Moorthy Vasee

    2011-05-01

    Full Text Available The World Health Organization (WHO) recommends that the cost-effectiveness (CE) of introducing new vaccines be considered before such a programme is implemented. However, in low- and middle-income countries (LMICs), it is often challenging to perform and interpret the results of model-based economic appraisals of vaccines that benefit from locally relevant data. As a result, WHO embarked on a series of consultations to assess economic analytical tools to support vaccine introduction decisions for pneumococcal, rotavirus and human papillomavirus vaccines. The objectives of these assessments are to provide decision makers with a menu of existing CE tools for vaccines and their characteristics, rather than to endorse the use of a single tool. The outcome will provide policy makers in LMICs with information about the feasibility of applying these models to inform their own decision making. We argue that if models and CE analyses are used to inform decisions, they ought to be critically appraised beforehand, including a transparent evaluation of their structure, assumptions and data sources (in isolation or in comparison to similar tools), so that decision makers can use them while being fully aware of their robustness and limitations.

  1. How to model mutually exclusive events based on independent causal pathways in Bayesian network models

    OpenAIRE

    Fenton, N.; Neil, M.; Lagnado, D.; Marsh, W.; Yet, B.; Constantinou, A.

    2016-01-01

    We show that existing Bayesian network (BN) modelling techniques cannot capture the correct intuitive reasoning in the important case when a set of mutually exclusive events needs to be modelled as separate nodes instead of states of a single node. A previously proposed ‘solution’, which introduces a simple constraint node that enforces mutual exclusivity, fails to preserve the prior probabilities of the events, while other proposed solutions involve major changes to the original model. We pro...

  2. Comparative Cost-Effectiveness of Conservative or Intensive Blood Pressure Treatment Guidelines in Adults Aged 35-74 Years: The Cardiovascular Disease Policy Model.

    Science.gov (United States)

    Moise, Nathalie; Huang, Chen; Rodgers, Anthony; Kohli-Lynch, Ciaran N; Tzong, Keane Y; Coxson, Pamela G; Bibbins-Domingo, Kirsten; Goldman, Lee; Moran, Andrew E

    2016-07-01

    The population health effect and cost-effectiveness of implementing intensive blood pressure goals in high-cardiovascular disease (CVD) risk adults have not been described. Using the CVD Policy Model, CVD events, treatment costs, quality-adjusted life years, and drug and monitoring costs were simulated over 2016 to 2026 for hypertensive patients aged 35 to 74 years. We projected the effectiveness and costs of hypertension treatment according to the 2003 Joint National Committee (JNC7) or 2014 (JNC8) guidelines, and then, for adults aged ≥50 years, we assessed the cost-effectiveness of adding an intensive systolic blood pressure goal, considering incremental cost-effectiveness ratios below $50 000 per quality-adjusted life year gained to be cost-effective. JNC7 strategies treat more patients and are more costly to implement compared with JNC8 strategies. Adding intensive systolic blood pressure goals for high-risk patients prevents an estimated 43 000 and 35 000 annual CVD events incremental to JNC8 and JNC7, respectively. Intensive strategies save costs in men and are cost-effective in women compared with JNC8 alone. At a willingness-to-pay threshold of $50 000 per quality-adjusted life year gained, JNC8+intensive had the highest probability of cost-effectiveness in women (82%) and JNC7+intensive the highest probability of cost-effectiveness in men (100%). Assuming higher drug and monitoring costs, adding intensive goals for high-risk patients remained consistently cost-effective in men, but not always in women. Among patients aged 35 to 74 years, adding intensive blood pressure goals for high-risk groups to current national hypertension treatment guidelines prevents additional CVD deaths while saving costs, provided that medication costs are controlled. © 2016 American Heart Association, Inc.

  3. A cost-effectiveness model of smoking cessation based on a randomised controlled trial of varenicline versus placebo in patients with chronic obstructive pulmonary disease.

    Science.gov (United States)

    Lock, Kevin; Wilson, Koo; Murphy, Daniel; Riesco, Juan Antonio

    2011-12-01

    Smoking is an important risk factor in chronic obstructive pulmonary disease (COPD). A recent clinical trial demonstrated the efficacy of varenicline versus placebo as an aid to smoking cessation in patients with COPD. This study examines the cost-effectiveness of varenicline from the perspective of the healthcare systems of Spain (base case), the UK, France, Germany, Greece and Italy. A Markov model was developed to determine the cost-effectiveness of varenicline as an aid to smoking cessation, compared to placebo, in a COPD population. Cost-effectiveness was determined by the incremental cost per quality-adjusted life year (QALY) gained. In the Spanish base case, varenicline had an incremental cost of €1,021/person for an average of 0.24 life years (0.17 QALYs) gained over the lifetime of a cohort of COPD patients, resulting in an incremental cost-effectiveness ratio (ICER) of €5,566 per QALY. In the other European countries, the ICER varied between €4,519 (UK) and €10,167 (Italy). Probabilistic sensitivity analysis suggested varenicline had a high probability (>95%) of being cost-effective at a threshold of €30,000/QALY. Varenicline is expected to be a cost-effective aid to smoking cessation in COPD patients in all of the countries studied.
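
    A toy three-state Markov cohort model in the spirit of the analysis above. All transition probabilities, utilities, the cycle count and the discount rate are invented for illustration; only the incremental cost of €1,021/person is reused from the abstract:

        import numpy as np

        def cohort_qalys(quit_prob, cycles=40, disc=0.03):
            # States: 0 = current smoker, 1 = former smoker, 2 = dead.
            P = np.array([[0.95 - quit_prob, quit_prob, 0.05],
                          [0.02,             0.95,      0.03],
                          [0.00,             0.00,      1.00]])
            utility = np.array([0.65, 0.80, 0.0])   # hypothetical QoL weights
            state = np.array([1.0, 0.0, 0.0])
            qalys = 0.0
            for t in range(cycles):
                qalys += (state @ utility) / (1 + disc) ** t
                state = state @ P                   # advance the cohort one cycle
            return qalys

        q_var = cohort_qalys(quit_prob=0.22)   # varenicline arm (invented quit rate)
        q_plc = cohort_qalys(quit_prob=0.07)   # placebo arm (invented quit rate)
        delta_cost = 1021.0                    # incremental cost from the abstract (EUR)
        print("Incremental QALYs:", q_var - q_plc)
        print("ICER (EUR/QALY):", delta_cost / (q_var - q_plc))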

  4. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e, E-mail: julianapduarte@poli.ufrj.br, E-mail: victor.coppo.leite@poli.ufrj.br, E-mail: frutuoso@nuclear.ufrj.br [Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering typically modeled by Markovian models. Dependent events play an important role, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, and Fortran 90 is used to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models, but not through Bayesian networks, because the latter need probabilities as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time Bayesian networks (DTBN) were used in order to check their capabilities for modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model produced results as good as those of the DTBN with a much more simplified approach. (author)

  5. A comparison between Markovian models and Bayesian networks for treating some dependent events in reliability evaluations

    International Nuclear Information System (INIS)

    Duarte, Juliana P.; Leite, Victor C.; Melo, P.F. Frutuoso e

    2013-01-01

    Bayesian networks have become a very handy tool for solving problems in various application areas. This paper discusses the use of Bayesian networks to treat dependent events in reliability engineering typically modeled by Markovian models. Dependent events play an important role, for example, when treating load-sharing systems, bridge systems, common-cause failures, and switching systems (those for which a standby component is activated after the main one fails by means of a switching mechanism). Repair plays an important role in all these cases (for example, the number of repairmen). All Bayesian network calculations are performed by means of the Netica™ software, of Norsys Software Corporation, and Fortran 90 is used to evaluate them over time. The discussion considers the development of time-dependent reliability figures of merit, which are easily obtained through Markovian models, but not through Bayesian networks, because the latter need probabilities as input rather than failure and repair rates. Bayesian networks produced results in very good agreement with those of Markov models and pivotal decomposition. Static and discrete-time Bayesian networks (DTBN) were used in order to check their capabilities for modeling specific situations, like switching failures in cold-standby systems. The DTBN was more flexible for modeling systems where the time of occurrence of an event is important, for example, standby failure and repair. However, the static network model produced results as good as those of the DTBN with a much more simplified approach. (author)
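
    A compact example of the kind of Markovian model both records discuss: a cold-standby pair with an imperfect switching mechanism and repair, solved through the matrix exponential of the generator (all rates and the switch-failure probability are hypothetical):

        import numpy as np
        from scipy.linalg import expm

        lam, lam_s, mu, p = 1e-3, 1.5e-3, 1e-2, 0.02   # hypothetical rates (per hour)
        # States: 0 = main unit running, 1 = standby running (after a successful
        # switch), 2 = system failed (absorbing). Switching fails with probability p.
        Q = np.array([[-lam, lam * (1 - p), lam * p],
                      [mu,   -(mu + lam_s), lam_s],
                      [0.0,  0.0,           0.0]])

        p0 = np.array([1.0, 0.0, 0.0])
        for t in (100.0, 1000.0, 5000.0):
            pt = p0 @ expm(Q * t)                      # state probabilities at time t
            print(f"t = {t:6.0f} h  P(system failed) = {pt[2]:.4f}")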

  6. Cost-Effectiveness of a Chronic Care Model for Frail Older Adults in Primary Care: Economic Evaluation Alongside a Stepped-Wedge Cluster-Randomized Trial

    NARCIS (Netherlands)

    van Leeuwen, K.M.; Bosmans, J.E.; Jansen, A.P.D.; Hoogendijk, E.O.; Muntinga, M.E.; van Hout, H.P.J.; Nijpels, G.; van der Horst, H.E.; van Tulder, M.W.

    2015-01-01

    Objectives To evaluate the cost-effectiveness of the Geriatric Care Model (GCM), an integrated care model for frail older adults based on the Chronic Care Model, with that of usual care. Design Economic evaluation alongside a 24-month stepped-wedge cluster-randomized controlled trial. Setting

  7. Cost-Effectiveness of a Chronic Care Model for Frail Older Adults in Primary Care : Economic Evaluation Alongside a Stepped-Wedge Cluster-Randomized Trial

    NARCIS (Netherlands)

    van Leeuwen, Karen M; Bosmans, Judith E; Jansen, Aaltje P D; Hoogendijk, Emiel O; Muntinga, Maaike E; van Hout, Hein P J; Nijpels, Giel; van der Horst, Henriette E; van Tulder, Maurits W

    2015-01-01

    OBJECTIVES: To evaluate the cost-effectiveness of the Geriatric Care Model (GCM), an integrated care model for frail older adults based on the Chronic Care Model, with that of usual care. DESIGN: Economic evaluation alongside a 24-month stepped-wedge cluster-randomized controlled trial. SETTING:

  8. Population Screening for Hereditary Haemochromatosis in Australia: Construction and Validation of a State-Transition Cost-Effectiveness Model.

    Science.gov (United States)

    de Graaff, Barbara; Si, Lei; Neil, Amanda L; Yee, Kwang Chien; Sanderson, Kristy; Gurrin, Lyle C; Palmer, Andrew J

    2017-03-01

    HFE-associated haemochromatosis, the most common monogenic disorder amongst populations of northern European ancestry, is characterised by iron overload. Excess iron is stored in parenchymal tissues, leading to morbidity and mortality. Population screening programmes are likely to improve early diagnosis, thereby decreasing associated disease. Our aim was to develop and validate a health economics model of screening using utilities and costs from a haemochromatosis cohort. A state-transition model was developed with Markov states based on disease severity. Australian males (aged 30 years) and females (aged 45 years) of northern European ancestry were the target populations. The screening strategy was the status quo approach in Australia; the model was run over a lifetime horizon. Costs were estimated from the government perspective and reported in 2015 Australian dollars ($A); costs and quality-adjusted life-years (QALYs) were discounted at 5% annually. Model validity was assessed using goodness-of-fit analyses. Second-order Monte Carlo simulation was used to account for uncertainty in multiple parameters. For validity, the model reproduced mortality, life expectancy (LE) and prevalence rates in line with published data. LEs for C282Y homozygote males and females were 49.9 and 40.2 years, respectively, slightly lower than population rates. Mean (95% confidence interval) QALYs were 15.7 (7.7-23.7) for males and 14.4 (6.7-22.1) for females. Mean discounted lifetime costs for C282Y homozygotes were $A22,737 (3670-85,793) for males and $A13,840 (1335-67,377) for females. Sensitivity analyses revealed that discount rates and prevalence had the greatest impacts on outcomes. We have developed a transparent, validated health economics model of C282Y homozygote haemochromatosis. The model will be useful to decision makers to identify cost-effective screening strategies.

  9. Robust Determinants of Growth in Asian Developing Economies: A Bayesian Panel Data Model Averaging Approach

    OpenAIRE

    LEON-GONZALEZ, Roberto; VINAYAGATHASAN, Thanabalasingam

    2013-01-01

    This paper investigates the determinants of growth in the Asian developing economies. We use Bayesian model averaging (BMA) in the context of a dynamic panel data growth regression to overcome the uncertainty over the choice of control variables. In addition, we use a Bayesian algorithm to analyze a large number of competing models. Among the explanatory variables, we include a non-linear function of inflation that allows for threshold effects. We use an unbalanced panel data set of 27 Asian ...

  10. Taxes and Subsidies for Improving Diet and Population Health in Australia: A Cost-Effectiveness Modelling Study.

    Directory of Open Access Journals (Sweden)

    Linda J Cobiac

    2017-02-01

    Full Text Available An increasing number of countries are implementing taxes on unhealthy foods and drinks to address the growing burden of dietary-related disease, but the cost-effectiveness of combining taxes on unhealthy foods and subsidies on healthy foods is not well understood. Using a population model of dietary-related diseases and health care costs and food price elasticities, we simulated the effect of taxes on saturated fat, salt, sugar, and sugar-sweetened beverages and a subsidy on fruits and vegetables, over the lifetime of the Australian population. The sizes of the taxes and subsidy were set such that, when combined as a package, there would be a negligible effect on average weekly expenditure on food (<1% change). We evaluated the cost-effectiveness of the interventions individually, then determined the optimal combination based on maximising net monetary benefit at a threshold of AU$50,000 per disability-adjusted life year (DALY). The simulations suggested that the combination of taxes and subsidy might avert as many as 470,000 DALYs (95% uncertainty interval [UI]: 420,000 to 510,000) in the Australian population of 22 million, with a net cost-saving of AU$3.4 billion (95% UI: AU$2.4 billion to AU$4.6 billion; US$2.3 billion) to the health sector. Of the taxes evaluated, the sugar tax produced the biggest estimates of health gain (270,000 [95% UI: 250,000 to 290,000] DALYs averted), followed by the salt tax (130,000 [95% UI: 120,000 to 140,000] DALYs), the saturated fat tax (97,000 [95% UI: 77,000 to 120,000] DALYs), and the sugar-sweetened beverage tax (12,000 [95% UI: 2,100 to 21,000] DALYs). The fruit and vegetable subsidy (-13,000 [95% UI: -44,000 to 18,000] DALYs) was a cost-effective addition to the package of taxes. However, it did not necessarily lead to a net health benefit for the population when modelled as an intervention on its own, because of the possible adverse cross-price elasticity effects on consumption of other foods (e.g., foods high in
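
    The selection criterion used above, net monetary benefit (NMB = willingness-to-pay * DALYs averted - net cost), is simple to compute. The DALY figures below echo the abstract's point estimates, but the per-intervention net costs are invented placeholders, since the abstract reports only the package-level saving:

        wtp = 50_000                                  # AU$ per DALY averted
        interventions = {                             # (DALYs averted, net cost in AU$)
            "sugar tax": (270_000, -1.0e9),           # negative cost = saving (hypothetical)
            "salt tax": (130_000, -0.8e9),
            "saturated fat tax": (97_000, -0.5e9),
            "SSB tax": (12_000, -0.1e9),
            "fruit/veg subsidy": (-13_000, 0.3e9),
        }
        for name, (dalys, cost) in interventions.items():
            nmb = wtp * dalys - cost                  # NMB > 0 favours inclusion in the package
            print(f"{name:18s} NMB = AU${nmb / 1e9:5.2f} billion")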

  11. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To validate the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the performance of the proposed BWPHMs is better than that of the Cox Proportional Hazard Model (Cox-PHM), owing to the use of a Weibull distribution for the baseline hazard function and the treatment of model uncertainties. - Highlights: • Prioritize rehabilitation and replacement (R/R) strategies for water mains. • Consider the uncertainties for failure prediction. • Improve the prediction capability of water main failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
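
    The survival function implied by a Weibull proportional hazards model is easy to write down directly; the sketch below uses invented shape, scale and covariate coefficients rather than the calibrated Calgary values:

        import numpy as np

        def survival(t, x, shape=1.4, scale=2e-3, beta=(0.8, -0.3)):
            # Weibull PH: h(t|x) = shape*scale*t**(shape-1) * exp(beta.x), so the
            # cumulative hazard is H(t|x) = scale * t**shape * exp(beta.x).
            return np.exp(-scale * t ** shape * np.exp(np.dot(beta, x)))

        t = np.array([10.0, 25.0, 50.0, 75.0, 100.0])   # pipe age in years
        x_ci = (1.0, 0.5)   # hypothetical covariates: (cast-iron indicator, diameter z-score)
        x_di = (0.0, 0.5)
        print("S(t), cast iron:   ", survival(t, x_ci).round(3))
        print("S(t), ductile iron:", survival(t, x_di).round(3))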

  12. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as from non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance, although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances, which would normally be ignored when using a single AI model.
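
    BIC-weighted model averaging with the within-/between-model variance decomposition described above fits in a few lines; every number here (BIC values, predictions, within-model variances) is invented:

        import numpy as np

        # Hypothetical BIC values and predictions from three models (TS-FL, ANN, NF)
        bic = np.array([412.3, 415.1, 428.9])
        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()                                  # BIC-based posterior model weights

        preds = np.array([[3.1, 2.9, 3.4],            # hydraulic-conductivity estimates
                          [2.8, 3.2, 3.3],            # at three locations, one row per model
                          [3.0, 3.0, 3.5]])
        var_m = np.array([[0.04, 0.05, 0.06],         # within-model variances (hypothetical)
                          [0.05, 0.04, 0.07],
                          [0.06, 0.06, 0.08]])

        bma_mean = w @ preds
        within = w @ var_m                            # uncertainty propagated from inputs
        between = w @ (preds - bma_mean) ** 2         # uncertainty from model non-uniqueness
        print("BMA mean:", bma_mean, "total variance:", within + between)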

  13. Bayesian network as a modelling tool for risk management in agriculture

    DEFF Research Database (Denmark)

    Rasmussen, Svend; Madsen, Anders L.; Lund, Mogens

    In this paper we use Bayesian networks as an integrated modelling approach for representing uncertainty and analysing risk management in agriculture. It is shown how historical farm account data may be efficiently used to estimate conditional probabilities, which are the core elements in Bayesian network models. We further show how the Bayesian network model RiBay is used for stochastic simulation of farm income, and we demonstrate how RiBay can be used to simulate risk management at the farm level. It is concluded that the key strength of a Bayesian network is the transparency of assumptions, and that it has the ability to link uncertainty from different external sources to budget figures and to quantify risk at the farm level.

  14. Applying a private sector capitation model to the management of type 2 diabetes in the South African public sector: a cost-effectiveness analysis.

    Science.gov (United States)

    Volmink, Heinrich C; Bertram, Melanie Y; Jina, Ruxana; Wade, Alisha N; Hofman, Karen J

    2014-09-30

    Diabetes mellitus contributes substantially to the non-communicable disease burden in South Africa. The proposed National Health Insurance system provides an opportunity to consider the development of a cost-effective capitation model of care for patients with type 2 diabetes. The objective of the study was to determine the potential cost-effectiveness of adapting a private sector diabetes management programme (DMP) to the South African public sector. Cost-effectiveness analysis was undertaken with a public sector model of the DMP as the intervention and a usual practice model as the comparator. Probabilistic modelling was utilized for incremental cost-effectiveness ratio analysis, with life years gained selected as the outcome. Secondary data were used to design the model, while cost information was obtained from various sources, taking into account public sector billing. Modelling found an incremental cost-effectiveness ratio (ICER) of ZAR 8 356 (USD 1018) per life year gained (LYG) for the DMP against the usual practice model; in bootstrap analysis, this fell substantially below the willingness-to-pay threshold. Furthermore, a national implementation of the intervention could potentially result in an estimated cumulative gain of 96 997 years of life (95% CI 71 073 years - 113 994 years). Probabilistic modelling found the capitation intervention to be cost-effective, with an ICER of ZAR 8 356 (USD 1018) per LYG. Piloting the service within the public sector is recommended as an initial step, as this would provide data for more accurate economic evaluation and would also allow for qualitative analysis of the programme.

  15. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that, with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation of existing computationally driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves accuracy similar to that of the current state-of-the-art methods at proteoform identification, with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  16. A Bayesian hierarchical model for demand curve analysis.

    Science.gov (United States)

    Ho, Yen-Yi; Nhu Vo, Tien; Chu, Haitao; Luo, Xianghua; Le, Chap T

    2018-07-01

    Drug self-administration experiments are a frequently used approach to assessing the abuse liability and reinforcing property of a compound. They have been used to assess the abuse liabilities of various substances such as psychomotor stimulants and hallucinogens, food, nicotine, and alcohol. The demand curve generated from a self-administration study describes how demand for a drug or non-drug reinforcer varies as a function of price. With the passage of the 2009 Family Smoking Prevention and Tobacco Control Act, demand curve analysis provides crucial evidence to inform the US Food and Drug Administration's policy on tobacco regulation, because it produces several important quantitative measurements to assess the reinforcing strength of nicotine. The conventional approach to analyzing demand curve data is individual-specific non-linear least squares regression. The non-linear least squares approach sets out to minimize the residual sum of squares for each subject in the dataset; however, this one-subject-at-a-time approach does not allow for the estimation of between- and within-subject variability in a unified model framework. In this paper, we review the existing approaches to analyzing demand curve data (non-linear least squares regression and mixed-effects regression) and propose a new Bayesian hierarchical model. We conduct simulation analyses to compare the performance of these three approaches and illustrate the proposed approaches in a case study of nicotine self-administration in rats. We present simulation results and discuss the benefits of using the proposed approaches.
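
    A sketch of the conventional per-subject approach the paper reviews: fitting the Hursh-Silberberg exponential demand equation to one subject's consumption data by non-linear least squares. The price and intake values are hypothetical, and the span parameter k is fixed for simplicity:

        import numpy as np
        from scipy.optimize import curve_fit

        def log_demand(price, Q0, alpha, k=2.0):
            # Hursh-Silberberg: log10 Q = log10 Q0 + k * (exp(-alpha * Q0 * price) - 1)
            return np.log10(Q0) + k * (np.exp(-alpha * Q0 * price) - 1.0)

        price = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])       # unit price per infusion
        intake = np.array([95.0, 90.0, 70.0, 40.0, 12.0, 2.5])   # hypothetical consumption

        (Q0_hat, alpha_hat), _ = curve_fit(log_demand, price,
                                           np.log10(intake), p0=(100.0, 0.005))
        print(f"Q0 = {Q0_hat:.1f} (demand intensity), alpha = {alpha_hat:.4f} (elasticity)")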

  17. Cost-effectiveness of the Mental Health and Development model for schizophrenia-spectrum and bipolar disorders in rural Kenya.

    Science.gov (United States)

    de Menil, V; Knapp, M; McDaid, D; Raja, S; Kingori, J; Waruguru, M; Wood, S K; Mannarath, S; Lund, C

    2015-10-01

    The treatment gap for serious mental disorders across low-income countries is estimated to be 89%. The model for Mental Health and Development (MHD) offers community-based care for people with mental disorders in 11 low- and middle-income countries. In Kenya, using a pre-post design, 117 consecutively enrolled participants with schizophrenia-spectrum and bipolar disorders were followed up at 10 and 20 months. Comparison outcomes were drawn from the literature. Costs were analysed from societal and health system perspectives. From the societal perspective, MHD cost Int$ 594 per person in the first year and Int$ 876 over 2 years. The cost per healthy day gained was Int$ 7.96 in the first year and Int$ 1.03 over 2 years - less than the agricultural minimum wage. The cost per disability-adjusted life year averted over 2 years was Int$ 13.1 and Int$ 727 from the societal and health system perspectives, respectively, on par with antiretrovirals for HIV. MHD achieved increasing returns over time. The model appears cost-effective and equitable, especially over 2 years. Its affordability relies on multi-sectoral participation nationally and internationally.

  18. A health economic model to assess the cost-effectiveness of OPTIFAST® for the treatment of obesity in USA.

    Science.gov (United States)

    Nuijten, Mark; Marczewska, Agnieszka; Araujo Torress, Krysmaru; Rasouli, Bahareh; Perugini, Moreno

    2018-04-20

    Obesity is associated with high direct medical costs and indirect costs resulting from productivity loss. The high prevalence of obesity generates a justified need to identify cost-effective weight loss approaches from a payer's perspective. Within the variety of weight management techniques, OPTIFAST® is a clinically recognized and scientifically proven total meal replacement Low Calorie Diet that provides meaningful results in terms of weight loss and reduction in comorbidities. The objective of this study is to assess the potential cost savings of the OPTIFAST® program in the USA, as compared to "no intervention" and pharmacotherapy. An event-driven decision analytic model was used to estimate a payer's cost savings from reimbursement of the 1-year OPTIFAST® program over 3 years in the USA. The analysis was performed for the broad population of obese persons (BMI >30 kg/m²) undergoing the OPTIFAST® program versus liraglutide 3 mg, versus naltrexone/bupropion and versus "no intervention". The model included the risk of complications related to increased BMI. Data sources included published literature, clinical trials, official USA price/tariff lists and national population statistics. The primary perspective was that of a USA payer; costs are provided in 2016 US dollars. OPTIFAST® leads over a period of 3 years to cost savings of USD 9,285 per class I and II obese patient (BMI 30-39.9 kg/m²) as compared to liraglutide and USD 685 as compared to naltrexone/bupropion. Over the same time horizon, the OPTIFAST® program leads to a reduction in the cost of obesity complications of USD 1,951 as compared to "no intervention", with an incremental cost-effectiveness ratio of USD 6,475 per QALY. Scenario analyses also show substantial cost savings in patients with class III obesity (BMI ≥40.0 kg/m²) and patients with obesity (BMI 30-39.9 kg/m²) and type 2 diabetes versus all three previous comparators and bariatric surgery. Reimbursing OPTIFAST® leads to meaningful cost

  19. Cost-effectiveness of omega-3 fatty acid supplements in parenteral nutrition therapy in hospitals: a discrete event simulation model.

    Science.gov (United States)

    Pradelli, Lorenzo; Eandi, Mario; Povero, Massimiliano; Mayer, Konstantin; Muscaritoli, Maurizio; Heller, Axel R; Fries-Schaffner, Eva

    2014-10-01

    A recent meta-analysis showed that supplementation of omega-3 fatty acids in parenteral nutrition (PN) regimens is associated with a statistically and clinically significant reduction in infection rate and length of hospital stay (LOS) in medical and surgical patients admitted to the ICU, and in surgical patients not admitted to the ICU. The objective of the present study was to evaluate the cost-effectiveness of the addition of omega-3 fatty acids to standard PN regimens in four European countries (Italy, France, Germany and the UK) from the healthcare provider perspective. Using a discrete event simulation scheme, a patient-level simulation model was developed, based on outcomes from the Italian ICU patient population and published literature. Comparative efficacy data for PN regimens containing omega-3 fatty acids versus standard PN regimens were taken from the meta-analysis of published randomised clinical trials (n = 23 studies with a total of 1502 patients), and the hospital LOS reduction was further processed in order to separate the reduction in ICU stay from that in ward stays for patients admitted to the ICU. Country-specific cost data were obtained for the Italian, French, German and UK healthcare systems. Clinical outcomes included in the model were death rates, nosocomial infection rates, and ICU/hospital LOS. Probabilistic and deterministic sensitivity analyses were undertaken to test the reliability of results. PN regimens containing omega-3 fatty acids were more effective on average than standard PN both in ICU and in non-ICU patients in the four countries considered, reducing infection rates and overall LOS, and resulting in a lower total cost per patient. Overall costs for patients receiving PN regimens containing omega-3 fatty acids were between €14 144 and €19 825 per ICU patient and €5484 to €14 232 per non-ICU patient, translating into savings of between €3972 and €4897 per ICU patient and savings of between €561 and €1762 per non

  20. Clinical effectiveness and cost-effectiveness of second- and third-generation left ventricular assist devices as either bridge to transplant or alternative to transplant for adults eligible for heart transplantation: systematic review and cost-effectiveness model.

    Science.gov (United States)

    Sutcliffe, P; Connock, M; Pulikottil-Jacob, R; Kandala, N-B; Suri, G; Gurung, T; Grove, A; Shyangdan, D; Briscoe, S; Maheswaran, H; Clarke, A

    2013-11-01

    of second- and third-generation US Food and Drug Administration (FDA) and/or Conformité Européenne (CE) approved VADs. Publications from the last 5 years with control groups, or case series with 50 or more patients, were included. Outcomes included survival, functional capacity (e.g. change in New York Heart Association functional classification), quality of life (QoL) and adverse events. Data from the BTDB were obtained. A discrete-time, semi-Markov, multistate model was built. Deterministic and probabilistic methods with multiple sensitivity analyses varying survival, utilities and cost inputs to the model were used. Model outputs were incremental cost-effectiveness ratios (ICERs), cost/quality-adjusted life-years (QALYs) gained and cost/life-year gained (LYG). The discount rate was 3.5% and the time horizon varied over 3 years, 10 years and lifetime. Forty publications reported the clinical effectiveness of VADs and one study reported cost-effectiveness. We found no high-quality comparative empirical studies of VADs as BTT compared with MM or as ATT compared with BTT. Approximately 15-25% of the patients receiving a device had died by 12 months. Studies reported the following wide ranges for adverse events: 4-27% bleeding requiring transfusion; 1.5-40% stroke; 3.3-48% infection; 1-14% device failure; 3-30% HF; 11-32% reoperation; and 3-53% renal failure. QoL and functional status were reported as improved in studies of two devices [HeartMate II (HMII; Thoratec Inc., Pleasanton, CA, USA) and HeartWare (HW; HeartWare Inc., Framingham, MA, USA)]. At 3 years, 10 years and lifetime, the ICERs for VADs as BTT compared with MM were £122,730, £68,088 and £55,173, respectively. These values were robust to changes in survival of the MM group. Both QoL and costs were reduced by VADs as ATT compared with VADs as BTT, giving ICERs in the south-west quadrant of the cost-effectiveness plane (cost saved per QALY sacrificed) of £353,467, £31,685 and £20,637 over the 3 years, 10 years

  1. A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.

    Science.gov (United States)

    Houseman, E Andres; Virji, M Abbas

    2017-08-01

    Direct-reading instruments are valuable tools for measuring exposure, as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications, in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to the limit of detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and to estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about the autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models, including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with the percentage of measurements below the LOD ranging from 0 to 50% showed the lowest root mean squared errors for task means and the least biased standard deviations for the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates
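
    The core device of integrating over the left tail for observations below the LOD can be shown with a simple censored normal likelihood; this is a frequentist MLE sketch on synthetic, non-autocorrelated data, not the paper's Bayesian spline model:

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(2)
        lod = 0.5
        latent = rng.normal(1.0, 0.8, 200)       # latent log-exposures (synthetic)
        censored = latent < lod                  # measurements reported only as "< LOD"
        observed = latent[~censored]

        def neg_loglik(theta):
            mu, log_sd = theta
            sd = np.exp(log_sd)
            ll = stats.norm.logpdf(observed, mu, sd).sum()
            # Each censored value contributes the probability mass of the left tail.
            ll += censored.sum() * stats.norm.logcdf(lod, mu, sd)
            return -ll

        res = optimize.minimize(neg_loglik, x0=(0.0, 0.0))
        print("mu, sd estimates:", res.x[0], np.exp(res.x[1]))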

  2. Bayesian estimation of regularization parameters for deformable surface models

    International Nuclear Information System (INIS)

    Cunningham, G.S.; Lehovich, A.; Hanson, K.M.

    1999-01-01

    In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strength of the priors, or hyperparameters, is determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. They demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.

  3. Public health impact and cost effectiveness of routine childhood vaccination for hepatitis a in Jordan: a dynamic model approach.

    Science.gov (United States)

    Hayajneh, Wail A; Daniels, Vincent J; James, Cerise K; Kanıbir, Muhammet Nabi; Pilsbury, Matthew; Marks, Morgan; Goveia, Michelle G; Elbasha, Elamin H; Dasbach, Erik; Acosta, Camilo J

    2018-03-07

    As the socioeconomic conditions in Jordan have improved over recent decades, the disease and economic burden of hepatitis A have increased. The purpose of this study is to assess the potential health and economic impact of a two-dose hepatitis A vaccine program covering one-year-old children in Jordan. We adapted an age-structured population model of hepatitis A transmission dynamics to project the epidemiologic and economic impact of vaccinating one-year-old children for 50 years in Jordan. The epidemiologic model was calibrated using local data on hepatitis A in Jordan. These data included seroprevalence and incidence data from the Jordan Ministry of Health as well as hospitalization data from King Abdullah University Hospital in Irbid, Jordan. We assumed 90% of all children would be vaccinated with the two-dose regimen by two years of age. The economic evaluation adopted a societal perspective and measured benefits using the quality-adjusted life-year (QALY). The modeled vaccination program reduced the incidence of hepatitis A in Jordan by 99%, 50 years after its introduction. The model projected 4.26 million avoided hepatitis A infections, 1.42 million outpatient visits, 22,475 hospitalizations, 508 fulminant cases, 95 liver transplants, and 76 deaths over a 50 year time horizon. In addition, we found, over a 50 year time horizon, the vaccination program would gain 37,502 QALYs and save over $42.6 million in total costs. The vaccination program became cost-saving within 6 years of its introduction and was highly cost-effective during the first 5 years. A vaccination program covering one-year-old children is projected to be a cost-saving intervention that will significantly reduce the public health and economic burden of hepatitis A in Jordan.

  4. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming; Song, Qifan; Yu, Kai

    2013-01-01

    criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening

  5. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high-pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.

  6. BAYESIAN FORECASTS COMBINATION TO IMPROVE THE ROMANIAN INFLATION PREDICTIONS BASED ON ECONOMETRIC MODELS

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu

    2014-12-01

    Full Text Available There are many types of econometric models used in predicting the inflation rate, but in this study we used a Bayesian shrinkage combination approach. This methodology is used in order to improve prediction accuracy by including information that is not captured by the econometric models. Therefore, experts’ forecasts are utilized as prior information, these predictions being provided for Romania by the Institute for Economic Forecasting (Dobrescu macromodel), the National Commission for Prognosis and the European Commission. The empirical results for Romanian inflation show the superiority of a fixed effects model compared to other types of econometric models, such as VAR, Bayesian VAR, simultaneous equations, dynamic and log-linear models. The Bayesian combinations that used experts’ predictions as priors, when the shrinkage parameter tends to infinity, improved the accuracy of all forecasts based on individual models, also outperforming zero- and equal-weight predictions and naïve forecasts.
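
    The shrinkage combination can be read as a normal-normal posterior mean that pulls the econometric forecast towards the experts' prior, collapsing onto the prior as the shrinkage parameter grows without bound. A sketch with invented forecast values:

        def shrink_combine(model_fcst, expert_prior, lam):
            # Precision-weighted average: lam is the prior (expert) weight relative
            # to a unit-weight model forecast; lam -> infinity recovers the prior.
            return (lam * expert_prior + model_fcst) / (lam + 1.0)

        model_fcst, expert_prior = 4.8, 3.9     # hypothetical inflation forecasts (%)
        for lam in (0.0, 1.0, 10.0, 1e6):
            print(f"lambda = {lam:>9}: combined forecast = "
                  f"{shrink_combine(model_fcst, expert_prior, lam):.2f}%")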

  7. A Danish cost-effectiveness model of escitalopram in comparison with citalopram and venlafaxine as first-line treatments for major depressive disorder in primary care

    DEFF Research Database (Denmark)

    Sørensen, Jan; Stage, Kurt B; Damsbo, Niels

    2007-01-01

    The objective of this study was to model the cost-effectiveness of escitalopram in comparison with generic citalopram and venlafaxine in primary care treatment of major depressive disorder (baseline scores 22-40 on the Montgomery-Asberg Depression Rating Scale, MADRS) in Denmark. A three-path decision analytic model was used, with inputs drawn from clinical data, an ad-hoc survey and expert opinion. Main outcome measures were remission (defined by MADRS score) and costs. Analyses were conducted from healthcare system and societal perspectives. The human capital approach was used to estimate the societal cost of lost productivity. Costs were reported... Escitalopram was associated with greater clinical benefit and cost-savings than generic citalopram, and was similar in cost-effectiveness to venlafaxine.

  8. Cost-effectiveness of interventions for increasing the possession of functioning smoke alarms in households with pre-school children: a modelling study.

    Science.gov (United States)

    Saramago, Pedro; Cooper, Nicola J; Sutton, Alex J; Hayes, Mike; Dunn, Ken; Manca, Andrea; Kendrick, Denise

    2014-05-16

    The UK has one of the highest rates of death from fire and flames among children aged 0-14 years of any high-income country. Evidence shows that smoke alarms can reduce the risk of fire-related injury, but little evidence exists on their cost-effectiveness. We aimed to compare the cost-effectiveness of different interventions for the uptake of 'functioning' smoke alarms and, consequently, for the prevention of fire-related injuries in children in the UK. We carried out a decision model-based probabilistic cost-effectiveness analysis. We used a hypothetical population of newborns and evaluated the impact of living in a household with or without a functioning smoke alarm during the first 5 years of their life on overall lifetime costs and quality of life from a public health perspective. We compared seven interventions, ranging from usual care to more complex interventions comprising education, free/low-cost equipment giveaway, equipment fitting and/or home safety inspection. Education and free/low-cost equipment was the most cost-effective intervention, with an estimated incremental cost-effectiveness ratio of £34,200 per QALY gained compared to usual care. This was reduced to approximately £4,500 per QALY gained when 1.8 children under the age of 5 were assumed per household. Assessing cost-effectiveness, as well as effectiveness, is important in a public sector system operating under a fixed budget constraint. As highlighted in this study, the more effective interventions (in this case the more complex interventions) may not necessarily be the ones considered the most cost-effective.

  9. Bayesian Modeling of ChIP-chip Data Through a High-Order Ising Model

    KAUST Repository

    Mo, Qianxing

    2010-01-29

    ChIP-chip experiments are procedures that combine chromatin immunoprecipitation (ChIP) and DNA microarray (chip) technology to study a variety of biological problems, including protein-DNA interaction, histone modification, and DNA methylation. The most important feature of ChIP-chip data is that the intensity measurements of probes are spatially correlated because the DNA fragments are hybridized to neighboring probes in the experiments. We propose a simple, but powerful Bayesian hierarchical approach to ChIP-chip data through an Ising model with high-order interactions. The proposed method naturally takes into account the intrinsic spatial structure of the data and can be used to analyze data from multiple platforms with different genomic resolutions. The model parameters are estimated using the Gibbs sampler. The proposed method is illustrated using two publicly available data sets from Affymetrix and Agilent platforms, and compared with three alternative Bayesian methods, namely, Bayesian hierarchical model, hierarchical gamma mixture model, and Tilemap hidden Markov model. The numerical results indicate that the proposed method performs as well as the other three methods for the data from Affymetrix tiling arrays, but significantly outperforms the other three methods for the data from Agilent promoter arrays. In addition, we find that the proposed method has better operating characteristics in terms of sensitivities and false discovery rates under various scenarios. © 2010, The International Biometric Society.

  10. Bayesian spatial modeling of HIV mortality via zero-inflated Poisson models.

    Science.gov (United States)

    Musal, Muzaffer; Aktekin, Tevfik

    2013-01-30

    In this paper, we investigate the effects of poverty and inequality on the number of HIV-related deaths in 62 New York counties via Bayesian zero-inflated Poisson models that exhibit spatial dependence. We quantify inequality via the Theil index and poverty via the ratios of two Census 2000 variables, the number of people under the poverty line and the number of people for whom poverty status is determined, in each Zip Code Tabulation Area. The purpose of this study was to investigate the effects of inequality and poverty in addition to spatial dependence between neighboring regions on HIV mortality rate, which can lead to improved health resource allocation decisions. In modeling county-specific HIV counts, we propose Bayesian zero-inflated Poisson models whose rates are functions of both covariate and spatial/random effects. To show how the proposed models work, we used three different publicly available data sets: TIGER Shapefiles, Census 2000, and mortality index files. In addition, we introduce parameter estimation issues of Bayesian zero-inflated Poisson models and discuss MCMC method implications. Copyright © 2012 John Wiley & Sons, Ltd.
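
    The zero-inflated Poisson likelihood at the heart of these models (here without the covariate and spatial random effects) mixes structural zeros with ordinary Poisson zeros; a sketch on synthetic counts:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        pi, lam = 0.3, 4.0                  # structural-zero probability, Poisson rate
        n = 1000
        structural = rng.random(n) < pi
        y = np.where(structural, 0, rng.poisson(lam, n))   # synthetic ZIP counts

        def zip_loglik(y, pi, lam):
            # P(Y = 0) mixes structural zeros with chance Poisson zeros.
            p0 = pi + (1 - pi) * np.exp(-lam)
            return np.where(y == 0, np.log(p0),
                            np.log(1 - pi) + stats.poisson.logpmf(y, lam)).sum()

        print("log-likelihood at the true parameters:", zip_loglik(y, pi, lam))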

  11. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: it works by sampling from a sequence of approximate distributions whose average converges to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or is very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  12. Cost-Effectiveness Model for Chemoimmunotherapy Options in Patients with Previously Untreated Chronic Lymphocytic Leukemia Unsuitable for Full-Dose Fludarabine-Based Therapy.

    Science.gov (United States)

    Becker, Ursula; Briggs, Andrew H; Moreno, Santiago G; Ray, Joshua A; Ngo, Phuong; Samanta, Kunal

    2016-06-01

    To evaluate the cost-effectiveness of treatment with the anti-CD20 monoclonal antibody obinutuzumab plus chlorambucil (GClb) in untreated patients with chronic lymphocytic leukemia unsuitable for full-dose fludarabine-based therapy. A Markov model was used to assess the cost-effectiveness of GClb versus other chemoimmunotherapy options. The model comprised three mutually exclusive health states: "progression-free survival (with/without therapy)", "progression (refractory/relapsed lines)", and "death". Each state was assigned a health utility value representing patients' quality of life and a specific cost value. Comparisons between GClb and rituximab plus chlorambucil or chlorambucil alone were performed using patient-level clinical trial data; other comparisons were performed via a network meta-analysis using information gathered in a systematic literature review. To support the model, a utility elicitation study was conducted from the perspective of the UK National Health Service. There was good agreement between the model-predicted progression-free and overall survival and those from the CLL11 trial. On incorporating data from the indirect treatment comparisons, it was found that GClb was cost-effective, with a range of incremental cost-effectiveness ratios below a threshold of £30,000 per quality-adjusted life-year gained, and remained so during deterministic and probabilistic sensitivity analyses under various scenarios. GClb was estimated to increase both quality-adjusted life expectancy and treatment costs compared with several commonly used therapies, with incremental cost-effectiveness ratios below commonly referenced UK thresholds. This article offers a real example of how to combine direct and indirect evidence in a cost-effectiveness analysis of oncology drugs. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  13. Improving the cost-effectiveness of a healthcare system for depressive disorders by implementing telemedicine: a health economic modeling study.

    Science.gov (United States)

    Lokkerbol, Joran; Adema, Dirk; Cuijpers, Pim; Reynolds, Charles F; Schulz, Richard; Weehuizen, Rifka; Smit, Filip

    2014-03-01

    Depressive disorders are significant causes of disease burden and are associated with substantial economic costs. It is therefore important to design a healthcare system that can effectively manage depression at sustainable costs. This article computes the benefit-to-cost ratio of the current Dutch healthcare system for depression, and investigates whether offering more online preventive interventions improves the cost-effectiveness overall. A health economic (Markov) model was used to synthesize clinical and economic evidence and to compute population-level costs and effects of interventions. The model compared a base case scenario without preventive telemedicine and alternative scenarios with preventive telemedicine. The central outcome was the benefit-to-cost ratio, also known as return-on-investment (ROI). In terms of ROI, a healthcare system with preventive telemedicine for depressive disorders offers better value for money than a healthcare system without Internet-based prevention. Overall, the ROI increases from €1.45 ($1.72) in the base case scenario to €1.76 ($2.09) in the alternative scenario in which preventive telemedicine is offered. In a scenario in which the costs of offering preventive telemedicine are balanced by reducing the expenditures for curative interventions, ROI increases to €1.77 ($2.10), while keeping the healthcare budget constant. For a healthcare system for depressive disorders to remain economically sustainable, its cost-benefit ratio needs to be improved. Offering preventive telemedicine at a large scale is likely to introduce such an improvement. Copyright © 2014 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  14. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introductory…

  15. A comparison of the cost-effectiveness of in vitro fertilization strategies and stimulated intrauterine insemination in a Canadian health economic model.

    Science.gov (United States)

    Bhatt, Taimur; Baibergenova, Akerke

    2008-05-01

In vitro fertilization (IVF) with single embryo transfer (SET) has been proposed as a means of reducing multiple pregnancies associated with infertility treatment. All existing cost-effectiveness studies of IVF-SET have compared it with IVF with multiple embryo transfer but not with intrauterine insemination with gonadotropin stimulation (sIUI). We conducted a systematic review of studies of cost-effectiveness of IVF-SET versus IVF with double embryo transfer (DET). Further, we developed a health economic model that compared three strategies: (1) IVF-SET, (2) IVF-DET, and (3) sIUI. The decision analysis considered three cycles for each treatment option. IVF treatment was assumed to be a combination of cycles with transfer of fresh and frozen-thawed embryos. Probabilities used to populate the model were taken from published randomized clinical trials and observational studies. Cost estimates were based on average costs of associated procedures in Canada. The results of published studies on the cost-effectiveness of IVF-SET versus IVF-DET were not consistent. In our analysis, IVF-DET proved to be the most cost-effective strategy at $35,144/live birth, followed by sIUI at $66,960/live birth, and IVF-SET at $109,358/live birth. The results were sensitive both to the cost of IVF cycles and to the probability of live birth. This economic analysis showed that IVF-DET was the most cost-effective strategy of the options, and IVF-SET was the least cost-effective. The results in this model were insensitive to various probability inputs and to the costs associated with sIUI and IVF procedures.
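
    A minimal sketch of the cost-per-live-birth calculation implied by the three-cycle decision structure, assuming for simplicity a constant per-cycle success probability and cost; both inputs are invented placeholders, not the Canadian estimates used in the study (which also distinguished fresh and frozen-thawed cycles).

        # Expected cost per live birth over up to max_cycles attempts,
        # where a couple proceeds to the next cycle only after a failure.
        def cost_per_live_birth(p_success, cost_per_cycle, max_cycles=3):
            expected_cost = expected_births = 0.0
            p_reach = 1.0                        # probability of starting this cycle
            for _ in range(max_cycles):
                expected_cost += p_reach * cost_per_cycle
                expected_births += p_reach * p_success
                p_reach *= (1.0 - p_success)     # continue only after failure
            return expected_cost / expected_births

        print(cost_per_live_birth(p_success=0.30, cost_per_cycle=9000))  # IVF-like placeholder
        print(cost_per_live_birth(p_success=0.12, cost_per_cycle=3000))  # sIUI-like placeholder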

  16. Parameterizing Bayesian network Representations of Social-Behavioral Models by Expert Elicitation

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Stephen J.; Dalton, Angela C.; Whitney, Paul D.; White, Amanda M.

    2010-05-23

Bayesian networks provide a general framework with which to model many natural phenomena. The mathematical nature of Bayesian networks enables a plethora of model validation and calibration techniques: e.g., parameter estimation, goodness-of-fit tests, and diagnostic checking of the model assumptions. However, they are not free of shortcomings. Parameter estimation from relevant extant data is a common approach to calibrating the model parameters. In practice it is not uncommon to find oneself lacking adequate data to reliably estimate all model parameters. In this paper we present the early development of a novel application of conjoint analysis as a method for eliciting and modeling expert opinions and using the results in a methodology for calibrating the parameters of a Bayesian network.

  17. Fast and accurate Bayesian model criticism and conflict diagnostics using R-INLA

    KAUST Repository

    Ferkingstad, Egil

    2017-10-16

    Bayesian hierarchical models are increasingly popular for realistic modelling and analysis of complex data. This trend is accompanied by the need for flexible, general and computationally efficient methods for model criticism and conflict detection. Usually, a Bayesian hierarchical model incorporates a grouping of the individual data points, as, for example, with individuals in repeated measurement data. In such cases, the following question arises: Are any of the groups “outliers,” or in conflict with the remaining groups? Existing general approaches aiming to answer such questions tend to be extremely computationally demanding when model fitting is based on Markov chain Monte Carlo. We show how group-level model criticism and conflict detection can be carried out quickly and accurately through integrated nested Laplace approximations (INLA). The new method is implemented as a part of the open-source R-INLA package for Bayesian computing (http://r-inla.org).
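
    A toy version of the group-level conflict check, using a conjugate normal stand-in rather than INLA: each group's mean is compared with the predictive distribution implied by the remaining groups. This only conveys the cross-validatory idea; it is not the paper's actual method, and all data are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        groups = [rng.normal(0.0, 1.0, 20) for _ in range(9)]
        groups.append(rng.normal(3.0, 1.0, 20))      # one deliberately conflicting group

        means = np.array([g.mean() for g in groups])
        for j in range(len(groups)):
            rest = np.delete(means, j)
            centre = rest.mean()                     # leave-group-out predictive centre
            spread = np.sqrt(rest.var(ddof=1) + groups[j].var(ddof=1) / len(groups[j]))
            z = (means[j] - centre) / spread         # large |z| flags a conflicting group
            print(f"group {j}: z = {z:+.2f}")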

  18. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling.

    Science.gov (United States)

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
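
    For concreteness, the (inverted) S-shaped probability weighting function mentioned above can be written in its common one-parameter Tversky-Kahneman form, where gamma = 1 recovers undistorted probabilities; this is a standard textbook form, not code from the study.

        import numpy as np

        # Tversky-Kahneman probability weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g).
        def weight(p, gamma):
            return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

        p = np.array([0.1, 0.5, 0.9])
        for gamma in (0.5, 1.0, 1.5):                # distorted, linear, and reversed
            print(gamma, np.round(weight(p, gamma), 3))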

  19. A Tutorial Introduction to Bayesian Models of Cognitive Development

    Science.gov (United States)

    2011-01-01

…typewriter with an infinite amount of paper. There is a space of documents that it is capable of producing, which includes things like The Tempest and does not include, say, a Vermeer painting or a poem written in Russian. This typewriter represents a means of generating the hypothesis space for a Bayesian learner: each possible document that can be typed on it is a hypothesis; the infinite set of documents producible by the typewriter is the latent…

  20. Integrated HIV testing, malaria, and diarrhea prevention campaign in Kenya: modeled health impact and cost-effectiveness.

    Science.gov (United States)

    Kahn, James G; Muraguri, Nicholas; Harris, Brian; Lugada, Eric; Clasen, Thomas; Grabowsky, Mark; Mermin, Jonathan; Shariff, Shahnaaz

    2012-01-01

    Efficiently delivered interventions to reduce HIV, malaria, and diarrhea are essential to accelerating global health efforts. A 2008 community integrated prevention campaign in Western Province, Kenya, reached 47,000 individuals over 7 days, providing HIV testing and counseling, water filters, insecticide-treated bed nets, condoms, and for HIV-infected individuals cotrimoxazole prophylaxis and referral for ongoing care. We modeled the potential cost-effectiveness of a scaled-up integrated prevention campaign. We estimated averted deaths and disability-adjusted life years (DALYs) based on published data on baseline mortality and morbidity and on the protective effect of interventions, including antiretroviral therapy. We incorporate a previously estimated scaled-up campaign cost. We used published costs of medical care to estimate savings from averted illness (for all three diseases) and the added costs of initiating treatment earlier in the course of HIV disease. Per 1000 participants, projected reductions in cases of diarrhea, malaria, and HIV infection avert an estimated 16.3 deaths, 359 DALYs and $85,113 in medical care costs. Earlier care for HIV-infected persons adds an estimated 82 DALYs averted (to a total of 442), at a cost of $37,097 (reducing total averted costs to $48,015). Accounting for the estimated campaign cost of $32,000, the campaign saves an estimated $16,015 per 1000 participants. In multivariate sensitivity analyses, 83% of simulations result in net savings, and 93% in a cost per DALY averted of less than $20. A mass, rapidly implemented campaign for HIV testing, safe water, and malaria control appears economically attractive.
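
    The per-1000-participant bookkeeping in the abstract can be reproduced directly; the small discrepancies against the published totals reflect rounding in the source figures.

        # Figures per 1000 participants, taken from the abstract (USD).
        averted_medical_costs = 85_113   # savings from averted illness
        added_hiv_care_costs = 37_097    # cost of earlier HIV care
        campaign_cost = 32_000           # estimated scaled-up campaign cost

        net_averted = averted_medical_costs - added_hiv_care_costs  # 48,016 (reported 48,015)
        net_savings = net_averted - campaign_cost                   # ~16,015 reported
        dalys_averted = 359 + 82                                    # 441 (reported total 442)
        print(net_averted, net_savings, dalys_averted)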

  2. The impact and cost-effectiveness of nonavalent HPV vaccination in the United States: Estimates from a simplified transmission model.

    Science.gov (United States)

    Chesson, Harrell W; Markowitz, Lauri E; Hariri, Susan; Ekwueme, Donatus U; Saraiya, Mona

    2016-06-02

The objective of this study was to assess the incremental costs and benefits of the 9-valent HPV vaccine (9vHPV) compared with the quadrivalent HPV vaccine (4vHPV). Like 4vHPV, 9vHPV protects against HPV types 6, 11, 16, and 18. 9vHPV also protects against five additional HPV types: 31, 33, 45, 52, and 58. We adapted a previously published model of the impact and cost-effectiveness of 4vHPV to include the five additional HPV types in 9vHPV. The vaccine strategies we examined were (1) 4vHPV for males and females; (2) 9vHPV for females and 4vHPV for males; and (3) 9vHPV for males and females. In the base case, 9vHPV cost $13 more per dose than 4vHPV, based on available vaccine price information. Providing 9vHPV to females compared with 4vHPV for females (assuming 4vHPV for males in both scenarios) was cost-saving regardless of whether or not cross-protection for 4vHPV was assumed. The cost per quality-adjusted life year (QALY) gained by 9vHPV for both sexes (compared with 4vHPV for both sexes) was < $0 (cost-saving) when assuming no cross-protection for 4vHPV and $8,600 when assuming cross-protection for 4vHPV. Compared with a vaccination program of 4vHPV for both sexes, a vaccination program of 9vHPV for both sexes can improve health outcomes and can be cost-saving.
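
    The reporting convention used above, where a strategy that gains QALYs at lower cost is labelled cost-saving rather than given a negative ratio, can be sketched as follows. The incremental cost and QALY inputs are contrived placeholders, chosen only so the second case lands near the reported $8,600 per QALY.

        # Report an incremental result the way cost-effectiveness analyses do:
        # dominant (cost-saving) strategies are named, not given negative ICERs.
        def report_icer(d_cost, d_qaly):
            if d_qaly <= 0:
                return "dominated or no health gain"
            if d_cost <= 0:
                return "cost-saving (dominant)"
            return f"{d_cost / d_qaly:,.0f} USD per QALY gained"

        print(report_icer(-5.0e6, 1200))   # cost-saving case (no cross-protection)
        print(report_icer(1.0e7, 1163))    # ~8,600 USD/QALY (cross-protection case)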

  3. Evaluation of a Stratified National Breast Screening Program in the United Kingdom : An Early Model-Based Cost-Effectiveness Analysis

    NARCIS (Netherlands)

    Gray, Ewan; Donten, Anna; Karssemeijer, Nico; van Gils, Carla; Evans, D. Gareth R.; Astley, Sue; Payne, Katherine

Objectives: To identify the incremental costs and consequences of stratified national breast screening programs (stratified NBSPs) and drivers of relative cost-effectiveness. Methods: A decision-analytic model (discrete event simulation) was conceptualized to represent four stratified NBSPs (risk 1,…

  5. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    Science.gov (United States)

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
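
    A toy illustration of the DIC partition idea for a simple normal model, showing pointwise deviance contributions and the pointwise effective-parameter term. This is a generic sketch with synthetic data, not the paper's spatial partitioning or its Rwanda analysis.

        import numpy as np

        rng = np.random.default_rng(0)
        y = rng.normal(1.0, 1.0, 50)                  # data from a unit-variance normal
        # Approximate posterior draws for the mean (conjugate, flat prior):
        mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 4000)

        def dev_i(mu):                                # pointwise deviance: -2 log N(y | mu, 1)
            return (y - mu) ** 2 + np.log(2 * np.pi)

        D_bar_i = np.mean([dev_i(m) for m in mu_draws], axis=0)  # posterior mean deviance, per point
        pD_i = D_bar_i - dev_i(mu_draws.mean())                  # pointwise effective parameters
        local_dic = D_bar_i + pD_i                               # local DIC contributions
        print("DIC =", round(local_dic.sum(), 1), " pD =", round(pD_i.sum(), 2))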

  6. Predictive models for pressure ulcers from intensive care unit electronic health records using Bayesian networks.

    Science.gov (United States)

    Kaewprag, Pacharmon; Newton, Cheryl; Vermillion, Brenda; Hyun, Sookyung; Huang, Kun; Machiraju, Raghu

    2017-07-05

We develop predictive models enabling clinicians to better understand and explore patient clinical data along with risk factors for pressure ulcers in intensive care unit patients from electronic health record data. Identifying accurate risk factors of pressure ulcers is essential to determining appropriate prevention strategies; in this work we examine medication, diagnosis, and traditional Braden pressure ulcer assessment scale measurements as patient features. In order to predict pressure ulcer incidence and better understand the structure of related risk factors, we construct Bayesian networks from patient features. Bayesian network nodes (features) and edges (conditional dependencies) are simplified with statistical network techniques. Upon reviewing a network visualization of our model, our clinician collaborators were able to identify strong relationships between risk factors widely recognized as associated with pressure ulcers. We present a three-stage framework for predictive analysis of patient clinical data: (1) developing electronic health record feature extraction functions with the assistance of clinicians, (2) simplifying features, and (3) building Bayesian network predictive models. We evaluate all combinations of Bayesian network models from different search algorithms, scoring functions, prior structure initializations, and sets of features. From the electronic health records of 7,717 intensive care unit patients, we construct Bayesian network predictive models from 86 medication, diagnosis, and Braden scale features. Our model not only identifies known and suspected high-risk factors for pressure ulcers, but also substantially increases the sensitivity of the prediction, nearly three times that of logistic regression models, without sacrificing overall accuracy. We visualize a representative model with which our clinician collaborators identify strong relationships between risk factors widely recognized as associated with pressure ulcers. Given the strong adverse effect of pressure ulcers…
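
    To make the Bayesian network mechanics concrete, here is a minimal hand-rolled network with two hypothetical risk-factor nodes feeding a pressure-ulcer node, with a conditional query answered by brute-force enumeration. The structure and all probabilities are invented for illustration, not taken from the paper's 86-feature model.

        # Nodes: D (diabetes), B (low Braden score), PU (pressure ulcer).
        # All probabilities are placeholders.
        p_diabetes = 0.20                       # P(D=1)
        p_low_braden = 0.30                     # P(B=1)
        p_pu = {                                # P(PU=1 | D, B)
            (0, 0): 0.02, (0, 1): 0.10,
            (1, 0): 0.06, (1, 1): 0.25,
        }

        def joint(d, b, u):
            pd = p_diabetes if d else 1 - p_diabetes
            pb = p_low_braden if b else 1 - p_low_braden
            pu = p_pu[(d, b)] if u else 1 - p_pu[(d, b)]
            return pd * pb * pu

        # Query P(PU=1 | B=1) by enumerating over the unobserved node D:
        num = sum(joint(d, 1, 1) for d in (0, 1))
        den = sum(joint(d, 1, u) for d in (0, 1) for u in (0, 1))
        print(f"P(PU | low Braden) = {num / den:.3f}")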

  7. Cost-Effectiveness Model for Youth EFNEP Programs: What Do We Measure and How Do We Do It?

    Science.gov (United States)

    Serrano, Elena; McFerren, Mary; Lambur, Michael; Ellerbock, Michael; Hosig, Kathy; Franz, Nancy; Townsend, Marilyn; Baker, Susan; Muennig, Peter; Davis, George

    2011-01-01

    The Youth Expanded Food and Nutrition Education Program (EFNEP) is one of the United States Department of Agriculture's hallmark nutrition education programs for limited-resource youth. The objective of this study was to gather opinions from experts in EFNEP and related content areas to identify costs, effects (impacts), and related instruments to…

  8. Cost-effectiveness of counseling and pedometer use to increase physical activity in the Netherlands: a modeling study

    NARCIS (Netherlands)

E.A.B. Over (Eelco); G.C.W. Wendel-Vos (Wanda); M. van den Berg (Matthijs); H.H.H. Reenen (Heleen); L. Tariq (Luqman); R.T. Hoogenveen (Rudolf); P.H.M. Van Baal (Pieter)

    2012-01-01

Background: Counseling in combination with pedometer use has proven to be effective in increasing physical activity and improving health outcomes. We investigated the cost-effectiveness of this intervention targeted at one million insufficiently active adults who visit their general practitioner…

  9. Cost-effectiveness of interventions to reduce tobacco smoking in the Netherlands. An application of the RIVM Chronic Disease Model

    NARCIS (Netherlands)

    Feenstra TL; Baal PHM van; Hoogenveen RT; Vijgen SMC; Stolk E; Bemelmans WJE; PZO

    2006-01-01

Introduction: Smoking is the most important single risk factor for mortality in the Netherlands and has been related to 12% of the burden of disease in Western Europe. Hence the Dutch Ministry of Health asked for an assessment of the cost-effectiveness of interventions to enhance smoking cessation in the Netherlands…

  10. Testing students' e-learning via Facebook through Bayesian structural equation modeling.

    Science.gov (United States)

    Salarzadeh Jenatabadi, Hashem; Moghavvemi, Sedigheh; Wan Mohamed Radzi, Che Wan Jasimah Bt; Babashamsi, Parastoo; Arashi, Mohammad

    2017-01-01

Learning is an intentional activity, with several factors affecting students' intention to use new learning technology. Researchers have investigated technology acceptance in different contexts by developing various theories/models and testing them by a number of means. Although most theories/models developed have been examined through regression or structural equation modeling, Bayesian analysis offers more accurate data analysis results. To address this gap, the unified theory of acceptance and use of technology is re-examined in this study in the context of e-learning via Facebook, using Bayesian analysis. The data (S1 Data) were collected from 170 students enrolled in a business statistics course at University of Malaya, Malaysia, and tested with the maximum likelihood and Bayesian approaches. The difference between the two methods' results indicates that performance expectancy and hedonic motivation are the strongest factors influencing the intention to use e-learning via Facebook. The Bayesian estimation model exhibited better data fit than the maximum likelihood estimator model. The results of the Bayesian and maximum likelihood estimator approaches are compared and the reasons for the result discrepancy are deliberated.

  13. Public health impact and cost effectiveness of mass vaccination with live attenuated human rotavirus vaccine (RIX4414) in India: model based analysis.

    Science.gov (United States)

    Rose, Johnie; Hawthorn, Rachael L; Watts, Brook; Singer, Mendel E

    2009-09-25

To examine the public health impact of mass vaccination with live attenuated human rotavirus vaccine (RIX4414) in a birth cohort in India, and to estimate the cost effectiveness and affordability of such a programme. A decision analytical Markov model encompassing all direct medical costs was used. Infection risk and severity depended on age, number of previous infections, and vaccination history; probabilities of use of inpatient and outpatient health services depended on symptom severity. Parameters were drawn from published clinical, epidemiological, and economic data; when possible, parameter estimates were based on data specific for India. The model followed a simulated Indian birth cohort for five years. Outcomes were the decrease in rotavirus gastroenteritis episodes (non-severe and severe), deaths, outpatient visits, and admissions to hospital, and the incremental cost effectiveness ratio of vaccination expressed as net cost in 2007 rupees per life year saved. In the base case, vaccination prevented 28,943 (29.7%) symptomatic episodes, 6,981 (38.2%) severe episodes, 164 (41.0%) deaths, 7,178 (33.3%) outpatient visits, and 812 (34.3%) admissions to hospital per 100,000 children. Vaccination cost 8,023 rupees (about £100, €113, $165) per life year saved, less than India's per capita gross domestic product, a common criterion for cost effectiveness. The net programme cost would be equivalent to 11.6% of the 2006-7 budget of the Indian Department of Health and Family Welfare. Model results were most sensitive to variations in access to outpatient care for those with severe symptoms. If this parameter was increased to its upper limit, the incremental cost effectiveness ratio for vaccination still fell between one and three times the per capita gross domestic product, meeting the World Health Organization's criterion for "cost effective" interventions. Uncertainty analysis indicated a 94.7% probability that vaccination would be cost effective according to a criterion of one times per capita gross domestic product per life year saved.
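
    The WHO-style threshold logic invoked above is simple to state in code. The incremental cost per life year saved is the abstract's figure; the GDP-per-capita value is an illustrative placeholder, not an official statistic.

        # WHO-style criterion: "cost effective" if the incremental cost per
        # life year saved falls below 1-3 times per capita GDP.
        icer_rupees = 8023            # net cost per life year saved (2007 rupees)
        gdp_per_capita = 29786        # placeholder value, for illustration only

        if icer_rupees < gdp_per_capita:
            print("highly cost effective (< 1x GDP per capita)")
        elif icer_rupees < 3 * gdp_per_capita:
            print("cost effective (1-3x GDP per capita)")
        else:
            print("not cost effective by this criterion")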

  15. Modelling the Impact and Cost-Effectiveness of Biomarker Tests as Compared with Pathogen-Specific Diagnostics in the Management of Undifferentiated Fever in Remote Tropical Settings.

    Science.gov (United States)

    Lubell, Yoel; Althaus, Thomas; Blacksell, Stuart D; Paris, Daniel H; Mayxay, Mayfong; Pan-Ngum, Wirichada; White, Lisa J; Day, Nicholas P J; Newton, Paul N

    2016-01-01

Malaria accounts for a small fraction of febrile cases in increasingly large areas of the malaria endemic world. Point-of-care tests to improve the management of non-malarial fevers appropriate for primary care are few, consisting of either diagnostic tests for specific pathogens or testing for biomarkers of host response that indicate whether antibiotics might be required. The impact and cost-effectiveness of these approaches are relatively unexplored and methods to do so are not well-developed. We model the ability of dengue and scrub typhus rapid tests to inform antibiotic treatment, as compared with testing for elevated C-Reactive Protein (CRP), a biomarker of host-inflammation. Using data on causes of fever in rural Laos, we estimate the proportion of outpatients that would be correctly classified as requiring an antibiotic and the likely cost-effectiveness of the approaches. Use of either pathogen-specific test slightly increased the proportion of patients correctly classified as requiring antibiotics. CRP testing was consistently superior to the pathogen-specific tests, despite heterogeneity in causes of fever. All testing strategies are likely to result in higher average costs, but only the scrub typhus and CRP tests are likely to be cost-effective when considering direct health benefits, with median cost per disability adjusted life year averted of approximately $48 USD and $94 USD, respectively. Testing for viral infections is unlikely to be cost-effective when considering only direct health benefits to patients. Testing for prevalent bacterial pathogens can be cost-effective, having the benefit of informing not only whether treatment is required, but also as to the most appropriate antibiotic; this advantage, however, varies widely in response to heterogeneity in causes of fever. Testing for biomarkers of host inflammation is likely to be consistently cost-effective despite high heterogeneity, and can also offer substantial reductions in over-use of antimicrobials.
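
    The classification outcome underlying the analysis can be sketched as follows: given the fraction of fevers that truly require an antibiotic and a test's sensitivity and specificity for that judgment, the proportion correctly classified follows directly. All inputs below are illustrative placeholders, not the Laos estimates.

        # Proportion of patients correctly classified as requiring (or not
        # requiring) an antibiotic, given test operating characteristics.
        def correctly_classified(p_bacterial, sensitivity, specificity):
            return p_bacterial * sensitivity + (1 - p_bacterial) * specificity

        print(correctly_classified(0.35, 0.86, 0.67))  # CRP-like placeholder
        print(correctly_classified(0.35, 0.50, 0.95))  # pathogen-specific placeholder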

  16. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies were able to be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
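
    A toy version of the age-model reweighting idea: an ensemble of candidate dating offsets starts with equal prior probability, and each offset's probability is updated by how well it aligns a noisy proxy with a well-dated reference signal. The signals and the Gaussian likelihood are synthetic stand-ins for the paper's hierarchical machinery.

        import numpy as np

        rng = np.random.default_rng(2)
        reference = np.sin(np.linspace(0, 6 * np.pi, 500))   # well-dated reference signal
        true_shift = 12
        proxy = np.roll(reference, true_shift) + rng.normal(0, 0.3, 500)

        shifts = np.arange(-30, 31)                          # candidate age-model offsets

        def log_like(s):                                     # fit of proxy under offset s
            resid = proxy - np.roll(reference, s)
            return -0.5 * np.sum(resid ** 2) / 0.3 ** 2

        ll = np.array([log_like(s) for s in shifts])
        w = np.exp(ll - ll.max())
        w /= w.sum()                                         # updated age-model probabilities
        print("posterior-mode offset:", shifts[np.argmax(w)], "(true:", true_shift, ")")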

  17. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    Science.gov (United States)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

Oil pipeline networks are among the most important facilities for energy transportation, but accidents on them can result in serious disasters. Analysis models for such accidents have been established mainly with three methods: event trees, accident simulation, and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis, but existing models do not consider all the important influencing factors, and no deployment rule for the factors has been established. This paper proposes a probabilistic analysis model of oil pipeline networks based on a Bayesian network. Most of the important influencing factors, including key environmental conditions and emergency response, are considered in this model, and a deployment rule for these factors is introduced. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.

  18. Bayesian Comparison of Alternative Graded Response Models for Performance Assessment Applications

    Science.gov (United States)

    Zhu, Xiaowen; Stone, Clement A.

    2012-01-01

    This study examined the relative effectiveness of Bayesian model comparison methods in selecting an appropriate graded response (GR) model for performance assessment applications. Three popular methods were considered: deviance information criterion (DIC), conditional predictive ordinate (CPO), and posterior predictive model checking (PPMC). Using…

  19. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    Science.gov (United States)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to sensitivity analysis, insensitive parameters are screened out of Bayesian inversion of the MODFLOW model, further saving computing efforts. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
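
    The surrogate-based inversion idea can be sketched generically: fit a cheap approximation to an expensive simulator, then run Markov chain Monte Carlo against the surrogate. Below, a quadratic polynomial stands in for the BMARS surrogate and a one-parameter toy function stands in for the MODFLOW model; all data are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)

        def expensive_model(k):                  # stand-in for a costly simulator run
            return 2.0 * k + 0.5 * k**2

        # Fit a cheap surrogate on a small design of training runs.
        train_k = np.linspace(0, 4, 20)
        coef = np.polyfit(train_k, expensive_model(train_k), deg=2)
        surrogate = lambda k: np.polyval(coef, k)

        obs = expensive_model(1.8) + rng.normal(0, 0.1, 25)   # synthetic observations

        def log_post(k, sigma=0.1):
            if not 0.0 <= k <= 4.0:              # uniform prior on [0, 4]
                return -np.inf
            return -0.5 * np.sum((obs - surrogate(k)) ** 2) / sigma**2

        k, lp, samples = 1.0, None, []           # random-walk Metropolis on the surrogate
        lp = log_post(k)
        for _ in range(5000):
            k_new = k + rng.normal(0, 0.1)
            lp_new = log_post(k_new)
            if np.log(rng.uniform()) < lp_new - lp:
                k, lp = k_new, lp_new
            samples.append(k)
        print("posterior mean k:", np.round(np.mean(samples[1000:]), 3))

    Because every MCMC step evaluates only the fitted surrogate, the chain's cost is decoupled from the simulator's run time, which is the point of the approach described above.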

  20. Model-based cost-effectiveness analysis of B-type natriuretic peptide-guided care in patients with heart failure.

    Science.gov (United States)

    Mohiuddin, Syed; Reeves, Barnaby; Pufulete, Maria; Maishman, Rachel; Dayer, Mark; Macleod, John; McDonagh, Theresa; Purdy, Sarah; Rogers, Chris; Hollingworth, William

    2016-12-28

Monitoring B-type natriuretic peptide (BNP) to guide pharmacotherapy might improve survival in patients with heart failure with reduced ejection fraction (HFrEF) or preserved ejection fraction (HFpEF). However, the cost-effectiveness of BNP-guided care is uncertain and guidelines do not uniformly recommend it. We assessed the cost-effectiveness of BNP-guided care in patient subgroups defined by age and ejection fraction. We used a Markov model…