WorldWideScience

Sample records for hierarchical bayesian approaches

  1. A Bayesian hierarchical approach to comparative audit for carotid surgery.

    Science.gov (United States)

    Kuhan, G; Marshall, E C; Abidia, A F; Chetter, I C; McCollum, P T

    2002-12-01

    The aim of this study was to illustrate how a Bayesian hierarchical modelling approach can aid the reliable comparison of outcome rates between surgeons. The design was a retrospective analysis of prospective and retrospective data. Binary outcome data (death/stroke within 30 days), together with information on 15 possible risk factors specific for carotid endarterectomy (CEA), were available on 836 CEAs performed by four vascular surgeons from 1992-99. The median patient age was 68 (range 38-86) years and 60% were men. The model was developed using the WinBUGS software. After adjusting for patient-level risk factors, a cross-validatory approach was adopted to identify "divergent" performance. A ranking exercise was also carried out. The overall observed 30-day stroke/death rate was 3.9% (33/836). The model found diabetes, stroke and heart disease to be significant risk factors. There was no significant difference between the predicted and observed outcome rates for any surgeon (Bayesian p-value > 0.05). Each surgeon had a median rank of 3 with an associated 95% CI of 1.0-5.0, despite observed stroke/death rates ranging from 2.9% to 4.4%. After risk adjustment, there was very little residual between-surgeon variability in outcome rate. Bayesian hierarchical models can help to accurately quantify the uncertainty associated with surgeons' performance and rank.
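The partial-pooling idea behind such a comparative audit can be sketched with empirical-Bayes beta-binomial shrinkage of per-surgeon rates. Only the 836 total CEAs and 33 events come from the abstract; the per-surgeon splits and the prior strength `m` below are assumptions for illustration, and this is a crude stand-in for the paper's full hierarchical model fitted in WinBUGS.

```python
# Hypothetical per-surgeon splits of the 836 CEAs and 33 stroke/death events
# reported in the abstract (the splits themselves are assumed).
events = [6, 9, 8, 10]
cases = [190, 210, 215, 221]

# Empirical-Bayes beta-binomial partial pooling: shrink each raw rate towards
# the pooled rate, with prior strength m acting as m pseudo-cases.
pooled = sum(events) / sum(cases)     # 33/836, about 0.039
m = 100                               # prior strength (assumed)
shrunk = [(e + pooled * m) / (n + m) for e, n in zip(events, cases)]

for i, (e, n, s) in enumerate(zip(events, cases, shrunk), start=1):
    print(f"surgeon {i}: raw {e / n:.3f} -> shrunken {s:.3f}")
```

Each shrunken rate sits between that surgeon's raw rate and the pooled rate, which is why little between-surgeon variability remains after pooling.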

  2. Testing adaptive toolbox models: a Bayesian hierarchical approach.

    Science.gov (United States)

    Scheibehenne, Benjamin; Rieskamp, Jörg; Wagenmakers, Eric-Jan

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.

  3. Prediction of road accidents: A Bayesian hierarchical approach

    DEFF Research Database (Denmark)

    Deublein, Markus; Schubert, Matthias; Adey, Bryan T.

    2013-01-01

    In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data, e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks. The methodology is used to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link.

  4. Prediction of road accidents: A Bayesian hierarchical approach.

    Science.gov (United States)

    Deublein, Markus; Schubert, Matthias; Adey, Bryan T; Köhler, Jochen; Faber, Michael H

    2013-03-01

    In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users, (2) hierarchical multivariate Poisson-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions of the model response variables, conditional on the values of the risk indicating variables. The methodology is illustrated through a case study using data of the Austrian rural motorway network. In the case study, on randomly selected road segments the methodology is used to produce a model to predict the expected number of accidents in which an injury has occurred and the expected number of light, severe and fatally injured road users. Additionally, the methodology is used for geo-referenced identification of road sections with increased occurrence probabilities of injury accident events on a road link between two Austrian cities. 
It is shown that the proposed methodology can be used to develop models to estimate the occurrence of road accidents for any…
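Step (1) of the methodology, gamma-updating of occurrence rates, exploits gamma-Poisson conjugacy. A minimal sketch with made-up prior parameters and counts (none of these numbers come from the Austrian case study):

```python
# Conjugate gamma-updating of an injury-accident occurrence rate (step (1)).
# The prior Gamma(a0, b0) and the observed counts are illustrative assumptions.
a0, b0 = 2.0, 1.0          # prior mean a0 / b0 = 2 accidents per period
counts = [3, 1, 4, 2, 2]   # yearly injury-accident counts on one segment

# Poisson likelihood + gamma prior gives a gamma posterior:
#   a_n = a0 + sum(counts),  b_n = b0 + number of observation periods
a_n = a0 + sum(counts)
b_n = b0 + len(counts)
posterior_mean = a_n / b_n
print(f"posterior: Gamma({a_n:.0f}, {b_n:.0f}), mean rate {posterior_mean:.3f}")
```

The posterior mean is a precision-weighted blend of the prior mean and the observed average count, which is what makes the update robust on segments with few observations.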

  5. Bayesian nonparametric hierarchical modeling.

    Science.gov (United States)

    Dunson, David B

    2009-04-01

    In biomedical research, hierarchical models are very widely used to accommodate dependence in multivariate and longitudinal data and for borrowing of information across data from different sources. A primary concern in hierarchical modeling is sensitivity to parametric assumptions, such as linearity and normality of the random effects. Parametric assumptions on latent variable distributions can be challenging to check and are typically unwarranted, given available prior knowledge. This article reviews some recent developments in Bayesian nonparametric methods motivated by complex, multivariate and functional data collected in biomedical studies. The author provides a brief review of flexible parametric approaches relying on finite mixtures and latent class modeling. Dirichlet process mixture models are motivated by the need to generalize these approaches to avoid assuming a fixed finite number of classes. Focusing on an epidemiology application, the author illustrates the practical utility and potential of nonparametric Bayes methods.

  6. A hierarchical Bayesian approach to ecological count data: a flexible tool for ecologists.

    Directory of Open Access Journals (Sweden)

    James A Fordyce

    Full Text Available Many ecological studies use the analysis of count data to arrive at biologically meaningful inferences. Here, we introduce a hierarchical Bayesian approach to count data. This approach has the advantage over traditional approaches in that it directly estimates the parameters of interest at both the individual level and the population level, appropriately models uncertainty, and allows for comparisons among models, including those that exceed the complexity of many traditional approaches, such as ANOVA or non-parametric analogs. As an example, we apply this method to oviposition preference data for butterflies in the genus Lycaeides. Using this method, we estimate the parameters that describe preference for each population, compare the preference hierarchies among populations, and explore various models that group populations that share the same preference hierarchy.

  7. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    Science.gov (United States)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite, and sometimes due to, the richness of data, significant challenges arise in the interpretation, manifested as ambiguities and inconsistencies caused by various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real-time applications. In this work, we discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements, which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of the posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We illustrate the implementation of the HBGM approach with ultrasonic measurements used for cement evaluation of cased wells in the oil industry.
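The offline-trained classification step can be caricatured in one dimension: class-conditional measurement densities (single Gaussians here, standing in for the Gaussian mixtures in the abstract) are combined with priors to classify a new observation by Bayes' rule. All state names, features, and numbers below are assumptions for illustration.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a normal distribution N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two hypothetical states of nature (e.g. "good bond" vs "poor bond" in cement
# evaluation); the class-conditional densities stand in for distributions a
# forward model would generate offline. All numbers are illustrative.
states = {
    "good_bond": (0.2, 0.05),   # (mean, std) of a measurement feature (assumed)
    "poor_bond": (0.5, 0.10),
}
prior = {"good_bond": 0.5, "poor_bond": 0.5}

def classify(x):
    # Posterior proportional to prior * likelihood, normalized over states.
    unnorm = {s: prior[s] * gauss_pdf(x, mu, sd) for s, (mu, sd) in states.items()}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

post = classify(0.25)
print(post)
```

A measurement near 0.25 falls closer to the assumed "good bond" distribution, so that state dominates the posterior.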

  8. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  9. Multimethod, multistate Bayesian hierarchical modeling approach for use in regional monitoring of wolves.

    Science.gov (United States)

    Jiménez, José; García, Emilio J; Llaneza, Luis; Palacios, Vicente; González, Luis Mariano; García-Domínguez, Francisco; Múñoz-Igualada, Jaime; López-Bao, José Vicente

    2016-08-01

    In many cases, the first step in large-carnivore management is to obtain objective, reliable, and cost-effective estimates of population parameters through procedures that are reproducible over time. However, monitoring predators over large areas is difficult, and the data have a high level of uncertainty. We devised a practical multimethod and multistate modeling approach based on Bayesian hierarchical site-occupancy models that combined multiple survey methods to estimate different population states for use in monitoring large predators at a regional scale. We used wolves (Canis lupus) as our model species and generated reliable estimates of the number of sites with wolf reproduction (presence of pups). We used 2 wolf data sets from Spain (Western Galicia in 2013 and Asturias in 2004) to test the approach. Based on howling surveys, the naïve estimation (i.e., estimate based only on observations) of the number of sites with reproduction was 9 and 25 sites in Western Galicia and Asturias, respectively. Our model showed 33.4 (SD 9.6) and 34.4 (3.9) sites with wolf reproduction, respectively. The estimated probability of occupancy by wolves with reproduction was 0.67 (SD 0.19) and 0.76 (0.11), respectively. This approach can be used to design more cost-effective monitoring programs (i.e., to define the sampling effort needed per site). Our approach should inspire well-coordinated surveys across multiple administrative borders and populations and lead to improved decision making for management of large carnivores on a landscape level. The use of this Bayesian framework provides a simple way to visualize the degree of uncertainty around population-parameter estimates and thus provides managers and stakeholders an intuitive approach to interpreting monitoring results.
Our approach can be widely applied to large spatial scales in wildlife monitoring where detection probabilities differ between population states and where several methods are being used to estimate different population states.
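The gap between the naive count (9 sites) and the model estimate (~33 sites) is driven by imperfect detection. A crude, non-hierarchical sketch of the correction: with per-visit detection probability p and K visits, a reproducing site is detected at least once with probability 1 - (1 - p)^K. The values of p and K below are assumptions chosen for illustration, not estimates from the paper.

```python
# Imperfect detection: a site with wolf reproduction is detected on any one
# howling survey with probability p; with K visits the chance of at least one
# detection is 1 - (1 - p)**K. Values of p and K are assumed for illustration.
p, K = 0.10, 3
detect_at_least_once = 1 - (1 - p) ** K

naive_sites = 9   # naive count for Western Galicia (from the abstract)
# A crude moment-style correction (not the paper's hierarchical model):
corrected = naive_sites / detect_at_least_once
print(f"P(detected at least once) = {detect_at_least_once:.3f}; "
      f"corrected number of sites ~ {corrected:.1f}")
```

The hierarchical occupancy model does this correction jointly across methods and sites while propagating the uncertainty that this point correction hides.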

  10. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    Science.gov (United States)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. 
Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United…
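One of the parameter-likelihood measures named above, the Nash-Sutcliffe efficiency, is simple to compute; the flow series below are made-up numbers for illustration.

```python
# Nash-Sutcliffe efficiency:
#   NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
# A perfect simulation gives NSE = 1. The flow series are made-up numbers.
def nse(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

obs = [1.0, 2.5, 4.0, 3.0, 2.0]
sim = [1.2, 2.3, 3.8, 3.1, 2.2]
print(f"NSE = {nse(obs, sim):.3f}")
```

In the sampling strategy described above, such a fit measure would weight candidate parameter sets when drawing from the joint parameter posterior.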

  11. How does aging affect recognition-based inference? A hierarchical Bayesian modeling approach.

    Science.gov (United States)

    Horn, Sebastian S; Pachur, Thorsten; Mata, Rui

    2015-01-01

    The recognition heuristic (RH) is a simple strategy for probabilistic inference according to which recognized objects are judged to score higher on a criterion than unrecognized objects. In this article, a hierarchical Bayesian extension of the multinomial r-model is applied to measure use of the RH on the individual participant level and to re-evaluate differences between younger and older adults' strategy reliance across environments. Further, it is explored how individual r-model parameters relate to alternative measures of the use of recognition and other knowledge, such as adherence rates and indices from signal-detection theory (SDT). Both younger and older adults used the RH substantially more often in an environment with high than low recognition validity, reflecting adaptivity in strategy use across environments. In extension of previous analyses (based on adherence rates), hierarchical modeling revealed that in an environment with low recognition validity, (a) older adults had a stronger tendency than younger adults to rely on the RH and (b) variability in RH use between individuals was larger than in an environment with high recognition validity; variability did not differ between age groups. Further, the r-model parameters correlated moderately with an SDT measure expressing how well people can discriminate cases where the RH leads to a correct vs. incorrect inference; this suggests that the r-model and the SDT measures may offer complementary insights into the use of recognition in decision making. In conclusion, younger and older adults are largely adaptive in their application of the RH, but cognitive aging may be associated with an increased tendency to rely on this strategy. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical Bayesian approaches.

    Science.gov (United States)

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detected from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², regardless of the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  13. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical Bayesian approaches.

    Directory of Open Access Journals (Sweden)

    Rasheda Arman Chowdhury

    Full Text Available Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detected from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources, ranging from 3 cm² to 30 cm², regardless of the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  14. An analysis of the costs of treating schizophrenia in Spain: a hierarchical Bayesian approach.

    Science.gov (United States)

    Vázquez-Polo, Francisco-Jose; Negrín, Miguel; Cabasés, Juan M; Sánchez, Eduardo; Haro, Joseph M; Salvador-Carulla, Luis

    2005-09-01

    Health care decisions should incorporate cost of illness and treatment data, particularly for disorders such as schizophrenia with a high morbidity rate and a disproportionately low allocation of resources. Previous cost of illness analyses may have disregarded geographical aspects relevant for resource consumption and unit cost calculation. The aims were to compare the utilisation of resources and the care costs of schizophrenic patients in four mental-health districts in Spain (in Madrid, Catalonia, Navarra and Andalusia), and to analyse the factors that determine the costs and the differences between areas. A treated-prevalence, bottom-up, three-year follow-up design was used to obtain data concerning socio-demography, clinical evolution and the utilisation of services. 1997 reference prices were updated for years 1998-2000 in euros. We propose two different scenarios, varying in the prices applied. In the first (Scenario 0), the reference prices are those obtained for a single geographic area, so that cost variations are due only to differences in the use of resources. In the second (Scenario 1), we analyse the variations in resource utilisation at different levels, using the prices applicable to each healthcare area. Bayesian hierarchical models are used to discuss the factors that determine such costs and the differences between geographic areas. In Scenario 0, the estimated mean cost was 4918.948 euros for the first year. In Scenario 1 the highest cost was in Gava (Catalonia) and the lowest in Loja (Andalusia); mean costs were 4547.24 and 2473.98 euros, respectively. With respect to the evolution of costs over time, we observed an increase during the second year and a reduction during the third year. Geographical differences appeared in follow-up costs. The variables related to lower treatment costs were: residence in the family household, higher patient age and being in work.
Conversely, a higher number of relapses was directly related to higher treatment costs.

  15. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    Science.gov (United States)

    2017-09-01

    …application of statistical inference. Even when human forecasters leverage their professional experience, which is often gained through long periods of… …application throughout statistics and Bayesian data analysis. The multivariate form of N_2(μ, Σ) (e.g., Figure 12) is similarly analytically… …data (i.e., no systematic manipulations with analytical functions), it is common in the statistical literature to apply mathematical transformations…

  16. An Approach to Structure Determination and Estimation of Hierarchical Archimedean Copulas and its Application to Bayesian Classification

    Czech Academy of Sciences Publication Activity Database

    Górecki, J.; Hofert, M.; Holeňa, Martin

    2016-01-01

    Roč. 46, č. 1 (2016), s. 21-59 ISSN 0925-9902 R&D Projects: GA ČR GA13-17187S Grant - others:Slezská univerzita v Opavě(CZ) SGS/21/2014 Institutional support: RVO:67985807 Keywords : Copula * Hierarchical Archimedean copula * Copula estimation * Structure determination * Kendall’s tau * Bayesian classification Subject RIV: IN - Informatics, Computer Science Impact factor: 1.294, year: 2016

  17. Estimation of Mental Disorders Prevalence in High School Students Using Small Area Methods: A Hierarchical Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Ali Reza Soltanian

    2016-08-01

    Full Text Available Background Adolescence is one of the most important periods in the course of human development, yet little is known about the prevalence of mental disorders among adolescents in different regions of Iran, especially southern Iran. Objectives This study was conducted to determine the prevalence of mental disorders among high school students in Bushehr province, south of Iran. Methods In this cross-sectional study, 286 high school students were recruited by multi-stage random sampling in Bushehr province in 2015. The general health questionnaire (GHQ-28) was used to assess mental disorders. The small area method, under the hierarchical Bayesian approach, was used to estimate the prevalence of mental disorders and for data analysis. Results Of the 286 questionnaires, only 182 were completely filled in and evaluated (a response rate of 70.5%). Of the students, 58.79% were male and 41.21% were female. The prevalence of mental disorders in Bushehr, Dayyer, Deylam, Kangan, Dashtestan, Tangestan, Genaveh, and Dashty was 0.48, 0.42, 0.45, 0.52, 0.41, 0.47, 0.42, and 0.43, respectively. Conclusions Based on this study, the prevalence of mental disorders among adolescents appears to be increasing in the counties of Bushehr province. The lack of a national policy in this area is a serious obstacle to access to mental health and wellbeing services.

  18. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical surface…

  19. Hierarchical Bayesian Models of Subtask Learning

    Science.gov (United States)

    Anglim, Jeromy; Wynton, Sarah K. A.

    2015-01-01

    The current study used Bayesian hierarchical methods to challenge and extend previous work on subtask learning consistency. A general model of individual-level subtask learning was proposed focusing on power and exponential functions with constraints to test for inconsistency. To study subtask learning, we developed a novel computer-based booking…

  20. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
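The idea of regularizing a covariance estimate by introducing dependency between its parameters can be caricatured as shrinkage towards a diagonal target with a fixed weight. This is a sketch, not the authors' hierarchical model; the matrix and weight are assumed.

```python
# Fixed-weight shrinkage of a sample covariance matrix towards its diagonal,
# a caricature of regularized covariance estimation to curb overfitting.
# (Sketch only, not the paper's Bayesian hierarchical model; numbers assumed.)
def shrink_covariance(S, lam):
    """Return (1 - lam) * S + lam * diag(S), elementwise for a square matrix."""
    n = len(S)
    return [[(1 - lam) * S[i][j] + (lam * S[i][j] if i == j else 0.0)
             for j in range(n)]
            for i in range(n)]

S = [[4.0, 1.2], [1.2, 9.0]]
S_shrunk = shrink_covariance(S, lam=0.5)
print(S_shrunk)   # diagonal preserved, off-diagonal entries halved
```

Damping the off-diagonal entries is the simplest way to trade a little bias for a large variance reduction when many covariance parameters are estimated from few samples; the hierarchical model learns the amount of damping instead of fixing it.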

  1. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    Science.gov (United States)

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
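The sparsity prior described above, a weighted mixture of a mass at zero and a positive exponential distribution, is easy to simulate; the weight and scale below are assumed values, not the paper's hyperparameters.

```python
import random

# Pixel prior from the abstract: a weighted mixture of a mass at zero and a
# positive exponential distribution. Weight w and scale are assumed values.
def sample_pixel(w=0.8, scale=1.0, rng=random):
    """With probability w return exactly 0, otherwise a draw from Exp(1/scale)."""
    if rng.random() < w:
        return 0.0
    return rng.expovariate(1.0 / scale)

random.seed(0)
image = [sample_pixel() for _ in range(1000)]
sparsity = sum(1 for v in image if v == 0.0) / len(image)
print(f"fraction of exactly-zero pixels: {sparsity:.2f}")
```

The point mass enforces exact zeros (sparsity) while the exponential component enforces positivity of the non-zero pixels, the two properties the abstract highlights.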

  2. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    Exploring these new developments, Bayesian Disease Mapping: Hierarchical Modeling in Spatial Epidemiology, Second Edition provides an up-to-date, cohesive account of the full range of Bayesian disease mapping methods and applications...

  3. A species-level phylogeny of all extant and late Quaternary extinct mammals using a novel heuristic-hierarchical Bayesian approach.

    Science.gov (United States)

    Faurby, Søren; Svenning, Jens-Christian

    2015-03-01

    Across large clades, two problems are generally encountered in the estimation of species-level phylogenies: (a) the number of taxa involved is generally so high that computation-intensive approaches cannot readily be utilized and (b) even for clades that have received intense study (e.g., mammals), attention has been centered on relatively few selected species, and most taxa must therefore be positioned on the basis of very limited genetic data. Here, we describe a new heuristic-hierarchical Bayesian approach and use it to construct a species-level phylogeny for all extant and late Quaternary extinct mammals. In this approach, species with large quantities of genetic data are placed nearly freely in the mammalian phylogeny according to these data, whereas the placement of species with lower quantities of data is performed with steadily stricter restrictions for decreasing data quantities. The advantages of the proposed method include (a) an improved ability to incorporate phylogenetic uncertainty in downstream analyses based on the resulting phylogeny, (b) a reduced potential for long-branch attraction or other types of errors that place low-data taxa far from their true position, while maintaining minimal restrictions for better-studied taxa, and (c) likely improved placement of low-data taxa due to the use of closer outgroups. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. A novel approach to quantifying the sensitivity of current and future cosmological datasets to the neutrino mass ordering through Bayesian hierarchical modeling

    Science.gov (United States)

    Gerbino, Martina; Lattanzi, Massimiliano; Mena, Olga; Freese, Katherine

    2017-12-01

We present a novel approach to derive constraints on neutrino masses, as well as on other cosmological parameters, from cosmological data, while taking into account our ignorance of the neutrino mass ordering. We derive constraints from a combination of current as well as future cosmological datasets on the total neutrino mass Mν and on the mass fractions fν,i = mi/Mν (where the index i = 1, 2, 3 indicates the three mass eigenstates) carried by each of the mass eigenstates mi, after marginalizing over the (unknown) neutrino mass ordering, either normal ordering (NH) or inverted ordering (IH). The bounds on all the cosmological parameters, including those on the total neutrino mass, therefore take into account the uncertainty related to our ignorance of the mass hierarchy that is actually realized in nature. This novel approach is carried out in the framework of Bayesian analysis of a typical hierarchical problem, where the distribution of the parameters of the model depends on further parameters, the hyperparameters. In this context, the choice of the neutrino mass ordering is modeled via the discrete hyperparameter htype, which we introduce in the usual Markov chain analysis. The preference from cosmological data for either the NH or the IH scenario is then simply encoded in the posterior distribution of the hyperparameter itself. Current cosmic microwave background (CMB) measurements assign equal odds to the two hierarchies, and are thus unable to distinguish between them. However, after the addition of baryon acoustic oscillation (BAO) measurements, a weak preference for the normal hierarchical scenario appears, with odds of 4:3 from Planck temperature and large-scale polarization in combination with BAO (3:2 if small-scale polarization is also included). Concerning next-generation cosmological experiments, forecasts suggest that the combination of upcoming CMB (COrE) and BAO surveys (DESI) may determine the neutrino mass hierarchy at a high statistical
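The discrete-hyperparameter idea can be sketched with a toy two-hypothesis problem: a Gibbs sampler alternates between a continuous parameter and a binary hyperparameter that selects one of two candidate priors, and the hyperparameter's posterior frequency directly encodes the odds. Everything below is illustrative, not the paper's cosmological setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the paper's h_type hyperparameter: h selects one of two
# candidate priors ("orderings") for a single parameter mu. Values are made up.
means = {0: 1.0, 1: -1.0}          # prior means under the two hypotheses
y = rng.normal(0.8, 1.0, size=50)  # data generated closer to hypothesis 0
n, ybar = len(y), y.mean()

h, h_samples = 0, []
for it in range(5000):
    # Gibbs step for mu | h, y: conjugate normal update with prior N(means[h], 1)
    post_var = 1.0 / (n + 1.0)
    mu = rng.normal((n * ybar + means[h]) * post_var, np.sqrt(post_var))
    # Gibbs step for h | mu: discrete conditional with equal prior odds,
    # p(h | mu) proportional to N(mu; means[h], 1)
    logw = np.array([-0.5 * (mu - means[k]) ** 2 for k in (0, 1)])
    wts = np.exp(logw - logw.max())
    wts /= wts.sum()
    h = rng.choice(2, p=wts)
    h_samples.append(h)

odds = (np.array(h_samples) == 0).mean()
print(f"posterior probability of hypothesis 0: {odds:.2f}")
```

The fraction of samples with h = 0 plays the same role as the posterior distribution of the ordering hyperparameter in the abstract.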

  5. Understanding uncertainties in non-linear population trajectories: a Bayesian semi-parametric hierarchical approach to large-scale surveys of coral cover.

    Directory of Open Access Journals (Sweden)

    Julie Vercelloni

    Full Text Available Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.

  6. Bayesian hierarchical modelling of North Atlantic windiness

    Science.gov (United States)

    Vanem, E.; Breivik, O. N.

    2013-03-01

Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause or a contributing factor in maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea, and there is a need for stochastic models that can describe the variability in both space and time at various scales of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic Ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  7. Bayesian hierarchical modelling of North Atlantic windiness

    Directory of Open Access Journals (Sweden)

    E. Vanem

    2013-03-01

Full Text Available Extreme weather conditions represent serious natural hazards to ship operations and may be the direct cause or a contributing factor in maritime accidents. Such severe environmental conditions can be taken into account in ship design, and operational windows can be defined that limit hazardous operations to less extreme conditions. Nevertheless, possible changes in the statistics of extreme weather conditions, possibly due to anthropogenic climate change, represent an additional hazard to ship operations that is less straightforward to account for in a consistent way. Obviously, there are large uncertainties as to how future climate change will affect the extreme weather conditions at sea, and there is a need for stochastic models that can describe the variability in both space and time at various scales of the environmental conditions. Previously, Bayesian hierarchical space-time models have been developed to describe the variability and complex dependence structures of significant wave height in space and time. These models were found to perform reasonably well and provided some interesting results, in particular pertaining to long-term trends in the wave climate. In this paper, a similar framework is applied to oceanic windiness and the spatial and temporal variability of the 10-m wind speed over an area in the North Atlantic Ocean is investigated. When the results from the model for North Atlantic windiness are compared to the results for significant wave height over the same area, it is interesting to observe that whereas an increasing trend in significant wave height was identified, no statistically significant long-term trend was estimated in windiness. This may indicate that the increase in significant wave height is not due to an increase in locally generated wind waves, but rather to increased swell. This observation is also consistent with studies that have suggested a poleward shift of the main storm tracks.

  8. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    Science.gov (United States)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We tested the Basel 2006 fluid-induced seismic case study to prove that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
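The proportionality between seismicity rate and injection rate admits a simple conjugate sketch: with event counts Poisson-distributed around b times injected volume and a Gamma prior on b, the Bayesian update is closed-form. This is a toy stand-in for the paper's hierarchical model; all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the abstract's core idea: event rate proportional to injected
# fluid rate, lambda(t) = b * V(t). The Gamma prior and numbers are assumptions.
b_true = 0.2                                 # events per unit injected volume
V = np.array([10., 30., 50., 40., 20.])      # injected volume per day
counts = rng.poisson(b_true * V)             # observed induced events per day

# Conjugate Gamma(alpha, beta) prior on b; Poisson likelihood gives a
# closed-form posterior: Gamma(alpha + sum(counts), beta + sum(V)).
alpha0, beta0 = 1.0, 1.0
alpha_post = alpha0 + counts.sum()
beta_post = beta0 + V.sum()
b_mean = alpha_post / beta_post              # posterior mean of b

# Probabilistic short-term forecast for a planned injection of 60 volume units;
# the full predictive distribution is negative binomial, we report its mean.
print(f"posterior mean b = {b_mean:.3f}, forecast events = {b_mean * 60:.1f}")
```

The posterior here coherently mixes aleatory variability (the Poisson counts) with epistemic uncertainty about b, which is the separation the abstract emphasizes.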

  9. Chemical purity using quantitative ¹H-nuclear magnetic resonance: a hierarchical Bayesian approach for traceable calibrations

    International Nuclear Information System (INIS)

    Toman, Blaza; Nelson, Michael A.; Lippa, Katrice A.

    2016-01-01

Chemical purity assessment using quantitative ¹H-nuclear magnetic resonance spectroscopy is a method based on ratio references of mass and signal intensity of the analyte species to that of chemical standards of known purity. As such, it is an example of a calculation using a known measurement equation with multiple inputs. Though multiple samples are often analyzed during purity evaluations in order to assess measurement repeatability, the uncertainty evaluation must also account for contributions from inputs to the measurement equation. Furthermore, there may be other uncertainty components inherent in the experimental design, such as independent implementation of multiple calibration standards. As such, the uncertainty evaluation is not purely bottom up (based on the measurement equation) or top down (based on the experimental design), but inherently contains elements of both. This hybrid form of uncertainty analysis is readily implemented with Bayesian statistical analysis. In this article we describe this type of analysis in detail and illustrate it using data from an evaluation of chemical purity and its uncertainty for a folic acid material. (authors)
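The bottom-up part of such an evaluation, propagating input uncertainties through the known measurement equation, can be approximated by Monte Carlo. The sketch below uses the standard qHNMR measurement equation with entirely hypothetical intensities, masses, and uncertainties; it is not the article's Bayesian analysis of the folic acid data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Standard qHNMR measurement equation (all numeric values are hypothetical):
#   P_a = (I_a/I_s) * (N_s/N_a) * (M_a/M_s) * (m_s/m_a) * P_s
# I: integrated signal, N: number of protons, M: molar mass, m: weighed mass,
# P_s: certified purity of the calibration standard.
I_ratio = rng.normal(1.031, 0.005, n)    # intensity ratio with repeatability noise
N_s, N_a = 2, 1                          # proton counts (exact integers)
M_a, M_s = 441.4, 204.2                  # molar masses, g/mol (treated as exact)
m_s = rng.normal(10.00, 0.02, n)         # weighed masses, mg, balance uncertainty
m_a = rng.normal(45.00, 0.02, n)
P_s = rng.normal(0.9990, 0.0005, n)      # standard's purity, as a mass fraction

P_a = I_ratio * (N_s / N_a) * (M_a / M_s) * (m_s / m_a) * P_s
print(f"purity = {P_a.mean():.4f} +/- {P_a.std():.4f}")
```

A full Bayesian treatment, as in the article, would additionally place a model over the between-standard and between-sample effects rather than propagating fixed input distributions.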

  10. A novel approach to quantifying the sensitivity of current and future cosmological datasets to the neutrino mass ordering through Bayesian hierarchical modeling

    Directory of Open Access Journals (Sweden)

    Martina Gerbino

    2017-12-01

Full Text Available We present a novel approach to derive constraints on neutrino masses, as well as on other cosmological parameters, from cosmological data, while taking into account our ignorance of the neutrino mass ordering. We derive constraints from a combination of current as well as future cosmological datasets on the total neutrino mass Mν and on the mass fractions fν,i=mi/Mν (where the index i=1,2,3 indicates the three mass eigenstates) carried by each of the mass eigenstates mi, after marginalizing over the (unknown) neutrino mass ordering, either normal ordering (NH) or inverted ordering (IH). The bounds on all the cosmological parameters, including those on the total neutrino mass, therefore take into account the uncertainty related to our ignorance of the mass hierarchy that is actually realized in nature. This novel approach is carried out in the framework of Bayesian analysis of a typical hierarchical problem, where the distribution of the parameters of the model depends on further parameters, the hyperparameters. In this context, the choice of the neutrino mass ordering is modeled via the discrete hyperparameter htype, which we introduce in the usual Markov chain analysis. The preference from cosmological data for either the NH or the IH scenario is then simply encoded in the posterior distribution of the hyperparameter itself. Current cosmic microwave background (CMB) measurements assign equal odds to the two hierarchies, and are thus unable to distinguish between them. However, after the addition of baryon acoustic oscillation (BAO) measurements, a weak preference for the normal hierarchical scenario appears, with odds of 4:3 from Planck temperature and large-scale polarization in combination with BAO (3:2 if small-scale polarization is also included). Concerning next-generation cosmological experiments, forecasts suggest that the combination of upcoming CMB (COrE) and BAO surveys (DESI) may determine the neutrino mass hierarchy at a high

  11. Fully probabilistic design of hierarchical Bayesian models

    Czech Academy of Sciences Publication Activity Database

    Quinn, A.; Kárný, Miroslav; Guy, Tatiana Valentine

    2016-01-01

    Roč. 369, č. 1 (2016), s. 532-547 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Fully probabilistic design * Ideal distribution * Minimum cross-entropy principle * Bayesian conditioning * Kullback-Leibler divergence * Bayesian nonparametric modelling Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/karny-0463052.pdf

  12. Sampling-free Bayesian inversion with adaptive hierarchical tensor representations

    Science.gov (United States)

    Eigel, Martin; Marschall, Manuel; Schneider, Reinhold

    2018-03-01

A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.

  13. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  14. Hierarchical Bayesian modeling of the space - time diffusion patterns of cholera epidemic in Kumasi, Ghana

    NARCIS (Netherlands)

    Osei, Frank B.; Osei, F.B.; Duker, Alfred A.; Stein, A.

    2011-01-01

    This study analyses the joint effects of the two transmission routes of cholera on the space-time diffusion dynamics. Statistical models are developed and presented to investigate the transmission network routes of cholera diffusion. A hierarchical Bayesian modelling approach is employed for a joint

  15. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    Science.gov (United States)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluations of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicate improved prediction accuracies (median of 10-50%), but primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream

  16. A Bayesian hierarchical model for demand curve analysis.

    Science.gov (United States)

    Ho, Yen-Yi; Nhu Vo, Tien; Chu, Haitao; Luo, Xianghua; Le, Chap T

    2018-07-01

Drug self-administration experiments are a frequently used approach to assessing the abuse liability and reinforcing property of a compound. The approach has been used to assess the abuse liabilities of various substances such as psychomotor stimulants, hallucinogens, food, nicotine, and alcohol. The demand curve generated from a self-administration study describes how demand of a drug or non-drug reinforcer varies as a function of price. With the approval of the 2009 Family Smoking Prevention and Tobacco Control Act, demand curve analysis provides crucial evidence to inform the US Food and Drug Administration's policy on tobacco regulation, because it produces several important quantitative measurements to assess the reinforcing strength of nicotine. The conventional approach popularly used to analyze demand curve data is individual-specific non-linear least squares regression. The non-linear least squares approach sets out to minimize the residual sum of squares for each subject in the dataset; however, this one-subject-at-a-time approach does not allow for the estimation of between- and within-subject variability in a unified model framework. In this paper, we review the existing approaches to analyzing demand curve data, non-linear least squares regression and mixed effects regression, and propose a new Bayesian hierarchical model. We conduct simulation analyses to compare the performance of these three approaches and illustrate the proposed approaches in a case study of nicotine self-administration in rats. We present simulation results and discuss the benefits of using the proposed approaches.
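The per-subject non-linear least squares fit that the abstract reviews is commonly applied to the exponential demand equation of Hursh and Silberberg (2008). A sketch on synthetic data follows; the span parameter k is fixed at an assumed value, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

# Exponential demand equation (Hursh & Silberberg, 2008):
#   log10 Q = log10 Q0 + k * (exp(-alpha * Q0 * C) - 1)
# Q: consumption, C: unit price, Q0: demand intensity, alpha: demand elasticity.
def demand(C, Q0, alpha, k=2.0):   # k fixed here for a 2-parameter fit
    return np.log10(Q0) + k * (np.exp(-alpha * Q0 * C) - 1.0)

prices = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
true_Q0, true_alpha = 50.0, 0.002
logQ = demand(prices, true_Q0, true_alpha) + rng.normal(0, 0.05, prices.size)

# One-subject-at-a-time NLS fit (the conventional approach the paper critiques)
popt, _ = curve_fit(demand, prices, logQ, p0=[30.0, 0.01],
                    bounds=([1.0, 1e-5], [1000.0, 1.0]))
print(f"Q0 = {popt[0]:.1f}, alpha = {popt[1]:.4f}")
```

A hierarchical version would instead share information across subjects by placing population distributions over Q0 and alpha, which is the paper's proposal.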

  17. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, N. E.; Soderberg, A. M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Betancourt, M., E-mail: nsanders@cfa.harvard.edu [Department of Statistics, University of Warwick, Coventry CV4 7AL (United Kingdom)

    2015-02-10

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.
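The benefit of population-level modeling can be illustrated with the simplest hierarchical case: normal-normal partial pooling, where noisy per-object estimates are shrunk toward a population mean. This toy sketch uses synthetic values and assumes known population parameters; it is not the paper's light-curve model or its Hamiltonian Monte Carlo fit.

```python
import numpy as np

rng = np.random.default_rng(6)

# Each object's light-curve parameter theta_i is measured noisily; a
# hierarchical model pools the noisy per-object fits. Values are synthetic.
n_sne = 76
theta = rng.normal(10.0, 1.0, n_sne)      # true per-object parameters
sigma_i = rng.uniform(0.5, 3.0, n_sne)    # heteroscedastic per-object errors
est = theta + rng.normal(0.0, sigma_i)    # individual-fit estimates

# Partial pooling with known population mean/sd for simplicity: the posterior
# mean is a precision-weighted blend of the estimate and the population mean.
mu_pop, tau = 10.0, 1.0
w = tau**2 / (tau**2 + sigma_i**2)        # shrinkage weights
pooled = w * est + (1 - w) * mu_pop

rmse_ind = np.sqrt(np.mean((est - theta) ** 2))
rmse_pooled = np.sqrt(np.mean((pooled - theta) ** 2))
print(f"RMSE individual {rmse_ind:.2f}, pooled {rmse_pooled:.2f}")
```

Objects with large measurement errors are shrunk strongly toward the population, mirroring how the paper's hierarchical fit improves constraints for poorly observed supernovae.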

  18. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    International Nuclear Information System (INIS)

    Sanders, N. E.; Soderberg, A. M.; Betancourt, M.

    2015-01-01

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST

  19. Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.

    Science.gov (United States)

    Houpt, Joseph W; Bittner, Jennifer L

    2018-05-10

    Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences. Copyright © 2018 Elsevier Ltd. All rights reserved.
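The unadjusted ingredients of such an analysis, per-observer psychometric fits and an efficiency ratio, can be sketched without the hierarchical layer. The logistic form, the threshold values, and the squared-threshold efficiency definition below are illustrative assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)

# Repeated presentations over a range of stimulus intensities (synthetic design)
intensities = np.repeat(np.linspace(0.5, 4.0, 8), 50)

def simulate(threshold, slope):
    # Bernoulli responses from a logistic psychometric function
    p = expit(slope * (intensities - threshold))
    return rng.random(intensities.size) < p

def fit(correct):
    # Maximum-likelihood logistic fit: params = (threshold, slope)
    def nll(params):
        t, s = params
        p = np.clip(expit(s * (intensities - t)), 1e-9, 1 - 1e-9)
        return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))
    return minimize(nll, x0=[2.0, 1.0], method="Nelder-Mead").x

t_human, _ = fit(simulate(threshold=2.5, slope=2.0))
t_ideal, _ = fit(simulate(threshold=1.0, slope=2.0))
efficiency = (t_ideal / t_human) ** 2   # one common squared-threshold definition
print(f"human threshold {t_human:.2f}, ideal {t_ideal:.2f}, "
      f"efficiency {efficiency:.2f}")
```

The paper's contribution is to replace these separate point fits with a single hierarchical logistic regression, so that uncertainty in both thresholds propagates into the efficiency estimate.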

  20. Bayesian Hierarchical Random Effects Models in Forensic Science

    Directory of Open Access Journals (Sweden)

    Colin G. G. Aitken

    2018-04-01

Full Text Available Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  1. Bayesian Hierarchical Random Effects Models in Forensic Science.

    Science.gov (United States)

    Aitken, Colin G G

    2018-01-01

Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  2. Road network safety evaluation using Bayesian hierarchical joint model.

    Science.gov (United States)

    Wang, Jie; Huang, Helai

    2016-05-01

    Safety and efficiency are commonly regarded as two significant performance indicators of transportation systems. In practice, road network planning has focused on road capacity and transport efficiency whereas the safety level of a road network has received little attention in the planning stage. This study develops a Bayesian hierarchical joint model for road network safety evaluation to help planners take traffic safety into account when planning a road network. The proposed model establishes relationships between road network risk and micro-level variables related to road entities and traffic volume, as well as socioeconomic, trip generation and network density variables at macro level which are generally used for long term transportation plans. In addition, network spatial correlation between intersections and their connected road segments is also considered in the model. A road network is elaborately selected in order to compare the proposed hierarchical joint model with a previous joint model and a negative binomial model. According to the results of the model comparison, the hierarchical joint model outperforms the joint model and negative binomial model in terms of the goodness-of-fit and predictive performance, which indicates the reasonableness of considering the hierarchical data structure in crash prediction and analysis. Moreover, both random effects at the TAZ level and the spatial correlation between intersections and their adjacent segments are found to be significant, supporting the employment of the hierarchical joint model as an alternative in road-network-level safety modeling as well. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. A novel Bayesian hierarchical model for road safety hotspot prediction.

    Science.gov (United States)

    Fawcett, Lee; Thorpe, Neil; Matthews, Joseph; Kremer, Karsten

    2017-02-01

    In this paper, we propose a Bayesian hierarchical model for predicting accident counts in future years at sites within a pool of potential road safety hotspots. The aim is to inform road safety practitioners of the location of likely future hotspots to enable a proactive, rather than reactive, approach to road safety scheme implementation. A feature of our model is the ability to rank sites according to their potential to exceed, in some future time period, a threshold accident count which may be used as a criterion for scheme implementation. Our model specification enables the classical empirical Bayes formulation - commonly used in before-and-after studies, wherein accident counts from a single before period are used to estimate counterfactual counts in the after period - to be extended to incorporate counts from multiple time periods. This allows site-specific variations in historical accident counts (e.g. locally-observed trends) to offset estimates of safety generated by a global accident prediction model (APM), which itself is used to help account for the effects of global trend and regression-to-mean (RTM). The Bayesian posterior predictive distribution is exploited to formulate predictions and to properly quantify our uncertainty in these predictions. The main contributions of our model include (i) the ability to allow accident counts from multiple time-points to inform predictions, with counts in more recent years lending more weight to predictions than counts from time-points further in the past; (ii) where appropriate, the ability to offset global estimates of trend by variations in accident counts observed locally, at a site-specific level; and (iii) the ability to account for unknown/unobserved site-specific factors which may affect accident counts. We illustrate our model with an application to accident counts at 734 potential hotspots in the German city of Halle; we also propose some simple diagnostics to validate the predictive capability of our
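The threshold-exceedance ranking described above can be illustrated with a much simpler conjugate model. The sketch below is a plain Poisson-gamma empirical-Bayes calculation, not the authors' full model with trend and regression-to-mean adjustments; the site histories and the Gamma(1, 1) prior are hypothetical. It ranks sites by the posterior predictive probability that a future-period accident count reaches a threshold:

```python
import math

def neg_binom_pmf(k, r, p):
    """Posterior predictive pmf for a Poisson-gamma model (negative binomial)."""
    log_pmf = (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
               + r * math.log(p) + k * math.log(1.0 - p))
    return math.exp(log_pmf)

def prob_exceeds(counts, threshold, a=1.0, b=1.0):
    """P(next-period count >= threshold | history), Gamma(a, b) prior on the rate."""
    r = a + sum(counts)          # posterior shape
    rate = b + len(counts)       # posterior rate
    p = rate / (rate + 1.0)      # predictive success probability
    return 1.0 - sum(neg_binom_pmf(k, r, p) for k in range(threshold))

# Hypothetical accident histories (3 years) at three candidate sites.
sites = {"A": [3, 5, 4], "B": [1, 0, 2], "C": [6, 7, 9]}
threshold = 5
ranking = sorted(sites, key=lambda s: prob_exceeds(sites[s], threshold), reverse=True)
for s in ranking:
    print(s, round(prob_exceeds(sites[s], threshold), 3))
```

Sites with consistently high historical counts rise to the top of the ranking, mirroring the paper's use of the posterior predictive distribution as a scheme-implementation criterion.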

  4. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
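The within-/between-model variance segregation at the heart of HBMA follows the law of total variance at each level of the BMA tree. A minimal one-level sketch, with hypothetical posterior model probabilities and predictions:

```python
# Law-of-total-variance decomposition used in BMA: for candidate models m with
# posterior probabilities P(m), predictive means mu_m and variances var_m,
#   total = E[var_m]  (within-model)  +  Var[mu_m]  (between-model).
def bma_decompose(probs, means, variances):
    assert abs(sum(probs) - 1.0) < 1e-9
    bma_mean = sum(p * m for p, m in zip(probs, means))
    within = sum(p * v for p, v in zip(probs, variances))
    between = sum(p * (m - bma_mean) ** 2 for p, m in zip(probs, means))
    return bma_mean, within, between

# Hypothetical posterior probabilities and predictions (illustrative only).
probs = [0.5, 0.3, 0.2]
means = [12.0, 15.0, 10.0]       # e.g. predicted hydraulic head in metres
variances = [4.0, 6.0, 3.0]      # within-model predictive variances
mean, within, between = bma_decompose(probs, means, variances)
print(mean, within, between, within + between)
```

In the full HBMA tree this decomposition is applied recursively, so each uncertain model component's contribution to the total variance can be read off at its own level.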

  5. Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.

    Science.gov (United States)

    Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J

    2010-12-01

    Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km² hexagons), can increase the relevance of habitat models to multispecies

  6. Introduction to Hierarchical Bayesian Modeling for Ecological Data

    CERN Document Server

    Parent, Eric

    2012-01-01

    Making statistical modeling and inference more accessible to ecologists and related scientists, Introduction to Hierarchical Bayesian Modeling for Ecological Data gives readers a flexible and effective framework to learn about complex ecological processes from various sources of data. It also helps readers get started on building their own statistical models. The text begins with simple models that progressively become more complex and realistic through explanatory covariates and intermediate hidden state variables. When fitting the models to data, the authors gradually present the concepts a

  7. Inferring on the Intentions of Others by Hierarchical Bayesian Learning

    Science.gov (United States)

    Diaconescu, Andreea O.; Mathys, Christoph; Weber, Lilian A. E.; Daunizeau, Jean; Kasper, Lars; Lomakina, Ekaterina I.; Fehr, Ernst; Stephan, Klaas E.

    2014-01-01

    Inferring on others' (potentially time-varying) intentions is a fundamental problem during many social transactions. To investigate the underlying mechanisms, we applied computational modeling to behavioral data from an economic game in which 16 pairs of volunteers (randomly assigned to “player” or “adviser” roles) interacted. The player performed a probabilistic reinforcement learning task, receiving information about a binary lottery from a visual pie chart. The adviser, who received more predictive information, issued an additional recommendation. Critically, the game was structured such that the adviser's incentives to provide helpful or misleading information varied in time. Using a meta-Bayesian modeling framework, we found that the players' behavior was best explained by the deployment of hierarchical learning: they inferred upon the volatility of the advisers' intentions in order to optimize their predictions about the validity of their advice. Beyond learning, volatility estimates also affected the trial-by-trial variability of decisions: participants were more likely to rely on their estimates of advice accuracy for making choices when they believed that the adviser's intentions were presently stable. Finally, our model of the players' inference predicted the players' interpersonal reactivity index (IRI) scores, explicit ratings of the advisers' helpfulness and the advisers' self-reports on their chosen strategy. Overall, our results suggest that humans (i) employ hierarchical generative models to infer on the changing intentions of others, (ii) use volatility estimates to inform decision-making in social interactions, and (iii) integrate estimates of advice accuracy with non-social sources of information. The Bayesian framework presented here can quantify individual differences in these mechanisms from simple behavioral readouts and may prove useful in future clinical studies of maladaptive social cognition. PMID:25187943

  8. Bayesian Uncertainty Quantification for Subsurface Inversion Using a Multiscale Hierarchical Model

    KAUST Repository

    Mondal, Anirban

    2014-07-03

    We consider a Bayesian approach to nonlinear inverse problems in which the unknown quantity is a random field (spatial or temporal). The Bayesian approach contains a natural mechanism for regularization in the form of prior information, can incorporate information from heterogeneous sources and provide a quantitative assessment of uncertainty in the inverse solution. The Bayesian setting casts the inverse solution as a posterior probability distribution over the model parameters. The Karhunen-Loeve expansion is used for dimension reduction of the random field. Furthermore, we use a hierarchical Bayes model to inject multiscale data in the modeling framework. In this Bayesian framework, we show that this inverse problem is well-posed by proving that the posterior measure is Lipschitz continuous with respect to the data in total variation norm. Computational challenges in this construction arise from the need for repeated evaluations of the forward model (e.g., in the context of MCMC) and are compounded by high dimensionality of the posterior. We develop a two-stage reversible jump MCMC that has the ability to screen the bad proposals in the first inexpensive stage. Numerical results are presented by analyzing simulated as well as real data from a hydrocarbon reservoir. This article has supplementary material available online. © 2014 American Statistical Association and the American Society for Quality.

  9. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    This paper develops Bayesian inference for the reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with a stochastic (or uncertain) constraint on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust to heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with a stochastic constraint are intractable, a Bayesian method is pursued using a Markov chain Monte Carlo (MCMC) sampling-based approach. The paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using this prior. The paper also proposes an MCMC method for Bayesian inference on SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
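A scale mixture of log-normals can be simulated by drawing a mixing variable that rescales the variance of the log failure time. The sketch below uses a chi-squared mixing variable, which yields a log-t model, one member of the heavy-tailed class mentioned above; all parameter values are hypothetical:

```python
import math, random

def smln_failure_time(mu, sigma, nu, rng):
    """Draw one failure time from a log-t model, expressed as a scale
    mixture of log-normals: log T | W ~ N(mu, sigma^2 / W), with the
    mixing variable W ~ chi-squared_nu / nu (mean 1)."""
    w = rng.gammavariate(nu / 2.0, 2.0 / nu)   # chi2_nu / nu via gamma(shape, scale)
    return math.exp(mu + sigma * rng.gauss(0.0, 1.0) / math.sqrt(w))

rng = random.Random(42)
# Hypothetical parameters: smaller nu gives heavier tails in the failure times.
times = [smln_failure_time(mu=1.0, sigma=0.5, nu=4, rng=rng) for _ in range(1000)]
print(min(times) > 0.0)   # failure times are always positive
```

Conditioning on W recovers an ordinary log-normal model, which is what makes the conditional structure convenient for the MCMC scheme the paper describes.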

  10. Hierarchical Bayesian inference for ion channel screening dose-response data [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Ross H Johnstone

    2017-03-01

    Dose-response (or ‘concentration-effect’) relationships commonly occur in biological and pharmacological systems and are well characterised by Hill curves. These curves are described by an equation with two parameters: the inhibitory concentration 50% (IC50) and the Hill coefficient. Typically just the ‘best fit’ parameter values are reported in the literature. Here we introduce a Python-based software tool, PyHillFit, and describe the underlying Bayesian inference methods that it uses to infer probability distributions for these parameters as well as the level of experimental observation noise. The tool also allows for hierarchical fitting, characterising the effect of inter-experiment variability. We demonstrate the use of the tool on a recently published dataset on multiple ion channel inhibition by multiple drug compounds. We compare the maximum likelihood, Bayesian and hierarchical Bayesian approaches. We then show how uncertainty in dose-response inputs can be characterised and propagated into a cardiac action potential simulation to give a probability distribution on model outputs.
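PyHillFit's own interface is not shown here; as a generic illustration of Bayesian inference for the two Hill-curve parameters, the sketch below fits synthetic data with a random-walk Metropolis sampler, assuming flat priors on the log scale and a known noise level (both assumptions of this sketch, not of the paper):

```python
import math, random

def hill(conc, ic50, h):
    """Fraction of current blocked at concentration `conc` (Hill equation)."""
    return conc ** h / (ic50 ** h + conc ** h)

rng = random.Random(1)
# Hypothetical synthetic screening data at five concentrations.
true_ic50, true_h, noise_sd = 1.0, 1.2, 0.05
concs = [0.1, 0.3, 1.0, 3.0, 10.0]
data = [(c, hill(c, true_ic50, true_h) + rng.gauss(0.0, noise_sd)) for c in concs]

def log_post(log_ic50, log_h):
    # Flat priors on the log scale; Gaussian likelihood with known noise_sd.
    ic50, h = math.exp(log_ic50), math.exp(log_h)
    return -sum((y - hill(c, ic50, h)) ** 2 for c, y in data) / (2.0 * noise_sd ** 2)

# Random-walk Metropolis over (log IC50, log Hill coefficient).
state, lp = (0.5, 0.5), log_post(0.5, 0.5)
samples = []
for _ in range(5000):
    prop = (state[0] + rng.gauss(0, 0.2), state[1] + rng.gauss(0, 0.2))
    lp_prop = log_post(*prop)
    if math.log(rng.random()) < lp_prop - lp:
        state, lp = prop, lp_prop
    samples.append(math.exp(state[0]))   # IC50 draws

post = sorted(samples[1000:])            # discard burn-in
print("IC50 posterior median:", post[len(post) // 2])
```

The spread of the retained draws, rather than a single ‘best fit’ value, is the quantity the abstract argues should be reported and propagated downstream.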

  11. A hierarchical bayesian model to quantify uncertainty of stream water temperature forecasts.

    Directory of Open Access Journals (Sweden)

    Guillaume Bal

    Providing generic and cost-effective modelling approaches to reconstruct and forecast freshwater temperature using predictors such as air temperature and water discharge is a prerequisite to understanding ecological processes underlying the impact of water temperature, and of global warming, on continental aquatic ecosystems. Using air temperature as a simple linear predictor of water temperature can lead to significant bias in forecasts, as it does not disentangle seasonality and long-term trends in the signal. Here, we develop an alternative approach based on hierarchical Bayesian statistical time series modelling of water temperature, air temperature and water discharge using seasonal sinusoidal periodic signals and time-varying means and amplitudes. Fitting and forecasting performances of this approach are compared with those of simple linear regression between water and air temperatures using (i) a simulated example and (ii) applications to three French coastal streams with contrasting bio-geographical conditions and sizes. The time series modelling approach better fits the data and, contrary to the linear regression, does not exhibit forecasting bias in long-term trends. The new model also allows for more accurate forecasts of water temperature than linear regression, together with a fair assessment of the uncertainty around forecasts. Warming of water temperature forecast by our hierarchical Bayesian model was slower and more uncertain than that expected with the classical regression approach. These new forecasts are in a form that is readily usable in further ecological analyses and will allow weighting of outcomes from different scenarios to manage climate change impacts on freshwater wildlife.
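The seasonal sinusoidal building block of such a model can be illustrated with ordinary least squares (the paper's model is Bayesian with time-varying means and amplitudes, which this sketch omits); the daily temperatures below are hypothetical:

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def harmonic_fit(days, temps, period=365.0):
    """Least-squares fit of temp ~ b0 + b1*sin + b2*cos (annual cycle)."""
    X = [[1.0, math.sin(2 * math.pi * t / period), math.cos(2 * math.pi * t / period)]
         for t in days]
    A = [[sum(xi[r] * xi[c] for xi in X) for c in range(3)] for r in range(3)]
    b = [sum(xi[r] * y for xi, y in zip(X, temps)) for r in range(3)]
    return solve3(A, b)

# Hypothetical water temperatures: 12 C mean with a 6 C seasonal amplitude.
days = list(range(0, 365, 5))
temps = [12.0 + 6.0 * math.sin(2 * math.pi * t / 365.0) for t in days]
b0, b1, b2 = harmonic_fit(days, temps)
print(round(b0, 2), round(b1, 2), round(b2, 2))
```

In the hierarchical model the mean `b0` and amplitude implied by `(b1, b2)` are themselves allowed to drift over years, which is what separates long-term trend from seasonality.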

  12. A Bayesian Hierarchical Model for Relating Multiple SNPs within Multiple Genes to Disease Risk

    Directory of Open Access Journals (Sweden)

    Lewei Duan

    2013-01-01

    A variety of methods have been proposed for studying the association of multiple genes thought to be involved in a common pathway for a particular disease. Here, we present an extension of a Bayesian hierarchical modeling strategy that allows for multiple SNPs within each gene, with external prior information at either the SNP or gene level. The model involves variable selection at the SNP level through latent indicator variables and Bayesian shrinkage at the gene level towards a prior mean vector and covariance matrix that depend on external information. The entire model is fitted using Markov chain Monte Carlo methods. Simulation studies show that the approach is capable of recovering many of the truly causal SNPs and genes, depending on their frequency and the size of their effects. The method is applied to data on 504 SNPs in 38 candidate genes involved in DNA damage response in the WECARE study of second breast cancers in relation to radiotherapy exposure.
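Variable selection through latent indicator variables reduces, in the simplest single-SNP conjugate case, to a posterior inclusion probability computed from spike and slab marginal likelihoods. A minimal sketch with hypothetical effect estimates (not the paper's full multi-gene model):

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def inclusion_prob(beta_hat, se, tau, prior_pi):
    """Posterior probability that a SNP's latent indicator equals 1, given a
    single effect estimate beta_hat with standard error se."""
    m1 = normal_pdf(beta_hat, 0.0, se ** 2 + tau ** 2)  # slab: beta ~ N(0, tau^2)
    m0 = normal_pdf(beta_hat, 0.0, se ** 2)             # spike: beta = 0
    return prior_pi * m1 / (prior_pi * m1 + (1.0 - prior_pi) * m0)

# Hypothetical log-odds-ratio estimates for two SNPs (illustrative only).
print(round(inclusion_prob(0.40, 0.10, 0.25, prior_pi=0.1), 3))  # strong signal
print(round(inclusion_prob(0.05, 0.10, 0.25, prior_pi=0.1), 3))  # weak signal
```

In the full model the prior inclusion probability and slab scale are themselves informed by gene-level external information, and everything is sampled jointly by MCMC rather than computed in closed form.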

  13. Bayesian Hierarchical Structure for Quantifying Population Variability to Inform Probabilistic Health Risk Assessments.

    Science.gov (United States)

    Shao, Kan; Allen, Bruce C; Wheeler, Matthew W

    2017-10-01

    Human variability is an important factor considered in human health risk assessment for protecting sensitive populations from chemical exposure. Traditionally, to account for this variability, an interhuman uncertainty factor is applied to lower the exposure limit. However, using a fixed uncertainty factor rather than probabilistically accounting for human variability cannot adequately support the probabilistic risk assessment advocated by a number of researchers; new methods are needed to probabilistically quantify human population variability. We propose a Bayesian hierarchical model to quantify variability among different populations. This approach jointly characterizes the distribution of risk at background exposure and the sensitivity of response to exposure, which are commonly represented by model parameters. We demonstrate, through both an application to real data and a simulation study, that using the proposed hierarchical structure adequately characterizes variability across different populations. © 2016 Society for Risk Analysis.
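The contrast between a fixed interhuman uncertainty factor and a probabilistic description of variability can be made concrete with a toy calculation: under a hypothetical lognormal distribution of individual sensitivity ratios, one can ask what fraction of the population a fixed factor of 10 actually covers (all numbers below are illustrative, not from the paper):

```python
import math

def lognormal_cdf(x, mu, sigma):
    """CDF of a lognormal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical population distribution of sensitivity ratios (dose for the
# median person / dose for an individual): median 1, geometric SD of 3.
mu, sigma = 0.0, math.log(3.0)
uf = 10.0
covered = lognormal_cdf(uf, mu, sigma)
print(f"Fraction of population protected by a fixed UF of {uf}: {covered:.3f}")
```

A probabilistic treatment reports this coverage (and its uncertainty) explicitly, whereas a fixed uncertainty factor leaves it implicit.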

  14. Assimilating irregularly spaced sparsely observed turbulent signals with hierarchical Bayesian reduced stochastic filters

    International Nuclear Information System (INIS)

    Brown, Kristen A.; Harlim, John

    2013-01-01

    In this paper, we consider a practical filtering approach for assimilating irregularly spaced, sparsely observed turbulent signals through a hierarchical Bayesian reduced stochastic filtering framework. The proposed hierarchical Bayesian approach consists of two steps, blending a data-driven interpolation scheme and the Mean Stochastic Model (MSM) filter. We examine the potential of using the deterministic piecewise linear interpolation scheme and the ordinary kriging scheme in interpolating irregularly spaced raw data to regularly spaced processed data, and the importance of the dynamical constraint (through MSM) in filtering the processed data on a numerically stiff state estimation problem. In particular, we test this approach on a two-layer quasi-geostrophic model in a two-dimensional domain with a small radius of deformation to mimic ocean turbulence. First, our numerical results suggest that the dynamical constraint becomes important when the observation noise variance is large. Second, we find that the filtered estimates with ordinary kriging are superior to those with linear interpolation when observation networks are not too sparse; such robust results are found from numerical simulations with many randomly simulated irregularly spaced observation networks, various observation time intervals, and observation error variances. Third, when the observation network is very sparse, we find that both the kriging and linear interpolations are comparable

  15. Modelling the dynamics of an experimental host-pathogen microcosm within a hierarchical Bayesian framework.

    Directory of Open Access Journals (Sweden)

    David Lunn

    The advantages of Bayesian statistical approaches, such as flexibility and the ability to acknowledge uncertainty in all parameters, have made them the prevailing method for analysing the spread of infectious diseases in human or animal populations. We introduce a Bayesian approach to experimental host-pathogen systems that shares these attractive features. Since uncertainty in all parameters is acknowledged, existing information can be accounted for through prior distributions, rather than through fixing some parameter values. The non-linear dynamics, multi-factorial design, multiple measurements of responses over time and sampling error that are typical features of experimental host-pathogen systems can also be naturally incorporated. We analyse the dynamics of the free-living protozoan Paramecium caudatum and its specialist bacterial parasite Holospora undulata. Our analysis provides strong evidence for a saturable infection function, and we were able to reproduce the two waves of infection apparent in the data by separating the initial inoculum from the parasites released after the first cycle of infection. In addition, the parameter estimates from the hierarchical model can be combined to infer variations in the parasite's basic reproductive ratio across experimental groups, enabling us to make predictions about the effect of resources and host genotype on the ability of the parasite to spread. Even though the high level of variability between replicates limited the resolution of the results, this Bayesian framework has strong potential to be used more widely in experimental ecology.
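A saturable infection function of the kind supported by the analysis can be explored with a simple deterministic simulation (the paper's model is stochastic and hierarchical; the structure and all parameter values below are hypothetical):

```python
# Euler simulation of a host-parasite microcosm with a saturable infection
# function beta*S*P/(h + P); all parameter values are hypothetical.
def simulate(days=30, dt=0.01):
    S, I, P = 100.0, 0.0, 500.0        # susceptible hosts, infected hosts, parasites
    r, K = 1.0, 1000.0                 # host growth rate and carrying capacity
    beta, h = 0.5, 200.0               # max infection rate and half-saturation level
    burst, d, delta = 50.0, 0.1, 0.5   # parasites released per host death, death rates
    traj = []
    steps = int(days / dt)
    for step in range(steps):
        infection = beta * S * P / (h + P)       # saturates as P grows
        dS = r * S * (1.0 - (S + I) / K) - infection
        dI = infection - d * I
        dP = burst * d * I - delta * P - infection
        S = max(S + dt * dS, 0.0)
        I = max(I + dt * dI, 0.0)
        P = max(P + dt * dP, 0.0)
        if step % int(1.0 / dt) == 0:
            traj.append((S, I, P))               # one snapshot per day
    return traj

traj = simulate()
print(len(traj), "daily snapshots; final (S, I, P):",
      tuple(round(v, 1) for v in traj[-1]))
```

Because the inoculum is consumed before parasites released by the first infected cohort appear, this kind of structure can produce the two waves of infection described in the abstract.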

  16. Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data

    Science.gov (United States)

    Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong

    2018-06-01

    This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
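Parallel tempering, used above to improve mixing across model dimensions, can be illustrated on a fixed-dimension toy problem: several chains target tempered versions of a bimodal posterior and periodically propose state swaps (the target and tuning values below are hypothetical):

```python
import math, random

def log_target(x):
    """A bimodal toy posterior: mixture of two well-separated normals."""
    return math.log(math.exp(-0.5 * (x + 3.0) ** 2) + math.exp(-0.5 * (x - 3.0) ** 2))

rng = random.Random(7)
temps = [1.0, 2.0, 4.0, 8.0]        # chain i targets pi(x)^(1/temps[i])
states = [0.0] * len(temps)
cold_samples = []
for it in range(20000):
    # Within-chain random-walk Metropolis update at each temperature.
    for i, T in enumerate(temps):
        prop = states[i] + rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < (log_target(prop) - log_target(states[i])) / T:
            states[i] = prop
    # Propose swapping the states of a random pair of neighbouring chains.
    i = rng.randrange(len(temps) - 1)
    delta = ((log_target(states[i]) - log_target(states[i + 1]))
             * (1.0 / temps[i] - 1.0 / temps[i + 1]))
    if math.log(rng.random()) < -delta:
        states[i], states[i + 1] = states[i + 1], states[i]
    cold_samples.append(states[0])

left = sum(1 for x in cold_samples if x < 0) / len(cold_samples)
print(f"fraction of cold-chain samples in the left mode: {left:.2f}")
```

The hot chains cross the low-probability barrier easily, and swaps carry those crossings down to the cold chain; in the trans-D setting of the paper, the same mechanism carries the chain between model dimensions.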

  17. Bayesian hierarchical models for smoothing in two-phase studies, with application to small area estimation.

    Science.gov (United States)

    Ross, Michelle; Wakefield, Jon

    2015-10-01

    Two-phase study designs are appealing since they allow for the oversampling of rare sub-populations, which improves efficiency. In this paper we describe a Bayesian hierarchical model for the analysis of two-phase data. Such a model is particularly appealing in a spatial setting in which random effects are introduced to model between-area variability. In such a situation, one may be interested in estimating regression coefficients or, in the context of small area estimation, in reconstructing the population totals by strata. The efficiency gains of the two-phase sampling scheme are compared to standard approaches using 2011 birth data from the Research Triangle area of North Carolina. We show that the proposed method can overcome small sample difficulties and improve on existing techniques. We conclude that the two-phase design is an attractive approach for small area estimation.

  18. Particle identification in ALICE: a Bayesian approach

    NARCIS (Netherlands)

    Adam, J.; et al. (ALICE Collaboration)
P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Sumbera, M.; Sumowidagdo, S.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Munoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thaeder, J.; Thakur, D.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; Palomo, L. Valencia; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vyvre, P. Vande; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Vercellin, E.; Vergara Limon, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Baillie, O. Villalobos; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Voelkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrlakova, J.; Vulpescu, B.; Wagner, B.; Wagner, J.; Wang, H.; Watanabe, D.; Watanabe, Y.; Weiser, D. F.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yang, H.; Yano, S.; Yasin, Z.; Yokoyama, H.; Yoo, I. -K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Zavada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, C.; Zhao, C.; Zhigareva, N.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.

    2016-01-01

    We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian

  19. Bayesian disease mapping: hierarchical modeling in spatial epidemiology

    National Research Council Canada - National Science Library

    Lawson, Andrew

    2013-01-01

    Since the publication of the first edition, many new Bayesian tools and methods have been developed for space-time data analysis, the predictive modeling of health outcomes, and other spatial biostatistical areas...

  20. A Bayesian Hierarchical Model for Glacial Dynamics Based on the Shallow Ice Approximation and its Evaluation Using Analytical Solutions

    Science.gov (United States)

    Gopalan, Giri; Hrafnkelsson, Birgir; Aðalgeirsdóttir, Guðfinna; Jarosch, Alexander H.; Pálsson, Finnur

    2018-03-01

    Bayesian hierarchical modeling can assist the study of glacial dynamics and ice flow properties. This approach allows glaciologists to make fully probabilistic predictions for the thickness of a glacier at unobserved spatio-temporal coordinates, and it also allows for the derivation of posterior probability distributions for key physical parameters such as ice viscosity and basal sliding. The goal of this paper is to develop a proof of concept for a Bayesian hierarchical model built on the exact analytical solutions for the shallow ice approximation (SIA) introduced by Bueler et al. (2005). A suite of test simulations utilizing these exact solutions suggests that this approach is able to adequately model numerical errors and produce useful physical parameter posterior distributions and predictions. A byproduct of the development of the Bayesian hierarchical model is the derivation of a novel finite difference method for solving the SIA partial differential equation (PDE). An additional novelty of this work is the use of a statistical model to correct the errors induced by the numerical solution. This error-correcting process models numerical errors that accumulate forward in time, as well as the spatial variation of numerical errors between the dome, interior, and margin of a glacier.
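
    As a loose illustration of the kind of finite-difference update involved, the sketch below takes one explicit step of a 1-D diffusion-type equation dh/dt = d/dx(D dh/dx) + M, a toy analogue of the SIA thickness equation. This is a generic scheme with made-up inputs, not the authors' novel method.

    ```python
    def diffusion_step(h, D, dx, dt, source):
        """One explicit finite-difference step of dh/dt = d/dx(D dh/dx) + M.

        h: thickness profile, D: diffusivity per node, source: mass balance M.
        Boundary nodes are held fixed for simplicity.
        """
        n = len(h)
        new = h[:]
        for i in range(1, n - 1):
            # Fluxes at the cell faces, using face-averaged diffusivities.
            flux_r = 0.5 * (D[i] + D[i + 1]) * (h[i + 1] - h[i]) / dx
            flux_l = 0.5 * (D[i - 1] + D[i]) * (h[i] - h[i - 1]) / dx
            new[i] = h[i] + dt * ((flux_r - flux_l) / dx + source[i])
        return new

    # A single bump diffuses outward while total mass is conserved.
    profile = diffusion_step([0.0, 0.0, 1.0, 0.0, 0.0], [1.0] * 5, 1.0, 0.2, [0.0] * 5)
    ```

    In a statistical error-correction setting like the one described above, the discrepancy between such a numerical solution and exact analytical solutions is what the hierarchical model learns.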

  1. A dust spectral energy distribution model with hierarchical Bayesian inference - I. Formalism and benchmarking

    Science.gov (United States)

    Galliano, Frédéric

    2018-05-01

    This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performance of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
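
    The mechanism by which a hierarchical prior reduces noise-induced scatter is shrinkage toward the population mean. A minimal normal-normal partial-pooling step is sketched below with hypothetical hyperparameters (mu, tau2); this is a generic illustration, not the HerBIE code.

    ```python
    def partial_pool(estimates, sigma2, mu, tau2):
        """Hierarchical posterior means under a normal-normal model.

        Each noisy per-source estimate (observation variance sigma2) is shrunk
        toward the population mean mu by the factor tau2 / (tau2 + sigma2),
        where tau2 is the population (between-source) variance.
        """
        b = tau2 / (tau2 + sigma2)  # shrinkage factor in [0, 1]
        return [mu + b * (x - mu) for x in estimates]

    # Two noisy estimates pulled halfway toward mu = 2 when tau2 == sigma2.
    pooled = partial_pool([0.0, 4.0], sigma2=1.0, mu=2.0, tau2=1.0)
    ```

    In a full hierarchical fit, mu and tau2 are themselves inferred from the whole sample rather than fixed, which is what suppresses the spurious parameter correlations described above.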

  2. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies could be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.

  3. A Hierarchical Bayesian Model to Predict Self-Thinning Line for Chinese Fir in Southern China.

    Directory of Open Access Journals (Sweden)

    Xiongqing Zhang

    Self-thinning is a dynamic equilibrium between forest growth and mortality at full site occupancy. Parameters of the self-thinning lines are often confounded by differences across various stand and site conditions. To overcome the problem of hierarchical and repeated measures, we used a hierarchical Bayesian method to estimate the self-thinning line. The results showed that the self-thinning line for Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantations was not sensitive to the initial planting density. The uncertainty of model predictions was mostly due to within-subject variability. The simulation precision of the hierarchical Bayesian method was better than that of the stochastic frontier function (SFF). The hierarchical Bayesian method provided a reasonable explanation of the impact of other variables (site quality, soil type, aspect, etc.) on the self-thinning line, yielding the posterior distribution of its parameters. Research on self-thinning relationships could benefit from the use of hierarchical Bayesian methods.
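
    For orientation, the self-thinning line is conventionally a straight line in log-log space, ln N = a + b ln D (stem density N against mean diameter D). The sketch below is the plain least-squares baseline against which hierarchical methods are compared, not the paper's Bayesian model.

    ```python
    import math

    def self_thinning_fit(diameters, densities):
        """Ordinary least-squares fit of ln(N) = a + b * ln(D)."""
        xs = [math.log(d) for d in diameters]
        ys = [math.log(n) for n in densities]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        return a, b

    # Data lying exactly on ln(N) = 5 - 1.5 ln(D) is recovered exactly.
    ds = [1.0, 2.0, 4.0]
    a, b = self_thinning_fit(ds, [math.exp(5.0 - 1.5 * math.log(d)) for d in ds])
    ```

    A hierarchical Bayesian version would instead let a and b vary by stand, with stand-level parameters drawn from shared population distributions.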

  4. Using hierarchical Bayesian methods to examine the tools of decision-making

    OpenAIRE

    Michael D. Lee; Benjamin R. Newell

    2011-01-01

    Hierarchical Bayesian methods offer a principled and comprehensive way to relate psychological models to data. Here we use them to model the patterns of information search, stopping and deciding in a simulated binary comparison judgment task. The simulation involves 20 subjects making 100 forced choice comparisons about the relative magnitudes of two objects (which of two German cities has more inhabitants). Two worked-examples show how hierarchical models can be developed to account for and ...

  5. A Bayesian approach to model uncertainty

    International Nuclear Information System (INIS)

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
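
    The equivalence between model uncertainty over a finite set of alternatives and ordinary parameter uncertainty can be made concrete: the posterior weight of each candidate model follows directly from Bayes' rule, P(M_k | data) ∝ P(data | M_k) P(M_k). A minimal sketch with illustrative numbers (not from the paper):

    ```python
    import math

    def model_posteriors(log_likelihoods, priors):
        """Posterior probability of each model in a finite set.

        Computed in log space via log-sum-exp for numerical stability.
        """
        logs = [ll + math.log(p) for ll, p in zip(log_likelihoods, priors)]
        m = max(logs)
        weights = [math.exp(l - m) for l in logs]
        total = sum(weights)
        return [w / total for w in weights]

    # Two equally plausible a priori models; the data favor the first.
    post = model_posteriors([-10.0, -12.0], [0.5, 0.5])
    ```

    With equal priors, a log-likelihood gap of 2 yields a posterior odds ratio of e^2 in favor of the first model.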

  6. DUST SPECTRAL ENERGY DISTRIBUTIONS IN THE ERA OF HERSCHEL AND PLANCK: A HIERARCHICAL BAYESIAN-FITTING TECHNIQUE

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Goodman, Alyssa A.; Shetty, Rahul; Stutz, Amelia M.; Launhardt, Ralf; Kauffmann, Jens

    2012-01-01

    We present a hierarchical Bayesian method for fitting infrared spectral energy distributions (SEDs) of dust emission to observed fluxes. Under the standard assumption of optically thin single temperature (T) sources, the dust SED as represented by a power-law-modified blackbody is subject to a strong degeneracy between T and the spectral index β. The traditional non-hierarchical approaches, typically based on χ² minimization, are severely limited by this degeneracy, as it produces an artificial anti-correlation between T and β even with modest levels of observational noise. The hierarchical Bayesian method rigorously and self-consistently treats measurement uncertainties, including calibration and noise, resulting in more precise SED fits. As a result, the Bayesian fits do not produce any spurious anti-correlations between the SED parameters due to measurement uncertainty. We demonstrate that the Bayesian method is substantially more accurate than the χ² fit in recovering the SED parameters, as well as the correlations between them. As an illustration, we apply our method to Herschel and submillimeter ground-based observations of the star-forming Bok globule CB244. This source is a small, nearby molecular cloud containing a single low-mass protostar and a starless core. We find that T and β are weakly positively correlated, in contradiction with the χ² fits, which indicate a T-β anti-correlation from the same data set. Additionally, in comparison to the χ² fits the Bayesian SED parameter estimates exhibit a reduced range in values.
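
    The forward model in question, an optically thin power-law-modified blackbody, is compact. The sketch below uses hypothetical normalization constants (tau0, nu0 are assumptions for illustration, not values from the paper):

    ```python
    import math

    H = 6.626e-34   # Planck constant [J s]
    K = 1.381e-23   # Boltzmann constant [J/K]
    C = 2.998e8     # speed of light [m/s]

    def modified_blackbody(nu, T, beta, tau0=1e-3, nu0=1e12):
        """Flux density S_nu ∝ (nu/nu0)^beta * B_nu(T), optically thin.

        B_nu(T) is the Planck function; expm1 keeps the Rayleigh-Jeans
        limit numerically stable.
        """
        b_nu = (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))
        return tau0 * (nu / nu0) ** beta * b_nu

    # At a fixed frequency, a warmer dust source is always brighter.
    cold = modified_blackbody(3e11, 10.0, 2.0)
    warm = modified_blackbody(3e11, 20.0, 2.0)
    ```

    The T-β degeneracy described above arises because, over a limited wavelength range, raising beta and lowering T bend the curve in nearly compensating ways.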

  7. A Hierarchical Bayesian Setting for an Inverse Problem in Linear Parabolic PDEs with Noisy Boundary Conditions

    KAUST Repository

    Ruggeri, Fabrizio

    2016-05-12

    In this work we develop a Bayesian setting to infer unknown parameters in initial-boundary value problems related to linear parabolic partial differential equations. We realistically assume that the boundary data are noisy, for a given prescribed initial condition. We show how to derive the joint likelihood function for the forward problem, given some measurements of the solution field subject to Gaussian noise. Given Gaussian priors for the time-dependent Dirichlet boundary values, we analytically marginalize the joint likelihood using the linearity of the equation. Our hierarchical Bayesian approach is fully implemented in an example that involves the heat equation. In this example, the thermal diffusivity is the unknown parameter. We assume that the thermal diffusivity parameter can be modeled a priori through a lognormal random variable or by means of a space-dependent stationary lognormal random field. Synthetic data are used to test the inference. We exploit the behavior of the non-normalized log posterior distribution of the thermal diffusivity. Then, we use the Laplace method to obtain an approximated Gaussian posterior and therefore avoid costly Markov Chain Monte Carlo computations. Expected information gains and predictive posterior densities for observable quantities are numerically estimated using Laplace approximation for different experimental setups.
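
    The Laplace step used above to avoid MCMC can be sketched for a one-dimensional parameter: fit a Gaussian at the posterior mode using the curvature of the log posterior. This is a generic finite-difference illustration, not the paper's implementation.

    ```python
    def laplace_approx(log_post, x_map, h=1e-5):
        """Laplace approximation around a known mode x_map.

        The Gaussian variance is the negative inverse of the second
        derivative of the log posterior at the mode, estimated here by
        a central finite difference.
        """
        d2 = (log_post(x_map + h) - 2.0 * log_post(x_map)
              + log_post(x_map - h)) / h**2
        return x_map, -1.0 / d2  # (mean, variance) of the Gaussian

    # For an exactly Gaussian log posterior the approximation is exact:
    # log p(x) = -(x - 2)^2 / (2 * 0.25) has mode 2 and variance 0.25.
    mode, var = laplace_approx(lambda x: -(x - 2.0) ** 2 / 0.5, 2.0)
    ```

    For non-Gaussian posteriors the same recipe gives only a local approximation, which is why the authors check the behavior of the non-normalized log posterior first.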

  8. The Bayesian Approach to Association

    Science.gov (United States)

    Arora, N. S.

    2017-12-01

    The Bayesian approach to association focuses mainly on quantifying the physics of the domain. In the case of seismic association, for instance, let X be the set of all significant events (above some threshold) and their attributes, such as location, time, and magnitude; Y1 be the set of detections that are caused by significant events and their attributes, such as seismic phase, arrival time, amplitude, etc.; Y2 be the set of detections that are not caused by significant events; and finally Y be the set of observed detections. We then define the joint distribution P(X, Y1, Y2, Y) = P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2), where the last term simply states that Y1 and Y2 are a partitioning of Y. Given the above joint distribution, the inference problem is simply to find the X, Y1, and Y2 that maximize the posterior probability P(X, Y1, Y2 | Y), which reduces to maximizing P(X) P(Y1 | X) P(Y2) I(Y = Y1 + Y2). In this expression P(X) captures our prior belief about event locations. P(Y1 | X) captures notions of travel time and residual error distributions, as well as detection and mis-detection probabilities, while P(Y2) captures the false detection rate of our seismic network. The elegance of this approach is that all of the assumptions are stated clearly in the models for P(X), P(Y1 | X), and P(Y2). The implementation of the inference is merely a by-product of this model. In contrast, some other methods such as GA hide a number of assumptions in the implementation details of the inference, such as the so-called "driver cells." The other important aspect of this approach is that all seismic knowledge, including knowledge from other domains such as infrasound and hydroacoustics, can be included in the same model. So we don't need to separately account for misdetections or merge seismic and infrasound events as a separate step.
Finally, it should be noted that the objective of automatic association is to simplify the job of humans who are publishing seismic bulletins based on this
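
    A drastically simplified sketch of the partitioning idea follows: with the event set X held fixed, each detection joins Y1 when its likelihood under the event-caused model exceeds its likelihood under the noise model. All function names and values here are hypothetical; the full problem also optimizes over X itself.

    ```python
    def associate(detections, event_loglik, noise_loglik):
        """Partition detections into Y1 (event-caused) and Y2 (false).

        Greedily assigns each detection to whichever term of
        P(Y1 | X) * P(Y2) it raises more, assuming independent detections.
        """
        y1, y2 = [], []
        for d in detections:
            (y1 if event_loglik(d) > noise_loglik(d) else y2).append(d)
        return y1, y2

    # Toy 1-D example: events predict arrivals near 5; noise sits near 0.
    y1, y2 = associate(
        [0.1, 5.0],
        event_loglik=lambda d: -(d - 5.0) ** 2,
        noise_loglik=lambda d: -d ** 2,
    )
    ```

    The appeal of the full Bayesian formulation is that this assignment falls out of maximizing one explicit joint distribution rather than hand-tuned rules.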

  9. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling.

    Science.gov (United States)

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

    Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
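
    A standard member of the (inverted) S-shaped family discussed here is the Tversky-Kahneman probability weighting function; the one-parameter sketch below is illustrative, and the papers' actual parameterization may differ.

    ```python
    def weight(p, gamma):
        """Tversky-Kahneman probability weighting function.

        w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma).
        gamma < 1 gives the inverted-S shape: small probabilities are
        overweighted and large ones underweighted; gamma = 1 is identity.
        """
        return p**gamma / (p**gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

    # gamma = 0.6 overweights a 10% probability.
    distorted = weight(0.1, 0.6)
    ```

    In the hierarchical setting described above, each participant's gamma is treated as a draw from a weakly informative population distribution, which is how task dependency and individual differences are captured simultaneously.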

  10. Probabilistic inference: Task dependency and individual differences of probability weighting revealed by hierarchical Bayesian modelling

    Directory of Open Access Journals (Sweden)

    Moritz eBoos

    2016-05-01

    Full Text Available Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modelling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities by two (likelihoods design. Five computational models of cognitive processes were compared with the observed behaviour. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model’s success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modelling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modelling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.

  11. A hierarchical Bayesian spatio-temporal model to forecast trapped particle fluxes over the SAA region

    Czech Academy of Sciences Publication Activity Database

    Suparta, W.; Gusrizal, G.; Kudela, Karel; Isa, Z.

    2017-01-01

    Roč. 28, č. 3 (2017), s. 357-370 ISSN 1017-0839 R&D Projects: GA MŠk EF15_003/0000481 Institutional support: RVO:61389005 Keywords : trapped particle * spatio-temporal * hierarchical Bayesian * forecasting Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 0.752, year: 2016

  12. Hierarchical Bayesian Markov switching models with application to predicting spawning success of shovelnose sturgeon

    Science.gov (United States)

    Holan, S.H.; Davis, G.M.; Wildhaber, M.L.; DeLonay, A.J.; Papoulias, D.M.

    2009-01-01

    The timing of spawning in fish is tightly linked to environmental factors; however, these factors are not very well understood for many species. Specifically, little information is available to guide recruitment efforts for endangered species such as the sturgeon. Therefore, we propose a Bayesian hierarchical model for predicting the success of spawning of the shovelnose sturgeon, which uses both biological and behavioural (longitudinal) data. In particular, we use data that were produced from a tracking study that was conducted in the Lower Missouri River. The data that were produced from this study consist of biological variables associated with readiness to spawn along with longitudinal behavioural data collected by using telemetry and archival data storage tags. These high frequency data are complex both biologically and in the underlying behavioural process. To accommodate such complexity we developed a hierarchical linear regression model that uses an eigenvalue predictor, derived from the transition probability matrix of a two-state Markov switching model with generalized auto-regressive conditional heteroscedastic dynamics. Finally, to minimize the computational burden that is associated with estimation of this model, a parallel computing approach is proposed. © Journal compilation 2009 Royal Statistical Society.
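
    For a two-state Markov switching model, the eigenvalue predictor mentioned above derives from the transition probability matrix, whose spectrum has a closed form. The sketch below shows only that spectral computation, not the authors' full GARCH-coupled model.

    ```python
    def two_state_eigenvalues(p01, p10):
        """Eigenvalues of the 2-state transition matrix.

        P = [[1 - p01, p01],
             [p10, 1 - p10]]
        Any stochastic matrix has eigenvalue 1; for the 2-state case the
        second eigenvalue is 1 - p01 - p10, which measures the persistence
        of the hidden behavioural state (near 1 = slow switching).
        """
        return 1.0, 1.0 - p01 - p10

    e1, e2 = two_state_eigenvalues(0.3, 0.2)
    ```

    A scalar summary like the second eigenvalue is a natural regression covariate, since it compresses the whole switching dynamic into one persistence measure.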

  13. MCMC for parameters estimation by bayesian approach

    International Nuclear Information System (INIS)

    Ait Saadi, H.; Ykhlef, F.; Guessoum, A.

    2011-01-01

    This article discusses parameter estimation for dynamic systems by a Bayesian approach associated with Markov chain Monte Carlo (MCMC) methods. MCMC methods are powerful for approximating complex integrals, simulating joint distributions, and estimating marginal posterior distributions or posterior means. The Metropolis-Hastings algorithm has been widely used in Bayesian inference to approximate posterior densities. Calibrating the proposal distribution is one of the main issues of MCMC simulation in order to accelerate the convergence.
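
    A minimal random-walk Metropolis-Hastings sampler of the kind the abstract describes fits in a few lines; the proposal standard deviation `step` is exactly the calibration knob discussed above. This is a generic sketch, not the authors' implementation.

    ```python
    import math
    import random

    def metropolis_hastings(log_post, x0, n_steps, step=1.0, seed=0):
        """Random-walk Metropolis-Hastings for a 1-D target density.

        Proposes x' ~ N(x, step^2) and accepts with probability
        min(1, post(x') / post(x)); the symmetric proposal cancels
        in the acceptance ratio.
        """
        rng = random.Random(seed)
        x, samples = x0, []
        for _ in range(n_steps):
            xp = x + rng.gauss(0.0, step)
            # Compare in log space; the epsilon guards log(0).
            if math.log(rng.random() + 1e-300) < log_post(xp) - log_post(x):
                x = xp
            samples.append(x)
        return samples

    # Sample a standard normal: log p(x) = -x^2 / 2 up to a constant.
    chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000, seed=1)
    ```

    Too small a step gives high acceptance but slow exploration; too large a step rejects almost everything. Both failure modes show up as high autocorrelation in the chain.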

  14. Clustering and Bayesian hierarchical modeling for the definition of informative prior distributions in hydrogeology

    Science.gov (United States)

    Cucchi, K.; Kawa, N.; Hesse, F.; Rubin, Y.

    2017-12-01

    In order to reduce uncertainty in the prediction of subsurface flow and transport processes, practitioners should use all data available. However, classic inverse modeling frameworks typically only make use of information contained in in-situ field measurements to provide estimates of hydrogeological parameters. Such hydrogeological information about an aquifer is difficult and costly to acquire. In this data-scarce context, the transfer of ex-situ information coming from previously investigated sites can be critical for improving predictions by better constraining the estimation procedure. Bayesian inverse modeling provides a coherent framework to represent such ex-situ information by virtue of the prior distribution and combine it with in-situ information from the target site. In this study, we present an innovative data-driven approach for defining such informative priors for hydrogeological parameters at the target site. Our approach consists of two steps, both relying on statistical and machine learning methods. The first step is data selection; it consists in selecting sites similar to the target site. We use clustering methods for selecting similar sites based on observable hydrogeological features. The second step is data assimilation; it consists in assimilating data from the selected similar sites into the informative prior. We use a Bayesian hierarchical model to account for inter-site variability and to allow for the assimilation of multiple types of site-specific data. We present the application and validation of the presented methods on an established database of hydrogeological parameters. Data and methods are implemented in the form of an open-source R-package and therefore facilitate easy use by other practitioners.

  15. Combining information from multiple flood projections in a hierarchical Bayesian framework

    Science.gov (United States)

    Le Vine, Nataliya

    2016-04-01

    This study demonstrates, in the context of flood frequency analysis, the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach explicitly accommodates shared multimodel discrepancy as well as the probabilistic nature of the flood estimates, and treats the available models as a sample from a hypothetical complete (but unobserved) set of models. The methodology is applied to flood estimates from multiple hydrological projections (the Future Flows Hydrology data set) for 135 catchments in the UK. The advantages of the approach are shown to be: (1) to ensure an adequate "baseline" with which to compare future changes; (2) to reduce flood estimate uncertainty; (3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; (4) to diminish the importance of model consistency when model biases are large; and (5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
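
    The role of shared multimodel discrepancy can be sketched with a normal-normal pooling step: each projection's variance is inflated by a common discrepancy term tau^2 before precision weighting. This is an illustrative simplification with made-up numbers, not the Future Flows methodology.

    ```python
    def pool_estimates(means, variances, tau2):
        """Precision-weighted combination of per-model flood estimates.

        Each model's variance is inflated by the shared discrepancy tau2,
        acknowledging that all models may be collectively biased; the
        pooled variance shrinks as more (weak) models are added.
        """
        weights = [1.0 / (v + tau2) for v in variances]
        total = sum(weights)
        mean = sum(w * m for w, m in zip(weights, means)) / total
        return mean, 1.0 / total

    # Two projections of a flood quantile: the pooled estimate sits between
    # them with lower variance than either model alone.
    pooled_mean, pooled_var = pool_estimates([10.0, 12.0], [1.0, 1.0], tau2=0.5)
    ```

    Note how reducing tau2 (the shared discrepancy) directly tightens the pooled variance, which mirrors the study's conclusion that shared model discrepancy dominates the remaining uncertainty.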

  16. Merging information from multi-model flood projections in a hierarchical Bayesian framework

    Science.gov (United States)

    Le Vine, Nataliya

    2016-04-01

    Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions with which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.

  17. Comparison between Fisherian and Bayesian approach to ...

    African Journals Online (AJOL)

    ... of its simplicity and optimality properties is normally used for two group cases. However, Bayesian approach is found to be better than Fisher's approach because of its low misclassification error rate. Keywords: variance-covariance matrices, centroids, prior probability, mahalanobis distance, probability of misclassification ...

  18. A Bayesian Nonparametric Approach to Factor Analysis

    DEFF Research Database (Denmark)

    Piatek, Rémi; Papaspiliopoulos, Omiros

    2018-01-01

    This paper introduces a new approach for the inference of non-Gaussian factor models based on Bayesian nonparametric methods. It relaxes the usual normality assumption on the latent factors, widely used in practice, which is too restrictive in many settings. Our approach, on the contrary, does no...

  19. Constraining mass anomalies in the interior of spherical bodies using Trans-dimensional Bayesian Hierarchical inference.

    Science.gov (United States)

    Izquierdo, K.; Lekic, V.; Montesi, L.

    2017-12-01

    Gravity inversions are especially important for planetary applications since measurements of the variations in gravitational acceleration are often the only constraint available to map out lateral density variations in the interiors of planets and other Solar system objects. Currently, global gravity data is available for the terrestrial planets and the Moon. Although several methods for inverting these data have been developed and applied, the non-uniqueness of global density models that fit the data has not yet been fully characterized. We make use of Bayesian inference and a Reversible Jump Markov Chain Monte Carlo (RJMCMC) approach to develop a Trans-dimensional Hierarchical Bayesian (THB) inversion algorithm that yields a large sample of models that fit a gravity field. From this group of models, we can determine the most likely value of parameters of a global density model and a measure of the non-uniqueness of each parameter when the number of anomalies describing the gravity field is not fixed a priori. We explore the use of a parallel tempering algorithm and fast multipole method to reduce the number of iterations and computing time needed. We applied this method to a synthetic gravity field of the Moon and a long wavelength synthetic model of density anomalies in the Earth's lower mantle. We obtained a good match between the given gravity field and the gravity field produced by the most likely model in each inversion. The number of anomalies of the models showed parsimony of the algorithm, the value of the noise variance of the input data was retrieved, and the non-uniqueness of the models was quantified. Our results show that the ability to constrain the latitude and longitude of density anomalies, which is excellent at shallow locations (information about the overall density distribution of celestial bodies even when there is no other geophysical data available.

  20. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Kuzyk, Zou Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pascal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined the advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling
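    The core source-apportionment computation can be sketched far more simply than MixSIAR's full hierarchy: a grid posterior over the mixing proportion of two sources for a single tracer, with a flat prior and a Gaussian likelihood. All tracer means and the observed mixture value below are invented for the example.

```python
import math

# Two sediment sources with known tracer means, one mixture observation
mu_A, mu_B = 10.0, 2.0      # tracer concentration in source A and source B
sigma = 0.5                 # tracer measurement spread
y = 7.6                     # tracer value measured in the sediment mixture

grid = [i / 100 for i in range(101)]             # candidate proportions of source A

def log_lik(p):
    mu = p * mu_A + (1 - p) * mu_B               # linear mixing of the tracer
    return -((y - mu) ** 2) / (2 * sigma ** 2)

unnorm = [math.exp(log_lik(p)) for p in grid]    # flat prior on p
z = sum(unnorm)
posterior = [w / z for w in unnorm]
p_hat = grid[max(range(len(grid)), key=lambda i: posterior[i])]
```

With these numbers the posterior mode lands at p = 0.7, since 0.7·10 + 0.3·2 = 7.6 matches the observation exactly; the spread of `posterior` is the uncertainty that Bayesian unmixing carries forward, which point-estimate unmixing discards.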

  1. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    Science.gov (United States)

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference, which suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces results similar to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
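    The dimension-reduction step that makes this tractable — projecting a noisy curve observed on a dense grid onto a small basis — can be sketched with an ordinary polynomial basis and least squares, a deliberate stand-in for the paper's richer basis sets and Gaussian-Wishart machinery. The true curve, noise level, and grid size below are invented for the example.

```python
import random

random.seed(0)

# Noisy functional observations on a dense grid
n = 50
t = [i / (n - 1) for i in range(n)]
truth = [1.0 + 2.0 * ti - 3.0 * ti ** 2 for ti in t]
y = [f + random.gauss(0, 0.3) for f in truth]

# Basis: 1, t, t^2 — 50 grid values reduce to 3 coefficients
B = [[1.0, ti, ti ** 2] for ti in t]

def solve(A, b):
    # Gaussian elimination with partial pivoting (3x3 system here)
    m = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, m):
            f = M[r][c] / M[c][c]
            for j in range(c, m + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (M[r][m] - sum(M[r][j] * x[j] for j in range(r + 1, m))) / M[r][r]
    return x

# Normal equations (B'B) c = B'y give the basis-coefficient estimates
BtB = [[sum(B[i][a] * B[i][b] for i in range(n)) for b in range(3)] for a in range(3)]
Bty = [sum(B[i][a] * y[i] for i in range(n)) for a in range(3)]
coef = solve(BtB, Bty)
fit = [sum(coef[j] * B[i][j] for j in range(3)) for i in range(n)]

# Smoothing should pull the fit closer to the truth than the raw data
mse_raw = sum((yi - fi) ** 2 for yi, fi in zip(y, truth)) / n
mse_fit = sum((gi - fi) ** 2 for gi, fi in zip(fit, truth)) / n
```

Inference on the 3 coefficients replaces inference on 50 grid values, which is the same economy the paper exploits, with the Gaussian-Wishart prior then placed on the coefficient model rather than the raw grid.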

  2. Use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio

    Directory of Open Access Journals (Sweden)

    Fidel Ernesto Castro Morales

    2016-03-01

    Full Text Available Abstract Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed, and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, even allowing for the incorporation of variables and a priori information on the parameters used. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.

  3. Bayesian approach and application to operation safety

    International Nuclear Information System (INIS)

    Procaccia, H.; Suhner, M.Ch.

    2003-01-01

    The management of industrial risks requires the development of statistical and probabilistic analyses that use all the available relevant information in order to compensate for insufficient experience feedback, in a domain where accidents and incidents remain too scarce to perform a classical frequentist statistical analysis. The Bayesian decision approach is well adapted to this problem because it integrates both expert judgement and experience feedback. The domain of knowledge is widened, forecasting studies become possible, and decisions on remedial actions are strengthened through risk-cost-benefit optimization analyses. This book presents the foundations of the Bayesian approach and its concrete applications in various industrial domains. After a mathematical presentation of industrial operation safety concepts and of the principles of the Bayesian approach, the book addresses several problems that can be solved with this approach: software reliability, controls linked with equipment warranties, dynamic updating of databases, expert-opinion modelling and weighting, and Bayesian optimization in the domains of maintenance, quality control, testing, and the design of new equipment. A synthesis of the mathematical formulae used in this approach is given in conclusion. (J.S.)
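    The central mechanism — combining an expert prior with scarce feedback data — is the conjugate Beta-Binomial update, sketched below with invented numbers (the book covers much richer variants).

```python
# Expert prior on a component's failure probability: Beta(a, b)
a, b = 2.0, 48.0          # prior mean a/(a+b) = 0.04, i.e. expert expects ~4% failures

# Scarce experience feedback: n demands observed, x failures
n, x = 10, 1

# Conjugate update: posterior is Beta(a + x, b + n - x)
post_a, post_b = a + x, b + n - x
post_mean = post_a / (post_a + post_b)   # 3/60 = 0.05
```

One failure in ten demands would suggest a frequentist estimate of 0.10, but the posterior mean moves only from 0.04 to 0.05: the expert prior, worth 50 pseudo-observations here, dominates until real feedback accumulates, which is exactly how the Bayesian approach compensates for sparse operating experience.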

  4. Bayesian Hierarchical Distributed Lag Models for Summer Ozone Exposure and Cardio-Respiratory Mortality

    OpenAIRE

    Yi Huang; Francesca Dominici; Michelle Bell

    2004-01-01

    In this paper, we develop Bayesian hierarchical distributed lag models for estimating associations between daily variations in summer ozone levels and daily variations in cardiovascular and respiratory (CVDRESP) mortality counts for 19 large U.S. cities included in the National Morbidity, Mortality, and Air Pollution Study (NMMAPS) for the period 1987-1994. At the first stage, we define a semi-parametric distributed lag Poisson regression model to estimate city-specific relative rates of CVDRESP ...
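    In a distributed lag model, the effect of today's exposure is spread over several subsequent days, and the total effect of a sustained increase is the sum of the lag coefficients. The sketch below uses invented lag coefficients purely to show that bookkeeping; it is not the paper's fitted model.

```python
import math

# Hypothetical lag coefficients: log relative rate of CVDRESP mortality
# per 10 ppb ozone increase at lags 0..6 days (invented numbers)
beta = [0.004, 0.003, 0.002, 0.001, 0.0005, 0.0003, 0.0002]

# Total effect of a sustained 10 ppb increase = sum of lag coefficients
total = sum(beta)                          # 0.011 on the log scale
pct_increase = (math.exp(total) - 1) * 100 # percent increase in mortality rate
```

Reporting only the lag-0 coefficient (0.4% here) would understate the cumulative effect (about 1.1%); the hierarchical second stage of the paper then pools these lag-coefficient curves across the 19 cities.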

  5. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the two main quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial quality parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. HBM is applied to provide an overall assessment of the degradation of railway track geometry for the main Portuguese railway line Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures leads to a better model fit
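    The Gibbs sampling strategy mentioned in the highlights can be illustrated on a deliberately reduced hierarchical normal model: segment-group deterioration rates with known variances and a flat prior on the common mean (all data and variance values below are invented; the paper's model with CAR spatial structure is far richer).

```python
import random, statistics

random.seed(42)

# Invented deterioration-rate observations for segments in 3 groups
groups = [[0.42, 0.39, 0.45], [0.55, 0.60, 0.52], [0.30, 0.28, 0.33]]
sigma2 = 0.03 ** 2        # within-group observation variance (assumed known)
tau2 = 0.02 ** 2          # between-group variance (fixed for this sketch)

J = len(groups)
ybar = [statistics.mean(g) for g in groups]
nj = [len(g) for g in groups]

mu = statistics.mean(ybar)
theta = ybar[:]
draws = []
for it in range(2000):
    # Full conditional of each group mean: precision-weighted compromise
    for j in range(J):
        prec = nj[j] / sigma2 + 1 / tau2
        m = (nj[j] * ybar[j] / sigma2 + mu / tau2) / prec
        theta[j] = random.gauss(m, (1 / prec) ** 0.5)
    # Full conditional of the common mean (flat prior)
    mu = random.gauss(statistics.mean(theta), (tau2 / J) ** 0.5)
    draws.append((theta[:], mu))

# Posterior means after burn-in show shrinkage toward the common mean
theta_means = [statistics.mean(d[0][j] for d in draws[500:]) for j in range(J)]
```

The highest-rate group's posterior mean is pulled below its sample mean and the lowest group's is pulled above it — the partial pooling that lets sparsely observed track sections borrow strength from the rest of the line.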

  6. Hierarchical Bayesian Analysis of Biased Beliefs and Distributional Other-Regarding Preferences

    Directory of Open Access Journals (Sweden)

    Jeroen Weesie

    2013-02-01

    Full Text Available This study investigates the relationship between an actor's beliefs about others' other-regarding (social) preferences and her own other-regarding preferences, using an “avant-garde” hierarchical Bayesian method. We estimate two distributional other-regarding preference parameters, α and β, of actors using incentivized choice data in binary Dictator Games. Simultaneously, we estimate the distribution of actors' beliefs about others' α and β, conditional on actors' own α and β, using incentivized belief elicitation. We demonstrate the benefits of the Bayesian method compared to its hierarchical frequentist counterparts. Results show a positive association between an actor's own (α, β) and her beliefs about the average (α, β) in the population. The association between own preferences and the variance in beliefs about others' preferences in the population, however, is curvilinear for α and insignificant for β. These results are partially consistent with the cone effect [1,2], which is described in detail below. Because beliefs and own preferences are assumed to be independent in the Bayesian-Nash equilibrium concept, these results cast doubt on the application of the Bayesian-Nash equilibrium concept to experimental data.

  7. Hierarchical Bayesian models to assess between- and within-batch variability of pathogen contamination in food.

    Science.gov (United States)

    Commeau, Natalie; Cornu, Marie; Albert, Isabelle; Denis, Jean-Baptiste; Parent, Eric

    2012-03-01

    Assessing within-batch and between-batch variability is of major interest for risk assessors and risk managers in the context of microbiological contamination of food. For example, the ratio between the within-batch variability and the between-batch variability has a large impact on the results of a sampling plan. Here, we designed hierarchical Bayesian models to represent such variability. Compatible priors were built mathematically to obtain sound model comparisons. A numeric criterion is proposed to assess the contamination structure by comparing the ability of the models to replicate grouped data at the batch level using a posterior predictive loss approach. Models were applied to two case studies: contamination by Listeria monocytogenes of pork breast used to produce diced bacon, and contamination by the same microorganism of cold smoked salmon at the end of the process. In the first case study, a contamination structure clearly exists and is located at the batch level, that is, between-batch variability is relatively strong, whereas in the second a structure also exists but is less marked. © 2012 Society for Risk Analysis.
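    The within/between-batch decomposition the abstract refers to can be sketched with a classical one-way variance-components calculation (a simple moment estimator, not the paper's Bayesian model; the contamination values are invented and chosen so that between-batch variability dominates).

```python
import statistics

# Invented contamination measurements (e.g. log CFU/g), samples nested in batches
batches = [[10.0, 11.0], [20.0, 21.0], [30.0, 29.0]]
k = len(batches)          # number of batches
n = len(batches[0])       # samples per batch (balanced design)

batch_means = [statistics.mean(b) for b in batches]

# Moment estimates of the two variance components
msw = statistics.mean(statistics.variance(b) for b in batches)  # within-batch
msb = n * statistics.variance(batch_means)                      # between-batch
sigma2_within = msw
sigma2_between = max(0.0, (msb - msw) / n)

# Intraclass correlation: share of total variance located between batches
icc = sigma2_between / (sigma2_between + sigma2_within)
```

Here the ICC is close to 1 — the "contamination structure at the batch level" of the first case study — which is exactly the regime where testing a few samples per batch across many batches beats testing many samples from few batches.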

  8. Modeling when people quit: Bayesian censored geometric models with hierarchical and latent-mixture extensions.

    Science.gov (United States)

    Okada, Kensuke; Vandekerckhove, Joachim; Lee, Michael D

    2018-02-01

    People often interact with environments that can provide only a finite number of items as resources. Eventually a book contains no more chapters, there are no more albums available from a band, and every Pokémon has been caught. When interacting with these sorts of environments, people either actively choose to quit collecting new items, or they are forced to quit when the items are exhausted. Modeling the distribution of how many items people collect before they quit involves untangling these two possibilities. We propose that censored geometric models are a useful basic technique for modeling the quitting distribution, and show how, by implementing these models in a hierarchical and latent-mixture framework through Bayesian methods, they can be extended to capture the additional features of specific situations. We demonstrate this approach by developing and testing a series of models in two case studies involving real-world data. One case study deals with people choosing jokes from a recommender system, and the other deals with people completing items in a personality survey.
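    The basic censored geometric likelihood can be sketched directly: a voluntary quitter after k items contributes (1−θ)^k·θ, while a person who exhausted the items contributes only the survival term (1−θ)^k. The data below are invented, and a grid search stands in for the paper's hierarchical Bayesian estimation.

```python
import math

# Invented observations: (items collected, censored?); censored=True means
# the items ran out before the person chose to quit
observed = [(2, False), (3, False), (1, False), (5, True), (5, True)]

def log_lik(theta):
    # theta = probability of quitting after any given item
    ll = 0.0
    for k, censored in observed:
        if censored:
            ll += k * math.log(1 - theta)                    # survived k items, forced stop
        else:
            ll += k * math.log(1 - theta) + math.log(theta)  # quit voluntarily after k
    return ll

# Maximum-likelihood estimate by grid search; analytically it is
# (#voluntary quits) / (total items) = 3 / 19 for this data
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=log_lik)
```

Treating the censored people as voluntary quitters would inflate the estimate; the survival-only term is what untangles the two quitting possibilities the abstract describes.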

  9. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

    Full Text Available Abstract Background The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location for a given protein when only the amino acid sequence of the protein is known. Although many efforts have been made to predict subcellular location from sequence information only, there is a need for further research to improve the accuracy of prediction. Results A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six diverse datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the accuracy of the prediction for some classes with few sequences in training and is therefore useful for datasets with an imbalanced distribution of classes. Conclusion This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with previously reported approaches in terms of prediction accuracy, as empirical results indicate. The code for the software is available upon request.
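    A single base classifier of the kind HensBC ensembles — a Bayesian classifier built on class-conditional first-order Markov chains — can be sketched on a toy two-letter alphabet (the training sequences and labels are invented; real use would employ the 20 amino acids and many sequences per class).

```python
import math

alphabet = "ab"
# Invented training sequences: class A alternates residues, class B is blocky
train = {
    "A": ["abababab", "babababa"],
    "B": ["aaaabbbb", "bbbbaaaa"],
}

def fit(seqs):
    # Transition counts with Laplace smoothing, normalized to probabilities
    counts = {c: {d: 1.0 for d in alphabet} for c in alphabet}
    for s in seqs:
        for x, y in zip(s, s[1:]):
            counts[x][y] += 1
    return {c: {d: counts[c][d] / sum(counts[c].values()) for d in alphabet}
            for c in alphabet}

models = {label: fit(seqs) for label, seqs in train.items()}

def predict(seq):
    # Bayes rule with equal class priors: pick the class whose Markov
    # chain gives the sequence the highest log-likelihood
    def score(label):
        trans = models[label]
        return sum(math.log(trans[x][y]) for x, y in zip(seq, seq[1:]))
    return max(models, key=score)
```

An alternating query sequence is assigned to class A and a blocky one to class B; HensBC's contribution is the recursive hierarchy that combines many such classifiers, which helps precisely the rare classes the abstract mentions.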

  10. An economic growth model based on financial credits distribution to the government economy priority sectors of each regency in Indonesia using hierarchical Bayesian method

    Science.gov (United States)

    Yasmirullah, Septia Devi Prihastuti; Iriawan, Nur; Sipayung, Feronika Rosalinda

    2017-11-01

    The success of regional economic development can be measured by economic growth. Since Act No. 32 of 2004 was implemented, economic imbalance among the regencies in Indonesia has been increasing. This condition runs contrary to the government's goal of building social welfare through economic development in each region. This research aims to examine economic growth through the distribution of bank credits to each of Indonesia's regencies. The data analyzed in this research are hierarchically structured and follow a normal distribution at the first level. Two modeling approaches are employed: a global one-level Bayesian approach and a two-level hierarchical Bayesian approach. The results show that the hierarchical Bayesian model yields better estimates than the global one-level Bayesian model. This demonstrates that the different economic growth in each province is significantly influenced by variations in micro-level characteristics within each province, which are in turn significantly affected by city and province characteristics at the second level.
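    The difference between the two approaches can be sketched with the standard partial-pooling formula: a global one-level model uses one grand mean for every district, while a two-level hierarchical model shrinks each province's mean toward the grand mean by an amount governed by the variance components. The growth figures and variances below are invented.

```python
import statistics

# Invented credit-growth observations (%) for districts, grouped by province
provinces = {"P1": [5.0, 6.0, 7.0], "P2": [12.0, 11.0], "P3": [2.0, 3.0, 2.5]}
sigma2 = 1.0     # within-province sampling variance (assumed)
tau2 = 4.0       # between-province variance (assumed)

# Global one-level view: one pooled estimate for everyone
grand = statistics.mean(x for xs in provinces.values() for x in xs)

# Two-level hierarchical view: precision-weighted shrinkage per province
shrunk = {}
for name, xs in provinces.items():
    n = len(xs)
    w = (n / sigma2) / (n / sigma2 + 1 / tau2)   # weight on the province's own mean
    shrunk[name] = w * statistics.mean(xs) + (1 - w) * grand
```

Each shrunken estimate lies between the province's own mean and the grand mean, with provinces that have fewer districts pulled harder toward the pool — which is why the hierarchical model can reflect province-level variation that a single global estimate averages away.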

  11. Optimizing an estuarine water quality monitoring program through an entropy-based hierarchical spatiotemporal Bayesian framework

    Science.gov (United States)

    Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.

    2013-10-01

    The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
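    The entropy-driven selection idea can be sketched with a greedy search that picks the subset of stations maximizing the joint Gaussian entropy, which for a multivariate normal grows with the log-determinant of the covariance submatrix. The 4-station covariance matrix below is invented, with stations 0 and 1 made nearly redundant.

```python
import math

# Invented covariance of a water-quality variable across 4 candidate stations;
# stations 0 and 1 are nearly redundant (correlation 0.9)
cov = [[1.0, 0.9, 0.1, 0.1],
       [0.9, 1.0, 0.1, 0.1],
       [0.1, 0.1, 1.0, 0.2],
       [0.1, 0.1, 0.2, 1.0]]

def det(m):
    # Laplace expansion; fine for the tiny matrices used here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def entropy(subset):
    # Differential entropy of a k-variate normal: k/2 log(2*pi*e) + 1/2 log|Sigma|
    sub = [[cov[i][j] for j in subset] for i in subset]
    k = len(subset)
    return 0.5 * k * math.log(2 * math.pi * math.e) + 0.5 * math.log(det(sub))

# Greedily retain the 2 stations that maximize joint entropy
selected = []
while len(selected) < 2:
    best, best_h = None, -math.inf
    for s in range(4):
        if s in selected:
            continue
        h = entropy(selected + [s])
        if h > best_h:
            best, best_h = s, h
    selected.append(best)
```

The search keeps station 0 but skips its near-duplicate, station 1, in favor of a less correlated site — information richness rather than data richness, in the abstract's terms (the full design also weights the standard-violation entropies).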

  12. Bayesian methods for data analysis

    CERN Document Server

    Carlin, Bradley P.

    2009-01-01

    Approaches for statistical inference: Introduction; Motivating Vignettes; Defining the Approaches; The Bayes-Frequentist Controversy; Some Basic Bayesian Models. The Bayes approach: Introduction; Prior Distributions; Bayesian Inference; Hierarchical Modeling; Model Assessment; Nonparametric Methods. Bayesian computation: Introduction; Asymptotic Methods; Noniterative Monte Carlo Methods; Markov Chain Monte Carlo Methods. Model criticism and selection: Bayesian Modeling; Bayesian Robustness; Model Assessment; Bayes Factors via Marginal Density Estimation; Bayes Factors

  13. Risk Assessment for Mobile Systems Through a Multilayered Hierarchical Bayesian Network.

    Science.gov (United States)

    Li, Shancang; Tryfonas, Theo; Russell, Gordon; Andriotis, Panagiotis

    2016-08-01

    Mobile systems are facing a number of application vulnerabilities that can be combined and exploited to penetrate systems with devastating impact. When assessing the overall security of a mobile system, it is important to assess the security risks posed by each mobile application (app), thus gaining a stronger understanding of any vulnerabilities present. This paper aims at developing a three-layer framework that assesses the potential risks which apps introduce within Android mobile systems. A Bayesian risk graphical model is proposed to evaluate risk propagation in a layered risk architecture. By integrating static analysis, dynamic analysis, and behavior analysis in a hierarchical framework, the risks and their propagation through each layer are well modeled by the Bayesian risk graph, which can quantitatively analyze the risks faced by both apps and mobile systems. The proposed hierarchical Bayesian risk graph model offers a novel way to investigate the security risks in mobile environments and enables users and administrators to evaluate the potential risks. This strategy helps strengthen both app security and the security of the entire system.
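    One simple way risk can propagate up such a layered graph is a noisy-OR combination: each layer's finding independently suffices to compromise an app, and each risky app independently threatens the system. The sketch below uses that rule with invented per-layer probabilities; it is an illustration of layered propagation, not the paper's specific Bayesian risk graph.

```python
# Invented per-app compromise probabilities from the three analysis layers
# (static, dynamic, behavioral)
apps = {
    "browser":    [0.05, 0.02, 0.03],
    "flashlight": [0.10, 0.15, 0.05],
}

def noisy_or(ps):
    # P(at least one independent cause fires) = 1 - prod(1 - p_i)
    prod = 1.0
    for p in ps:
        prod *= 1.0 - p
    return 1.0 - prod

# Layer risks combine into an app risk; app risks combine into a system risk
app_risk = {name: noisy_or(layers) for name, layers in apps.items()}
system_risk = noisy_or(app_risk.values())
```

Because the rule is monotone, the system risk always exceeds the risk of any single app, which captures why a set of individually modest vulnerabilities can still make the whole device risky.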

  14. Topics in Computational Bayesian Statistics With Applications to Hierarchical Models in Astronomy and Sociology

    Science.gov (United States)

    Sahai, Swupnil

    This thesis includes three parts. The overarching theme is how to analyze structured hierarchical data, with applications to astronomy and sociology. The first part discusses how expectation propagation can be used to parallelize the computation when fitting big hierarchical Bayesian models. This methodology is then used to fit a novel, nonlinear mixture model to ultraviolet radiation from various regions of the observable universe. The second part discusses how the Stan probabilistic programming language can be used to numerically integrate terms in a hierarchical Bayesian model. This technique is demonstrated on supernovae data to significantly speed up convergence to the posterior distribution compared to a previous study that used a Gibbs-type sampler. The third part builds a formal latent kernel representation for aggregate relational data as a way to more robustly estimate the mixing characteristics of agents in a network. In particular, the framework is applied to sociology surveys to estimate, as a function of ego age, the age and sex composition of the personal networks of individuals in the United States.

  15. Particle identification in ALICE: a Bayesian approach

    CERN Document Server

    Adam, Jaroslav; Aggarwal, Madan Mohan; Aglieri Rinella, Gianluca; Agnello, Michelangelo; Agrawal, Neelima; Ahammed, Zubayer; Ahmad, Shakeel; Ahn, Sang Un; Aiola, Salvatore; Akindinov, Alexander; Alam, Sk Noor; Silva De Albuquerque, Danilo; Aleksandrov, Dmitry; Alessandro, Bruno; Alexandre, Didier; Alfaro Molina, Jose Ruben; Alici, Andrea; Alkin, Anton; Millan Almaraz, Jesus Roberto; Alme, Johan; Alt, Torsten; Altinpinar, Sedat; Altsybeev, Igor; Alves Garcia Prado, Caio; Andrei, Cristian; Andronic, Anton; Anguelov, Venelin; Anticic, Tome; Antinori, Federico; Antonioli, Pietro; Aphecetche, Laurent Bernard; Appelshaeuser, Harald; Arcelli, Silvia; Arnaldi, Roberta; Arnold, Oliver Werner; Arsene, Ionut Cristian; Arslandok, Mesut; Audurier, Benjamin; Augustinus, Andre; Averbeck, Ralf Peter; Azmi, Mohd Danish; Badala, Angela; Baek, Yong Wook; Bagnasco, Stefano; Bailhache, Raphaelle Marie; Bala, Renu; Balasubramanian, Supraja; Baldisseri, Alberto; Baral, Rama Chandra; Barbano, Anastasia Maria; Barbera, Roberto; Barile, Francesco; Barnafoldi, Gergely Gabor; Barnby, Lee Stuart; Ramillien Barret, Valerie; Bartalini, Paolo; Barth, Klaus; Bartke, Jerzy Gustaw; Bartsch, Esther; Basile, Maurizio; Bastid, Nicole; Basu, Sumit; Bathen, Bastian; Batigne, Guillaume; Batista Camejo, Arianna; Batyunya, Boris; Batzing, Paul Christoph; Bearden, Ian Gardner; Beck, Hans; Bedda, Cristina; Behera, Nirbhay Kumar; Belikov, Iouri; Bellini, Francesca; Bello Martinez, Hector; Bellwied, Rene; Belmont Iii, Ronald John; Belmont Moreno, Ernesto; Belyaev, Vladimir; Benacek, Pavel; Bencedi, Gyula; Beole, Stefania; Berceanu, Ionela; Bercuci, Alexandru; Berdnikov, Yaroslav; Berenyi, Daniel; Bertens, Redmer Alexander; Berzano, Dario; Betev, Latchezar; Bhasin, Anju; Bhat, Inayat Rasool; Bhati, Ashok Kumar; Bhattacharjee, Buddhadeb; Bhom, Jihyun; Bianchi, Livio; Bianchi, Nicola; Bianchin, Chiara; Bielcik, Jaroslav; Bielcikova, Jana; Bilandzic, Ante; Biro, Gabor; Biswas, Rathijit; Biswas, Saikat; 
Bjelogrlic, Sandro; Blair, Justin Thomas; Blau, Dmitry; Blume, Christoph; Bock, Friederike; Bogdanov, Alexey; Boggild, Hans; Boldizsar, Laszlo; Bombara, Marek; Book, Julian Heinz; Borel, Herve; Borissov, Alexander; Borri, Marcello; Bossu, Francesco; Botta, Elena; Bourjau, Christian; Braun-Munzinger, Peter; Bregant, Marco; Breitner, Timo Gunther; Broker, Theo Alexander; Browning, Tyler Allen; Broz, Michal; Brucken, Erik Jens; Bruna, Elena; Bruno, Giuseppe Eugenio; Budnikov, Dmitry; Buesching, Henner; Bufalino, Stefania; Buncic, Predrag; Busch, Oliver; Buthelezi, Edith Zinhle; Bashir Butt, Jamila; Buxton, Jesse Thomas; Cabala, Jan; Caffarri, Davide; Cai, Xu; Caines, Helen Louise; Calero Diaz, Liliet; Caliva, Alberto; Calvo Villar, Ernesto; Camerini, Paolo; Carena, Francesco; Carena, Wisla; Carnesecchi, Francesca; Castillo Castellanos, Javier Ernesto; Castro, Andrew John; Casula, Ester Anna Rita; Ceballos Sanchez, Cesar; Cepila, Jan; Cerello, Piergiorgio; Cerkala, Jakub; Chang, Beomsu; Chapeland, Sylvain; Chartier, Marielle; Charvet, Jean-Luc Fernand; Chattopadhyay, Subhasis; Chattopadhyay, Sukalyan; Chauvin, Alex; Chelnokov, Volodymyr; Cherney, Michael Gerard; Cheshkov, Cvetan Valeriev; Cheynis, Brigitte; Chibante Barroso, Vasco Miguel; Dobrigkeit Chinellato, David; Cho, Soyeon; Chochula, Peter; Choi, Kyungeon; Chojnacki, Marek; Choudhury, Subikash; Christakoglou, Panagiotis; Christensen, Christian Holm; Christiansen, Peter; Chujo, Tatsuya; Chung, Suh-Urk; Cicalo, Corrado; Cifarelli, Luisa; Cindolo, Federico; Cleymans, Jean Willy Andre; Colamaria, Fabio Filippo; Colella, Domenico; Collu, Alberto; Colocci, Manuel; Conesa Balbastre, Gustavo; Conesa Del Valle, Zaida; Connors, Megan Elizabeth; Contreras Nuno, Jesus Guillermo; Cormier, Thomas Michael; Corrales Morales, Yasser; Cortes Maldonado, Ismael; Cortese, Pietro; Cosentino, Mauro Rogerio; Costa, Filippo; Crochet, Philippe; Cruz Albino, Rigoberto; Cuautle Flores, Eleazar; Cunqueiro Mendez, Leticia; Dahms, Torsten; 
Dainese, Andrea; Danisch, Meike Charlotte; Danu, Andrea; Das, Debasish; Das, Indranil; Das, Supriya; Dash, Ajay Kumar; Dash, Sadhana; De, Sudipan; De Caro, Annalisa; De Cataldo, Giacinto; De Conti, Camila; De Cuveland, Jan; De Falco, Alessandro; De Gruttola, Daniele; De Marco, Nora; De Pasquale, Salvatore; Deisting, Alexander; Deloff, Andrzej; Denes, Ervin Sandor; Deplano, Caterina; Dhankher, Preeti; Di Bari, Domenico; Di Mauro, Antonio; Di Nezza, Pasquale; Diaz Corchero, Miguel Angel; Dietel, Thomas; Dillenseger, Pascal; Divia, Roberto; Djuvsland, Oeystein; Dobrin, Alexandru Florin; Domenicis Gimenez, Diogenes; Donigus, Benjamin; Dordic, Olja; Drozhzhova, Tatiana; Dubey, Anand Kumar; Dubla, Andrea; Ducroux, Laurent; Dupieux, Pascal; Ehlers Iii, Raymond James; Elia, Domenico; Endress, Eric; Engel, Heiko; Epple, Eliane; Erazmus, Barbara Ewa; Erdemir, Irem; Erhardt, Filip; Espagnon, Bruno; Estienne, Magali Danielle; Esumi, Shinichi; Eum, Jongsik; Evans, David; Evdokimov, Sergey; Eyyubova, Gyulnara; Fabbietti, Laura; Fabris, Daniela; Faivre, Julien; Fantoni, Alessandra; Fasel, Markus; Feldkamp, Linus; Feliciello, Alessandro; Feofilov, Grigorii; Ferencei, Jozef; Fernandez Tellez, Arturo; Gonzalez Ferreiro, Elena; Ferretti, Alessandro; Festanti, Andrea; Feuillard, Victor Jose Gaston; Figiel, Jan; Araujo Silva Figueredo, Marcel; Filchagin, Sergey; Finogeev, Dmitry; Fionda, Fiorella; Fiore, Enrichetta Maria; Fleck, Martin Gabriel; Floris, Michele; Foertsch, Siegfried Valentin; Foka, Panagiota; Fokin, Sergey; Fragiacomo, Enrico; Francescon, Andrea; Frankenfeld, Ulrich Michael; Fronze, Gabriele Gaetano; Fuchs, Ulrich; Furget, Christophe; Furs, Artur; Fusco Girard, Mario; Gaardhoeje, Jens Joergen; Gagliardi, Martino; Gago Medina, Alberto Martin; Gallio, Mauro; Gangadharan, Dhevan Raja; Ganoti, Paraskevi; Gao, Chaosong; Garabatos Cuadrado, Jose; Garcia-Solis, Edmundo Javier; Gargiulo, Corrado; Gasik, Piotr Jan; Gauger, Erin Frances; Germain, Marie; Gheata, Andrei George; 
Gheata, Mihaela; Ghosh, Premomoy; Ghosh, Sanjay Kumar; Gianotti, Paola; Giubellino, Paolo; Giubilato, Piero; Gladysz-Dziadus, Ewa; Glassel, Peter; Gomez Coral, Diego Mauricio; Gomez Ramirez, Andres; Sanchez Gonzalez, Andres; Gonzalez, Victor; Gonzalez Zamora, Pedro; Gorbunov, Sergey; Gorlich, Lidia Maria; Gotovac, Sven; Grabski, Varlen; Grachov, Oleg Anatolievich; Graczykowski, Lukasz Kamil; Graham, Katie Leanne; Grelli, Alessandro; Grigoras, Alina Gabriela; Grigoras, Costin; Grigoryev, Vladislav; Grigoryan, Ara; Grigoryan, Smbat; Grynyov, Borys; Grion, Nevio; Gronefeld, Julius Maximilian; Grosse-Oetringhaus, Jan Fiete; Grosso, Raffaele; Guber, Fedor; Guernane, Rachid; Guerzoni, Barbara; Gulbrandsen, Kristjan Herlache; Gunji, Taku; Gupta, Anik; Gupta, Ramni; Haake, Rudiger; Haaland, Oystein Senneset; Hadjidakis, Cynthia Marie; Haiduc, Maria; Hamagaki, Hideki; Hamar, Gergoe; Hamon, Julien Charles; Harris, John William; Harton, Austin Vincent; Hatzifotiadou, Despina; Hayashi, Shinichi; Heckel, Stefan Thomas; Hellbar, Ernst; Helstrup, Haavard; Herghelegiu, Andrei Ionut; Herrera Corral, Gerardo Antonio; Hess, Benjamin Andreas; Hetland, Kristin Fanebust; Hillemanns, Hartmut; Hippolyte, Boris; Horak, David; Hosokawa, Ritsuya; Hristov, Peter Zahariev; Humanic, Thomas; Hussain, Nur; Hussain, Tahir; Hutter, Dirk; Hwang, Dae Sung; Ilkaev, Radiy; Inaba, Motoi; Incani, Elisa; Ippolitov, Mikhail; Irfan, Muhammad; Ivanov, Marian; Ivanov, Vladimir; Izucheev, Vladimir; Jacazio, Nicolo; Jacobs, Peter Martin; Jadhav, Manoj Bhanudas; Jadlovska, Slavka; Jadlovsky, Jan; Jahnke, Cristiane; Jakubowska, Monika Joanna; Jang, Haeng Jin; Janik, Malgorzata Anna; Pahula Hewage, Sandun; Jena, Chitrasen; Jena, Satyajit; Jimenez Bustamante, Raul Tonatiuh; Jones, Peter Graham; Jusko, Anton; Kalinak, Peter; Kalweit, Alexander Philipp; Kamin, Jason Adrian; Kang, Ju Hwan; Kaplin, Vladimir; Kar, Somnath; Karasu Uysal, Ayben; Karavichev, Oleg; Karavicheva, Tatiana; Karayan, Lilit; Karpechev, Evgeny; 
Kebschull, Udo Wolfgang; Keidel, Ralf; Keijdener, Darius Laurens; Keil, Markus; Khan, Mohammed Mohisin; Khan, Palash; Khan, Shuaib Ahmad; Khanzadeev, Alexei; Kharlov, Yury; Kileng, Bjarte; Kim, Do Won; Kim, Dong Jo; Kim, Daehyeok; Kim, Hyeonjoong; Kim, Jinsook; Kim, Minwoo; Kim, Se Yong; Kim, Taesoo; Kirsch, Stefan; Kisel, Ivan; Kiselev, Sergey; Kisiel, Adam Ryszard; Kiss, Gabor; Klay, Jennifer Lynn; Klein, Carsten; Klein, Jochen; Klein-Boesing, Christian; Klewin, Sebastian; Kluge, Alexander; Knichel, Michael Linus; Knospe, Anders Garritt; Kobdaj, Chinorat; Kofarago, Monika; Kollegger, Thorsten; Kolozhvari, Anatoly; Kondratev, Valerii; Kondratyeva, Natalia; Kondratyuk, Evgeny; Konevskikh, Artem; Kopcik, Michal; Kostarakis, Panagiotis; Kour, Mandeep; Kouzinopoulos, Charalampos; Kovalenko, Oleksandr; Kovalenko, Vladimir; Kowalski, Marek; Koyithatta Meethaleveedu, Greeshma; Kralik, Ivan; Kravcakova, Adela; Krivda, Marian; Krizek, Filip; Kryshen, Evgeny; Krzewicki, Mikolaj; Kubera, Andrew Michael; Kucera, Vit; Kuhn, Christian Claude; Kuijer, Paulus Gerardus; Kumar, Ajay; Kumar, Jitendra; Kumar, Lokesh; Kumar, Shyam; Kurashvili, Podist; Kurepin, Alexander; Kurepin, Alexey; Kuryakin, Alexey; Kweon, Min Jung; Kwon, Youngil; La Pointe, Sarah Louise; La Rocca, Paola; Ladron De Guevara, Pedro; Lagana Fernandes, Caio; Lakomov, Igor; Langoy, Rune; Lara Martinez, Camilo Ernesto; Lardeux, Antoine Xavier; Lattuca, Alessandra; Laudi, Elisa; Lea, Ramona; Leardini, Lucia; Lee, Graham Richard; Lee, Seongjoo; Lehas, Fatiha; Lemmon, Roy Crawford; Lenti, Vito; Leogrande, Emilia; Leon Monzon, Ildefonso; Leon Vargas, Hermes; Leoncino, Marco; Levai, Peter; Li, Shuang; Li, Xiaomei; Lien, Jorgen Andre; Lietava, Roman; Lindal, Svein; Lindenstruth, Volker; Lippmann, Christian; Lisa, Michael Annan; Ljunggren, Hans Martin; Lodato, Davide Francesco; Lonne, Per-Ivar; Loginov, Vitaly; Loizides, Constantinos; Lopez, Xavier Bernard; Lopez Torres, Ernesto; Lowe, Andrew John; Luettig, Philipp Johannes; 
Lunardon, Marcello; Luparello, Grazia; Lutz, Tyler Harrison; Maevskaya, Alla; Mager, Magnus; Mahajan, Sanjay; Mahmood, Sohail Musa; Maire, Antonin; Majka, Richard Daniel; Malaev, Mikhail; Maldonado Cervantes, Ivonne Alicia; Malinina, Liudmila; Mal'Kevich, Dmitry; Malzacher, Peter; Mamonov, Alexander; Manko, Vladislav; Manso, Franck; Manzari, Vito; Marchisone, Massimiliano; Mares, Jiri; Margagliotti, Giacomo Vito; Margotti, Anselmo; Margutti, Jacopo; Marin, Ana Maria; Markert, Christina; Marquard, Marco; Martin, Nicole Alice; Martin Blanco, Javier; Martinengo, Paolo; Martinez Hernandez, Mario Ivan; Martinez-Garcia, Gines; Martinez Pedreira, Miguel; Mas, Alexis Jean-Michel; Masciocchi, Silvia; Masera, Massimo; Masoni, Alberto; Mastroserio, Annalisa; Matyja, Adam Tomasz; Mayer, Christoph; Mazer, Joel Anthony; Mazzoni, Alessandra Maria; Mcdonald, Daniel; Meddi, Franco; Melikyan, Yuri; Menchaca-Rocha, Arturo Alejandro; Meninno, Elisa; Mercado-Perez, Jorge; Meres, Michal; Miake, Yasuo; Mieskolainen, Matti Mikael; Mikhaylov, Konstantin; Milano, Leonardo; Milosevic, Jovan; Mischke, Andre; Mishra, Aditya Nath; Miskowiec, Dariusz Czeslaw; Mitra, Jubin; Mitu, Ciprian Mihai; Mohammadi, Naghmeh; Mohanty, Bedangadas; Molnar, Levente; Montano Zetina, Luis Manuel; Montes Prado, Esther; Moreira De Godoy, Denise Aparecida; Perez Moreno, Luis Alberto; Moretto, Sandra; Morreale, Astrid; Morsch, Andreas; Muccifora, Valeria; Mudnic, Eugen; Muhlheim, Daniel Michael; Muhuri, Sanjib; Mukherjee, Maitreyee; Mulligan, James Declan; Gameiro Munhoz, Marcelo; Munzer, Robert Helmut; Murakami, Hikari; Murray, Sean; Musa, Luciano; Musinsky, Jan; Naik, Bharati; Nair, Rahul; Nandi, Basanta Kumar; Nania, Rosario; Nappi, Eugenio; Naru, Muhammad Umair; Ferreira Natal Da Luz, Pedro Hugo; Nattrass, Christine; Rosado Navarro, Sebastian; Nayak, Kishora; Nayak, Ranjit; Nayak, Tapan Kumar; Nazarenko, Sergey; Nedosekin, Alexander; Nellen, Lukas; Ng, Fabian; Nicassio, Maria; Niculescu, Mihai; Niedziela, Jeremi; 
Nielsen, Borge Svane; Nikolaev, Sergey; Nikulin, Sergey; Nikulin, Vladimir; Noferini, Francesco; Nomokonov, Petr; Nooren, Gerardus; Cabanillas Noris, Juan Carlos; Norman, Jaime; Nyanin, Alexander; Nystrand, Joakim Ingemar; Oeschler, Helmut Oskar; Oh, Saehanseul; Oh, Sun Kun; Ohlson, Alice Elisabeth; Okatan, Ali; Okubo, Tsubasa; Olah, Laszlo; Oleniacz, Janusz; Oliveira Da Silva, Antonio Carlos; Oliver, Michael Henry; Onderwaater, Jacobus; Oppedisano, Chiara; Orava, Risto; Oravec, Matej; Ortiz Velasquez, Antonio; Oskarsson, Anders Nils Erik; Otwinowski, Jacek Tomasz; Oyama, Ken; Ozdemir, Mahmut; Pachmayer, Yvonne Chiara; Pagano, Davide; Pagano, Paola; Paic, Guy; Pal, Susanta Kumar; Pan, Jinjin; Pandey, Ashutosh Kumar; Papikyan, Vardanush; Pappalardo, Giuseppe; Pareek, Pooja; Park, Woojin; Parmar, Sonia; Passfeld, Annika; Paticchio, Vincenzo; Patra, Rajendra Nath; Paul, Biswarup; Pei, Hua; Peitzmann, Thomas; Pereira Da Costa, Hugo Denis Antonio; Peresunko, Dmitry Yurevich; Perez Lara, Carlos Eugenio; Perez Lezama, Edgar; Peskov, Vladimir; Pestov, Yury; Petracek, Vojtech; Petrov, Viacheslav; Petrovici, Mihai; Petta, Catia; Piano, Stefano; Pikna, Miroslav; Pillot, Philippe; Ozelin De Lima Pimentel, Lais; Pinazza, Ombretta; Pinsky, Lawrence; Piyarathna, Danthasinghe; Ploskon, Mateusz Andrzej; Planinic, Mirko; Pluta, Jan Marian; Pochybova, Sona; Podesta Lerma, Pedro Luis Manuel; Poghosyan, Martin; Polishchuk, Boris; Poljak, Nikola; Poonsawat, Wanchaloem; Pop, Amalia; Porteboeuf, Sarah Julie; Porter, R Jefferson; Pospisil, Jan; Prasad, Sidharth Kumar; Preghenella, Roberto; Prino, Francesco; Pruneau, Claude Andre; Pshenichnov, Igor; Puccio, Maximiliano; Puddu, Giovanna; Pujahari, Prabhat Ranjan; Punin, Valery; Putschke, Jorn Henning; Qvigstad, Henrik; Rachevski, Alexandre; Raha, Sibaji; Rajput, Sonia; Rak, Jan; Rakotozafindrabe, Andry Malala; Ramello, Luciano; Rami, Fouad; Raniwala, Rashmi; Raniwala, Sudhir; Rasanen, Sami Sakari; Rascanu, Bogdan Theodor; Rathee, Deepika; 
Read, Kenneth Francis; Redlich, Krzysztof; Reed, Rosi Jan; Rehman, Attiq Ur; Reichelt, Patrick Simon; Reidt, Felix; Ren, Xiaowen; Renfordt, Rainer Arno Ernst; Reolon, Anna Rita; Reshetin, Andrey; Reygers, Klaus Johannes; Riabov, Viktor; Ricci, Renato Angelo; Richert, Tuva Ora Herenui; Richter, Matthias Rudolph; Riedler, Petra; Riegler, Werner; Riggi, Francesco; Ristea, Catalin-Lucian; Rocco, Elena; Rodriguez Cahuantzi, Mario; Rodriguez Manso, Alis; Roeed, Ketil; Rogochaya, Elena; Rohr, David Michael; Roehrich, Dieter; Ronchetti, Federico; Ronflette, Lucile; Rosnet, Philippe; Rossi, Andrea; Roukoutakis, Filimon; Roy, Ankhi; Roy, Christelle Sophie; Roy, Pradip Kumar; Rubio Montero, Antonio Juan; Rui, Rinaldo; Russo, Riccardo; Ryabinkin, Evgeny; Ryabov, Yury; Rybicki, Andrzej; Saarinen, Sampo; Sadhu, Samrangy; Sadovskiy, Sergey; Safarik, Karel; Sahlmuller, Baldo; Sahoo, Pragati; Sahoo, Raghunath; Sahoo, Sarita; Sahu, Pradip Kumar; Saini, Jogender; Sakai, Shingo; Saleh, Mohammad Ahmad; Salzwedel, Jai Samuel Nielsen; Sambyal, Sanjeev Singh; Samsonov, Vladimir; Sandor, Ladislav; Sandoval, Andres; Sano, Masato; Sarkar, Debojit; Sarkar, Nachiketa; Sarma, Pranjal; Scapparone, Eugenio; Scarlassara, Fernando; Schiaua, Claudiu Cornel; Schicker, Rainer Martin; Schmidt, Christian Joachim; Schmidt, Hans Rudolf; Schuchmann, Simone; Schukraft, Jurgen; Schulc, Martin; Schutz, Yves Roland; Schwarz, Kilian Eberhard; Schweda, Kai Oliver; Scioli, Gilda; Scomparin, Enrico; Scott, Rebecca Michelle; Sefcik, Michal; Seger, Janet Elizabeth; Sekiguchi, Yuko; Sekihata, Daiki; Selyuzhenkov, Ilya; Senosi, Kgotlaesele; Senyukov, Serhiy; Serradilla Rodriguez, Eulogio; Sevcenco, Adrian; Shabanov, Arseniy; Shabetai, Alexandre; Shadura, Oksana; Shahoyan, Ruben; Shahzad, Muhammed Ikram; Shangaraev, Artem; Sharma, Ankita; Sharma, Mona; Sharma, Monika; Sharma, Natasha; Sheikh, Ashik Ikbal; Shigaki, Kenta; Shou, Qiye; Shtejer Diaz, Katherin; Sibiryak, Yury; Siddhanta, Sabyasachi; Sielewicz, Krzysztof 
Marek; Siemiarczuk, Teodor; Silvermyr, David Olle Rickard; Silvestre, Catherine Micaela; Simatovic, Goran; Simonetti, Giuseppe; Singaraju, Rama Narayana; Singh, Ranbir; Singha, Subhash; Singhal, Vikas; Sinha, Bikash; Sarkar - Sinha, Tinku; Sitar, Branislav; Sitta, Mario; Skaali, Bernhard; Slupecki, Maciej; Smirnov, Nikolai; Snellings, Raimond; Snellman, Tomas Wilhelm; Song, Jihye; Song, Myunggeun; Song, Zixuan; Soramel, Francesca; Sorensen, Soren Pontoppidan; Derradi De Souza, Rafael; Sozzi, Federica; Spacek, Michal; Spiriti, Eleuterio; Sputowska, Iwona Anna; Spyropoulou-Stassinaki, Martha; Stachel, Johanna; Stan, Ionel; Stankus, Paul; Stenlund, Evert Anders; Steyn, Gideon Francois; Stiller, Johannes Hendrik; Stocco, Diego; Strmen, Peter; Alarcon Do Passo Suaide, Alexandre; Sugitate, Toru; Suire, Christophe Pierre; Suleymanov, Mais Kazim Oglu; Suljic, Miljenko; Sultanov, Rishat; Sumbera, Michal; Sumowidagdo, Suharyo; Szabo, Alexander; Szanto De Toledo, Alejandro; Szarka, Imrich; Szczepankiewicz, Adam; Szymanski, Maciej Pawel; Tabassam, Uzma; Takahashi, Jun; Tambave, Ganesh Jagannath; Tanaka, Naoto; Tarhini, Mohamad; Tariq, Mohammad; Tarzila, Madalina-Gabriela; Tauro, Arturo; Tejeda Munoz, Guillermo; Telesca, Adriana; Terasaki, Kohei; Terrevoli, Cristina; Teyssier, Boris; Thaeder, Jochen Mathias; Thakur, Dhananjaya; Thomas, Deepa; Tieulent, Raphael Noel; Timmins, Anthony Robert; Toia, Alberica; Trogolo, Stefano; Trombetta, Giuseppe; Trubnikov, Victor; Trzaska, Wladyslaw Henryk; Tsuji, Tomoya; Tumkin, Alexandr; Turrisi, Rosario; Tveter, Trine Spedstad; Ullaland, Kjetil; Uras, Antonio; Usai, Gianluca; Utrobicic, Antonija; Vala, Martin; Valencia Palomo, Lizardo; Vallero, Sara; Van Der Maarel, Jasper; Van Hoorne, Jacobus Willem; Van Leeuwen, Marco; Vanat, Tomas; Vande Vyvre, Pierre; Varga, Dezso; Diozcora Vargas Trevino, Aurora; Vargyas, Marton; Varma, Raghava; Vasileiou, Maria; Vasiliev, Andrey; Vauthier, Astrid; Vechernin, Vladimir; Veen, Annelies Marianne; Veldhoen, 
Misha; Velure, Arild; Vercellin, Ermanno; Vergara Limon, Sergio; Vernet, Renaud; Verweij, Marta; Vickovic, Linda; Viesti, Giuseppe; Viinikainen, Jussi Samuli; Vilakazi, Zabulon; Villalobos Baillie, Orlando; Villatoro Tello, Abraham; Vinogradov, Alexander; Vinogradov, Leonid; Vinogradov, Yury; Virgili, Tiziano; Vislavicius, Vytautas; Viyogi, Yogendra; Vodopyanov, Alexander; Volkl, Martin Andreas; Voloshin, Kirill; Voloshin, Sergey; Volpe, Giacomo; Von Haller, Barthelemy; Vorobyev, Ivan; Vranic, Danilo; Vrlakova, Janka; Vulpescu, Bogdan; Wagner, Boris; Wagner, Jan; Wang, Hongkai; Wang, Mengliang; Watanabe, Daisuke; Watanabe, Yosuke; Weber, Michael; Weber, Steffen Georg; Weiser, Dennis Franz; Wessels, Johannes Peter; Westerhoff, Uwe; Whitehead, Andile Mothegi; Wiechula, Jens; Wikne, Jon; Wilk, Grzegorz Andrzej; Wilkinson, Jeremy John; Williams, Crispin; Windelband, Bernd Stefan; Winn, Michael Andreas; Yang, Hongyan; Yang, Ping; Yano, Satoshi; Yasin, Zafar; Yin, Zhongbao; Yokoyama, Hiroki; Yoo, In-Kwon; Yoon, Jin Hee; Yurchenko, Volodymyr; Yushmanov, Igor; Zaborowska, Anna; Zaccolo, Valentina; Zaman, Ali; Zampolli, Chiara; Correia Zanoli, Henrique Jose; Zaporozhets, Sergey; Zardoshti, Nima; Zarochentsev, Andrey; Zavada, Petr; Zavyalov, Nikolay; Zbroszczyk, Hanna Paulina; Zgura, Sorin Ion; Zhalov, Mikhail; Zhang, Haitao; Zhang, Xiaoming; Zhang, Yonghong; Chunhui, Zhang; Zhang, Zuman; Zhao, Chengxin; Zhigareva, Natalia; Zhou, Daicui; Zhou, You; Zhou, Zhuo; Zhu, Hongsheng; Zhu, Jianhui; Zichichi, Antonino; Zimmermann, Alice; Zimmermann, Markus Bernhard; Zinovjev, Gennady; Zyzak, Maksym

    2016-05-25

    We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian PID approach for charged pions, kaons and protons in the central barrel of ALICE is studied. PID is performed via measurements of specific energy loss (dE/dx) and time-of-flight. PID efficiencies and misidentification probabilities are extracted and compared with Monte Carlo simulations using high purity samples of identified particles in the decay channels ${\rm K}_{\rm S}^{\rm 0}\rightarrow \pi^+\pi^-$, $\phi\rightarrow {\rm K}^-{\rm K}^+$ and $\Lambda\rightarrow{\rm p}\pi^-$ in p–Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ TeV. In order to thoroughly assess the validity of the Bayesian approach, this methodology was used to obtain corrected $p_{\rm T}$ spectra of pions, kaons, protons, and D$^0$ mesons in pp coll...
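
    The core of the approach described above is ordinary Bayes' rule: multiply per-detector likelihoods for each particle species by a prior abundance and normalize. The sketch below is a minimal illustration of that combination, not the ALICE implementation; all species names, detector labels, and probability values are invented for illustration.

```python
import math

def bayes_pid(likelihoods, priors):
    """Combine per-detector likelihoods with species priors into
    posterior species probabilities (illustrative sketch).

    likelihoods: {species: {detector: P(signal | species)}}
    priors:      {species: prior abundance}
    """
    species = list(priors)
    # Assume detector responses are independent given the species:
    # multiply the likelihoods and weight by the prior.
    unnorm = {s: priors[s] * math.prod(likelihoods[s].values())
              for s in species}
    total = sum(unnorm.values())
    return {s: unnorm[s] / total for s in species}

# Illustrative values: the TOF signal favors a kaon, dE/dx is ambiguous.
likelihoods = {
    "pion":   {"dEdx": 0.40, "TOF": 0.05},
    "kaon":   {"dEdx": 0.35, "TOF": 0.60},
    "proton": {"dEdx": 0.25, "TOF": 0.10},
}
priors = {"pion": 0.70, "kaon": 0.20, "proton": 0.10}
posterior = bayes_pid(likelihoods, priors)
```

    Even with a strong prior for pions, the combined evidence assigns the highest posterior probability to the kaon hypothesis, which is exactly the behavior that makes the Bayesian combination more effective than any single detector.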

  16. Sub-seasonal-to-seasonal Reservoir Inflow Forecast using Bayesian Hierarchical Hidden Markov Model

    Science.gov (United States)

    Mukhopadhyay, S.; Arumugam, S.

    2017-12-01

    Sub-seasonal-to-seasonal (S2S) (15-90 days) streamflow forecasting is an emerging area of research that provides seamless information for reservoir operation from weather time scales to seasonal time scales. From an operational perspective, sub-seasonal inflow forecasts are highly valuable as they enable water managers to decide short-term releases (15-30 days) while holding water for seasonal needs (e.g., irrigation and municipal supply) and meeting end-of-season target storage at a desired level. We propose a Bayesian Hierarchical Hidden Markov Model (BHHMM) to develop S2S inflow forecasts for the Tennessee Valley Area (TVA) reservoir system. Here, the hidden states are predicted by relevant indices that influence the inflows at the S2S time scale. The hidden Markov model also captures both the spatial and temporal hierarchy in predictors that operate at the S2S time scale, with model parameters estimated as a posterior distribution using a Bayesian framework. We present our work in two steps, namely a single-site model and a multi-site model. For proof of concept, we consider inflows to Douglas Dam, Tennessee, in the single-site model. For the multi-site model, we consider reservoirs in the upper Tennessee valley. Streamflow forecasts are issued and updated continuously every day at the S2S time scale. We considered precipitation forecasts obtained from the NOAA Climate Forecast System (CFSv2) GCM as predictors for developing S2S streamflow forecasts, along with relevant indices for predicting hidden states. Spatial dependence of the inflow series of the reservoirs is also preserved in the multi-site model. To circumvent the non-normality of the data, we consider the HMM in a Generalized Linear Model setting. Skill of the proposed approach is tested using split-sample validation against a traditional multi-site canonical correlation model developed using the same set of predictors.
From the posterior distribution of the inflow forecasts, we also highlight different system behavior
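
    The hidden-state machinery underlying a hidden Markov model like the one above is the forward recursion: filtered state probabilities are updated day by day from the transition and emission probabilities. The following is a generic textbook sketch with invented two-state numbers (a "dry" and a "wet" regime), not the BHHMM of the paper.

```python
def hmm_forward(init, trans, emit, obs):
    """Filtered hidden-state probabilities via the HMM forward recursion.

    init:  initial state distribution, e.g. [P(dry), P(wet)]
    trans: trans[i][j] = P(state j at t+1 | state i at t)
    emit:  emit[i][o]  = P(observation o | state i)
    obs:   sequence of observation indices
    """
    n = len(init)
    alpha = [init[i] * emit[i][obs[0]] for i in range(n)]
    norm = sum(alpha)
    alpha = [a / norm for a in alpha]
    for o in obs[1:]:
        # Propagate through the transition matrix, then weight by the
        # emission probability of today's observation and renormalize.
        alpha = [emit[j][o] * sum(alpha[i] * trans[i][j] for i in range(n))
                 for j in range(n)]
        norm = sum(alpha)
        alpha = [a / norm for a in alpha]
    return alpha  # P(state | observations so far)

# Two regimes (dry/wet), two observation levels (low/high inflow).
init = [0.5, 0.5]
trans = [[0.9, 0.1], [0.2, 0.8]]   # regimes are persistent
emit = [[0.8, 0.2], [0.3, 0.7]]    # the wet regime favors high inflow
filtered = hmm_forward(init, trans, emit, [1, 1, 1])  # three high-inflow days
```

    After three consecutive high-inflow observations the filtered probability mass concentrates on the wet regime, which is the kind of regime inference the hidden states provide before conditional inflow forecasts are issued.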

  17. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    Science.gov (United States)

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
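
    The generative process HDDM estimates can be made concrete with a few lines of simulation: noisy evidence accumulates at a mean drift rate until it hits one of two decision boundaries. This is a minimal Euler-scheme sketch of the drift-diffusion process itself, not HDDM's API; all parameter values are illustrative.

```python
import random

def simulate_ddm(drift, threshold, ndt, dt=0.001, noise=1.0, seed=None):
    """Simulate one drift-diffusion trial: evidence starts at 0 and
    drifts toward +/-threshold; returns (choice, response_time).

    drift:     mean evidence accumulation rate
    threshold: +/- decision boundaries
    ndt:       non-decision time added to the response time
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        # Euler step: deterministic drift plus Gaussian diffusion noise.
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (1 if x > 0 else 0), ndt + t

# A positive drift makes the upper (correct) boundary the likely outcome.
trials = [simulate_ddm(drift=1.5, threshold=1.0, ndt=0.3, seed=i)
          for i in range(200)]
accuracy = sum(choice for choice, _ in trials) / len(trials)
```

    Fitting inverts this process: given observed (choice, response time) pairs, hierarchical Bayesian estimation recovers drift, threshold, and non-decision time per subject while pooling them through a group distribution.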

  18. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python

    Directory of Open Access Journals (Sweden)

    Thomas V Wiecki

    2013-08-01

    Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs

  19. Hierarchical Bayesian nonparametric mixture models for clustering with variable relevance determination.

    Science.gov (United States)

    Yau, Christopher; Holmes, Chris

    2011-07-01

    We propose a hierarchical Bayesian nonparametric mixture model for clustering when some of the covariates are assumed to be of varying relevance to the clustering problem. This can be thought of as an issue in variable selection for unsupervised learning. We demonstrate that by defining a hierarchical, population-based nonparametric prior on the cluster locations, scaled by the inverse covariance matrices of the likelihood, we arrive at a 'sparsity prior' representation which admits a conditionally conjugate prior. This allows us to perform full Gibbs sampling to obtain posterior distributions over parameters of interest, including an explicit measure of each covariate's relevance and a distribution over the number of potential clusters present in the data. This also allows for individual cluster-specific variable selection. We demonstrate improved inference on a number of canonical problems.

  20. Relating Memory To Functional Performance In Normal Aging to Dementia Using Hierarchical Bayesian Cognitive Processing Models

    Science.gov (United States)

    Shankle, William R.; Pooley, James P.; Steyvers, Mark; Hara, Junko; Mangrola, Tushar; Reisberg, Barry; Lee, Michael D.

    2012-01-01

    Determining how cognition affects functional abilities is important in Alzheimer’s disease and related disorders (ADRD). 280 patients (normal or ADRD) received a total of 1,514 assessments using the Functional Assessment Staging Test (FAST) procedure and the MCI Screen (MCIS). A hierarchical Bayesian cognitive processing (HBCP) model was created by embedding a signal detection theory (SDT) model of the MCIS delayed recognition memory task into a hierarchical Bayesian framework. The SDT model used latent parameters of discriminability (memory process) and response bias (executive function) to predict, simultaneously, recognition memory performance for each patient and each FAST severity group. The observed recognition memory data did not distinguish the six FAST severity stages, but the latent parameters completely separated them. The latent parameters were also used successfully to transform the ordinal FAST measure into a continuous measure reflecting the underlying continuum of functional severity. HBCP models applied to recognition memory data from clinical practice settings accurately translated a latent measure of cognition to a continuous measure of functional severity for both individuals and FAST groups. Such a translation links two levels of brain information processing, and may enable more accurate correlations with other levels, such as those characterized by biomarkers. PMID:22407225
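
    The equal-variance signal detection theory model embedded in the study above has a closed form: discriminability d' is the difference of z-transformed hit and false-alarm rates, and the response bias c is their negated mean. The sketch below shows those standard textbook estimators on invented rates; it is not the authors' hierarchical implementation.

```python
from statistics import NormalDist

def sdt_parameters(hit_rate, false_alarm_rate):
    """Equal-variance signal detection theory estimates:
    d' (discriminability) and c (response bias/criterion)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Illustrative recognition-memory data: 80% hits, 20% false alarms.
d_prime, criterion = sdt_parameters(0.80, 0.20)
```

    With symmetric hit and false-alarm rates the criterion comes out at zero (no bias) and d' is about 1.68; the hierarchical Bayesian version estimates these same latent quantities jointly per patient and per severity group instead of plugging in raw rates.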

  1. Prion Amplification and Hierarchical Bayesian Modeling Refine Detection of Prion Infection

    Science.gov (United States)

    Wyckoff, A. Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J.; Pulford, Bruce; Wild, Margaret; Antolin, Michael; Vercauteren, Kurt; Zabel, Mark

    2015-02-01

    Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.

  2. Prion amplification and hierarchical Bayesian modeling refine detection of prion infection.

    Science.gov (United States)

    Wyckoff, A Christy; Galloway, Nathan; Meyerett-Reid, Crystal; Powers, Jenny; Spraker, Terry; Monello, Ryan J; Pulford, Bruce; Wild, Margaret; Antolin, Michael; VerCauteren, Kurt; Zabel, Mark

    2015-02-10

    Prions are unique infectious agents that replicate without a genome and cause neurodegenerative diseases that include chronic wasting disease (CWD) of cervids. Immunohistochemistry (IHC) is currently considered the gold standard for diagnosis of a prion infection but may be insensitive to early or sub-clinical CWD that are important to understanding CWD transmission and ecology. We assessed the potential of serial protein misfolding cyclic amplification (sPMCA) to improve detection of CWD prior to the onset of clinical signs. We analyzed tissue samples from free-ranging Rocky Mountain elk (Cervus elaphus nelsoni) and used hierarchical Bayesian analysis to estimate the specificity and sensitivity of IHC and sPMCA conditional on simultaneously estimated disease states. Sensitivity estimates were higher for sPMCA (99.51%, credible interval (CI) 97.15-100%) than IHC of obex (brain stem, 76.56%, CI 57.00-91.46%) or retropharyngeal lymph node (90.06%, CI 74.13-98.70%) tissues, or both (98.99%, CI 90.01-100%). Our hierarchical Bayesian model predicts the prevalence of prion infection in this elk population to be 18.90% (CI 15.50-32.72%), compared to previous estimates of 12.90%. Our data reveal a previously unidentified sub-clinical prion-positive portion of the elk population that could represent silent carriers capable of significantly impacting CWD ecology.

  3. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar; Halldorsson, Benedikt; Hrafnkelsson, Birgir; Jonsson, Sigurjon

    2018-01-01

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.

  4. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar

    2018-04-17

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.

  5. A Bayesian concept learning approach to crowdsourcing

    DEFF Research Database (Denmark)

    Viappiani, P.; Zilles, S.; Hamilton, H.J.

    2011-01-01

    We develop a Bayesian approach to concept learning for crowdsourcing applications. A probabilistic belief over possible concept definitions is maintained and updated according to (noisy) observations from experts, whose behaviors are modeled using discrete types. We propose recommendation techniques, inference methods, and query selection strategies to assist a user charged with choosing a configuration that satisfies some (partially known) concept. Our model is able to simultaneously learn the concept definition and the types of the experts. We evaluate our model with simulations, showing...

  6. Probabilistic daily ILI syndromic surveillance with a spatio-temporal Bayesian hierarchical model.

    Directory of Open Access Journals (Sweden)

    Ta-Chien Chan

    Full Text Available BACKGROUND: For daily syndromic surveillance to be effective, an efficient and sensible algorithm would be expected to detect aberrations in influenza illness, and alert public health workers prior to any impending epidemic. This detection or alert surely contains uncertainty, and thus should be evaluated with a proper probabilistic measure. However, traditional monitoring mechanisms simply provide a binary alert, failing to adequately address this uncertainty. METHODS AND FINDINGS: Based on the Bayesian posterior probability of influenza-like illness (ILI visits, the intensity of outbreak can be directly assessed. The numbers of daily emergency room ILI visits at five community hospitals in Taipei City during 2006-2007 were collected and fitted with a Bayesian hierarchical model containing meteorological factors such as temperature and vapor pressure, spatial interaction with conditional autoregressive structure, weekend and holiday effects, seasonality factors, and previous ILI visits. The proposed algorithm recommends an alert for action if the posterior probability is larger than 70%. External data from January to February of 2008 were retained for validation. The decision rule detects successfully the peak in the validation period. When comparing the posterior probability evaluation with the modified Cusum method, results show that the proposed method is able to detect the signals 1-2 days prior to the rise of ILI visits. CONCLUSIONS: This Bayesian hierarchical model not only constitutes a dynamic surveillance system but also constructs a stochastic evaluation of the need to call for alert. The monitoring mechanism provides earlier detection as well as a complementary tool for current surveillance programs.
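
    The decision rule in the abstract, alert when the posterior probability of an outbreak exceeds 70%, is easy to state as a Monte Carlo computation over posterior draws. The sketch below is a generic illustration with invented draw values and threshold, not the fitted spatio-temporal model.

```python
def outbreak_alert(posterior_draws, epidemic_threshold, prob_cutoff=0.70):
    """Raise an alert when the posterior probability that ILI visits
    exceed an epidemic threshold is larger than prob_cutoff.

    posterior_draws: MCMC draws of predicted daily ILI visit counts
    """
    exceed = sum(1 for d in posterior_draws if d > epidemic_threshold)
    posterior_prob = exceed / len(posterior_draws)
    return posterior_prob, posterior_prob > prob_cutoff

# Illustrative posterior predictive draws of tomorrow's ILI visits.
draws = [38, 42, 55, 61, 47, 59, 63, 52, 49, 66]
prob, alert = outbreak_alert(draws, epidemic_threshold=45)
```

    Unlike a binary control-chart signal, the returned probability quantifies how certain the model is, which is the "proper probabilistic measure" the authors argue for.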

  7. ScreenBEAM: a novel meta-analysis algorithm for functional genomics screens via Bayesian hierarchical modeling.

    Science.gov (United States)

    Yu, Jiyang; Silva, Jose; Califano, Andrea

    2016-01-15

    Functional genomics (FG) screens, using RNAi or CRISPR technology, have become a standard tool for systematic, genome-wide loss-of-function studies for therapeutic target discovery. As in many large-scale assays, however, off-target effects, variable reagent potency and experimental noise must be appropriately accounted for to control for false positives. Indeed, rigorous statistical analysis of high-throughput FG screening data remains challenging, particularly when integrative analyses are used to combine multiple sh/sgRNAs targeting the same gene in the library. We use large RNAi and CRISPR repositories that are publicly available to evaluate a novel meta-analysis approach for FG screens via Bayesian hierarchical modeling, Screening Bayesian Evaluation and Analysis Method (ScreenBEAM). Results from our analysis show that the proposed strategy, which seamlessly combines all available data, robustly outperforms classical algorithms developed for microarray data sets as well as recent approaches designed for next generation sequencing technologies. Remarkably, the ScreenBEAM algorithm works well even when the quality of FG screens is relatively low, which accounts for about 80-95% of the public datasets. R package and source code are available at: https://github.com/jyyu/ScreenBEAM. ac2248@columbia.edu, jose.silva@mssm.edu, yujiyang@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
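
    The essence of combining several sh/sgRNAs per gene hierarchically is partial pooling: the gene-level estimate is a precision-weighted compromise between the guide-level mean and a gene-wide prior. The following is a deliberately simplified empirical-Bayes sketch of that idea, not the ScreenBEAM algorithm; the guide effect values are invented.

```python
from statistics import mean, variance

def gene_level_effect(guide_effects, prior_mean=0.0, prior_var=1.0):
    """Partial pooling of per-sh/sgRNA effects into one gene-level
    estimate: a precision-weighted average of the guide mean and a
    gene-wide prior (simplified empirical-Bayes sketch)."""
    n = len(guide_effects)
    within_var = variance(guide_effects) if n > 1 else prior_var
    data_precision = n / within_var        # more/consistent guides => more weight
    prior_precision = 1.0 / prior_var
    post_mean = (data_precision * mean(guide_effects)
                 + prior_precision * prior_mean) / (data_precision
                                                    + prior_precision)
    return post_mean

# Five noisy guides targeting the same gene, shrunk toward the null prior.
effect = gene_level_effect([-1.2, -0.8, -1.5, -0.4, -1.1])
```

    The estimate lands between the raw guide mean (-1.0) and the prior mean (0), with the amount of shrinkage set by how many guides there are and how much they disagree, which is what makes such pooling robust to individual off-target guides.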

  8. A Bayesian approach to person perception.

    Science.gov (United States)

    Clifford, C W G; Mareschal, I; Otsuka, Y; Watson, T L

    2015-11-01

    Here we propose a Bayesian approach to person perception, outlining the theoretical position and a methodological framework for testing the predictions experimentally. We use the term person perception to refer not only to the perception of others' personal attributes such as age and sex but also to the perception of social signals such as direction of gaze and emotional expression. The Bayesian approach provides a formal description of the way in which our perception combines current sensory evidence with prior expectations about the structure of the environment. Such expectations can lead to unconscious biases in our perception that are particularly evident when sensory evidence is uncertain. We illustrate the ideas with reference to our recent studies on gaze perception which show that people have a bias to perceive the gaze of others as directed towards themselves. We also describe a potential application to the study of the perception of a person's sex, in which a bias towards perceiving males is typically observed. Copyright © 2015 Elsevier Inc. All rights reserved.
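
    The bias described above falls out of the simplest conjugate Bayesian computation: when a Gaussian prior over gaze direction (centered on "looking at me") is combined with uncertain Gaussian sensory evidence, the posterior mean is pulled toward the prior. The sketch below is a generic normal-normal update with invented variances, offered as an illustration of the mechanism rather than the authors' model.

```python
def gaussian_posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update: the posterior mean is a
    precision-weighted average of the prior and the sensory evidence."""
    w_prior = 1.0 / prior_var
    w_obs = 1.0 / obs_var
    post_mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    post_var = 1.0 / (w_prior + w_obs)
    return post_mean, post_var

# Prior: gaze directed at the observer (0 deg). Noisy sensory reading: 10 deg.
# Uncertain evidence (large obs_var) pulls the percept toward "looking at me".
percept, _ = gaussian_posterior(prior_mean=0.0, prior_var=4.0,
                                obs=10.0, obs_var=12.0)
```

    The percept ends up well short of the 10-degree sensory reading, and the shift grows as the sensory variance grows, matching the observation that prior-driven biases are most evident when the evidence is uncertain.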

  9. A Bayesian Panel Data Approach to Explaining Market Beta Dynamics

    NARCIS (Netherlands)

    R. Bauer (Rob); M.M.J.E. Cosemans (Mathijs); R. Frehen (Rik); P.C. Schotman (Peter)

    2008-01-01

    We characterize the process that drives the market betas of individual stocks by setting up a hierarchical Bayesian panel data model that allows a flexible specification for beta. We show that combining the parametric relationship between betas and conditioning variables specified by

  10. Estimating temporal trend in the presence of spatial complexity: a Bayesian hierarchical model for a wetland plant population undergoing restoration.

    Directory of Open Access Journals (Sweden)

    Thomas J Rodhouse

    Full Text Available Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased the precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on the strength of statistical shrinkage, autocorrelation and topographic heterogeneity--a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.

  11. Correlation Between Hierarchical Bayesian and Aerosol Optical Depth PM2.5 Data and Respiratory-Cardiovascular Chronic Diseases

    Science.gov (United States)

    Tools to estimate PM2.5 mass have expanded in recent years, and now include: 1) stationary monitor readings, 2) Community Multi-Scale Air Quality (CMAQ) model estimates, 3) Hierarchical Bayesian (HB) estimates from combined stationary monitor readings and CMAQ model output; and, ...

  12. Estimating effectiveness in HIV prevention trials with a Bayesian hierarchical compound Poisson frailty model

    Science.gov (United States)

    Coley, Rebecca Yates; Brown, Elizabeth R.

    2016-01-01

    Inconsistent results in recent HIV prevention trials of pre-exposure prophylactic interventions may be due to heterogeneity in risk among study participants. Intervention effectiveness is most commonly estimated with the Cox model, which compares event times between populations. When heterogeneity is present, this population-level measure underestimates intervention effectiveness for individuals who are at risk. We propose a likelihood-based Bayesian hierarchical model that estimates the individual-level effectiveness of candidate interventions by accounting for heterogeneity in risk with a compound Poisson-distributed frailty term. This model reflects the mechanisms of HIV risk and allows that some participants are not exposed to HIV and, therefore, have no risk of seroconversion during the study. We assess model performance via simulation and apply the model to data from an HIV prevention trial. PMID:26869051
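The key structural feature of the compound Poisson frailty, a point mass at zero representing participants who are never exposed, can be sketched by simulation. The parameter values are illustrative, not those used in the paper.

```python
import math, random

def compound_poisson_frailty(lam, shape, rate, rng):
    """One frailty draw: sum of N ~ Poisson(lam) i.i.d. Gamma(shape, rate)
    terms; N = 0 yields frailty 0, i.e. no exposure and no seroconversion
    risk during the study."""
    n, p, u = 0, math.exp(-lam), rng.random()
    c = p
    while u > c:                     # Poisson draw by CDF inversion
        n += 1
        p *= lam / n
        c += p
    # random.gammavariate takes (shape, scale), so scale = 1 / rate
    return sum(rng.gammavariate(shape, 1.0 / rate) for _ in range(n))

rng = random.Random(42)
lam = 1.0
draws = [compound_poisson_frailty(lam, 2.0, 2.0, rng) for _ in range(20000)]
zero_frac = sum(d == 0.0 for d in draws) / len(draws)
# empirical vs theoretical P(frailty = 0) = exp(-lam)
print(round(zero_frac, 3), round(math.exp(-lam), 3))
```

The fraction of exactly-zero frailties matches exp(-lam), which is the mechanism by which the model allows some participants to have no risk at all.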

  13. Spatial Intensity Duration Frequency Relationships Using Hierarchical Bayesian Analysis for Urban Areas

    Science.gov (United States)

    Rupa, Chandra; Mujumdar, Pradeep

    2016-04-01

    In urban areas, quantification of extreme precipitation is important in the design of storm water drains and other infrastructure. Intensity Duration Frequency (IDF) relationships are generally used to obtain the design return level for a given duration and return period. Because extreme precipitation data are rarely available for a sufficiently large number of years, estimating the probability of extreme events is difficult. Typically, data from a single station are used to obtain the design return levels for various durations and return periods, which are then used in the design of urban infrastructure for the entire city. In an urban setting, the spatial variation of precipitation can be high; the precipitation amounts and patterns often vary within short distances of less than 5 km. It is therefore crucial to study the uncertainties in the spatial variation of return levels for various durations. In this work, extreme precipitation is modeled spatially using Bayesian hierarchical analysis and the spatial variation of return levels is studied. The analysis is carried out with the Block Maxima approach for defining extreme precipitation, using the Generalized Extreme Value (GEV) distribution, for Bangalore city, Karnataka state, India. Daily data for nineteen stations in and around Bangalore city are considered in the study. The analysis is carried out for the summer maxima (March - May), monsoon maxima (June - September) and the annual maxima rainfall. In the hierarchical analysis, the statistical model is specified in three layers. The data layer models the block maxima, pooling the extreme precipitation from all the stations. In the process layer, the latent spatial process, characterized by geographical and climatological covariates (lat-lon, elevation, mean temperature etc.) which drives the extreme precipitation, is modeled, and in the prior level, the prior distributions that govern the latent process are modeled. Markov Chain Monte Carlo (MCMC) algorithm (Metropolis Hastings
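Once posterior GEV parameters are available at a location, the return level follows from the standard GEV quantile formula. A point-estimate sketch, with hypothetical parameter values rather than the Bangalore posteriors:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded on average once every T blocks (e.g. years) under a
    GEV(mu, sigma, xi) distribution fitted to block maxima."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                        # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Hypothetical posterior-mean parameters for daily rainfall maxima (mm)
mu, sigma, xi = 80.0, 25.0, 0.12
for T in (2, 10, 50, 100):
    print(T, "yr:", round(gev_return_level(mu, sigma, xi, T), 1), "mm")
```

In the hierarchical setting the same formula is applied to every posterior draw of (mu, sigma, xi) at every location, which yields a full posterior distribution for each return level instead of a single number.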

  14. TYPE Ia SUPERNOVA LIGHT-CURVE INFERENCE: HIERARCHICAL BAYESIAN ANALYSIS IN THE NEAR-INFRARED

    International Nuclear Information System (INIS)

    Mandel, Kaisey S.; Friedman, Andrew S.; Kirshner, Robert P.; Wood-Vasey, W. Michael

    2009-01-01

    We present a comprehensive statistical analysis of the properties of Type Ia supernova (SN Ia) light curves in the near-infrared using recent data from Peters Automated InfraRed Imaging TELescope and the literature. We construct a hierarchical Bayesian framework, incorporating several uncertainties including photometric error, peculiar velocities, dust extinction, and intrinsic variations, for principled and coherent statistical inference. SN Ia light-curve inferences are drawn from the global posterior probability of parameters describing both individual supernovae and the population conditioned on the entire SN Ia NIR data set. The logical structure of the hierarchical model is represented by a directed acyclic graph. Fully Bayesian analysis of the model and data is enabled by an efficient Markov Chain Monte Carlo algorithm exploiting the conditional probabilistic structure using Gibbs sampling. We apply this framework to the JHK_s SN Ia light-curve data. A new light-curve model captures the observed J-band light-curve shape variations. The marginal intrinsic variances in peak absolute magnitudes are σ(M_J) = 0.17 ± 0.03, σ(M_H) = 0.11 ± 0.03, and σ(M_Ks) = 0.19 ± 0.04. We describe the first quantitative evidence for correlations between the NIR absolute magnitudes and J-band light-curve shapes, and demonstrate their utility for distance estimation. The average residual in the Hubble diagram for the training set SNe at cz > 2000 km s^-1 is 0.10 mag. The new application of bootstrap cross-validation to SN Ia light-curve inference tests the sensitivity of the statistical model fit to the finite sample and estimates the prediction error at 0.15 mag. These results demonstrate that SN Ia NIR light curves are as effective as corrected optical light curves, and, because they are less vulnerable to dust absorption, they have great potential as precise and accurate cosmological distance indicators.

  15. Bayesian approaches for detecting significant deterioration

    International Nuclear Information System (INIS)

    Roed, Willy; Aven, Terje

    2009-01-01

    Risk indicators can provide useful input to risk management processes and are given increased attention in the Norwegian petroleum industry. Examples include indicators expressing the proportion of test failures of safety and barrier systems. Such indicators give valuable information about the performance of the systems and provide a basis for trend evaluations. Early warning of a possible deterioration is essential due to the importance of the systems in focus, but what should be the basis for the warning criterion? This paper presents and discusses several Bayesian approaches for the establishment of a warning criterion to disclose significant deterioration. The Norwegian petroleum industry is the starting point for this paper, but the study is relevant for other application areas as well.
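One simple Bayesian warning criterion of the kind discussed here: model test failures as binomial with a beta prior on the failure proportion, and raise a warning when the posterior probability that the proportion exceeds its historical level passes a threshold. The prior, historical rate and threshold below are illustrative assumptions, not values from the paper.

```python
import math

def beta_tail_prob(a, b, p0, steps=20000):
    """P(p > p0) for p ~ Beta(a, b), by trapezoidal integration of the
    density (avoids non-stdlib special functions)."""
    lognorm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    def pdf(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return math.exp(lognorm + (a - 1.0) * math.log(p)
                        + (b - 1.0) * math.log(1.0 - p))
    h = (1.0 - p0) / steps
    xs = [p0 + i * h for i in range(steps + 1)]
    return h * (sum(pdf(x) for x in xs) - 0.5 * (pdf(xs[0]) + pdf(xs[-1])))

def warn(failures, tests, p0=0.02, prior_a=1.0, prior_b=49.0, threshold=0.95):
    """Flag significant deterioration when the posterior probability that
    the true failure proportion exceeds the historical level p0 passes
    `threshold`. Beta(1, 49) encodes a prior mean failure rate of 2%."""
    post = beta_tail_prob(prior_a + failures, prior_b + tests - failures, p0)
    return post, post > threshold

print(warn(1, 100))   # in line with the historical rate: no warning
print(warn(8, 100))   # strong evidence of deterioration: warning
```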

  16. Hierarchical Bayesian inference of the initial mass function in composite stellar populations

    Science.gov (United States)

    Dries, M.; Trager, S. C.; Koopmans, L. V. E.; Popping, G.; Somerville, R. S.

    2018-03-01

    The initial mass function (IMF) is a key ingredient in many studies of galaxy formation and evolution. Although the IMF is often assumed to be universal, there is continuing evidence that it is not. Spectroscopic studies that derive the IMF of the unresolved stellar populations of a galaxy often assume that its spectrum can be described by a single stellar population (SSP). To alleviate this limitation, in this paper we have developed a unique hierarchical Bayesian framework for modelling composite stellar populations (CSPs). Within this framework, we use a parametrized IMF prior to regulate a direct inference of the IMF. We use this new framework to determine the number of SSPs that is required to fit a set of realistic CSP mock spectra. The CSP mock spectra that we use are based on semi-analytic models and have an IMF that varies as a function of stellar velocity dispersion of the galaxy. Our results suggest that using a single SSP biases the determination of the IMF slope to a higher value than the true slope, although the trend with stellar velocity dispersion is overall recovered. If we include more SSPs in the fit, the Bayesian evidence increases significantly and the inferred IMF slopes of our mock spectra converge, within the errors, to their true values. Most of the bias is already removed by using two SSPs instead of one. We show that we can reconstruct the variable IMF of our mock spectra for signal-to-noise ratios exceeding ˜75.

  17. Clarifying the Hubble constant tension with a Bayesian hierarchical model of the local distance ladder

    Science.gov (United States)

    Feeney, Stephen M.; Mortlock, Daniel J.; Dalmasso, Niccolò

    2018-05-01

    Estimates of the Hubble constant, H_0, from the local distance ladder and from the cosmic microwave background (CMB) are discrepant at the ˜3σ level, indicating a potential issue with the standard Λ cold dark matter (ΛCDM) cosmology. A probabilistic (i.e. Bayesian) interpretation of this tension requires a model comparison calculation, which in turn depends strongly on the tails of the H_0 likelihoods. Evaluating the tails of the local H_0 likelihood requires the use of non-Gaussian distributions to faithfully represent anchor likelihoods and outliers, and simultaneous fitting of the complete distance-ladder data set to ensure correct uncertainty propagation. We have hence developed a Bayesian hierarchical model of the full distance ladder that does not rely on Gaussian distributions and allows outliers to be modelled without arbitrary data cuts. Marginalizing over the full ˜3000-parameter joint posterior distribution, we find H_0 = (72.72 ± 1.67) km s^-1 Mpc^-1 when applied to the outlier-cleaned Riess et al. data, and (73.15 ± 1.78) km s^-1 Mpc^-1 with supernova outliers reintroduced (the pre-cut Cepheid data set is not available). Using our precise evaluation of the tails of the H_0 likelihood, we apply Bayesian model comparison to assess the evidence for deviation from ΛCDM given the distance-ladder and CMB data. The odds against ΛCDM are at worst ˜10:1 when considering the Planck 2015 XIII data, regardless of outlier treatment, considerably less dramatic than naïvely implied by the 2.8σ discrepancy. These odds become ˜60:1 when an approximation to the more-discrepant Planck Intermediate XLVI likelihood is included.

  18. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and many measurements would be difficult or costly to obtain; the lack of data can then be mitigated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model was able to improve the quality of the DSMs, improving characteristics such as the roof surfaces and consequently leading to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model, showing similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
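At the level of a single DSM cell, Bayesian merging of two height estimates with Gaussian errors reduces to a precision-weighted average. A minimal sketch with invented heights and variances:

```python
def fuse_heights(h1, var1, h2, var2):
    """Bayesian fusion of two height estimates of the same DSM cell: with
    Gaussian errors, the posterior mean is the precision-weighted average
    and the posterior variance shrinks below both inputs."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mean = (w1 * h1 + w2 * h2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# WorldView-1-derived vs Pleiades-derived heights for one cell (illustrative)
mean, var = fuse_heights(35.2, 0.50, 34.6, 0.25)
print(round(mean, 2), round(var, 3))   # prints: 34.8 0.167
```

The fused variance (0.167 m²) is smaller than either input variance, which is the quantitative sense in which merging "improves the quality" of the DSMs; the roof-smoothness prior of the paper enters as a further Gaussian factor per neighbourhood.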

  20. Top-down feedback in an HMAX-like cortical model of object perception based on hierarchical Bayesian networks and belief propagation.

    Directory of Open Access Journals (Sweden)

    Salvador Dura-Bernal

    Full Text Available Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically-inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance. Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our novel and rigorous methodology covers key aspects such as learning using a layerwise greedy algorithm, combining feedback information from multiple parents and reducing the number of operations required. Overall, this work extends an established model of object recognition to include high-level feedback modulation, based on state-of-the-art probabilistic approaches. The methodology employed, consistent with evidence from the visual cortex, can be potentially generalized to build models of hierarchical perceptual organization that include top-down and bottom

  1. An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method

    International Nuclear Information System (INIS)

    Ma Xiang; Zabaras, Nicholas

    2009-01-01

    A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media
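The pattern of the paper, replacing the expensive forward solver with an interpolated surrogate inside an MCMC loop, can be sketched in one dimension. A piecewise-linear interpolant stands in for the adaptive sparse grid surrogate, and a random-walk Metropolis sampler explores the posterior; the forward model, prior and noise level are all toy assumptions.

```python
import math, random

def mh_sample(logpost, x0, prop_sd, n, rng):
    """Random-walk Metropolis; each logpost call goes through the cheap
    surrogate rather than the forward solver."""
    x, lp, out = x0, logpost(x0), []
    for _ in range(n):
        y = x + rng.gauss(0.0, prop_sd)
        lpy = logpost(y)
        if math.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        out.append(x)
    return out

# Toy inverse problem: recover k from d = G(k) + noise with G(k) = exp(-k)
grid = [i * 0.1 for i in range(21)]          # coarse grid on [0, 2]
vals = [math.exp(-k) for k in grid]

def surrogate(k):
    """Piecewise-linear interpolant of G, standing in for the adaptive
    sparse grid collocation surrogate of the paper."""
    k = min(max(k, 0.0), 2.0)
    i = min(int(k / 0.1), 19)
    t = (k - grid[i]) / 0.1
    return vals[i] * (1.0 - t) + vals[i + 1] * t

d, noise_sd = math.exp(-0.7), 0.05           # synthetic observation (k = 0.7)

def logpost(k):
    if not 0.0 <= k <= 2.0:                  # uniform prior on [0, 2]
        return -math.inf
    return -0.5 * ((d - surrogate(k)) / noise_sd) ** 2

rng = random.Random(1)
chain = mh_sample(logpost, 1.0, 0.2, 20000, rng)
est = sum(chain[5000:]) / len(chain[5000:])
print(round(est, 2))                         # posterior mean near k = 0.7
```

Every likelihood evaluation costs a table lookup instead of a PDE solve, which is the entire point of building the surrogate before sampling.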

  2. A Bayesian nonparametric approach to causal inference on quantiles.

    Science.gov (United States)

    Xu, Dandan; Daniels, Michael J; Winterstein, Almut G

    2018-02-25

    We propose a Bayesian nonparametric approach (BNP) for causal inference on quantiles in the presence of many confounders. In particular, we define relevant causal quantities and specify BNP models to avoid bias from restrictive parametric assumptions. We first use Bayesian additive regression trees (BART) to model the propensity score and then construct the distribution of potential outcomes given the propensity score using a Dirichlet process mixture (DPM) of normals model. We thoroughly evaluate the operating characteristics of our approach and compare it to Bayesian and frequentist competitors. We use our approach to answer an important clinical question involving acute kidney injury using electronic health records. © 2018, The International Biometric Society.

  3. MACROECONOMIC FORECASTING USING BAYESIAN VECTOR AUTOREGRESSIVE APPROACH

    Directory of Open Access Journals (Sweden)

    D. Tutberidze

    2017-04-01

    Full Text Available There are many arguments that can be advanced to support the forecasting activities of business entities. The underlying argument in favor of forecasting is that managerial decisions depend significantly on a proper evaluation of future trends, as market conditions are constantly changing and require a detailed analysis of future dynamics. The article discusses the importance of using a sound macro-econometric tool, suggesting the idea of conditional forecasting through a Vector Autoregressive (VAR) modeling framework. Under this framework, a macroeconomic model for the Georgian economy is constructed with a few variables believed to shape the business environment. Based on the model, forecasts of macroeconomic variables are produced, and three types of scenarios are analyzed - a baseline and two alternative ones. The results of the study provide confirmatory evidence that the suggested methodology adequately addresses the research problem and can be used widely by business entities in responding to their strategic and operational planning challenges. Given this set-up, it is shown empirically that the Bayesian Vector Autoregressive approach provides reasonable forecasts for the variables of interest.

  4. Parameterization of aquatic ecosystem functioning and its natural variation: Hierarchical Bayesian modelling of plankton food web dynamics

    Science.gov (United States)

    Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede

    2017-10-01

    Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
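The difference between the three formulations is easiest to see in the conjugate-normal special case: global analysis forces one shared value, separate analysis uses each dataset's own estimate, and hierarchical analysis shrinks each dataset's estimate toward the grand mean by an amount governed by the between-dataset variance. A minimal sketch with invented numbers:

```python
def partial_pool(group_means, group_vars, tau2):
    """Hierarchical (partial-pooling) estimates: each dataset's parameter
    is pulled toward the precision-weighted grand mean, with the pull set
    by the ratio of within-dataset variance to between-dataset variance."""
    w = [1.0 / (v + tau2) for v in group_vars]
    grand = sum(wi * m for wi, m in zip(w, group_means)) / sum(w)
    shrunk = [(tau2 / (tau2 + v)) * m + (v / (tau2 + v)) * grand
              for m, v in zip(group_means, group_vars)]
    return grand, shrunk

# growth-rate estimates (1/day) from three plankton datasets; invented values
means, vars_ = [0.9, 1.4, 2.1], [0.10, 0.30, 0.20]
grand, shrunk = partial_pool(means, vars_, tau2=0.25)
print(round(grand, 2), [round(s, 2) for s in shrunk])
```

As tau2 grows this reproduces the separate analysis, and as tau2 shrinks to zero the global analysis; in the full hierarchical treatment tau2 itself carries a prior, which is exactly the hyperparameter prior the abstract flags as an important sensitivity.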

  5. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    Science.gov (United States)

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, so regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of L1-norm and L2-norm constraints, such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but only via the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical; the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm that combines Empirical Bayes with iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods recover complicated source setups more accurately, with more robust estimation of the hyperparameters and better behavior under different sparsity scenarios, than the classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods

  6. Epigenetic change detection and pattern recognition via Bayesian hierarchical hidden Markov models.

    Science.gov (United States)

    Wang, Xinlei; Zang, Miao; Xiao, Guanghua

    2013-06-15

    Epigenetics is the study of changes to the genome that can switch genes on or off and determine which proteins are transcribed without altering the DNA sequence. Recently, epigenetic changes have been linked to the development and progression of diseases such as psychiatric disorders. High-throughput epigenetic experiments have enabled researchers to measure genome-wide epigenetic profiles, yielding data consisting of intensity ratios of immunoprecipitation versus reference samples. The intensity ratios provide a view of genomic regions where protein binding occurs under one experimental condition and further allow us to detect epigenetic alterations through comparison between two different conditions. However, such experiments can be expensive, with only a few replicates available. Moreover, epigenetic data are often spatially correlated with high noise levels. In this paper, we develop a Bayesian hierarchical model, combined with hidden Markov processes with four states for modeling spatial dependence, to detect genomic sites with epigenetic changes from two-sample experiments with paired internal control. One attractive feature of the proposed method is that the four states of the hidden Markov process have well-defined biological meanings and allow us to directly call the change patterns based on the corresponding posterior probabilities; none of the existing methods offers this advantage. In addition, the proposed method offers great power in statistical inference by spatial smoothing (via hidden Markov modeling) and information pooling (via hierarchical modeling). Both simulation studies and real data analysis in a cocaine addiction study illustrate the reliability and success of this method. Copyright © 2012 John Wiley & Sons, Ltd.
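The "direct calling" step the abstract describes amounts to computing, for every genomic position, the posterior probability of each of the four hidden states and reporting the most probable one. A forward-backward sketch with toy emission and transition numbers (the real model learns these quantities within the Bayesian hierarchy):

```python
def hmm_posteriors(lik, trans, init):
    """Per-site posterior state probabilities by forward-backward with
    per-step normalisation. lik[t][s] = P(observation t | state s)."""
    T, S = len(lik), len(init)
    alpha, beta = [None] * T, [None] * T
    a = [init[s] * lik[0][s] for s in range(S)]
    z = sum(a)
    alpha[0] = [x / z for x in a]
    for t in range(1, T):                     # forward pass
        a = [sum(alpha[t - 1][r] * trans[r][s] for r in range(S)) * lik[t][s]
             for s in range(S)]
        z = sum(a)
        alpha[t] = [x / z for x in a]
    beta[T - 1] = [1.0] * S
    for t in range(T - 2, -1, -1):            # backward pass
        b = [sum(trans[s][r] * lik[t + 1][r] * beta[t + 1][r] for r in range(S))
             for s in range(S)]
        z = sum(b)
        beta[t] = [x / z for x in b]
    post = []
    for t in range(T):
        p = [alpha[t][s] * beta[t][s] for s in range(S)]
        z = sum(p)
        post.append([x / z for x in p])
    return post

# Four states with direct biological readings; sticky transitions encode
# spatial correlation along the genome (all numbers are toy values)
states = ["no-change", "gain", "loss", "ambiguous"]
trans = [[0.94, 0.02, 0.02, 0.02],
         [0.05, 0.90, 0.01, 0.04],
         [0.05, 0.01, 0.90, 0.04],
         [0.10, 0.05, 0.05, 0.80]]
init = [0.70, 0.10, 0.10, 0.10]
# emission likelihoods for 5 probes: the middle three favour "gain"
lik = [[0.80, 0.10, 0.05, 0.05],
       [0.10, 0.80, 0.05, 0.05],
       [0.10, 0.80, 0.05, 0.05],
       [0.10, 0.80, 0.05, 0.05],
       [0.80, 0.10, 0.05, 0.05]]
post = hmm_posteriors(lik, trans, init)
calls = [states[max(range(4), key=lambda s: p[s])] for p in post]
print(calls)
```

Because neighbouring probes share information through the sticky transition matrix, an isolated noisy probe is less likely to produce a spurious change call, which is the "spatial smoothing" power the abstract refers to.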

  7. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    Science.gov (United States)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge onto the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally
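The nested-tolerance idea at the heart of ABC-SubSim can be caricatured with a level-by-level ABC sampler: keep the parameter samples whose simulated outputs lie closest to the observation, then repopulate near the survivors. This sketch deliberately omits the actual Subset Simulation machinery (conditional-probability levels, component-wise Metropolis moves, proposal-variance regulation) and uses a toy forward model.

```python
import random

def abc_shrinking(observed, simulate, prior_sample, rng,
                  n=400, levels=4, keep=0.2):
    """ABC with a decreasing sequence of implicit tolerances: at each level
    only the fraction `keep` of samples closest to the observed output
    survives, and new proposals are jittered around the survivors."""
    thetas = [prior_sample(rng) for _ in range(n)]
    for _ in range(levels):
        scored = sorted(thetas, key=lambda t: abs(simulate(t, rng) - observed))
        seeds = scored[:int(keep * n)]
        thetas = [rng.choice(seeds) + rng.gauss(0.0, 0.1) for _ in range(n)]
    return thetas

rng = random.Random(7)
true_theta = 1.5
observed = true_theta ** 2                     # noiseless "measurement"
simulate = lambda t, r: t ** 2 + r.gauss(0.0, 0.05)
prior_sample = lambda r: r.uniform(0.0, 3.0)
post = abc_shrinking(observed, simulate, prior_sample, rng)
print(round(sum(post) / len(post), 2))         # concentrates near 1.5
```

Each level conditions on a tighter data-approximating region, mirroring the nested regions of ABC-SubSim, though without its efficiency guarantees.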

  8. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    Science.gov (United States)

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ˜ 21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were

  9. A Bayesian Approach for Summarizing and Modeling Time-Series Exposure Data with Left Censoring.

    Science.gov (United States)

    Houseman, E Andres; Virji, M Abbas

    2017-08-01

    Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to the limit of detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Simulation studies with the percent of measurements below the LOD ranging from 0 to 50% showed the lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different. Parameter estimates for covariates
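The left-censoring treatment described above, integrating over the left tail below the LOD, can be illustrated with a simple censored-normal likelihood; the data, LOD, and use of plain maximum likelihood (rather than the paper's rjags model) are assumptions for this sketch:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Invented log-scale exposure data: 6 detected values and 3 readings below
# the limit of detection (LOD), recorded only as censored.
lod = 0.0
detected = np.array([0.5, 1.2, 0.3, 0.8, 1.5, 0.9])
n_censored = 3

def neg_log_lik(params):
    mu, log_sd = params
    sd = np.exp(log_sd)
    # Detected values contribute the normal density; censored values
    # contribute the probability mass below the LOD (the left tail).
    ll = norm.logpdf(detected, mu, sd).sum()
    ll += n_censored * norm.logcdf(lod, mu, sd)
    return -ll

fit = minimize(neg_log_lik, x0=[0.5, 0.0], method="Nelder-Mead")
mu_hat, sd_hat = fit.x[0], np.exp(fit.x[1])
```

Because the censored mass sits below the LOD, the fitted mean lands below the naive mean of the detected values, which is the bias the integration corrects.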

  10. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. II. Optimization model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Recent methodological improvements in replacement models comprising multi-level hierarchical Markov processes and Bayesian updating have hardly been implemented in any replacement model, and the aim of this study is to present a sow replacement model that really uses these methodological improvements. The biological model of the replacement model is described in a previous paper, and in this paper the optimization model is described. The model is developed as a prototype for use under practical conditions. The application of the model is demonstrated using data from two commercial Danish sow herds. It is concluded that the Bayesian updating technique and the hierarchical structure decrease the size of the state space dramatically. Since parameter estimates vary considerably among herds, it is concluded that decision support concerning sow replacement only makes sense with parameters...

  11. A hierarchical method for Bayesian inference of rate parameters from shock tube data: Application to the study of the reaction of hydroxyl with 2-methylfuran

    KAUST Repository

    Kim, Daesang; El Gharamti, Iman; Hantouche, Mireille; Elwardani, Ahmed Elsaid; Farooq, Aamir; Bisetti, Fabrizio; Knio, Omar

    2017-01-01

    We developed a novel two-step hierarchical method for the Bayesian inference of the rate parameters of a target reaction from time-resolved concentration measurements in shock tubes. The method was applied to the calibration of the parameters

  12. Hierarchical Bayesian Spatio Temporal Model Comparison on the Earth Trapped Particle Forecast

    International Nuclear Information System (INIS)

    Suparta, Wayan; Gusrizal

    2014-01-01

    We compared two hierarchical Bayesian spatio-temporal (HBST) results, Gaussian process (GP) and autoregressive (AR) models, on the Earth trapped particle forecast. The two models were employed on the South Atlantic Anomaly (SAA) region. Electrons of >30 keV (mep0e1) from National Oceanic and Atmospheric Administration (NOAA) 15-18 satellite data were chosen as the modeled particles. We used two weeks of data to perform the model fitting on a 5°x5° grid of longitude and latitude, and 31 August 2007 was set as the date of forecast. Three statistical validations were performed on the data, i.e. the root mean square error (RMSE), mean absolute percentage error (MAPE) and bias (BIAS). The statistical analysis showed that the GP model performed better than the AR model, with averages of RMSE = 0.38 and 0.63, MAPE = 11.98 and 17.30, and BIAS = 0.32 and 0.24 for GP and AR, respectively. Visual validation of both models against the NOAA maps also confirmed the superiority of the GP model over the AR model. The variances of the log flux minimum were 0.09 and 1.09, and of the log flux maximum 1.15 and 1.35, for the GP and AR models, respectively.
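The three validation statistics used above have standard definitions, which a short sketch makes concrete (the observation and forecast values are made up):

```python
import numpy as np

# Standard definitions of the three validation statistics.
def rmse(obs, pred):
    return np.sqrt(np.mean((pred - obs) ** 2))

def mape(obs, pred):
    # Expressed as a percentage of the observed values.
    return 100.0 * np.mean(np.abs((pred - obs) / obs))

def bias(obs, pred):
    # Mean signed error: positive means the forecast overestimates.
    return np.mean(pred - obs)

obs = np.array([2.0, 4.0, 5.0])
pred = np.array([2.5, 3.5, 5.5])
```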

  13. Comparison of Extreme Precipitation Return Levels using Spatial Bayesian Hierarchical Modeling versus Regional Frequency Analysis

    Science.gov (United States)

    Love, C. A.; Skahill, B. E.; AghaKouchak, A.; Karlovits, G. S.; England, J. F.; Duren, A. M.

    2017-12-01

    We compare gridded extreme precipitation return levels obtained using spatial Bayesian hierarchical modeling (BHM) with their respective counterparts from a traditional regional frequency analysis (RFA) using the same set of extreme precipitation data. Our study area is the 11,478 square mile Willamette River basin (WRB) located in northwestern Oregon, a major tributary of the Columbia River whose 187-mile-long main stem, the Willamette River, flows northward between the Coastal and Cascade Ranges. The WRB contains approximately two-thirds of Oregon's population and 20 of the 25 most populous cities in the state. The U.S. Army Corps of Engineers (USACE) Portland District operates thirteen dams, and extreme precipitation estimates are required to support risk-informed hydrologic analyses as part of the USACE Dam Safety Program. Our intent is to profile for the USACE an alternate methodology to an RFA that was developed in 2008 due to the lack of an official NOAA Atlas 14 update for the state of Oregon. We analyze 24-hour annual precipitation maxima data for the WRB utilizing the spatial BHM R package "spatial.gev.bma", which has been shown to be efficient in developing coherent maps of extreme precipitation by return level. Our BHM modeling analysis involved application of leave-one-out cross validation (LOO-CV), which not only supported model selection but also a comprehensive assessment of location-specific model performance. The LOO-CV results will provide a basis for the BHM-RFA comparison.
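Return levels from a fitted GEV distribution, the quantity compared above, follow directly from its quantile function; the parameter values below are invented, and note that scipy's shape parameter is the negative of the usual ξ:

```python
import numpy as np
from scipy.stats import genextreme

# Invented GEV parameters for annual precipitation maxima; scipy's shape
# parameter c is the negative of the usual xi convention.
mu, sigma, xi = 50.0, 10.0, 0.1
T = 100  # return period in years

# The T-year return level is the (1 - 1/T) quantile of the annual-maximum GEV.
z_T = genextreme.ppf(1.0 - 1.0 / T, c=-xi, loc=mu, scale=sigma)

# Closed form for xi != 0: z_T = mu + (sigma/xi) * ((-ln(1 - 1/T))**(-xi) - 1)
z_T_closed = mu + sigma / xi * ((-np.log(1.0 - 1.0 / T)) ** (-xi) - 1.0)
```

In a spatial BHM the GEV parameters (and hence z_T) vary smoothly over the grid, whereas an RFA assigns them region by region.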

  14. A Bayesian hierarchical model for classification with selection of functional predictors.

    Science.gov (United States)

    Zhu, Hongxiao; Vannucci, Marina; Cox, Dennis D

    2010-06-01

    In functional data classification, functional observations are often contaminated by various systematic effects, such as random batch effects caused by device artifacts, or fixed effects caused by sample-related factors. These effects may lead to classification bias and thus should not be neglected. Another issue of concern is the selection of functions when predictors consist of multiple functions, some of which may be redundant. The above issues arise in a real data application where we use fluorescence spectroscopy to detect cervical precancer. In this article, we propose a Bayesian hierarchical model that takes into account random batch effects and selects effective functions among multiple functional predictors. Fixed effects or predictors in nonfunctional form are also included in the model. The dimension of the functional data is reduced through orthonormal basis expansion or functional principal components. For posterior sampling, we use a hybrid Metropolis-Hastings/Gibbs sampler, which suffers from slow mixing. An evolutionary Monte Carlo algorithm is applied to improve the mixing. Simulation and real data application show that the proposed model provides accurate selection of functional predictors as well as good classification.

  15. Mapping brucellosis increases relative to elk density using hierarchical Bayesian models

    Science.gov (United States)

    Cross, Paul C.; Heisey, Dennis M.; Scurlock, Brandon M.; Edwards, William H.; Brennan, Angela; Ebinger, Michael R.

    2010-01-01

    The relationship between host density and parasite transmission is central to the effectiveness of many disease management strategies. Few studies, however, have empirically estimated this relationship, particularly in large mammals. We applied hierarchical Bayesian methods to a 19-year dataset of over 6400 brucellosis tests of adult female elk (Cervus elaphus) in northwestern Wyoming. Management captures that occurred from January to March were over two times more likely to be seropositive than hunted elk that were killed in September to December, while accounting for site and year effects. Areas with supplemental feeding grounds for elk had higher seroprevalence in 1991 than other regions, but by 2009 many areas distant from the feeding grounds were of comparable seroprevalence. The increases in brucellosis seroprevalence were correlated with elk densities at the elk management unit, or hunt area, scale (mean 2070 km2; range = [95–10237]). The data, however, could not differentiate between linear and non-linear effects of host density. Therefore, control efforts that focus on reducing elk densities at a broad spatial scale were only weakly supported. Additional research on how a few, large groups within a region may be driving disease dynamics is needed for more targeted and effective management interventions. Brucellosis appears to be expanding its range into new regions and elk populations, which is likely to further complicate the United States brucellosis eradication program. This study is an example of how the dynamics of host populations can affect their ability to serve as disease reservoirs.

  17. Does mortality vary between Asian subgroups in New Zealand? An application of hierarchical Bayesian modelling.

    Directory of Open Access Journals (Sweden)

    Santosh Jatrana

    Full Text Available The aim of this paper was to see whether all-cause and cause-specific mortality rates vary between Asian ethnic subgroups, and whether overseas-born Asian subgroup mortality rate ratios vary by nativity and duration of residence. We used hierarchical Bayesian methods to allow for sparse data in the analysis of linked census-mortality data for 25-75-year-old New Zealanders. We found directly standardised posterior all-cause and cardiovascular mortality rates were highest for the Indian ethnic group, significantly so when compared with those of Chinese ethnicity. In contrast, cancer mortality rates were lowest for ethnic Indians. Asian overseas-born subgroups have about 70% of the mortality rate of their New Zealand-born Asian counterparts, a result that showed little variation by Asian subgroup or cause of death. Within the overseas-born population, all-cause mortality rates for migrants living 0-9 years in New Zealand were about 60% of the mortality rate of those living more than 25 years in New Zealand, regardless of ethnicity. The corresponding figure for cardiovascular mortality rates was 50%. However, while Chinese cancer mortality rates increased with duration of residence, Indian and Other Asian cancer mortality rates did not. Future research on the mechanisms by which health worsens with increased time spent in the host country is required to improve understanding of the process, and would assist policy-makers and health planners.

  18. Subjective value of risky foods for individual domestic chicks: a hierarchical Bayesian model.

    Science.gov (United States)

    Kawamori, Ai; Matsushima, Toshiya

    2010-05-01

    For animals to decide which prey to attack, the gain and delay of the food item must be integrated in a value function. However, the subjective value is not obtained by expected profitability when it is accompanied by risk. To estimate the subjective value, we examined choices in a cross-shaped maze with two colored feeders in domestic chicks. When tested by a reversal in food amount or delay, chicks changed choices similarly in both conditions (experiment 1). We therefore examined risk sensitivity for amount and delay (experiment 2) by supplying one feeder with food of fixed profitability and the alternative feeder with high- or low-profitability food at equal probability. Profitability varied in amount (groups 1 and 2 at high and low variance) or in delay (group 3). To find the equilibrium, the amount (groups 1 and 2) or delay (group 3) of the food in the fixed feeder was adjusted in a total of 18 blocks. The Markov chain Monte Carlo method was applied to a hierarchical Bayesian model to estimate the subjective value. Chicks undervalued the variable feeder in group 1 and were indifferent in group 2, but overvalued the variable feeder in group 3 at the population level. Re-examination without the titration procedure (experiment 3) suggested that the subjective value was not absolute for each option. When the delay was varied, the variable option was often given a paradoxically high value depending on the fixed alternative. Therefore, the basic assumption of a uniquely determined value function might be questioned.

  19. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    Science.gov (United States)

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV-infected person seeks a test for HIV during a particular time interval, given that no positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by the HIV infections at different years, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Estimating the Term Structure With a Semiparametric Bayesian Hierarchical Model: An Application to Corporate Bonds

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.

    2011-01-01

    The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566

  1. COBRA: a Bayesian approach to pulsar searching

    Science.gov (United States)

    Lentati, L.; Champion, D. J.; Kramer, M.; Barr, E.; Torne, P.

    2018-02-01

    We introduce COBRA, a GPU-accelerated Bayesian analysis package for performing pulsar searching, that uses candidates from traditional search techniques to set the prior used for the periodicity of the source, and performs a blind search in all remaining parameters. COBRA incorporates models for both isolated and accelerated systems, as well as both Keplerian and relativistic binaries, and exploits pulse phase information to combine search epochs coherently, over time, frequency or across multiple telescopes. We demonstrate the efficacy of our approach in a series of simulations that challenge typical search techniques, including highly aliased signals, and relativistic binary systems. In the most extreme case, we simulate an 8 h observation containing 24 orbits of a pulsar in a binary with a 30 M⊙ companion. Even in this scenario we show that we can build up from an initial low-significance candidate, to fully recovering the signal. We also apply the method to survey data of three pulsars from the globular cluster 47Tuc: PSRs J0024-7204D, J0023-7203J and J0024-7204R. This final pulsar is in a 1.6 h binary, the shortest of any pulsar in 47Tuc, and additionally shows significant scintillation. By allowing the amplitude of the source to vary as a function of time, however, we show that we are able to obtain optimal combinations of such noisy data. We also demonstrate the ability of COBRA to perform high-precision pulsar timing directly on the single pulse survey data, and obtain a 95 per cent upper limit on the eccentricity of PSR J0024-7204R of εb < 0.0007.

  2. A Bayesian approach to particle identification in ALICE

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Among the LHC experiments, ALICE has unique particle identification (PID) capabilities exploiting different types of detectors. During Run 1, a Bayesian approach to PID was developed and intensively tested. It facilitates the combination of information from different sub-systems. The adopted methodology and formalism as well as the performance of the Bayesian PID approach for charged pions, kaons and protons in the central barrel of ALICE will be reviewed. Results are presented with PID performed via measurements of specific energy loss (dE/dx) and time-of-flight using information from the TPC and TOF detectors, respectively. Methods to extract priors from data and to compare PID efficiencies and misidentification probabilities in data and Monte Carlo using high-purity samples of identified particles will be presented. Bayesian PID results were found consistent with previous measurements published by ALICE. The Bayesian PID approach gives a higher signal-to-background ratio and a similar or larger statist...
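The combination rule behind the Bayesian PID approach, multiplying priors by per-detector likelihoods and normalizing, can be sketched as follows (the priors and likelihood values below are invented, not ALICE numbers):

```python
import numpy as np

# Invented priors (species abundances) and per-detector likelihoods for one
# track; none of these numbers are ALICE values.
species = ["pion", "kaon", "proton"]
priors = np.array([0.80, 0.12, 0.08])     # abundances estimated from data
like_dedx = np.array([0.30, 0.90, 0.10])  # P(observed dE/dx | species), TPC
like_tof = np.array([0.40, 0.70, 0.05])   # P(observed flight time | species), TOF

# Information from different sub-systems combines by multiplying likelihoods;
# normalizing gives the posterior species probabilities.
unnorm = priors * like_dedx * like_tof
posterior = unnorm / unnorm.sum()
pid = species[int(np.argmax(posterior))]
```

Extracting the priors from data, as the abstract describes, replaces the invented abundances above with measured ones.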

  3. Identification of transmissivity fields using a Bayesian strategy and perturbative approach

    Science.gov (United States)

    Zanini, Andrea; Tanda, Maria Giovanna; Woodbury, Allan D.

    2017-10-01

    The paper deals with the crucial problem of groundwater parameter estimation, which is the basis for efficient modeling and reclamation activities. A hierarchical Bayesian approach is developed: it uses Akaike's Bayesian Information Criterion in order to estimate the hyperparameters (related to the covariance model chosen) and to quantify the unknown noise variance. The transmissivity identification proceeds in two steps: the first, called empirical Bayesian interpolation, uses Y* (Y = lnT) observations to interpolate Y values on a specified grid; the second, called empirical Bayesian update, improves the previous Y estimate through the addition of hydraulic head observations. The relationship between the head and lnT has been linearized through a perturbative solution of the flow equation. In order to test the proposed approach, synthetic aquifers from the literature have been considered. The aquifers in question contain a variety of boundary conditions (both Dirichlet and Neumann type) and scales of heterogeneity (σY2 = 1.0 and σY2 = 5.3). The estimated transmissivity fields were compared to the true ones. The joint use of Y* and head measurements improves the estimation of Y for both degrees of heterogeneity. Even if the variance of the strong transmissivity field can be considered high for the application of the perturbative approach, the results show the same order of approximation as the non-linear methods proposed in the literature. The procedure allows computation of the posterior probability distribution of the target quantities and quantifies the uncertainty in the model prediction. Bayesian updating has advantages related to both Monte-Carlo (MC) and non-MC approaches. In fact, like MC methods, Bayesian updating allows computing the posterior probability distribution of the target quantities directly, and like non-MC methods it has computational times on the order of seconds.
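Once the head-lnT relation is linearized, the "empirical Bayesian update" step amounts to standard linear-Gaussian conditioning; the grid size, sensitivity matrix H, and noise levels below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Invented 3-cell grid: prior mean/covariance for Y = lnT, a linearized head
# sensitivity matrix H, and two head observations (all values illustrative).
y_prior = np.zeros(3)
P = np.eye(3)                       # prior covariance of Y from interpolation
H = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3]])     # linearized d(head)/d(Y) from perturbation
R = 0.1 * np.eye(2)                 # head measurement noise covariance
h_obs = np.array([0.4, 0.2])

# Standard linear-Gaussian conditioning (Kalman-style update): adding head
# data shifts the mean and shrinks the posterior covariance.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
y_post = y_prior + K @ (h_obs - H @ y_prior)
P_post = P - K @ H @ P
```

The shrunken diagonal of P_post is the quantified reduction in prediction uncertainty the abstract refers to.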

  4. Daniel Goodman’s empirical approach to Bayesian statistics

    Science.gov (United States)

    Gerrodette, Tim; Ward, Eric; Taylor, Rebecca L.; Schwarz, Lisa K.; Eguchi, Tomoharu; Wade, Paul; Himes Boor, Gina

    2016-01-01

    Bayesian statistics, in contrast to classical statistics, uses probability to represent uncertainty about the state of knowledge. Bayesian statistics has often been associated with the idea that knowledge is subjective and that a probability distribution represents a personal degree of belief. Dr. Daniel Goodman considered this viewpoint problematic for issues of public policy. He sought to ground his Bayesian approach in data, and advocated the construction of a prior as an empirical histogram of “similar” cases. In this way, the posterior distribution that results from a Bayesian analysis combined comparable previous data with case-specific current data, using Bayes’ formula. Goodman championed such a data-based approach, but he acknowledged that it was difficult in practice. If based on a true representation of our knowledge and uncertainty, Goodman argued that risk assessment and decision-making could be an exact science, despite the uncertainties. In his view, Bayesian statistics is a critical component of this science because a Bayesian analysis produces the probabilities of future outcomes. Indeed, Goodman maintained that the Bayesian machinery, following the rules of conditional probability, offered the best legitimate inference from available data. We give an example of an informative prior in a recent study of Steller sea lion spatial use patterns in Alaska.
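Goodman's prescription, an empirical histogram of "similar" cases as the prior, updated with case-specific data via Bayes' formula, can be sketched on a discrete grid (the survival rates and trial counts below are invented):

```python
import numpy as np

# Invented rates from "similar" past cases form the empirical-histogram prior.
past_rates = np.array([0.70, 0.75, 0.80, 0.72, 0.78, 0.74, 0.81, 0.69])

grid = np.linspace(0.01, 0.99, 99)
counts, edges = np.histogram(past_rates, bins=10, range=(0.0, 1.0))
prior = counts[np.clip(np.digitize(grid, edges) - 1, 0, 9)].astype(float)
prior /= prior.sum()

# Case-specific current data: 18 successes in 22 trials (binomial likelihood).
k, n = 18, 22
likelihood = grid ** k * (1.0 - grid) ** (n - k)

# Bayes' formula on the grid: posterior is prior x likelihood, normalized.
posterior = prior * likelihood
posterior /= posterior.sum()
post_mean = float((grid * posterior).sum())
```

The posterior is zero wherever no comparable past case fell, which is exactly the data-grounded behavior Goodman advocated.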

  5. A Bayesian Approach to Interactive Retrieval

    Science.gov (United States)

    Tague, Jean M.

    1973-01-01

    A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…

  6. msBayes: Pipeline for testing comparative phylogeographic histories using hierarchical approximate Bayesian computation

    Directory of Open Access Journals (Sweden)

    Takebayashi Naoki

    2007-07-01

    Full Text Available Abstract Background Although testing for simultaneous divergence (vicariance) across different population-pairs that span the same barrier to gene flow is of central importance to evolutionary biology, researchers often equate the gene tree and population/species tree, thereby ignoring stochastic coalescent variance in their conclusions of temporal incongruence. In contrast to other available phylogeographic software packages, msBayes is the only one that analyses data from multiple species/population pairs under a hierarchical model. Results msBayes employs approximate Bayesian computation (ABC) under a hierarchical coalescent model to test for simultaneous divergence (TSD) in multiple co-distributed population-pairs. Simultaneous isolation is tested by estimating three hyper-parameters that characterize the degree of variability in divergence times across co-distributed population pairs while allowing for variation in various within population-pair demographic parameters (sub-parameters) that can affect the coalescent. msBayes is a software package consisting of several C and R programs that are run with a Perl "front-end". Conclusion The method reasonably distinguishes simultaneous isolation from temporal incongruence in the divergence of co-distributed population pairs, even with sparse sampling of individuals. Because the estimation step is decoupled from the simulation step, one can rapidly evaluate different ABC acceptance/rejection conditions and the choice of summary statistics. Given the complex and idiosyncratic nature of testing multi-species biogeographic hypotheses, we envision msBayes as a powerful and flexible tool for tackling a wide array of difficult research questions that use population genetic data from multiple co-distributed species. The msBayes pipeline is available for download at http://msbayes.sourceforge.net/ under an open source license (GNU Public License).
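The ABC core of such a pipeline, draw from the prior, simulate, and accept draws whose summary statistics fall near the observed ones, can be sketched in a few lines (the simulator and tolerance here are stand-ins, not the msBayes coalescent model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented observed summary statistic and a stand-in simulator; msBayes uses
# coalescent simulations and divergence-time hyper-parameters instead.
s_obs = 1.0
eps = 0.05        # acceptance tolerance
n_sims = 20000

theta = rng.uniform(0.0, 2.0, n_sims)           # draws from the prior
s_sim = rng.normal(theta, 0.1)                  # simulated summary statistic
accepted = theta[np.abs(s_sim - s_obs) < eps]   # ABC rejection step

posterior_mean = accepted.mean()
```

Because acceptance is applied after simulation, the same simulated table can be re-screened under different tolerances or summary statistics, the decoupling the abstract highlights.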

  7. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    Science.gov (United States)

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
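The partitioning described above, writing the global DIC as a sum of per-observation contributions, can be sketched for a normal model (the data and stand-in posterior draws below are invented, not the Rwanda analysis):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Invented data for a normal-mean model; the last observation is an outlier.
y = np.array([0.2, -0.1, 0.4, 0.1, 3.0])
mu_draws = rng.normal(y[:4].mean(), 0.2, 1000)   # stand-in posterior draws

# Deviance (-2 log-likelihood) per observation at each posterior draw.
dev = -2.0 * norm.logpdf(y[None, :], mu_draws[:, None], 1.0)

dev_bar = dev.mean(axis=0)                        # posterior mean deviance
dev_at_mean = -2.0 * norm.logpdf(y, mu_draws.mean(), 1.0)
p_d = dev_bar - dev_at_mean                       # local effective parameters
local_dic = dev_bar + p_d                         # local DIC per observation
global_dic = local_dic.sum()                      # sums to the global DIC
```

Observations that fit poorly, like the outlier here, show up with the largest local DIC, which is what makes the partition useful for spotting local misfit and influence.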

  8. TYPE Ia SUPERNOVA COLORS AND EJECTA VELOCITIES: HIERARCHICAL BAYESIAN REGRESSION WITH NON-GAUSSIAN DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Mandel, Kaisey S.; Kirshner, Robert P.; Foley, Ryan J.

    2014-01-01

    We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criteria to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal velocity (NV) supernovae exhibit significant discrepancies for B – V and B – R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B – V and B – R color differences between HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of –0.021 ± 0.006 and –0.030 ± 0.009 mag (10³ km s⁻¹)⁻¹ for intrinsic B – V and B – R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A_V extinction estimates as large as –0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances

  9. Testing adaptive toolbox models: a Bayesian hierarchical approach

    NARCIS (Netherlands)

    Scheibehenne, B.; Rieskamp, J.; Wagenmakers, E.-J.

    2013-01-01

    Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often

  10. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
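    The hierarchical construction can be sketched by summing a shared gene-level GP with replicate-level GPs; the kernel choices and parameter values below are arbitrary assumptions for illustration, not values from the paper:

    ```python
    import numpy as np

    def rbf(t, var, ls):
        """Squared-exponential covariance on a 1-D time grid."""
        d = t[:, None] - t[None, :]
        return var * np.exp(-0.5 * (d / ls) ** 2)

    t = np.linspace(0, 10, 25)          # the construction works equally on irregular grids
    K_gene = rbf(t, 1.0, 2.0)           # shared gene-level process
    K_rep = rbf(t, 0.2, 1.0)            # replicate-specific deviations

    rng = np.random.default_rng(1)
    jitter = 1e-8 * np.eye(t.size)
    g = rng.multivariate_normal(np.zeros(t.size), K_gene + jitter)
    replicates = [g + rng.multivariate_normal(np.zeros(t.size), K_rep + jitter)
                  for _ in range(3)]
    # between two replicates of the same gene the covariance is K_gene;
    # within one replicate it is K_gene + K_rep
    ```

    Because the hierarchy only adds kernels, the joint model remains a GP, which is why it costs only a few extra hyperparameters over a flat GP.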

  11. Bayesian approach to inverse statistical mechanics

    Science.gov (United States)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
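    The simplest case mentioned, estimating a temperature, can be sketched with a grid posterior for a toy system whose partition function is tractable. The energies and counts below are made up for illustration; the paper's sequential Monte Carlo handles the intractable-partition-function case:

    ```python
    import numpy as np

    E = np.array([0.0, 1.0, 2.0])        # energies of a toy 3-state system (arbitrary units)
    counts = np.array([70, 22, 8])       # hypothetical observed occupation counts

    betas = np.linspace(0.01, 5.0, 500)  # grid over inverse temperature
    logZ = np.log(np.exp(-np.outer(betas, E)).sum(axis=1))   # exact partition function
    loglik = -betas * (counts * E).sum() - counts.sum() * logZ
    post = np.exp(loglik - loglik.max())
    post /= post.sum()                   # grid posterior under a flat prior
    beta_hat = betas[post.argmax()]      # posterior mode of the inverse temperature
    ```

    For larger systems log Z is intractable, which is exactly why the article estimates it alongside the interaction parameters.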

  12. Personalized Audio Systems - a Bayesian Approach

    DEFF Research Database (Denmark)

    Nielsen, Jens Brehm; Jensen, Bjørn Sand; Hansen, Toke Jansen

    2013-01-01

    Modern audio systems are typically equipped with several user-adjustable parameters unfamiliar to most users listening to the system. To obtain the best possible setting, the user is forced into multi-parameter optimization with respect to the user's own objective and preference. To address this......, the present paper presents a general interactive framework for personalization of such audio systems. The framework builds on Bayesian Gaussian process regression in which a model of the user's objective function is updated sequentially. The parameter setting to be evaluated in a given trial is selected...

  13. A sow replacement model using Bayesian updating in a three-level hierarchic Markov process. I. Biological model

    DEFF Research Database (Denmark)

    Kristensen, Anders Ringgaard; Søllested, Thomas Algot

    2004-01-01

    Several replacement models have been presented in literature. In other applicational areas like dairy cow replacement, various methodological improvements like hierarchical Markov processes and Bayesian updating have been implemented, but not in sow models. Furthermore, there are methodological improvements like multi-level hierarchical Markov processes with decisions on multiple time scales, efficient methods for parameter estimation at herd level, and standard software that have hardly been implemented at all in any replacement model. The aim of this study is to present a sow replacement model that really uses all these methodological improvements. In this paper, the biological model describing the performance and feed intake of sows is presented. In particular, estimation of herd-specific parameters is emphasized. The optimization model is described in a subsequent paper.

  14. Internal cycling, not external loading, decides the nutrient limitation in eutrophic lake: A dynamic model with temporal Bayesian hierarchical inference.

    Science.gov (United States)

    Wu, Zhen; Liu, Yong; Liang, Zhongyao; Wu, Sifeng; Guo, Huaicheng

    2017-06-01

    Lake eutrophication is associated with excessive anthropogenic nutrients (mainly nitrogen (N) and phosphorus (P)) and unobserved internal nutrient cycling. Despite the advances in understanding the role of external loadings, the contribution of internal nutrient cycling is still an open question. A dynamic mass-balance model was developed to simulate and measure the contributions of internal cycling and external loading. It was based on the temporal Bayesian Hierarchical Framework (BHM), where we explored the seasonal patterns in the dynamics of nutrient cycling processes and the limitation of N and P on phytoplankton growth in hyper-eutrophic Lake Dianchi, China. The dynamic patterns of the five state variables (Chla, TP, ammonia, nitrate and organic N) were simulated based on the model. Five parameters (algae growth rate, sediment exchange rate of N and P, nitrification rate and denitrification rate) were estimated based on BHM. The model provided a good fit to observations. Our model results highlighted the role of internal cycling of N and P in Lake Dianchi. The internal cycling processes contributed more than external loading to the N and P changes in the water column. Further insights into the nutrient limitation analysis indicated that the sediment exchange of P determined the P limitation. Allowing for the contribution of denitrification to N removal, N was the more limiting nutrient most of the time; however, P was the more important nutrient for eutrophication management. For Lake Dianchi, it would not be possible to recover solely by reducing the external watershed nutrient load; the mechanisms of internal cycling should also be considered as an approach to inhibit the release of sediments and to enhance denitrification. Copyright © 2017 Elsevier Ltd. All rights reserved.
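    The internal-versus-external attribution can be illustrated with a toy steady-state phosphorus budget. All rates below are hypothetical and far simpler than the paper's five-state BHM model:

    ```python
    # toy phosphorus budget: dTP/dt = external load + sediment release - first-order loss
    L_ext, k_sed, k_loss = 0.002, 0.010, 0.08   # mg/L/day terms, hypothetical
    TP = 0.15                                   # start at the implied steady state

    for _ in range(365):
        TP += L_ext + k_sed - k_loss * TP       # daily Euler step

    # share of total phosphorus input supplied by internal cycling
    internal_share = k_sed / (k_sed + L_ext)
    ```

    Under these (made-up) rates internal release supplies over 80% of the input, so cutting only the external load barely moves the steady state, the qualitative point the study makes for Lake Dianchi.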

  15. Inferring cetacean population densities from the absolute dynamic topography of the ocean in a hierarchical Bayesian framework.

    Directory of Open Access Journals (Sweden)

    Mario A Pardo

    Full Text Available We inferred the population densities of blue whales (Balaenoptera musculus) and short-beaked common dolphins (Delphinus delphis) in the Northeast Pacific Ocean as functions of the water-column's physical structure by implementing hierarchical models in a Bayesian framework. This approach allowed us to propagate the uncertainty of the field observations into the inference of species-habitat relationships and to generate spatially explicit population density predictions with reduced effects of sampling heterogeneity. Our hypothesis was that the large-scale spatial distributions of these two cetacean species respond primarily to ecological processes resulting from shoaling and outcropping of the pycnocline in regions of wind-forced upwelling and eddy-like circulation. Physically, these processes affect the thermodynamic balance of the water column, decreasing its volume and thus the height of the absolute dynamic topography (ADT). Biologically, they lead to elevated primary productivity and persistent aggregation of low-trophic-level prey. Unlike other remotely sensed variables, ADT provides information about the structure of the entire water column and it is also routinely measured at high spatial-temporal resolution by satellite altimeters with uniform global coverage. Our models provide spatially explicit population density predictions for both species, even in areas where the pycnocline shoals but does not outcrop (e.g. the Costa Rica Dome and the North Equatorial Countercurrent thermocline ridge). Interannual variations in distribution during El Niño anomalies suggest that the population density of both species decreases dramatically in the Equatorial Cold Tongue and the Costa Rica Dome, and that their distributions retract to particular areas that remain productive, such as the more oceanic waters in the central California Current System, the northern Gulf of California, the North Equatorial Countercurrent thermocline ridge, and the more

  16. Global trends and factors associated with the illegal killing of elephants: A hierarchical bayesian analysis of carcass encounter data.

    Science.gov (United States)

    Burn, Robert W; Underwood, Fiona M; Blanc, Julian

    2011-01-01

    Elephant poaching and the ivory trade remain high on the agenda at meetings of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Well-informed debates require robust estimates of trends, the spatial distribution of poaching, and drivers of poaching. We present an analysis of trends and drivers of an indicator of elephant poaching of all elephant species. The site-based monitoring system known as Monitoring the Illegal Killing of Elephants (MIKE), set up by the 10th Conference of the Parties of CITES in 1997, produces carcass encounter data reported mainly by anti-poaching patrols. Data analyzed were site by year totals of 6,337 carcasses from 66 sites in Africa and Asia from 2002-2009. Analysis of these observational data is a serious challenge to traditional statistical methods because of the opportunistic and non-random nature of patrols, and the heterogeneity across sites. Adopting a bayesian hierarchical modeling approach, we used the proportion of carcasses that were illegally killed (PIKE) as a poaching index, to estimate the trend and the effects of site- and country-level factors associated with poaching. Important drivers of illegal killing that emerged at country level were poor governance and low levels of human development, and at site level, forest cover and area of the site in regions where human population density is low. After a drop from 2002, PIKE remained fairly constant from 2003 until 2006, after which it increased until 2008. The results for 2009 indicate a decline. Sites with PIKE ranging from the lowest to the highest were identified. The results of the analysis provide a sound information base for scientific evidence-based decision making in the CITES process.
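    At its simplest, the PIKE index for a single site-year is a binomial proportion, and its posterior under a flat prior is a Beta distribution. The carcass counts below are hypothetical; the paper's hierarchical model additionally pools across sites and years:

    ```python
    from scipy import stats

    illegal, total = 14, 40    # hypothetical carcass counts for one site-year
    # Beta(1, 1) prior on PIKE gives a Beta(1 + illegal, 1 + legal) posterior
    posterior = stats.beta(1 + illegal, 1 + total - illegal)

    pike_mean = posterior.mean()          # posterior mean of PIKE at this site
    p_high = 1 - posterior.cdf(0.5)       # probability PIKE exceeds 0.5
    ```

    The hierarchical version shrinks such site-level estimates toward a common trend, which stabilizes sites with few carcass records.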


  18. A Predictive Likelihood Approach to Bayesian Averaging

    Directory of Open Access Journals (Sweden)

    Tomáš Jeřábek

    2015-01-01

    Full Text Available Multivariate time series forecasting is applied in a wide range of economic activities related to regional competitiveness and is the basis of almost all macroeconomic analysis. In this paper we combine multivariate density forecasts of GDP growth, inflation and real interest rates from four models: two types of Bayesian vector autoregression (BVAR) models, a New Keynesian dynamic stochastic general equilibrium (DSGE) model of a small open economy, and a DSGE-VAR model. The performance of the models is evaluated using historical data for both the domestic economy and the foreign economy, which is represented by countries of the Eurozone. Because the forecast accuracies of the models differ, weighting schemes based on the predictive likelihood, the trace of the past MSE matrix, and model ranks are used to combine the models. The equal-weight scheme is used as a simple combination scheme. The results show that optimally combined densities are comparable to the best individual models.
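    Predictive-likelihood weighting can be sketched in a few lines: each model's weight is proportional to the exponential of its log predictive likelihood over a hold-out window. The four values below are invented for illustration:

    ```python
    import numpy as np

    # hypothetical log predictive likelihoods over a hold-out window
    log_pl = np.array([-102.3, -101.1, -104.8, -100.5])   # BVAR1, BVAR2, DSGE, DSGE-VAR
    w = np.exp(log_pl - log_pl.max())   # subtract the max for numerical stability
    w /= w.sum()                        # normalized predictive-likelihood weights

    equal_w = np.full(4, 0.25)          # the simple equal-weight benchmark
    # the combined forecast density is the w-weighted mixture of the model densities
    ```

    Subtracting the maximum before exponentiating avoids underflow, since raw predictive likelihoods of long samples are astronomically small.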

  19. A Bayesian approach to degradation-based burn-in optimization for display products exhibiting two-phase degradation patterns

    International Nuclear Information System (INIS)

    Yuan, Tao; Bae, Suk Joo; Zhu, Xiaoyan

    2016-01-01

    Motivated by the two-phase degradation phenomena observed in light displays (e.g., plasma display panels (PDPs), organic light emitting diodes (OLEDs)), this study proposes a new degradation-based burn-in testing plan for display products exhibiting two-phase degradation patterns. The primary focus of the burn-in test in this study is to eliminate the initial rapid degradation phase, while the major purpose of traditional burn-in tests is to detect and eliminate early failures from weak units. A hierarchical Bayesian bi-exponential model is used to capture two-phase degradation patterns of the burn-in population. Mission reliability and total cost are introduced as planning criteria. The proposed burn-in approach accounts for unit-to-unit variability within the burn-in population, and uncertainty concerning the model parameters, mainly in the hierarchical Bayesian framework. Available pre-burn-in data is conveniently incorporated into the burn-in decision-making procedure. A practical example of PDP degradation data is used to illustrate the proposed methodology. The proposed method is compared to other approaches such as the maximum likelihood method or the change-point regression. - Highlights: • We propose a degradation-based burn-in test for products with two-phase degradation. • Mission reliability and total cost are used as planning criteria. • The proposed burn-in approach is built within the hierarchical Bayesian framework. • A practical example was used to illustrate the proposed methodology.
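    The two-phase pattern and a simple burn-in rule can be sketched with a bi-exponential path: a fast initial phase superposed on a slow wear-out phase. All rates and the 1% cut-off below are hypothetical, and this deterministic curve stands in for the paper's hierarchical Bayesian model of unit-to-unit variation:

    ```python
    import numpy as np

    # hypothetical bi-exponential degradation: fast initial phase + slow wear-out phase
    a1, b1 = 0.15, 0.9     # weight and rate of the rapid initial degradation
    a2, b2 = 0.85, 0.004   # weight and rate of the slow second phase

    t = np.arange(0.0, 200.0, 0.5)
    lum = a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)
    fast_frac = a1 * np.exp(-b1 * t) / lum
    # burn in until the fast component contributes under 1% of the signal
    burn_in_time = t[np.argmax(fast_frac < 0.01)]
    ```

    In the Bayesian version, the burn-in duration would instead be chosen to optimize mission reliability and total cost over the posterior of the phase parameters.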

  20. Hierarchical virtual screening approaches in small molecule drug discovery.

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y J

    2015-01-01

    Virtual screening has played a significant role in the discovery of small molecule inhibitors of therapeutic targets in the last two decades. Various ligand and structure-based virtual screening approaches are employed to identify small molecule ligands for proteins of interest. These approaches are often combined in either hierarchical or parallel manner to take advantage of the strength and avoid the limitations associated with individual methods. Hierarchical combination of ligand and structure-based virtual screening approaches has achieved noteworthy success in numerous drug discovery campaigns. In hierarchical virtual screening, several filters using ligand and structure-based approaches are sequentially applied to reduce a large screening library to a number small enough for experimental testing. In this review, we focus on different hierarchical virtual screening strategies and their application in the discovery of small molecule modulators of important drug targets. Several virtual screening studies are discussed to demonstrate the successful application of hierarchical virtual screening in small molecule drug discovery. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Analyzing Korean consumers’ latent preferences for electricity generation sources with a hierarchical Bayesian logit model in a discrete choice experiment

    International Nuclear Information System (INIS)

    Byun, Hyunsuk; Lee, Chul-Yong

    2017-01-01

    Generally, consumers use electricity without considering the source the electricity was generated from. Since different energy sources exert varying effects on society, it is necessary to analyze consumers’ latent preference for electricity generation sources. The present study estimates Korean consumers’ marginal utility and an appropriate generation mix is derived using the hierarchical Bayesian logit model in a discrete choice experiment. The results show that consumers consider the danger posed by the source of electricity as the most important factor among the effects of electricity generation sources. Additionally, Korean consumers wish to reduce the contribution of nuclear power from the existing 32% to 11%, and increase that of renewable energy from the existing 4% to 32%. - Highlights: • We derive an electricity mix reflecting Korean consumers’ latent preferences. • We use the discrete choice experiment and hierarchical Bayesian logit model. • The danger posed by the generation source is the most important attribute. • The consumers wish to increase the renewable energy proportion from 4.3% to 32.8%. • Korea's cost-oriented energy supply policy and consumers’ preference differ markedly.
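    The logit core of such a discrete choice model is easy to sketch: a respondent's choice probabilities follow from attribute part-worths. The attributes (danger, renewable share, supply stability) and all coefficient values below are invented; the hierarchical model would additionally give each respondent their own part-worth vector drawn from a population distribution:

    ```python
    import numpy as np

    # hypothetical part-worths for one respondent: [danger, renewable share, stability]
    beta = np.array([-1.2, 0.8, 0.3])
    # three candidate generation mixes described by the same attributes
    mixes = np.array([[0.9, 0.04, 1.0],    # nuclear-heavy
                      [0.3, 0.32, 1.3],    # renewable-heavy
                      [0.6, 0.15, 1.1]])   # intermediate

    u = mixes @ beta                       # deterministic utilities
    p = np.exp(u - u.max())                # softmax with max-subtraction for stability
    p /= p.sum()                           # multinomial logit choice probabilities
    ```

    With a strongly negative danger part-worth, the renewable-heavy mix receives the highest choice probability, mirroring the paper's headline finding.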

  2. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    Science.gov (United States)

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
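    The spatial prediction step of such a model reduces to Gaussian process regression (kriging) on de-trended residuals. The covariance form, length scale, and nugget below are assumptions for illustration, not the paper's fitted values:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    sites = rng.uniform(0.0, 10.0, (30, 2))            # monitor locations, arbitrary units
    d = np.linalg.norm(sites[:, None] - sites[None, :], axis=-1)
    K = np.exp(-d / 2.0) + 0.1 * np.eye(30)            # exponential covariance + nugget
    resid = rng.multivariate_normal(np.zeros(30), K)   # simulated de-trended residuals

    new = np.array([5.0, 5.0])                         # an unmonitored location
    k_star = np.exp(-np.linalg.norm(sites - new, axis=1) / 2.0)
    pred = k_star @ np.linalg.solve(K, resid)          # kriging mean at the new site
    pred_var = 1.1 - k_star @ np.linalg.solve(K, k_star)   # kriging variance
    ```

    In the full hierarchical model the mean surface (AOD and other covariates) and the covariance hyperparameters are estimated jointly rather than fixed as here.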

  3. Probabilistic Damage Characterization Using the Computationally-Efficient Bayesian Approach

    Science.gov (United States)

    Warner, James E.; Hochhalter, Jacob D.

    2016-01-01

    This work presents a computationally-efficient approach for damage determination that quantifies uncertainty in the provided diagnosis. Given strain sensor data that are polluted with measurement errors, Bayesian inference is used to estimate the location, size, and orientation of damage. This approach uses Bayes' Theorem to combine any prior knowledge an analyst may have about the nature of the damage with information provided implicitly by the strain sensor data to form a posterior probability distribution over possible damage states. The unknown damage parameters are then estimated based on samples drawn numerically from this distribution using a Markov Chain Monte Carlo (MCMC) sampling algorithm. Several modifications are made to the traditional Bayesian inference approach to provide significant computational speedup. First, an efficient surrogate model is constructed using sparse grid interpolation to replace a costly finite element model that must otherwise be evaluated for each sample drawn with MCMC. Next, the standard Bayesian posterior distribution is modified using a weighted likelihood formulation, which is shown to improve the convergence of the sampling process. Finally, a robust MCMC algorithm, Delayed Rejection Adaptive Metropolis (DRAM), is adopted to sample the probability distribution more efficiently. Numerical examples demonstrate that the proposed framework effectively provides damage estimates with uncertainty quantification and can yield orders of magnitude speedup over standard Bayesian approaches.
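    The surrogate-plus-MCMC idea can be sketched with a one-parameter toy problem: a cheap analytic function stands in for the sparse-grid surrogate of the finite element model, and a plain random-walk Metropolis sampler (not DRAM) draws from the resulting posterior. Every function and number here is a hypothetical stand-in:

    ```python
    import numpy as np

    def surrogate_strain(size):
        # stand-in for a sparse-grid surrogate of the finite element model
        return 1.0 + 0.5 * size ** 2

    obs, sigma = 1.18, 0.05          # simulated strain reading and its noise level

    def log_post(size):              # Gaussian likelihood, flat prior on [0, 2]
        if not 0.0 <= size <= 2.0:
            return -np.inf
        return -0.5 * ((surrogate_strain(size) - obs) / sigma) ** 2

    rng = np.random.default_rng(3)
    x, lp, chain = 0.5, log_post(0.5), []
    for _ in range(5000):            # random-walk Metropolis
        xp = x + 0.1 * rng.normal()
        lpp = log_post(xp)
        if np.log(rng.uniform()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)

    size_est = float(np.mean(chain[1000:]))   # true damage size in this setup is 0.6
    ```

    The speedup argument is visible even here: each posterior evaluation costs a function call instead of a finite element solve, so tens of thousands of MCMC steps become affordable.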

  4. Kaolin Quality Prediction from Samples: A Bayesian Network Approach

    International Nuclear Information System (INIS)

    Rivas, T.; Taboada, J.; Ordonez, C.; Matias, J. M.

    2009-01-01

    We describe the results of an expert system applied to the evaluation of samples of kaolin for industrial use in paper or ceramic manufacture. Different machine learning techniques - classification trees, support vector machines and Bayesian networks - were applied with the aim of evaluating and comparing their interpretability and prediction capacities. The predictive capacity of these models for the samples analyzed was highly satisfactory, both for ceramic quality and paper quality. However, Bayesian networks generally proved to be the most useful technique for our study, as this approach combines good predictive capacity with excellent interpretability of the kaolin quality structure: it graphically represents relationships between variables and facilitates what-if analyses.
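    The what-if reasoning a Bayesian network supports can be sketched with a minimal discrete network evaluated by enumeration. The structure, variable names, and conditional probability tables below are entirely hypothetical, not the paper's learned network:

    ```python
    # minimal discrete Bayesian network: quality depends on two sample properties
    p_bright = {'high': 0.6, 'low': 0.4}   # brightness of the kaolin sample
    p_visc = {'ok': 0.7, 'poor': 0.3}      # viscosity
    p_good = {('high', 'ok'): 0.9, ('high', 'poor'): 0.5,
              ('low', 'ok'): 0.4, ('low', 'poor'): 0.1}  # P(quality=good | parents)

    # marginal P(quality=good) by enumeration over the parents
    p_quality = sum(p_bright[b] * p_visc[v] * p_good[(b, v)]
                    for b in p_bright for v in p_visc)
    # what-if query: condition on brightness being high
    p_quality_high = sum(p_visc[v] * p_good[('high', v)] for v in p_visc)
    ```

    This enumeration is exactly the kind of query that makes the network interpretable: changing one node's state and re-reading the quality probability is a what-if analysis.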

  5. Analysis of COSIMA spectra: Bayesian approach

    Directory of Open Access Journals (Sweden)

    H. J. Lehto

    2015-06-01

    secondary ion mass spectrometer (TOF-SIMS) spectra. The method is applied to the COmetary Secondary Ion Mass Analyzer (COSIMA) TOF-SIMS mass spectra where the analysis can be broken into subgroups of lines close to integer mass values. The effects of the instrumental dead time are discussed in a new way. The method finds the joint probability density functions of measured line parameters (number of lines, and their widths, peak amplitudes, integrated amplitudes and positions). In the case of two or more lines, these distributions can take complex forms. The derived line parameters can be used to further calibrate the mass scaling of TOF-SIMS and to feed the results into other analysis methods such as multivariate analyses of spectra. We intend to use the method, first as a comprehensive tool to perform quantitative analysis of spectra, and second as a fast tool for studying interesting targets for obtaining additional TOF-SIMS measurements of the sample, a property unique to COSIMA. Finally, we point out that the Bayesian method can be thought of as a means to solve inverse problems but with forward calculations only, with no iterative corrections or other manipulation of the observed data.
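    A simplified version of the line-parameter inference is a grid posterior for a single Gaussian peak near an integer mass value. The peak shape, noise model, and all numbers below are assumptions for illustration; the actual method infers joint densities over line counts, widths, amplitudes, and positions:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    m = np.linspace(27.8, 28.2, 80)            # mass axis near nominal mass 28 (toy)
    true_pos, width, amp = 28.03, 0.02, 50.0
    signal = amp * np.exp(-0.5 * ((m - true_pos) / width) ** 2)
    counts = rng.poisson(signal + 1.0)         # Poisson counts with a flat background

    # grid posterior over the line position only (width and amplitude held fixed)
    pos_grid = np.linspace(27.9, 28.1, 201)
    model = amp * np.exp(-0.5 * ((m[None, :] - pos_grid[:, None]) / width) ** 2) + 1.0
    loglik = (counts * np.log(model) - model).sum(axis=1)   # Poisson log-likelihood
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    pos_hat = pos_grid[post.argmax()]
    ```

    The full problem is harder because the number of lines is itself unknown, which is what makes the joint posterior multimodal and complex in shape.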

  6. Evaluating impacts using a BACI design, ratios, and a Bayesian approach with a focus on restoration.

    Science.gov (United States)

    Conner, Mary M; Saunders, W Carl; Bouwes, Nicolaas; Jordan, Chris

    2015-10-01

    Before-after-control-impact (BACI) designs are an effective method to evaluate natural and human-induced perturbations on ecological variables when treatment sites cannot be randomly chosen. While effect sizes of interest can be tested with frequentist methods, using Bayesian Markov chain Monte Carlo (MCMC) sampling methods, probabilities of effect sizes, such as a ≥20% increase in density after restoration, can be directly estimated. Although BACI and Bayesian methods are used widely for assessing natural and human-induced impacts for field experiments, the application of hierarchical Bayesian modeling with MCMC sampling to BACI designs is less common. Here, we combine these approaches and extend the typical presentation of results with an easy to interpret ratio, which provides an answer to the main study question: "How much impact did a management action or natural perturbation have?" As an example of this approach, we evaluate the impact of a restoration project, which implemented beaver dam analogs, on survival and density of juvenile steelhead. Results indicated the probabilities of a ≥30% increase were high for survival and density after the dams were installed, 0.88 and 0.99, respectively, while probabilities for a higher increase of ≥50% were variable, 0.17 and 0.82, respectively. This approach demonstrates a useful extension of Bayesian methods that can easily be generalized to other study designs from simple (e.g., single factor ANOVA, paired t test) to more complicated block designs (e.g., crossover, split-plot). This approach is valuable for estimating the probabilities of restoration impacts or other management actions.
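    The BACI ratio and its exceedance probability are trivial to compute once posterior draws exist. Below, normal draws stand in for MCMC output, and all densities are invented; in practice the four vectors would come from the fitted hierarchical model:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 10000   # stand-in posterior draws (a real analysis would use MCMC output)
    before_impact = rng.normal(1.00, 0.10, n)    # juvenile density, impact site, before
    after_impact = rng.normal(1.45, 0.12, n)     # impact site, after restoration
    before_control = rng.normal(0.95, 0.10, n)
    after_control = rng.normal(1.00, 0.11, n)

    # BACI ratio of ratios: treatment change relative to control change
    ratio = (after_impact / before_impact) / (after_control / before_control)
    p_increase_20 = np.mean(ratio >= 1.2)   # P(>=20% increase attributable to treatment)
    ```

    Because the quantity of interest is a function of the draws, any effect-size threshold (20%, 30%, 50%) yields a probability with one extra line.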

  7. A hierarchical method for Bayesian inference of rate parameters from shock tube data: Application to the study of the reaction of hydroxyl with 2-methylfuran

    KAUST Repository

    Kim, Daesang

    2017-06-22

    We developed a novel two-step hierarchical method for the Bayesian inference of the rate parameters of a target reaction from time-resolved concentration measurements in shock tubes. The method was applied to the calibration of the parameters of the reaction of hydroxyl with 2-methylfuran, which is studied experimentally via absorption measurements of the OH radical's concentration following shock-heating. In the first step of the approach, each shock tube experiment is treated independently to infer the posterior distribution of the rate constant and error hyper-parameter that best explains the OH signal. In the second step, these posterior distributions are sampled to calibrate the parameters appearing in the Arrhenius reaction model for the rate constant. Furthermore, the second step is modified and repeated in order to explore alternative rate constant models and to assess the effect of uncertainties in the reflected shock's temperature. Comparisons of the estimates obtained via the proposed methodology against the common least squares approach are presented. The relative merits of the novel Bayesian framework are highlighted, especially with respect to the opportunity to utilize the posterior distributions of the parameters in future uncertainty quantification studies.
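    The second calibration step can be sketched in miniature: per-experiment rate constants (here simulated with lognormal scatter) are fit to the Arrhenius form k = A exp(-Ea/(R T)) via a linear fit of ln k against 1/T. Temperatures, A, and Ea below are hypothetical:

    ```python
    import numpy as np

    R = 8.314                                         # J/(mol K)
    T = np.array([900.0, 1000.0, 1100.0, 1200.0])     # reflected-shock temperatures (toy)
    A_true, Ea_true = 1e12, 120e3                     # hypothetical Arrhenius parameters

    rng = np.random.default_rng(5)
    # simulated per-experiment rate constants with 5% lognormal scatter
    k = A_true * np.exp(-Ea_true / (R * T)) * np.exp(rng.normal(0.0, 0.05, T.size))

    # fit ln k = ln A - Ea/(R T) across the experiments
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    Ea_hat, A_hat = -slope * R, np.exp(intercept)
    ```

    The paper's version replaces each single k value with a full step-one posterior, so the Arrhenius parameters inherit the per-experiment uncertainty instead of a point estimate.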

  8. A computational Bayesian approach to dependency assessment in system reliability

    International Nuclear Information System (INIS)

    Yontay, Petek; Pan, Rong

    2016-01-01

    Due to the increasing complexity of engineered products, it is of great importance to develop a tool to assess reliability dependencies among components and systems under the uncertainty of system reliability structure. In this paper, a Bayesian network approach is proposed for evaluating the conditional probability of failure within a complex system, using a multilevel system configuration. Coupling with Bayesian inference, the posterior distributions of these conditional probabilities can be estimated by combining failure information and expert opinions at both system and component levels. Three data scenarios are considered in this study, and they demonstrate that, with the quantification of the stochastic relationship of reliability within a system, the dependency structure in system reliability can be gradually revealed by the data collected at different system levels. - Highlights: • A Bayesian network representation of system reliability is presented. • Bayesian inference methods for assessing dependencies in system reliability are developed. • Complete and incomplete data scenarios are discussed. • The proposed approach is able to integrate reliability information from multiple sources at multiple levels of the system.

  9. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    Science.gov (United States)

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
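    A weakly informative Bayesian logistic regression can be sketched by maximizing the log posterior (log likelihood plus Gaussian log prior), i.e. a MAP fit. The covariates, effect sizes, and prior scale below are invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates
    beta_true = np.array([-1.0, 0.8, -0.5])                     # made-up effects
    y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ beta_true))

    def neg_log_post(b, tau=2.5):   # weakly informative N(0, tau^2) priors
        z = X @ b
        # log(1 + e^z) computed stably via logaddexp
        return np.sum(np.logaddexp(0.0, z) - y * z) + 0.5 * np.sum((b / tau) ** 2)

    beta_map = minimize(neg_log_post, np.zeros(3)).x
    ```

    The Gaussian prior acts as ridge-style shrinkage, which is what stabilizes the coefficients relative to plain maximum likelihood when predictors are sparse or separable.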

  10. Accurate phenotyping: Reconciling approaches through Bayesian model averaging.

    Directory of Open Access Journals (Sweden)

    Carla Chia-Ming Chen

    Full Text Available Genetic research into complex diseases is frequently hindered by a lack of clear biomarkers for phenotype ascertainment. Phenotypes for such diseases are often identified on the basis of clinically defined criteria; however such criteria may not be suitable for understanding the genetic composition of the diseases. Various statistical approaches have been proposed for phenotype definition; however our previous studies have shown that differences in phenotypes estimated using different approaches have substantial impact on subsequent analyses. Instead of obtaining results based upon a single model, we propose a new method, using Bayesian model averaging to overcome problems associated with phenotype definition. Although Bayesian model averaging has been used in other fields of research, this is the first study that uses Bayesian model averaging to reconcile phenotypes obtained using multiple models. We illustrate the new method by applying it to simulated genetic and phenotypic data for Kofendred personality disorder, an imaginary disease with several sub-types. Two separate statistical methods were used to identify clusters of individuals with distinct phenotypes: latent class analysis and grade of membership. Bayesian model averaging was then used to combine the two clusterings for the purpose of subsequent linkage analyses. We found that causative genetic loci for the disease produced higher LOD scores using model averaging than under either individual model separately. We attribute this improvement to consolidation of the cores of phenotype clusters identified using each individual method.
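    The reconciliation step can be sketched as a weighted average of the class-membership probabilities produced by the two clustering methods. The membership matrices and model weights below are invented; in practice the weights would be posterior model probabilities:

    ```python
    import numpy as np

    # hypothetical class-membership probabilities for four subjects under each method
    p_lca = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.1, 0.9]])  # latent class
    p_gom = np.array([[0.8, 0.2], [0.4, 0.6], [0.3, 0.7], [0.2, 0.8]])  # grade of membership
    w = np.array([0.55, 0.45])            # model weights, e.g. posterior model probabilities

    p_avg = w[0] * p_lca + w[1] * p_gom   # model-averaged memberships
    phenotype = p_avg.argmax(axis=1)      # consensus phenotype fed into linkage analysis
    ```

    Note the third subject: the two methods disagree on its most likely class, and the averaged probabilities resolve the conflict rather than forcing a choice between models.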

  11. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin

    2015-01-01

    In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models ... error, and robustness in low and medium signal-to-noise ratio regimes.

  12. A hierarchical approach for simulating northern forest dynamics

    Science.gov (United States)

    Don C. Bragg; David W. Roberts; Thomas R. Crow

    2004-01-01

    Complexity in ecological systems has challenged forest simulation modelers for years, resulting in a number of approaches with varying degrees of success. Arguments in favor of hierarchical modeling are made, especially for considering a complex environmental issue like widespread eastern hemlock regeneration failure. We present the philosophy and basic framework for...

  13. Prediction of community prevalence of human onchocerciasis in the Amazonian onchocerciasis focus: Bayesian approach.

    Science.gov (United States)

    Carabin, Hélène; Escalona, Marisela; Marshall, Clare; Vivas-Martínez, Sarai; Botto, Carlos; Joseph, Lawrence; Basáñez, María-Gloria

    2003-01-01

    To develop a Bayesian hierarchical model for human onchocerciasis with which to explore the factors that influence prevalence of microfilariae in the Amazonian focus of onchocerciasis and predict the probability of any community being at least mesoendemic (>20% prevalence of microfilariae), and thus in need of priority ivermectin treatment. Models were developed with data from 732 individuals aged ≥15 years who lived in 29 Yanomami communities along four rivers of the south Venezuelan Orinoco basin. The models' abilities to predict prevalences of microfilariae in communities were compared. The deviance information criterion, Bayesian P-values, and residual values were used to select the best model with an approximate cross-validation procedure. A three-level model that acknowledged clustering of infection within communities performed best, with host age and sex included at the individual level, a river-dependent altitude effect at the community level, and additional clustering of communities along rivers. This model correctly classified 25/29 (86%) villages with respect to their need for priority ivermectin treatment. Bayesian methods are a flexible and useful approach for public health research and control planning. Our model acknowledges the clustering of infection within communities, allows investigation of links between individual- or community-specific characteristics and infection, incorporates additional uncertainty due to missing covariate data, and informs policy decisions by predicting the probability that a new community is at least mesoendemic.

  14. Bayesian Poisson hierarchical models for crash data analysis: Investigating the impact of model choice on site-specific predictions.

    Science.gov (United States)

    Khazraee, S Hadi; Johnson, Valen; Lord, Dominique

    2018-08-01

    The Poisson-gamma (PG) and Poisson-lognormal (PLN) regression models are among the most popular means for motor vehicle crash data analysis. Both models belong to the Poisson-hierarchical family of models. While numerous studies have compared the overall performance of alternative Bayesian Poisson-hierarchical models, little research has addressed the impact of model choice on the expected crash frequency prediction at individual sites. This paper sought to examine whether there are any trends among candidate models' predictions, e.g., that an alternative model's prediction for sites with certain conditions tends to be higher (or lower) than that from another model. In addition to the PG and PLN models, this research formulated a new member of the Poisson-hierarchical family of models: the Poisson-inverse gamma (PIGam). Three field datasets (from Texas, Michigan and Indiana) covering a wide range of over-dispersion characteristics were selected for analysis. This study demonstrated that the model choice can be critical when the calibrated models are used for prediction at new sites, especially when the data are highly over-dispersed. For all three datasets, the PIGam model would predict higher expected crash frequencies than would the PLN and PG models, in order, indicating a clear link between the models' predictions and the shape of their mixing distributions (i.e., gamma, lognormal, and inverse gamma, respectively). The thicker tail of the PIGam and PLN models (in order) may provide an advantage when the data are highly over-dispersed. The analysis results also illustrated a major deficiency of the Deviance Information Criterion (DIC) in comparing the goodness-of-fit of hierarchical models; models with drastically different sets of coefficients (and thus predictions for new sites) may yield similar DIC values, because the DIC only accounts for the parameters in the lowest (observation) level of the hierarchy and ignores the higher levels (regression coefficients).
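The Poisson-hierarchical construction the abstract describes can be sketched by simulation (parameters here are hypothetical, chosen only so all three mixing distributions share a mean of 5): draw a site's rate from a gamma, lognormal, or inverse-gamma mixing distribution, then draw a Poisson count, and observe the over-dispersion each mixture induces:

```python
import numpy as np

rng = np.random.default_rng(7)
n, mean_rate = 200_000, 5.0

# Three mixing distributions with the same mean (5.0) but different tails.
lam_pg = rng.gamma(shape=2.0, scale=mean_rate / 2.0, size=n)        # gamma
sigma = 0.8
lam_pln = rng.lognormal(mean=np.log(mean_rate) - sigma**2 / 2,
                        sigma=sigma, size=n)                        # lognormal
a = 3.0
lam_pig = (a - 1.0) * mean_rate / rng.gamma(shape=a, size=n)        # inverse gamma

# Marginal crash counts under each member of the Poisson-hierarchical family.
counts = {name: rng.poisson(lam)
          for name, lam in [("PG", lam_pg), ("PLN", lam_pln), ("PIGam", lam_pig)]}

for name, c in counts.items():
    # A pure Poisson with mean 5 would have variance 5; mixing inflates it.
    print(name, round(c.mean(), 2), round(c.var(), 2))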

  15. Comparison of Bayesian and frequentist approaches in modelling risk of preterm birth near the Sydney Tar Ponds, Nova Scotia, Canada

    Directory of Open Access Journals (Sweden)

    Canty Angelo

    2007-09-01

    Full Text Available Abstract Background This study compares the Bayesian and frequentist (non-Bayesian) approaches in the modelling of the association between the risk of preterm birth and maternal proximity to hazardous waste and pollution from the Sydney Tar Pond site in Nova Scotia, Canada. Methods The data include 1604 observed cases of preterm birth out of a total population of 17559 at risk of preterm birth from 144 enumeration districts in the Cape Breton Regional Municipality. Other covariates include the distance from the Tar Pond; the unemployment rate; the proportion of persons who are separated, divorced or widowed; the proportion of persons who have no high school diploma; the proportion of persons living alone; the proportion of single parent families and average income. Bayesian hierarchical Poisson regression, quasi-likelihood Poisson regression and weighted linear regression models were fitted to the data. Results The results of the analyses were compared together with their limitations. Conclusion The results of the weighted linear regression and the quasi-likelihood Poisson regression agree with the result from the Bayesian hierarchical modelling, which incorporates the spatial effects.

  16. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster, more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
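The "detect as soon as statistically justified" idea can be illustrated with a deliberately simplified sketch (a two-hypothesis Poisson count model, not the authors' physics-based processor; all rates and thresholds are hypothetical): each new count updates the posterior log-odds for "source present" versus "background only", and the processor stops the moment the odds cross a threshold rather than waiting out a fixed counting interval:

```python
import math

def sequential_detector(counts, bg_rate, src_rate,
                        prior_odds=1.0, odds_threshold=99.0):
    """Sequential Bayesian test between two Poisson count hypotheses.
    Returns the 1-indexed observation at which the posterior odds for
    'source present' first exceed the threshold, or None if they never do."""
    log_odds = math.log(prior_odds)
    llr_slope = math.log(src_rate / bg_rate)    # per-count log-likelihood term
    for i, x in enumerate(counts, start=1):
        # Poisson log-likelihood ratio for one observed count x.
        log_odds += x * llr_slope - (src_rate - bg_rate)
        if log_odds > math.log(odds_threshold):
            return i                            # detect as soon as justified
    return None

# Background of 1 count per interval vs. a source raising it to 3:
# the decision arrives after three intervals, not a fixed dwell time.
print(sequential_detector([4, 3, 5, 2, 6], bg_rate=1.0, src_rate=3.0))  # → 3
```

The same loop run on background-like data never crosses the threshold, which is the behaviour a portal system needs to keep false alarms controlled.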

  17. A Dynamic Bayesian Approach to Computational Laban Shape Quality Analysis

    Directory of Open Access Journals (Sweden)

    Dilip Swaminathan

    2009-01-01

    kinesiology. LMA (especially Effort/Shape) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. As we argue, a complex understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We thus introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and, as we discuss, can be readily adapted for low-cost video. It has delivered excellent performance in preliminary studies comprising improvisatory movements. Our approach has been incorporated in Response, a mixed-reality environment where users interact via natural, full-body human movement and enhance their bodily-kinesthetic awareness through immersive sound and light feedback, with applications to kinesiology training, Parkinson's patient rehabilitation, interactive dance, and many other areas.

  18. Application of Bayesian approach to estimate average level spacing

    International Nuclear Information System (INIS)

    Huang Zhongfu; Zhao Zhixiang

    1991-01-01

    A method is given to estimate the average level spacing from a set of resolved resonance parameters using a Bayesian approach. Using the information contained in the distributions of both level spacings and neutron widths, the levels missing from a measured sample can be corrected for more precisely, so that a better estimate of the average level spacing can be obtained by this method. The calculation for s-wave resonances has been done and a comparison with other work was carried out

  19. Hierarchical Bayesian modelling of mobility metrics for hazard model input calibration

    Science.gov (United States)

    Calder, Eliza; Ogburn, Sarah; Spiller, Elaine; Rutarindwa, Regis; Berger, Jim

    2015-04-01

    In this work we present a method to constrain flow mobility input parameters for pyroclastic flow models using hierarchical Bayes modeling of standard mobility metrics such as H/L and flow volume. The advantage of hierarchical modeling is that it can leverage the information in a global dataset for a particular mobility metric in order to reduce the uncertainty in modeling of an individual volcano, especially important where individual volcanoes have only sparse datasets. We use compiled pyroclastic flow runout data from Colima, Merapi, Soufriere Hills, Unzen and Semeru volcanoes, presented in an open-source database FlowDat (https://vhub.org/groups/massflowdatabase). While the exact relationship between flow volume and friction varies somewhat between volcanoes, dome collapse flows originating from the same volcano exhibit similar mobility relationships. Instead of fitting separate regression models for each volcano dataset, we use a variation of the hierarchical linear model (Kass and Steffey, 1989). The model has a hierarchical structure with two levels: all dome collapse flows, and dome collapse flows at specific volcanoes. The hierarchical model allows us to assume that the flows at specific volcanoes share a common distribution of regression slopes, then solves for that distribution. We present comparisons of the 95% confidence intervals on the individual regression lines for the data set from each volcano as well as those obtained from the hierarchical model. The results clearly demonstrate the advantage of considering global datasets using this technique. The technique developed is demonstrated here for mobility metrics, but can be applied to many other global datasets of volcanic parameters. In particular, such methods can provide a means to better constrain parameters for volcanoes for which we only have sparse data, a ubiquitous problem in volcanology.
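The key benefit described, borrowing strength from the global dataset for sparsely observed volcanoes, comes from partial pooling. A minimal empirical sketch in the spirit of Kass and Steffey (not the authors' model; slopes, standard errors, and the between-volcano spread are hypothetical) shrinks each volcano's slope toward a precision-weighted common mean, with poorly constrained volcanoes shrunk the most:

```python
import numpy as np

def partial_pool(estimates, ses, tau):
    """Shrink per-group regression slopes toward their common mean.
    Each group's shrinkage depends on its standard error relative to the
    between-group spread tau (fixed here; a full hierarchical model would
    estimate tau from the data)."""
    estimates, ses = np.asarray(estimates), np.asarray(ses)
    mu = np.average(estimates, weights=1.0 / (ses**2 + tau**2))
    b = ses**2 / (ses**2 + tau**2)      # shrinkage factor per group
    return (1 - b) * estimates + b * mu

# Hypothetical mobility-metric slopes (e.g. log runout vs. log volume)
# for five volcanoes; two with sparse data have large standard errors.
slopes = [0.42, 0.55, 0.30, 0.61, 0.48]
ses    = [0.05, 0.04, 0.20, 0.25, 0.06]
pooled = partial_pool(slopes, ses, tau=0.08)
print(np.round(pooled, 3))
```

Volcanoes with tight data keep essentially their own slope, while sparse-data volcanoes are pulled toward the global relationship, which is exactly how the hierarchical model narrows their confidence intervals.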

  20. Linking bovine tuberculosis on cattle farms to white-tailed deer and environmental variables using Bayesian hierarchical analysis.

    Directory of Open Access Journals (Sweden)

    W David Walter

    Full Text Available Bovine tuberculosis is a bacterial disease caused by Mycobacterium bovis in livestock and wildlife with hosts that include Eurasian badgers (Meles meles), brushtail possum (Trichosurus vulpecula), and white-tailed deer (Odocoileus virginianus). Risk-assessment efforts in Michigan have been initiated on farms to minimize interactions of cattle with wildlife hosts but research on M. bovis on cattle farms has not investigated the spatial context of disease epidemiology. To incorporate spatially explicit data, initial likelihood of infection probabilities for cattle farms tested for M. bovis, prevalence of M. bovis in white-tailed deer, deer density, and environmental variables for each farm were modeled in a Bayesian hierarchical framework. We used geo-referenced locations of 762 cattle farms that have been tested for M. bovis, white-tailed deer prevalence, and several environmental variables that may lead to long-term survival and viability of M. bovis on farms and surrounding habitats (i.e., soil type, habitat type). Bayesian hierarchical analyses identified deer prevalence and proportion of sandy soil within our sampling grid as the most supported model. Analysis of cattle farms tested for M. bovis indicated that every 1% increase in sandy soil was associated with a 4% increase in the odds of infection. Our analysis revealed that the influence of prevalence of M. bovis in white-tailed deer was still a concern even after considerable efforts to prevent cattle interactions with white-tailed deer through on-farm mitigation and reduction in the deer population. Cattle farms test positive for M. bovis annually in our study area, suggesting that an environmental source either on farms or in the surrounding landscape may be contributing to new infections or re-infections with M. bovis. Our research provides an initial assessment of potential environmental factors that could be incorporated into additional modeling efforts as more knowledge of deer herd

  1. QCD sum rules in a Bayesian approach

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2011-01-01

    A novel technique is developed, in which the Maximum Entropy Method is used to analyze QCD sum rules. The main advantage of this approach lies in its ability to directly generate the spectral function of a given operator. This is done without the need of making an assumption about the specific functional form of the spectral function, such as in the 'pole + continuum' ansatz that is frequently used in QCD sum rule studies. Therefore, with this method it should in principle be possible to distinguish narrow pole structures from continuum states. To check whether meaningful results can be extracted within this approach, we have first investigated the vector meson channel, where QCD sum rules are traditionally known to provide a valid description of the spectral function. Our results exhibit a significant peak in the region of the experimentally observed ρ-meson mass, which agrees with earlier QCD sum rule studies and shows that the Maximum Entropy Method is a useful tool for analyzing QCD sum rules.

  2. A bayesian approach to laboratory utilization management

    Directory of Open Access Journals (Sweden)

    Ronald G Hauser

    2015-01-01

    Full Text Available Background: Laboratory utilization management describes a process designed to increase healthcare value by altering requests for laboratory services. A typical approach to monitor and prioritize interventions involves audits of laboratory orders against specific criteria, defined as rule-based laboratory utilization management. This approach has inherent limitations. First, rules are inflexible. They adapt poorly to the ambiguity of medical decision-making. Second, rules judge the context of a decision instead of the patient outcome allowing an order to simultaneously save a life and break a rule. Third, rules can threaten physician autonomy when used in a performance evaluation. Methods: We developed an alternative to rule-based laboratory utilization. The core idea comes from a formula used in epidemiology to estimate disease prevalence. The equation relates four terms: the prevalence of disease, the proportion of positive tests, test sensitivity and test specificity. When applied to a laboratory utilization audit, the formula estimates the prevalence of disease (pretest probability [PTP] in the patients tested. The comparison of PTPs among different providers, provider groups, or patient cohorts produces an objective evaluation of laboratory requests. We demonstrate the model in a review of tests for enterovirus (EV meningitis. Results: The model identified subpopulations within the cohort with a low prevalence of disease. These low prevalence groups shared demographic and seasonal factors known to protect against EV meningitis. This suggests too many orders occurred from patients at low risk for EV. Conclusion: We introduce a new method for laboratory utilization management programs to audit laboratory services.
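The identity the method rests on relates the observed positive rate to prevalence: p_pos = PTP·sens + (1 − PTP)·(1 − spec). Solving for PTP turns an ordinary utilization audit into an estimate of pretest probability. A sketch with hypothetical audit numbers (the provider groups and counts are illustrative, not from the paper):

```python
def pretest_probability(p_pos, sensitivity, specificity):
    """Invert p_pos = PTP*sens + (1 - PTP)*(1 - spec) for PTP,
    the estimated prevalence of disease among the patients tested."""
    return (p_pos + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Hypothetical audit: provider group A orders EV PCR on 400 patients and
# 40 return positive; assay sensitivity 0.95, specificity 0.97.
ptp_a = pretest_probability(40 / 400, 0.95, 0.97)
# A second group sees only 2% positives, below the assay's false-positive
# rate, implying a tested population at essentially zero risk.
ptp_b = pretest_probability(0.02, 0.95, 0.97)
print(round(ptp_a, 3), round(ptp_b, 3))   # → 0.076 -0.011
```

A PTP near or below zero, as for the second group, is the model's signal of over-testing: the positive rate is explained entirely by false positives, which is how the authors flag low-prevalence subpopulations without writing any rules.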

  3. Bayesian approach to analyzing holograms of colloidal particles.

    Science.gov (United States)

    Dimiduk, Thomas G; Manoharan, Vinothan N

    2016-10-17

    We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.

  4. Hierarchical Bayesian modeling of spatio-temporal patterns of lung cancer incidence risk in Georgia, USA: 2000-2007

    Science.gov (United States)

    Yin, Ping; Mu, Lan; Madden, Marguerite; Vena, John E.

    2014-10-01

    Lung cancer is the second most commonly diagnosed cancer in both men and women in Georgia, USA. However, the spatio-temporal patterns of lung cancer risk in Georgia have not been fully studied. Hierarchical Bayesian models are used here to explore the spatio-temporal patterns of lung cancer incidence risk by race and gender in Georgia for the period of 2000-2007. With the census tract level as the spatial scale and the 2-year period aggregation as the temporal scale, we compare a total of seven Bayesian spatio-temporal models including two under a separate modeling framework and five under a joint modeling framework. One joint model outperforms others based on the deviance information criterion. Results show that the northwest region of Georgia has consistently high lung cancer incidence risk for all population groups during the study period. In addition, there are inverse relationships between the socioeconomic status and the lung cancer incidence risk among all Georgian population groups, and the relationships in males are stronger than those in females. By mapping more reliable variations in lung cancer incidence risk at a relatively fine spatio-temporal scale for different Georgian population groups, our study aims to better support healthcare performance assessment, etiological hypothesis generation, and health policy making.

  5. Hierarchical Bayesian analysis of outcome- and process-based social preferences and beliefs in Dictator Games and sequential Prisoner's Dilemmas.

    Science.gov (United States)

    Aksoy, Ozan; Weesie, Jeroen

    2014-05-01

    In this paper, using a within-subjects design, we estimate the utility weights that subjects attach to the outcome of their interaction partners in four decision situations: (1) binary Dictator Games (DG), second player's role in the sequential Prisoner's Dilemma (PD) after the first player (2) cooperated and (3) defected, and (4) first player's role in the sequential Prisoner's Dilemma game. We find that the average weights in these four decision situations have the following order: (1)>(2)>(4)>(3). Moreover, the average weight is positive in (1) but negative in (2), (3), and (4). Our findings indicate the existence of strong negative and small positive reciprocity for the average subject, but there is also high interpersonal variation in the weights in these four nodes. We conclude that the PD frame makes subjects more competitive than the DG frame. Using hierarchical Bayesian modeling, we simultaneously analyze beliefs of subjects about others' utility weights in the same four decision situations. We compare several alternative theoretical models on beliefs, e.g., rational beliefs (Bayesian-Nash equilibrium) and a consensus model. Our results on beliefs strongly support the consensus effect and refute rational beliefs: there is a strong relationship between own preferences and beliefs and this relationship is relatively stable across the four decision situations. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Hierarchical Bayesian Spatio–Temporal Analysis of Climatic and Socio–Economic Determinants of Rocky Mountain Spotted Fever

    Science.gov (United States)

    Raghavan, Ram K.; Goodin, Douglas G.; Neises, Daniel; Anderson, Gary A.; Ganta, Roman R.

    2016-01-01

    This study aims to examine the spatio-temporal dynamics of Rocky Mountain spotted fever (RMSF) prevalence in four contiguous states of Midwestern United States, and to determine the impact of environmental and socio–economic factors associated with this disease. Bayesian hierarchical models were used to quantify space and time only trends and spatio–temporal interaction effect in the case reports submitted to the state health departments in the region. Various socio–economic, environmental and climatic covariates screened a priori in a bivariate procedure were added to a main–effects Bayesian model in progressive steps to evaluate important drivers of RMSF space-time patterns in the region. Our results show a steady increase in RMSF incidence over the study period to newer geographic areas, and the posterior probabilities of county-specific trends indicate clustering of high risk counties in the central and southern parts of the study region. At the spatial scale of a county, the prevalence levels of RMSF is influenced by poverty status, average relative humidity, and average land surface temperature (>35°C) in the region, and the relevance of these factors in the context of climate–change impacts on tick–borne diseases are discussed. PMID:26942604

  7. Hierarchical Bayesian Spatio-Temporal Analysis of Climatic and Socio-Economic Determinants of Rocky Mountain Spotted Fever.

    Directory of Open Access Journals (Sweden)

    Ram K Raghavan

    Full Text Available This study aims to examine the spatio-temporal dynamics of Rocky Mountain spotted fever (RMSF) prevalence in four contiguous states of Midwestern United States, and to determine the impact of environmental and socio-economic factors associated with this disease. Bayesian hierarchical models were used to quantify space and time only trends and spatio-temporal interaction effect in the case reports submitted to the state health departments in the region. Various socio-economic, environmental and climatic covariates screened a priori in a bivariate procedure were added to a main-effects Bayesian model in progressive steps to evaluate important drivers of RMSF space-time patterns in the region. Our results show a steady increase in RMSF incidence over the study period to newer geographic areas, and the posterior probabilities of county-specific trends indicate clustering of high risk counties in the central and southern parts of the study region. At the spatial scale of a county, the prevalence levels of RMSF is influenced by poverty status, average relative humidity, and average land surface temperature (>35°C) in the region, and the relevance of these factors in the context of climate-change impacts on tick-borne diseases are discussed.

  9. A bayesian approach to QCD sum rules

    International Nuclear Information System (INIS)

    Gubler, Philipp; Oka, Makoto

    2010-01-01

    QCD sum rules are analyzed with the help of the Maximum Entropy Method. We develop a new technique based on Bayesian inference theory, which allows us to directly obtain the spectral function of a given correlator from the results of the operator product expansion given in the deep Euclidean 4-momentum region. The most important advantage of this approach is that one does not have to make any a priori assumptions about the functional form of the spectral function, such as the 'pole + continuum' ansatz that has been widely used in QCD sum rule studies, but only needs to specify the asymptotic values of the spectral function at high and low energies as an input. As a first test of the applicability of this method, we have analyzed the sum rules of the ρ-meson, a case where the sum rules are known to work well. Our results show a clear peak structure in the region of the experimental mass of the ρ-meson. We thus demonstrate that the Maximum Entropy Method is successfully applied and that it is an efficient tool in the analysis of QCD sum rules. (author)

  10. Application of Bayesian networks in a hierarchical structure for environmental risk assessment: a case study of the Gabric Dam, Iran.

    Science.gov (United States)

    Malekmohammadi, Bahram; Tayebzadeh Moghadam, Negar

    2018-04-13

    Environmental risk assessment (ERA) is a commonly used, effective tool applied to reduce adverse effects of environmental risk factors. In this study, ERA was investigated using the Bayesian network (BN) model based on a hierarchical structure of variables in an influence diagram (ID). The ID facilitated ranking of the different alternatives under uncertainty, which were then used to compare the different risk factors. BN was used to present a new model for ERA applicable to complicated development projects such as dam construction. The methodology was applied to the Gabric Dam, in southern Iran. The main environmental risk factors in the region, posed by the Gabric Dam, were identified based on the Delphi technique and specific features of the study area. These included the following: flood, water pollution, earthquake, changes in land use, erosion and sedimentation, effects on the population, and ecosensitivity. These risk factors were then categorized based on results of the output decision node of the BN, including expected utility values for the risk factors in the decision node. ERA was also performed for the Gabric Dam using the analytical hierarchy process (AHP) method to compare the results of BN modeling with those of conventional methods. Results determined that a BN-based hierarchical structure for ERA yields acceptable and reasonable risk prioritization, proposes suitable solutions to reduce environmental risks, and can be used as a powerful decision support system for evaluating environmental risks.

  11. Overlapping community detection in weighted networks via a Bayesian approach

    Science.gov (United States)

    Chen, Yi; Wang, Xiaolong; Xiang, Xin; Tang, Buzhou; Chen, Qingcai; Fan, Shixi; Bu, Junzhao

    2017-02-01

    Complex networks as a powerful way to represent complex systems have been widely studied during the past several years. One of the most important tasks of complex network analysis is to detect communities embedded in networks. In the real world, weighted networks are very common and may contain overlapping communities where a node is allowed to belong to multiple communities. In this paper, we propose a novel Bayesian approach, called the Bayesian mixture network (BMN) model, to detect overlapping communities in weighted networks. The advantages of our method are (i) providing soft-partition solutions in weighted networks; (ii) providing soft memberships, which quantify 'how strongly' a node belongs to a community. Experiments on a large number of real and synthetic networks show that our model can detect overlapping communities in weighted networks and is competitive with other state-of-the-art models at shedding light on community partition.

  12. Hierarchical structure of biological systems: a bioengineering approach.

    Science.gov (United States)

    Alcocer-Cuarón, Carlos; Rivera, Ana L; Castaño, Victor M

    2014-01-01

    A general theory of biological systems, based on a few fundamental propositions, allows a generalization of both the Wiener and Bertalanffy approaches to theoretical biology. Here, a biological system is defined as a set of self-organized, differentiated elements that interact pair-wise through various networks and media, isolated from other sets by boundaries. Their relation to other systems can be described as a closed loop in a steady-state, which leads to a hierarchical structure and functioning of the biological system. Our hierarchical thermodynamic approach can be applied to biological systems of varying sizes through some general principles, based on the exchange of energy, information and/or mass from and within the systems.

  13. Remotely Sensed Monitoring of Small Reservoir Dynamics: A Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Dirk Eilander

    2014-01-01

    Full Text Available Multipurpose small reservoirs are important for livelihoods in rural semi-arid regions. To manage and plan these reservoirs and to assess their hydrological impact at a river basin scale, it is important to monitor their water storage dynamics. This paper introduces a Bayesian approach for monitoring small reservoirs with radar satellite images. The newly developed growing Bayesian classifier has a high degree of automation, can readily be extended with auxiliary information and reduces the confusion error to the land-water boundary pixels. A case study has been performed in the Upper East Region of Ghana, based on Radarsat-2 data from November 2012 until April 2013. Results show that the growing Bayesian classifier can deal with the spatial and temporal variability in synthetic aperture radar (SAR backscatter intensities from small reservoirs. Due to its ability to incorporate auxiliary information, the algorithm is able to delineate open water from SAR imagery with a low land-water contrast in the case of wind-induced Bragg scattering or limited vegetation on the land surrounding a small reservoir.

  14. Modeling visual search using three-parameter probability functions in a hierarchical Bayesian framework.

    Science.gov (United States)

    Lin, Yi-Shin; Heinke, Dietmar; Humphreys, Glyn W

    2015-04-01

    In this study, we applied Bayesian-based distributional analyses to examine the shapes of response time (RT) distributions in three visual search paradigms, which varied in task difficulty. In further analyses we investigated two common observations in visual search, the effects of display size and of variations in search efficiency across different task conditions, following a design that had been used in previous studies (Palmer, Horowitz, Torralba, & Wolfe, Journal of Experimental Psychology: Human Perception and Performance, 37, 58-71, 2011; Wolfe, Palmer, & Horowitz, Vision Research, 50, 1304-1311, 2010) in which parameters of the response distributions were measured. Our study showed that the distributional parameters in an experimental condition can be reliably estimated by moderate sample sizes when Monte Carlo simulation techniques are applied. More importantly, by analyzing trial RTs, we were able to extract paradigm-dependent shape changes in the RT distributions that could be accounted for by using the EZ2 diffusion model. The study showed that Bayesian-based RT distribution analyses can provide an important means to investigate the underlying cognitive processes in search, including stimulus grouping and the bottom-up guidance of attention.

  15. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    International Nuclear Information System (INIS)

    Ellefsen, Karl J.; Smith, David B.

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples. - Highlights: • We evaluate a clustering procedure by applying it to geochemical data. • The procedure generates a hierarchy of clusters. • Different levels of the hierarchy show geochemical processes at different spatial scales. • The clustering method is Bayesian finite mixture modeling. • Model parameters are estimated with Hamiltonian Monte Carlo sampling.
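
    A finite mixture with two components can be made concrete with a small expectation-maximization fit. This is a simplified stand-in for the paper's method, which estimates the parameters by Hamiltonian Monte Carlo sampling and works with multivariate geochemical data rather than the one-dimensional toy data assumed here.

```python
import math

def em_two_gaussians(data, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM; the responsibilities
    computed in the E-step partition the samples into two clusters."""
    mu = [min(data), max(data)]   # spread the initial means apart
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in data:
            p = []
            for k in range(2):
                z = (x - mu[k]) / sigma[k]
                p.append(w[k] * math.exp(-0.5 * z * z) / sigma[k])
            s = p[0] + p[1]
            if s == 0.0:          # guard against numerical underflow
                p, s = [0.5, 0.5], 1.0
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)
    return mu, sigma, w
```

    Applying the same fit recursively within each cluster yields the kind of hierarchy of clusters described above.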

  16. A Bayesian approach to particle identification in ALICE

    Energy Technology Data Exchange (ETDEWEB)

    Wilkinson, Jeremy [Physikalisches Institut, Ruprecht-Karls-Universitaet Heidelberg (Germany); Collaboration: ALICE-Collaboration

    2016-07-01

    Particle identification (PID) is one of the major strengths of the ALICE detector at the LHC, and provides essential insight into quark-gluon plasma formation in heavy-ion collisions. PID is most effective when complementary identification techniques (such as specific energy loss in the Time Projection Chamber, or flight times measured by the Time Of Flight detector) are combined; however, with standard PID techniques it can be difficult to combine these signals, especially when detectors with non-Gaussian responses are used. Here, an alternative probabilistic PID approach based on Bayes' theorem will be presented. This method facilitates the combination of different detector technologies based on the combined probability that a given particle species produced the signals measured in the various detectors. The Bayesian PID approach will be briefly outlined, and benchmark analyses will be presented for high-purity samples of pions, kaons, and protons, as well as for the two-pronged decay D{sup 0} → K{sup -}π{sup +}, comparing the performance of the standard PID approach with that of the Bayesian approach. Finally, prospects for measuring the Λ{sub c} baryon in the three-pronged decay channel Λ{sub c}{sup +} → pK{sup -}π{sup +} are presented.
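
    The combination rule itself is just Bayes' theorem with detector responses treated as independent. A minimal sketch, with made-up priors and per-detector likelihoods standing in for the actual TPC and TOF response functions:

```python
def bayes_pid(priors, detector_likelihoods):
    """Combine per-detector likelihoods P(signal | species) with prior
    species abundances via Bayes' theorem."""
    posterior = {}
    for species, prior in priors.items():
        like = 1.0
        for det in detector_likelihoods:  # detectors treated as independent
            like *= det[species]
        posterior[species] = prior * like
    norm = sum(posterior.values())
    return {s: v / norm for s, v in posterior.items()}

# Hypothetical numbers: prior abundances, and the likelihood of the
# observed signal under each species hypothesis in two detectors.
priors = {"pion": 0.80, "kaon": 0.12, "proton": 0.08}
tpc = {"pion": 0.20, "kaon": 0.60, "proton": 0.20}
tof = {"pion": 0.10, "kaon": 0.70, "proton": 0.20}
post = bayes_pid(priors, [tpc, tof])
```

    Even with a strong pion prior, two concordant detector signals can make the kaon hypothesis dominate, which is the point of combining complementary techniques.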

  17. A comparison of the Bayesian and frequentist approaches to estimation

    CERN Document Server

    Samaniego, Francisco J

    2010-01-01

    This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st

  18. The subjectivity of scientists and the Bayesian approach

    CERN Document Server

    Press, James S

    2001-01-01

    Comparing and contrasting the reality of subjectivity in the work of history's great scientists and the modern Bayesian approach to statistical analysis. Scientists and researchers are taught to analyze their data from an objective point of view, allowing the data to speak for themselves rather than assigning them meaning based on expectations or opinions. But scientists have never behaved fully objectively. Throughout history, some of our greatest scientific minds have relied on intuition, hunches, and personal beliefs to make sense of empirical data - and these subjective influences have often a

  19. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method...... is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates...

  20. Linking bovine tuberculosis on cattle farms to white-tailed deer and environmental variables using Bayesian hierarchical analysis

    Science.gov (United States)

    Walter, W. David; Smith, Rick; Vanderklok, Mike; VerCauteren, Kurt C.

    2014-01-01

    Bovine tuberculosis is a bacterial disease caused by Mycobacterium bovis in livestock and wildlife with hosts that include Eurasian badgers (Meles meles), brushtail possum (Trichosurus vulpecula), and white-tailed deer (Odocoileus virginianus). Risk-assessment efforts in Michigan have been initiated on farms to minimize interactions of cattle with wildlife hosts, but research on M. bovis on cattle farms has not investigated the spatial context of disease epidemiology. To incorporate spatially explicit data, initial likelihood of infection probabilities for cattle farms tested for M. bovis, prevalence of M. bovis in white-tailed deer, deer density, and environmental variables for each farm were modeled in a Bayesian hierarchical framework. We used geo-referenced locations of 762 cattle farms that have been tested for M. bovis, white-tailed deer prevalence, and several environmental variables that may lead to long-term survival and viability of M. bovis on farms and surrounding habitats (i.e., soil type, habitat type). Bayesian hierarchical analyses identified deer prevalence and proportion of sandy soil within our sampling grid as the most supported model. Analysis of cattle farms tested for M. bovis identified that every 1% increase in sandy soil resulted in a 4% increase in the odds of infection. Our analysis revealed that the influence of prevalence of M. bovis in white-tailed deer was still a concern even after considerable efforts to prevent cattle interactions with white-tailed deer through on-farm mitigation and reduction in the deer population. Cattle farms test positive for M. bovis annually in our study area, suggesting that an environmental source either on farms or in the surrounding landscape may be contributing to new or re-infections with M. bovis. Our research provides an initial assessment of potential environmental factors that could be incorporated into additional modeling efforts as more knowledge of deer herd

  1. A Nonparametric Bayesian Approach For Emission Tomography Reconstruction

    International Nuclear Information System (INIS)

    Barat, Eric; Dautremer, Thomas

    2007-01-01

    We introduce a PET reconstruction algorithm following a nonparametric Bayesian (NPB) approach. In contrast with Expectation Maximization (EM), the proposed technique does not rely on any space discretization. Namely, the activity distribution--normalized emission intensity of the spatial Poisson process--is considered as a spatial probability density, and observations are the projections of random emissions whose distribution has to be estimated. This approach is nonparametric in the sense that the quantity of interest belongs to the set of probability measures on R^k (for reconstruction in k dimensions), and it is Bayesian in the sense that we define a prior directly on this spatial measure. In this context, we propose to model the nonparametric probability density as an infinite mixture of multivariate normal distributions. As a prior for this mixture we consider a Dirichlet Process Mixture (DPM) with a Normal-Inverse Wishart (NIW) model as the base distribution of the Dirichlet Process. As in EM-family reconstruction, we use a data augmentation scheme where the set of hidden variables are the emission locations for each observed line of response in the continuous object space. Thanks to the data augmentation, we propose a Markov Chain Monte Carlo (MCMC) algorithm (Gibbs sampler) which is able to generate draws from the posterior distribution of the spatial intensity. A difference with EM is that one step of the Gibbs sampler corresponds to the generation of emission locations, while only the expected number of emissions per pixel/voxel is used in EM. Another key difference is that the estimated spatial intensity is a continuous function, such that there is no need to compute a projection matrix. Finally, draws from the intensity posterior distribution allow the estimation of posterior functionals such as the variance or confidence intervals. Results are presented for simulated data based on a 2D brain phantom and compared to Bayesian MAP-EM.
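
    The Dirichlet Process prior at the heart of this model can be made concrete with a truncated stick-breaking construction of the mixture weights. This sketch only draws the weights (the NIW base distribution and the Gibbs sampler are beyond a few lines), and the truncation level is an implementation assumption:

```python
import random

def dp_stick_breaking_weights(alpha, n_atoms=50, seed=1):
    """Draw mixture weights from a truncated stick-breaking representation
    of a Dirichlet Process with concentration parameter alpha."""
    rng = random.Random(seed)
    weights = []
    remaining = 1.0  # length of the stick not yet broken off
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)  # break off a Beta(1, alpha) fraction
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights
```

    In the infinite limit these weights, each paired with a mean/covariance drawn from the NIW base distribution, define the infinite mixture of normals used as the prior on the activity distribution.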

  2. A Bayesian approach to inferring the genetic population structure of sugarcane accessions from INTA (Argentina)

    Directory of Open Access Journals (Sweden)

    Mariana Inés Pocovi

    2015-06-01

    Full Text Available Understanding the population structure and genetic diversity in sugarcane (Saccharum officinarum L.) accessions from the INTA germplasm bank (Argentina) will be of great importance for germplasm collection and breeding improvement, as it will identify diverse parental combinations to create segregating progenies with maximum genetic variability for further selection. A Bayesian approach, ordination methods (PCoA, Principal Coordinate Analysis) and clustering analysis (UPGMA, Unweighted Pair Group Method with Arithmetic Mean) were applied to this purpose. Sixty-three INTA sugarcane hybrids were genotyped for 107 Simple Sequence Repeat (SSR) and 136 Amplified Fragment Length Polymorphism (AFLP) loci. Given the low probability values found with AFLP for individual assignment (4.7%), microsatellites seemed to perform better (54%) for the STRUCTURE analysis, which revealed the germplasm to exist in five optimum groups partly corresponding to their origin. Although the clusters showed a high degree of admixture, FST values confirmed the existence of differences among groups. Dissimilarity coefficients ranged from 0.079 to 0.651. PCoA separated sugarcane into groups that did not agree with those identified by STRUCTURE. The clustering including all genotypes likewise showed little resemblance to the populations found by STRUCTURE, but clustering performed considering only individuals displaying a proportional membership > 0.6 in their primary population obtained with STRUCTURE showed close similarities. The Bayesian method indubitably brought more information on cultivar origins than the classical PCoA and hierarchical clustering methods.

  3. Superhydrophobic hierarchical arrays fabricated by a scalable colloidal lithography approach.

    Science.gov (United States)

    Kothary, Pratik; Dou, Xuan; Fang, Yin; Gu, Zhuxiao; Leo, Sin-Yen; Jiang, Peng

    2017-02-01

    Here we report an unconventional colloidal lithography approach for fabricating a variety of periodic polymer nanostructures with tunable geometries and hydrophobic properties. A wafer-sized, double-layer, non-close-packed silica colloidal crystal embedded in a polymer matrix is first assembled by a scalable spin-coating technology. The unusual non-close-packed crystal structure combined with a thin polymer film separating the top and the bottom colloidal layers render great versatility in templating periodic nanostructures, including arrays of nanovoids, nanorings, and hierarchical nanovoids. These different geometries result in varied fractions of entrapped air in between the templated nanostructures, which in turn lead to different apparent water contact angles. Superhydrophobic surfaces with >150° water contact angles and <5° contact angle hysteresis are achieved on fluorosilane-modified polymer hierarchical nanovoid arrays with large fractions of entrapped air. The experimental contact angle measurements are complemented with theoretical predictions using Cassie's model to gain insights into the fundamental microstructure-dewetting property relationships. The experimental and theoretical contact angles follow the same trends as determined by the unique hierarchical structures of the templated periodic arrays. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Hierarchical Approach to 'Atomistic' 3-D MOSFET Simulation

    Science.gov (United States)

    Asenov, Asen; Brown, Andrew R.; Davies, John H.; Saini, Subhash

    1999-01-01

    We present a hierarchical approach to the 'atomistic' simulation of aggressively scaled sub-0.1 micron MOSFETs. These devices are so small that their characteristics depend on the precise location of dopant atoms within them, not just on their average density. A full-scale three-dimensional drift-diffusion atomistic simulation approach is first described and used to verify more economical, but restricted, options. To reduce processor time and memory requirements at high drain voltage, we have developed a self-consistent option based on a solution of the current continuity equation restricted to a thin slab of the channel. This is coupled to the solution of the Poisson equation in the whole simulation domain in the Gummel iteration cycles. The accuracy of this approach is investigated in comparison to the full self-consistent solution. At low drain voltage, a single solution of the nonlinear Poisson equation is sufficient to extract the current with satisfactory accuracy. In this case, the current is calculated by solving the current continuity equation in a drift approximation only, also in a thin slab containing the MOSFET channel. The regions of applicability for the different components of this hierarchical approach are illustrated in example simulations covering the random dopant-induced threshold voltage fluctuations, threshold voltage lowering, threshold voltage asymmetry, and drain current fluctuations.

  5. A nonparametric Bayesian approach for genetic evaluation in ...

    African Journals Online (AJOL)

    South African Journal of Animal Science ... the Bayesian and Classical models, a Bayesian procedure is provided which allows these random ... data from the Elsenburg Dormer sheep stud and data from a simulation experiment are utilized.

  6. Analysis of the Spatial Variation of Network-Constrained Phenomena Represented by a Link Attribute Using a Hierarchical Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhensheng Wang

    2017-02-01

    Full Text Available The spatial variation of geographical phenomena is a classical problem in spatial data analysis and can provide insight into underlying processes. Traditional exploratory methods mostly depend on the planar distance assumption, but many spatial phenomena are constrained to a subset of Euclidean space. In this study, we apply a method based on a hierarchical Bayesian model to analyse the spatial variation of network-constrained phenomena represented by a link attribute, in conjunction with two experiments based on a simplified hypothetical network and a complex road network in Shenzhen that includes 4212 urban facility points of interest (POIs) for leisure activities. Then, the methods named local indicators of network-constrained clusters (LINCS) are applied to explore local spatial patterns in the given network space. The proposed method is designed for phenomena that are represented by attribute values of network links and is capable of removing part of the random variability resulting from small-sample estimation. The effects of spatial dependence and the base distribution are also considered in the proposed method, which could be applied in the fields of urban planning and safety research.

  7. Relative age and birthplace effect in Japanese professional sports: a quantitative evaluation using a Bayesian hierarchical Poisson model.

    Science.gov (United States)

    Ishigami, Hideaki

    2016-01-01

    Relative age effect (RAE) in sports has been well documented. Recent studies investigate the effect of birthplace in addition to the RAE. The first objective of this study was to show the magnitude of the RAE in two major professional sports in Japan, baseball and soccer. Second, we examined the birthplace effect and compared its magnitude with that of the RAE. The effect sizes were estimated using a Bayesian hierarchical Poisson model with the number of players as the dependent variable. The RAEs were 9.0% and 7.7% per month for soccer and baseball, respectively. These estimates imply that children born in the first month of a school year have about three times greater chance of becoming a professional player than those born in the last month of the year. Over half of the difference in likelihoods of becoming a professional player between birthplaces was accounted for by weather conditions, with the likelihood decreasing by 1% per snow day. An effect of population size was not detected in the data. By investigating different samples, we demonstrated that using quarterly data leads to underestimation and that the age range of sampled athletes should be set carefully.
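
    The "about three times" figure follows from compounding the per-month effect over the eleven months separating the first and last birth months of a school year, assuming the multiplicative (log-linear) month effect of a Poisson regression:

```python
def first_vs_last_month_ratio(monthly_effect, months_apart=11):
    """Relative chance of becoming a professional player, first vs. last
    birth month, under a multiplicative per-month effect."""
    return (1.0 + monthly_effect) ** months_apart

soccer_ratio = first_vs_last_month_ratio(0.090)    # roughly 2.6
baseball_ratio = first_vs_last_month_ratio(0.077)  # roughly 2.3
```

    Both ratios land in the neighbourhood of the abstract's "about three times greater chance".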

  8. A multi-level hierarchic Markov process with Bayesian updating for herd optimization and simulation in dairy cattle.

    Science.gov (United States)

    Demeter, R M; Kristensen, A R; Dijkstra, J; Oude Lansink, A G J M; Meuwissen, M P M; van Arendonk, J A M

    2011-12-01

    Herd optimization models that determine economically optimal insemination and replacement decisions are valuable research tools to study various aspects of farming systems. The aim of this study was to develop a herd optimization and simulation model for dairy cattle. The model determines economically optimal insemination and replacement decisions for individual cows and simulates whole-herd results that follow from optimal decisions. The optimization problem was formulated as a multi-level hierarchic Markov process, and a state space model with Bayesian updating was applied to model variation in milk yield. Methodological developments were incorporated in 2 main aspects. First, we introduced an additional level to the model hierarchy to obtain a more tractable and efficient structure. Second, we included a recently developed cattle feed intake model. In addition to methodological developments, new parameters were used in the state space model and other biological functions. Results were generated for Dutch farming conditions, and outcomes were in line with actual herd performance in the Netherlands. Optimal culling decisions were sensitive to variation in milk yield but insensitive to energy requirements for maintenance and feed intake capacity. We anticipate that the model will be applied in research and extension. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  9. A Bayesian approach for quantification of model uncertainty

    International Nuclear Information System (INIS)

    Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.

    2010-01-01

    In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied on the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.

  10. Bayesian approach in MN low dose of radiation counting

    International Nuclear Information System (INIS)

    Serna Berna, A.; Alcaraz, M.; Acevedo, C.; Navarro, J. L.; Alcanzar, M. D.; Canteras, M.

    2006-01-01

    The micronucleus (MN) assay in lymphocytes is a well-established technique for the assessment of genetic damage induced by ionizing radiation. Due to the presence of a natural background of MN, the net MN count is obtained by subtracting this value from the gross value. When very low doses of radiation are given, the induced MN count is close to, or even lower than, the predetermined background value. Furthermore, the damage distribution induced by the radiation follows a Poisson probability distribution. These two facts pose a difficult task for obtaining the net counting rate in exposed situations. It is possible to overcome this problem using a Bayesian approach, in which the selection of a priori distributions for the background and net counting rate plays an important role. In the present work we present a detailed analysis using Bayesian theory to infer the net counting rate in two different situations: a) when the background is known for an individual sample, using the exact value for the background and a Jeffreys prior for the net counting rate, and b) when the background is not known and we make use of a population background distribution as the background prior function and a constant prior for the net counting rate. (Author)
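
    Situation (a), a known background with a Jeffreys prior on the net rate, can be sketched with a simple grid approximation of the posterior. The grid range and the example counts below are arbitrary choices for illustration, not values from the paper:

```python
import math

def net_rate_posterior(n_gross, background, s_max=50.0, steps=5000):
    """Grid posterior for the net MN rate s, given the gross count and a
    known background rate, under a Jeffreys prior p(s) proportional to s**-0.5."""
    grid, weights = [], []
    for i in range(1, steps + 1):
        s = s_max * i / steps
        lam = s + background
        log_like = n_gross * math.log(lam) - lam  # Poisson log-likelihood, constants dropped
        weights.append(math.exp(log_like) * s ** -0.5)
        grid.append(s)
    norm = sum(weights)
    return grid, [w / norm for w in weights]

grid, post = net_rate_posterior(n_gross=12, background=5.0)
net_mean = sum(s * p for s, p in zip(grid, post))
```

    Because the prior restricts s to positive values, the posterior stays proper even when the gross count is close to the expected background.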

  11. Assessing high reliability via Bayesian approach and accelerated tests

    International Nuclear Information System (INIS)

    Erto, Pasquale; Giorgio, Massimiliano

    2002-01-01

    Sometimes the assessment of very high reliability levels is difficult for the following main reasons: - the high reliability level of each item makes it impossible to obtain, in a reasonably short time, a sufficient number of failures; - the high cost of the high reliability items to submit to life tests makes it unfeasible to collect enough data for 'classical' statistical analyses. In the above context, this paper presents a Bayesian solution to the problem of estimation of the parameters of the Weibull-inverse power law model, on the basis of a limited number (say six) of life tests, carried out at different stress levels, all higher than the normal one. The over-stressed (i.e. accelerated) tests allow the use of experimental data obtained in a reasonably short time. The Bayesian approach enables one to reduce the required number of failures adding to the failure information the available a priori engineers' knowledge. This engineers' involvement conforms to the most advanced management policy that aims at involving everyone's commitment in order to obtain total quality. A Monte Carlo study of the non-asymptotic properties of the proposed estimators and a comparison with the properties of maximum likelihood estimators closes the work

  12. A Bayesian Approach for Sensor Optimisation in Impact Identification

    Directory of Open Access Journals (Sweden)

    Vincenzo Mallardo

    2016-11-01

    Full Text Available This paper presents a Bayesian approach for optimising the position of sensors aimed at impact identification in composite structures under operational conditions. The uncertainty in the sensor data has been represented by statistical distributions of the recorded signals. An optimisation strategy based on a genetic algorithm is proposed to find the best sensor combination for locating impacts on composite structures. A Bayesian-based objective function is adopted in the optimisation procedure as an indicator of the performance of meta-models developed for different sensor combinations to locate various impact events. To represent a real structure under operational load and to increase the reliability of the Structural Health Monitoring (SHM) system, the probability of malfunctioning sensors is included in the optimisation. The reliability and the robustness of the procedure are tested with experimental and numerical examples. Finally, the proposed optimisation algorithm is applied to a composite stiffened panel for both uniform and non-uniform probabilities of impact occurrence.

  13. Bayesian approach for peak detection in two-dimensional chromatography.

    Science.gov (United States)

    Vivó-Truyols, Gabriel

    2012-03-20

    A new method for peak detection in two-dimensional chromatography is presented. In a first step, the method starts with a conventional one-dimensional peak detection algorithm to detect modulated peaks. In a second step, a sophisticated algorithm is constructed to decide which of the individual one-dimensional peaks originated from the same compound and should therefore be merged into a two-dimensional peak. The merging algorithm is based on Bayesian inference. The user sets prior information about certain parameters (e.g., second-dimension retention time variability, first-dimension band broadening, chromatographic noise). On the basis of these priors, the algorithm calculates the probability of myriads of peak arrangements (i.e., ways of merging one-dimensional peaks), finding which of them holds the highest value. Uncertainty in each parameter can be accounted for by adapting its probability distribution function, which in turn may change the final decision on the most probable peak arrangement. It has been demonstrated that the Bayesian approach presented in this paper follows the chromatographers' intuition. The algorithm has been applied and tested with LC × LC and GC × GC data and takes around 1 min to process chromatograms with several thousands of peaks.
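
    The core of such a merging decision can be sketched as a two-hypothesis Bayes computation on the second-dimension retention times of two candidate one-dimensional peaks. The retention-time variability, modulation range, and prior below are hypothetical values for illustration, not those from the paper:

```python
import math

def p_same_compound(t2_a, t2_b, sigma_same=0.05, t2_range=5.0, prior_same=0.5):
    """Posterior probability that two 1-D peaks with second-dimension
    retention times t2_a and t2_b (in seconds) stem from the same compound."""
    d = t2_a - t2_b
    # Same compound: the difference is Gaussian with variance 2*sigma_same**2
    s2 = 2.0 * sigma_same ** 2
    like_same = math.exp(-0.5 * d * d / s2) / math.sqrt(2 * math.pi * s2)
    # Different compounds: the second time is uniform over the modulation range
    like_diff = 1.0 / t2_range
    num = prior_same * like_same
    return num / (num + (1.0 - prior_same) * like_diff)
```

    Widening the prior on retention-time variability lowers `like_same` for near-coincident peaks, which is how adapting a parameter's distribution can flip the most probable arrangement.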

  14. A hierarchical fuzzy rule-based approach to aphasia diagnosis.

    Science.gov (United States)

    Akbarzadeh-T, Mohammad-R; Moshtagh-Khorasani, Majid

    2007-10-01

    Aphasia diagnosis is a particularly challenging medical diagnostic task due to the linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, the large number of measurements with imprecision, and the natural diversity and subjectivity in test subjects as well as in the opinions of experts who diagnose the disease. To efficiently address this diagnostic process, a hierarchical fuzzy rule-based structure is proposed here that considers the effect of different features of aphasia by statistical analysis in its construction. This approach can be efficient for the diagnosis of aphasia, and possibly other medical diagnostic applications, due to its fuzzy and hierarchical reasoning construction. Initially, the symptoms of the disease, each of which consists of different features, are analyzed statistically. The measured statistical parameters from the training set are then used to define membership functions and the fuzzy rules. The resulting two-layered fuzzy rule-based system is then compared with a back-propagation feed-forward neural network for the diagnosis of four aphasia types: Anomic, Broca, Global and Wernicke. In order to reduce the number of required inputs, the technique is applied and compared on both comprehensive and spontaneous speech tests. Statistical t-test analysis confirms that the proposed approach uses fewer aphasia features while also presenting a significant improvement in terms of accuracy.

  15. A hierarchical approach to reducing communication in parallel graph algorithms

    KAUST Repository

    Harshvardhan,

    2015-01-01

    Large-scale graph computing has become critical due to the ever-increasing size of data. However, distributed graph computations are limited in their scalability and performance due to the heavy communication inherent in such computations. This is exacerbated in scale-free networks, such as social and web graphs, which contain hub vertices that have large degrees and therefore send a large number of messages over the network. Furthermore, many graph algorithms and computations send the same data to each of the neighbors of a vertex. Our proposed approach recognizes this, and reduces the communication performed by the algorithm without changes to user code, through a hierarchical machine model imposed upon the input graph. The hierarchical model takes advantage of locale information of the neighboring vertices to reduce communication, both in message volume and total number of bytes sent. It is also able to better exploit the machine hierarchy to further reduce communication costs, by aggregating traffic between different levels of the machine hierarchy. Results of an implementation in the STAPL GL show improved scalability and performance over the traditional level-synchronous approach, with 2.5× to 8× improvement for a variety of graph algorithms at 12,000+ cores.

  16. A Bayesian Approach to Real-Time Earthquake Phase Association

    Science.gov (United States)

    Benz, H.; Johnson, C. E.; Earle, P. S.; Patton, J. M.

    2014-12-01

    Real-time location of seismic events requires a robust and extremely efficient means of associating and identifying seismic phases with hypothetical sources. An association algorithm converts a series of phase arrival times into a catalog of earthquake hypocenters. The classical approach based on time-space stacking of the locus of possible hypocenters for each phase arrival using the principle of acoustic reciprocity has been in use now for many years. One of the most significant problems that has emerged over time with this approach is related to the extreme variations in seismic station density throughout the global seismic network. To address this problem we have developed a novel, Bayesian association algorithm, which looks at the association problem as a dynamically evolving complex system of "many to many relationships". While the end result must be an array of one to many relations (one earthquake, many phases), during the association process the situation is quite different. Both the evolving possible hypocenters and the relationships between phases and all nascent hypocenters are many to many (many earthquakes, many phases). The computational framework we are using to address this is a responsive, NoSQL graph database where the earthquake-phase associations are represented as intersecting Bayesian Learning Networks. The approach directly addresses the network inhomogeneity issue while at the same time allowing the inclusion of other kinds of data (e.g., seismic beams, station noise characteristics, priors on estimated location of the seismic source) by representing the locus of intersecting hypothetical loci for a given datum as joint probability density functions.

  17. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    Science.gov (United States)

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
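    The N-mixture likelihood described above can be sketched directly: latent site abundances are Poisson, replicated counts are binomial thinnings of them, and the latent abundance is marginalized out. The grid posterior below (flat priors, invented simulation settings; a stand-in for a full MCMC fit) illustrates the mechanics:

```python
import numpy as np
from math import comb, exp, factorial, log

rng = np.random.default_rng(1)
lam_true, p_true, n_sites, n_visits = 4.0, 0.5, 30, 4
N = rng.poisson(lam_true, n_sites)                         # latent abundances
y = rng.binomial(N[:, None], p_true, (n_sites, n_visits))  # replicated counts

def site_loglik(y_i, lam, p, n_max=25):
    # Marginalize the latent abundance N_i over a truncated Poisson support
    tot = 0.0
    for n in range(int(max(y_i)), n_max + 1):
        term = exp(-lam) * lam**n / factorial(n)
        for yy in y_i:
            term *= comb(n, int(yy)) * p**yy * (1 - p) ** (n - yy)
        tot += term
    return log(tot)

# Joint posterior over (lambda, p) on a grid under flat priors
lams = np.linspace(1.0, 9.0, 21)
ps = np.linspace(0.1, 0.9, 21)
logpost = np.array([[sum(site_loglik(y_i, l, pp) for y_i in y) for pp in ps]
                    for l in lams])
i, j = np.unravel_index(logpost.argmax(), logpost.shape)
print(lams[i], ps[j])                                      # MAP estimates
```

    The ridge in this posterior along constant lambda*p is the identifiability issue that pseudo-replication aggravates: the product is pinned by the mean count, while the split between abundance and detection relies on genuine within-site replication.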

  18. A Robust Obstacle Avoidance for Service Robot Using Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2011-03-01

    Full Text Available The objective of this paper is to propose a robust obstacle avoidance method for a service robot in an indoor environment. The method uses information about static obstacles on the landmark, obtained by edge detection. The speed and direction of walking people (moving obstacles) are obtained by a single camera using a tracking and recognition system, with distance measured by three ultrasonic sensors. A new geometrical model and maneuvering method for moving-obstacle avoidance are introduced and combined with a Bayesian approach to state estimation. The obstacle avoidance problem is formulated using decision theory, with prior and posterior distributions and a loss function used to determine an optimal response based on inaccurate sensor data. Algorithms for the proposed moving-obstacle avoidance method and experimental results from their implementation on a service robot are also presented. Various experiments show that the proposed method is fast and robust, and was successfully implemented on the service robot Srikandi II, equipped with a 4-DOF robot arm, developed in our laboratory.

  19. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected for, leads to misleading statistical inference and analysis. Our goal is therefore to examine the relationship between the outcome variable and the unobserved exposure variable, given the observed mismeasured surrogate, by applying a Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.

  20. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and ele…

  1. Effects of management intervention on post-disturbance community composition: an experimental analysis using bayesian hierarchical models.

    Directory of Open Access Journals (Sweden)

    Jack Giovanini

    Full Text Available As human demand for ecosystem products increases, management intervention may become more frequent after environmental disturbances. Evaluations of ecological responses to the cumulative effects of management interventions and natural disturbances provide critical decision-support tools for managers who strive to balance environmental conservation and economic development. We conducted an experiment to evaluate the effects of salvage logging on avian community composition in lodgepole pine (Pinus contorta) forests affected by beetle outbreaks in Oregon, USA, 1996-1998. Treatments consisted of the removal of lodgepole pine snags only; live trees were not harvested. We used a Bayesian hierarchical model to quantify occupancy dynamics for 27 breeding species while accounting for variation in the detection process. We examined how the magnitude and precision of treatment effects varied when incorporating prior information from a separate intervention study conducted in a similar ecological system. Regardless of which prior we evaluated, we found no evidence that the harvest treatment had a negative impact on species richness, with an estimated average of 0.2-2.2 more species in harvested stands than in unharvested stands. Estimated average similarity between control and treatment stands ranged from 0.82 to 0.87 (1 indicating complete similarity between a pair of stands) and suggested that treatment stands did not contain novel assemblies of species responding to the harvesting prescription. Estimated treatment effects were positive for twenty-four (90%) of the species, although the credible intervals contained 0 in all cases. These results suggest that, unlike most post-fire salvage logging prescriptions, selective harvesting after beetle outbreaks may meet multiple management objectives, including the maintenance of avian community richness comparable to what is found in unharvested stands. Our results provide managers with prescription alternatives to

  2. A hierarchical Bayesian spatio-temporal model for extreme precipitation events

    KAUST Repository

    Ghosh, Souparno; Mallick, Bani K.

    2011-01-01

    We propose a new approach to model a sequence of spatially distributed time series of extreme values. Unlike common practice, we incorporate spatial dependence directly in the likelihood and allow the temporal component to be captured at the second level of the hierarchy. Inferences about the parameters and spatio-temporal predictions are obtained via an MCMC technique. The model is fitted to a gridded precipitation data set collected over 99 years across the continental U.S. © 2010 John Wiley & Sons, Ltd.

  3. A hierarchical Bayesian spatio-temporal model for extreme precipitation events

    KAUST Repository

    Ghosh, Souparno

    2011-03-01

    We propose a new approach to model a sequence of spatially distributed time series of extreme values. Unlike common practice, we incorporate spatial dependence directly in the likelihood and allow the temporal component to be captured at the second level of the hierarchy. Inferences about the parameters and spatio-temporal predictions are obtained via an MCMC technique. The model is fitted to a gridded precipitation data set collected over 99 years across the continental U.S. © 2010 John Wiley & Sons, Ltd.

  4. Bayesian inference with ecological applications

    CERN Document Server

    Link, William A

    2009-01-01

    This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference, specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website, and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject; examples drawn from ecology and wildlife research; an essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference; companion website with analyt...

  5. Global Crop Monitoring: A Satellite-Based Hierarchical Approach

    Directory of Open Access Journals (Sweden)

    Bingfang Wu

    2015-04-01

    Full Text Available Taking advantage of multiple new remote sensing data sources, especially from Chinese satellites, the CropWatch system has expanded the scope of its international analyses through the development of new indicators and an upgraded operational methodology. The approach adopts a hierarchical system covering four spatial levels of detail: global, regional, national (thirty-one key countries including China) and "sub-countries" (for the nine largest countries). The thirty-one countries encompass more than 80% of both production and exports of maize, rice, soybean and wheat. The methodology relies on climatic and remote sensing indicators at different scales. The global patterns of crop environmental growing conditions are first analyzed with indicators for rainfall, temperature, photosynthetically active radiation (PAR) and potential biomass. At the regional scale, the indicators pay more attention to crops and include the Vegetation Health Index (VHI), Vegetation Condition Index (VCI), Cropped Arable Land Fraction (CALF) and Cropping Intensity (CI). Together, they characterize crop situation, farming intensity and stress. CropWatch carries out detailed crop condition analyses at the national scale with a comprehensive array of variables and indicators. The Normalized Difference Vegetation Index (NDVI), cropped areas and crop conditions are integrated to derive food production estimates. For the nine largest countries, CropWatch zooms into sub-national units to acquire detailed information on crop condition and production, including new indicators (e.g., crop type proportion). Based on trend analysis, CropWatch also issues crop production supply outlooks, covering both long-term variations and short-term dynamic changes in key food exporters and importers. The hierarchical approach adopted by CropWatch is the basis of the analyses of climatic and crop conditions assessments published in the quarterly "CropWatch bulletin" which
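    The VCI mentioned above is a standard rescaling of a pixel's current NDVI against its historical range for the same period (Kogan's formulation); a minimal sketch:

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: current NDVI as a percentage of the
    historical NDVI range (0 = historical worst, 100 = historical best)."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

# A pixel whose NDVI sits midway between its historical extremes scores 50
print(vci(0.5, 0.2, 0.8))
```

    Low VCI flags vegetation stress relative to the pixel's own history, which makes the index comparable across regions with different baseline greenness.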

  6. Drug-target interaction prediction: A Bayesian ranking approach.

    Science.gov (United States)

    Peska, Ladislav; Buza, Krisztian; Koller, Júlia

    2017-12-01

    In silico prediction of drug-target interactions (DTI) can provide valuable information and speed up the process of drug repositioning: finding novel uses for existing drugs. In our work, we focus on machine learning algorithms supporting the drug-centric repositioning approach, which aims to find novel uses for existing or abandoned drugs. We aim to propose a per-drug, ranking-based method, which reflects the needs of drug-centric repositioning research better than conventional drug-target prediction approaches. We propose Bayesian Ranking Prediction of Drug-Target Interactions (BRDTI). The method is based on Bayesian Personalized Ranking matrix factorization (BPR), which has been shown to be an excellent approach for various preference learning tasks but had not previously been used for DTI prediction. In order to deal with the challenges of DTI prediction, we extend BPR with: (i) the incorporation of target bias, (ii) a technique to handle new drugs and (iii) content alignment to take structural similarities of drugs and targets into account. Evaluation on five benchmark datasets shows that BRDTI outperforms several state-of-the-art approaches in terms of per-drug nDCG and AUC. BRDTI's nDCG results are 0.929, 0.953, 0.948, 0.897 and 0.690 for the G-Protein Coupled Receptors (GPCR), Ion Channels (IC), Nuclear Receptors (NR), Enzymes (E) and Kinase (K) datasets, respectively. Additionally, BRDTI significantly outperformed the other methods (BLM-NII, WNN-GIP, NetLapRLS and CMF) w.r.t. nDCG in 17 out of 20 cases. Furthermore, BRDTI was also shown to be able to predict novel drug-target interactions not contained in the original datasets. The average recall at the top 10 predicted targets for each drug was 0.762, 0.560, 1.000 and 0.404 for the GPCR, IC, NR, and E datasets, respectively. Based on the evaluation, we can conclude that BRDTI is an appropriate choice for researchers looking for an in silico DTI prediction technique to be used in drug
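    The BPR step that BRDTI builds on can be sketched as stochastic gradient ascent on the log-sigmoid of score differences between an observed and a sampled unobserved target, here with a target-bias term in the spirit of the paper's first extension. The toy data, hyperparameters and loop below are invented for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_drugs, n_targets, k = 8, 12, 4
# Toy set of known drug-target interactions (drug index, target index)
pos = [(0, 1), (0, 5), (1, 2), (2, 2), (3, 7), (4, 0), (5, 9), (6, 3), (7, 11)]
pos_set = set(pos)
D = 0.1 * rng.standard_normal((n_drugs, k))    # drug latent factors
T = 0.1 * rng.standard_normal((n_targets, k))  # target latent factors
b = np.zeros(n_targets)                        # target bias term

lr, reg = 0.05, 0.01
for _ in range(20000):
    d, tp = pos[rng.integers(len(pos))]        # a known interaction
    tn = int(rng.integers(n_targets))          # a sampled candidate negative
    if (d, tn) in pos_set:
        continue
    # Push the observed target's score above the unobserved target's score
    x = b[tp] - b[tn] + D[d] @ (T[tp] - T[tn])
    g = 1.0 / (1.0 + np.exp(x))                # gradient of log sigmoid(x)
    d_old = D[d].copy()
    D[d] += lr * (g * (T[tp] - T[tn]) - reg * D[d])
    T[tp] += lr * (g * d_old - reg * T[tp])
    T[tn] += lr * (-g * d_old - reg * T[tn])
    b[tp] += lr * (g - reg * b[tp])
    b[tn] += lr * (-g - reg * b[tn])

scores = D @ T.T + b                           # per-drug target rankings
print(scores[0].argsort()[::-1][:3])           # top-ranked targets for drug 0
```

    After training, the known interaction pairs should outrank randomly chosen unobserved pairs, which is exactly the per-drug ranking objective that nDCG evaluates.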

  7. bayesQR: A Bayesian Approach to Quantile Regression

    Directory of Open Access Journals (Sweden)

    Dries F. Benoit

    2017-01-01

    Full Text Available After its introduction by Koenker and Bassett (1978), quantile regression has become an important and popular tool for investigating the conditional response distribution in regression. The R package bayesQR contains a number of routines to estimate quantile regression parameters using a Bayesian approach based on the asymmetric Laplace distribution. The package contains functions for typical quantile regression with a continuous dependent variable, but also supports quantile regression for binary dependent variables. For both types of dependent variable, an approach to variable selection using the adaptive lasso is provided. For the binary quantile regression model, the package also contains a routine that calculates the fitted probabilities for each vector of predictors. In addition, functions for summarizing the results, creating traceplots, posterior histograms and quantile plots are included. This paper starts with a brief overview of the theoretical background of the models used in the bayesQR package. The main part of the paper discusses the computational problems that arise in the implementation of the procedure and illustrates the usefulness of the package through selected examples.
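    The link between the asymmetric Laplace likelihood and quantile regression is that maximizing it is equivalent to minimizing the check (pinball) loss. The sketch below fits two quantile lines on synthetic heteroscedastic data by a crude grid search over intercept and slope; this is only a stand-in for bayesQR's MCMC sampler, and all data and grids are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 1.0 + 2.0 * x + rng.normal(0, 1 + 0.3 * x)   # noise grows with x

def pinball(res, tau):
    # Check loss: tau times positive residuals, (1 - tau) times negative ones
    return np.where(res >= 0, tau * res, (tau - 1) * res).sum()

def fit_quantile(x, y, tau):
    best = None
    for a in np.linspace(-2, 4, 61):             # intercept grid
        for b in np.linspace(0, 4, 81):          # slope grid
            loss = pinball(y - a - b * x, tau)
            if best is None or loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]

a50, b50 = fit_quantile(x, y, 0.5)               # conditional median
a90, b90 = fit_quantile(x, y, 0.9)               # conditional 90th percentile
print(a50, b50, a90, b90)
```

    Because the noise scale increases with x, the fitted 0.9-quantile line is steeper than the median line, which is the kind of distributional information ordinary mean regression cannot show.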

  8. A Bayesian approach to spectral quantitative photoacoustic tomography

    International Nuclear Information System (INIS)

    Pulkkinen, A; Kaipio, J P; Tarvainen, T; Cox, B T; Arridge, S R

    2014-01-01

    A Bayesian approach to the optical reconstruction problem associated with spectral quantitative photoacoustic tomography is presented. The approach is derived for commonly used spectral tissue models of optical absorption and scattering: the absorption is described as a weighted sum of the absorption spectra of known chromophores (with spatially dependent chromophore concentrations), while the scattering is described using Mie scattering theory, with the proportionality constant and the spectral power-law parameter both spatially dependent. The approach is validated using two-dimensional test problems composed of three biologically relevant chromophores: fat, oxygenated blood and deoxygenated blood. Using this approach it is possible to estimate the Grüneisen parameter, the absolute chromophore concentrations, and the Mie scattering parameters associated with spectral photoacoustic tomography problems. In addition, direct estimation of the spectral parameters is compared to estimates obtained by fitting the spectral parameters to estimates of absorption, scattering and the Grüneisen parameter at the investigated wavelengths. It is shown with numerical examples that direct estimation results in better accuracy of the estimated parameters.
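    The spectral absorption model above is linear in the chromophore concentrations, so its core can be sketched as least-squares unmixing. Note that the absorption spectra below are invented placeholders (not tabulated chromophore values), and the full method recovers concentrations inside the Bayesian tomography inversion rather than from a known absorption spectrum:

```python
import numpy as np

wavelengths = np.array([700.0, 750.0, 800.0, 850.0, 900.0])
# Hypothetical absorption spectra, one column per chromophore (HbO2, Hb, fat);
# illustrative numbers only
E = np.array([
    [0.3, 1.2, 0.10],
    [0.5, 1.0, 0.12],
    [0.8, 0.8, 0.15],
    [1.0, 0.7, 0.30],
    [1.2, 0.8, 0.60],
])
c_true = np.array([0.6, 0.3, 0.1])               # chromophore concentrations
rng = np.random.default_rng(0)
mu_a = E @ c_true + rng.normal(0, 0.002, len(wavelengths))  # noisy absorption

# Recover concentrations: mu_a(lambda) = sum_k c_k * eps_k(lambda)
c_hat, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
print(c_hat)
```

    In the full Bayesian treatment this linear structure is what lets multi-wavelength data separate chromophores that a single-wavelength reconstruction cannot distinguish.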

  9. Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation

    Science.gov (United States)

    Ross, Steven J.; Mackey, Beth

    2015-01-01

    This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…

  10. Constraining East Antarctic mass trends using a Bayesian inference approach

    Science.gov (United States)

    Martin-Español, Alba; Bamber, Jonathan L.

    2016-04-01

    East Antarctica is an order of magnitude larger than its western neighbour and the Greenland ice sheet. It has the greatest potential to contribute to sea level rise of any source, including non-glacial contributors. It is, however, the most challenging ice mass to constrain because of a range of factors, including the relative paucity of in-situ observations and the poor signal-to-noise ratio of Earth Observation data such as satellite altimetry and gravimetry. A recent study using satellite radar and laser altimetry (Zwally et al. 2015) concluded that the East Antarctic Ice Sheet (EAIS) had been accumulating mass at a rate of 136±28 Gt/yr for the period 2003-08. Here, we use a Bayesian hierarchical model, which has been tested on, and applied to, the whole of Antarctica, to investigate the impact of different assumptions regarding the origin of elevation changes of the EAIS. We combined GRACE, satellite laser and radar altimeter data and GPS measurements to solve simultaneously for surface processes (primarily surface mass balance, SMB), ice dynamics and glacio-isostatic adjustment over the period 2003-13. The hierarchical model partitions mass trends between SMB and ice dynamics based on physical principles and measures of statistical likelihood. Without imposing the division between these processes, the model apportions about a third of the mass trend to ice dynamics, +18 Gt/yr, and two thirds, +39 Gt/yr, to SMB. The total mass trend for that period for the EAIS was 57±20 Gt/yr. Over the period 2003-08, we obtain an ice dynamic trend of 12 Gt/yr and a SMB trend of 15 Gt/yr, with a total mass trend of 27 Gt/yr. We then imposed the condition that the surface mass balance is tightly constrained by the regional climate model RACMO2.3 and allowed height changes due to ice dynamics to occur only in areas of low surface velocities, seeking a solution that satisfies all the input data given these constraints. By imposing these conditions, over the period 2003-13 we obtained a mass

  11. Reliability assessment using degradation models: bayesian and classical approaches

    Directory of Open Access Journals (Sweden)

    Marta Afonso Freitas

    2010-04-01

    Full Text Available Traditionally, reliability assessment of devices has been based on (accelerated) life tests. However, for highly reliable products, little information about reliability is provided by life tests, in which few or no failures are typically observed. Since most failures arise from a degradation mechanism at work, with characteristics that degrade over time, one alternative is to monitor the device for a period of time and assess its reliability from the changes in performance (degradation) observed during that period. The goal of this article is to illustrate how degradation data can be modeled and analyzed using "classical" and Bayesian approaches. Four methods of data analysis based on classical inference are presented. We then show how Bayesian methods can also be used to provide a natural approach to analyzing degradation data. The approaches are applied to a real data set on train wheel degradation.
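    The basic degradation-data idea can be sketched with a deliberately simple model: assume (hypothetically) linear wear paths with unit-specific random rates, and define pseudo failure times as the first crossing of a critical threshold. All numbers are invented and this is not the wheel data set analyzed in the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, threshold = 200, 50.0                 # critical degradation level
# Hypothetical linear degradation path wear_i(t) = beta_i * t, with a
# lognormal distribution of wear rates across units
beta = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=n_units)

# Pseudo failure time: when the wear path crosses the threshold
pseudo_failure_times = threshold / beta

# Reliability estimate at a given time from the crossing-time distribution
R_1500 = np.mean(pseudo_failure_times > 1500.0)
print(np.median(pseudo_failure_times), R_1500)
```

    Even though no unit has actually failed, the distribution of crossing times yields a reliability estimate; the classical and Bayesian methods in the article differ mainly in how the path model and rate distribution are estimated.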

  12. Modelling of population dynamics of red king crab using Bayesian approach

    Directory of Open Access Journals (Sweden)

    Bakanev Sergey ...

    2012-10-01

    Modeling population dynamics based on the Bayesian approach makes it possible to resolve these issues successfully. The integration of data from various studies into a unified model based on Bayesian parameter estimation provides a much more detailed description of the processes occurring in the population.

  13. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    Science.gov (United States)

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…
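    The posterior predictive check described above can be illustrated with a conjugate toy model in place of the IRT/MCMC machinery: draw parameters from the posterior, simulate replicated data, and locate the observed discrepancy within the replicated distribution. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n_items, y_obs = 20, 18          # a person answered 18 of 20 items correctly
a, b = 1.0, 1.0                  # Beta(1, 1) prior on the success probability

# Posterior is Beta(a + y, b + n - y); draw theta, then replicate the data
theta = rng.beta(a + y_obs, b + n_items - y_obs, size=5000)
y_rep = rng.binomial(n_items, theta)

# Posterior predictive p-value for the discrepancy T(y) = y
p_value = np.mean(y_rep >= y_obs)
print(p_value)
```

    An extreme p-value (near 0 or 1) would flag misfit; in the person-fit setting the discrepancy variable is a fit statistic computed from the response pattern rather than the raw score, but the mechanics are the same.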

  14. A Bayesian approach to extracting meaning from system behavior

    Energy Technology Data Exchange (ETDEWEB)

    Dress, W.B.

    1998-08-01

    The modeling relation, and its reformulation to include the semiotic hierarchy, is essential for the understanding, control, and successful re-creation of natural systems. This presentation will argue for a careful application of Rosen's modeling relationship to the problems of intelligence and autonomy in natural and artificial systems. To this end, the authors discuss the essential need for a correct theory of induction, learning, and probability, and suggest that modern Bayesian probability theory, developed by Cox, Jaynes, and others, can adequately meet such demands, especially at the operational level of extracting meaning from observations. The methods of Bayesian and maximum-entropy parameter estimation have been applied to measurements of system observables to directly infer the underlying differential equations generating system behavior. This approach bypasses the usual method of parameter estimation based on assuming a functional form for the observable and then estimating the parameters that would lead to the particular observed behavior. The computational savings are great, since only location parameters enter into the maximum-entropy calculations; this innovation finesses the need for nonlinear parameters altogether. Such an approach more directly extracts the semantics inherent in a given system by going to the root of system meaning as expressed by abstract form or shape, rather than by syntactic particulars such as signal amplitude and phase. Examples will show how the form of a system can be followed while unnecessary details are ignored. In this sense, the authors are observing the meaning of the words rather than being concerned with their particular expression or language. For the present discussion, empirical models are embodied by the differential equations underlying, producing, or describing the behavior of a process as measured or tracked by a particular variable set: the observables. The a priori models are probability structures that

  15. Hierarchical Multinomial Processing Tree Models: A Latent-Trait Approach

    Science.gov (United States)

    Klauer, Karl Christoph

    2010-01-01

    Multinomial processing tree models are widely used in many areas of psychology. A hierarchical extension of the model class is proposed, using a multivariate normal distribution of person-level parameters with the mean and covariance matrix to be estimated from the data. The hierarchical model allows one to take variability between persons into…

  16. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    … 1.2 Posterior Inference from Bayes Formula … 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...

  17. Applied Bayesian hierarchical methods

    National Research Council Canada - National Science Library

    Congdon, P

    2010-01-01

    .... It also incorporates BayesX code, which is particularly useful in nonlinear regression. To demonstrate MCMC sampling from first principles, the author includes worked examples using the R package...

  18. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts, so the HMM learning problem focuses on estimating the parameter values of the model to fit the given data sequences. However, in other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. To successfully apply the HMM methodology in these domains, a mechanism for automatically deriving the model structure from the data is important. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum-likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
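    The K-means initialization idea can be sketched for a 1D observation sequence: cluster the observations to seed the emission means, then count cluster-label transitions to seed the transition matrix. This is an illustration under invented data, with a deterministic quantile-based start, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic observation sequence from two well-separated emission regimes
seq = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200),
                      rng.normal(0, 1, 200)])

def kmeans_1d(x, k, iters=20):
    centers = np.quantile(x, np.linspace(0.25, 0.75, k))  # deterministic start
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers, labels

centers, labels = kmeans_1d(seq, 2)            # initial emission means

# Initial transition matrix from cluster-label bigram counts
trans = np.zeros((2, 2))
for s, t in zip(labels[:-1], labels[1:]):
    trans[s, t] += 1.0
trans /= trans.sum(axis=1, keepdims=True)
print(centers, np.round(trans, 3))
```

    Starting Baum-Welch (EM) from these emission means and transition counts, rather than from random values, tends to avoid poor local optima; repeating the whole fit for different numbers of states then feeds the Bayesian model-selection step.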

  19. A Bayesian approach to the modelling of α Cen A

    Science.gov (United States)

    Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.

    2012-12-01

    Determining the physical characteristics of a star is an inverse problem: estimating the parameters of models of stellar structure and evolution given certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model, both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become more efficient than classical grid-based strategies as the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates of the stellar parameters, with robust uncertainties obtained from the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ˜40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.
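    The MCMC machinery used in such studies can be illustrated with a minimal random-walk Metropolis sampler on a two-parameter toy likelihood: a Gaussian mean and scale standing in for the stellar model. Nothing here comes from ASTEC; data, priors and tuning are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(2.0, 0.5, 100)               # stand-in "observations"

def log_post(mu, log_sigma):
    # Flat priors on mu and log_sigma; Gaussian likelihood (up to a constant)
    sigma = np.exp(log_sigma)
    return -len(data) * log_sigma - 0.5 * np.sum((data - mu) ** 2) / sigma**2

chain = np.empty((10000, 2))
theta = np.array([0.0, 0.0])                   # deliberately poor start
lp = log_post(*theta)
accepted = 0
for t in range(len(chain)):
    prop = theta + rng.normal(0, 0.08, 2)      # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
        theta, lp = prop, lp_prop
        accepted += 1
    chain[t] = theta

burn = chain[2000:]                            # discard burn-in
print(burn.mean(axis=0), accepted / len(chain))
```

    The retained draws approximate the joint posterior, so marginal histograms and credible intervals come directly from the chain; the advantage over a grid is that the cost does not grow exponentially with the number of parameters.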

  20. Parameter Estimation of Structural Equation Modeling Using Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Dewi Kurnia Sari

    2016-05-01

    Full Text Available Leadership is a process of influencing, directing, or setting an example for employees in order to achieve the objectives of the organization, and it is a key element in organizational effectiveness. In addition to leadership style, the success of an organization or company in achieving its objectives can also be influenced by organizational commitment, a commitment made by each individual for the betterment of the organization. The purpose of this research is to obtain a model relating leadership style and organizational commitment to job satisfaction and employee performance, and to determine the factors that influence job satisfaction and employee performance, using SEM with a Bayesian approach. This research was conducted on 15 employees of Statistics FNI in Malang. The results showed that in the measurement model, all indicators significantly measure their respective latent variables. In the structural model, Leadership Style and Organizational Commitment have significant direct effects on Job Satisfaction, and Job Satisfaction has a significant direct effect on Employee Performance. The direct effects of Leadership Style and Organizational Commitment on Employee Performance were found to be insignificant.

  1. A Bayesian decision approach to rainfall thresholds based flood warning

    Directory of Open Access Journals (Sweden)

    M. L. V. Martina

    2006-01-01

    Full Text Available Operational real time flood forecasting systems generally require a hydrological model to run in real time as well as a series of hydro-informatics tools to transform the flood forecast into relatively simple and clear messages to the decision makers involved in flood defense. The scope of this paper is to set forth the possibility of providing flood warnings at given river sections based on the direct comparison of the quantitative precipitation forecast with critical rainfall threshold values, without the need of an on-line real time forecasting system. This approach leads to an extremely simplified alert system to be used by non technical stakeholders and could also be used to supplement the traditional flood forecasting systems in case of system failures. The critical rainfall threshold values, incorporating the soil moisture initial conditions, result from statistical analyses using long hydrological time series combined with a Bayesian utility function minimization. In the paper, results of an application of the proposed methodology to the Sieve river, a tributary of the Arno river in Italy, are given to exemplify its practical applicability.
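    The utility-minimization step behind such thresholds can be sketched in isolation: with one loss for false alarms and a larger loss for missed floods, warning becomes the lower-expected-loss decision once the flood probability exceeds a critical value, and a rainfall-to-probability curve then converts that into a rainfall threshold. The loss values and the logistic curve below are invented for illustration, not the Sieve river calibration:

```python
import math

# Hypothetical losses: a false alarm costs 1 unit, a missed flood costs 20
C_false_alarm, C_miss = 1.0, 20.0

# Expected-loss comparison: warn if p * C_miss > (1 - p) * C_false_alarm,
# i.e. whenever the flood probability p exceeds the critical value below
p_crit = C_false_alarm / (C_false_alarm + C_miss)

def p_flood(rain_mm, a=-6.0, b=0.08):
    # Hypothetical logistic link from rainfall depth (mm) to flood probability
    return 1.0 / (1.0 + math.exp(-(a + b * rain_mm)))

# Critical rainfall threshold: smallest depth at which warning is optimal
threshold_mm = next(r for r in range(301) if p_flood(r) > p_crit)
print(p_crit, threshold_mm)
```

    Because the missed-flood loss dominates, the critical probability is low (about 0.048 here), so the warning triggers well before a flood becomes likely; in the paper the probability curve also conditions on initial soil moisture, which shifts the threshold between wet and dry states.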

  2. A nonparametric Bayesian approach for genetic evaluation in ...

    African Journals Online (AJOL)

    Unknown

    Finally, one can report the whole posterior probability distributions of the parameters … the Markov chain Monte Carlo methods, and more specifically Gibbs sampling, these … Bayesian Methods in Animal Breeding Theory. J. Anim. Sci.

  3. Evolution of Subjective Hurricane Risk Perceptions: A Bayesian Approach

    OpenAIRE

    David Kelly; David Letson; Forest Nelson; David S. Nolan; Daniel Solis

    2009-01-01

    This paper studies how individuals update subjective risk perceptions in response to hurricane track forecast information, using a unique data set from an event market, the Hurricane Futures Market (HFM). We derive a theoretical Bayesian framework which predicts how traders update their perceptions of the probability of a hurricane making landfall in a certain range of coastline. Our results suggest that traders behave in a way consistent with Bayesian updating but this behavior is based on t...
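    The Bayesian updating the authors test for can be written out for a single coastline segment. The prior, the signal reliabilities, and the signal sequence below are invented for illustration; they are not values from the HFM data:

```python
# Stylized Bayesian updating of a landfall probability from binary forecast
# signals (all numbers hypothetical)
prior = 0.30                       # prior probability of landfall in a segment
# Forecast reliability: P(signal | landfall) and P(signal | no landfall)
p_sig_given_hit, p_sig_given_miss = 0.8, 0.2

def update(p, signal):
    # Bayes' rule for a binary signal about a binary event
    if signal:
        num = p * p_sig_given_hit
        den = num + (1 - p) * p_sig_given_miss
    else:
        num = p * (1 - p_sig_given_hit)
        den = num + (1 - p) * (1 - p_sig_given_miss)
    return num / den

belief = prior
for s in [True, True, False, True]:    # successive track forecasts
    belief = update(belief, s)
print(round(belief, 3))
```

    In odds form each "toward landfall" signal multiplies the odds by 4 and each "away" signal by 0.25, so the final belief depends only on the counts of each signal, not their order; deviations from this pattern in trader prices are what the paper's tests look for.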

  4. Bayesian Population Physiologically-Based Pharmacokinetic (PBPK) Approach for a Physiologically Realistic Characterization of Interindividual Variability in Clinically Relevant Populations.

    Directory of Open Access Journals (Sweden)

    Markus Krauss

    Interindividual variability in anatomical and physiological properties results in significant differences in drug pharmacokinetics. The consideration of such pharmacokinetic variability supports optimal drug efficacy and safety for each single individual, e.g. by identification of individual-specific dosings. One clear objective in clinical drug development is therefore a thorough characterization of the physiological sources of interindividual variability. In this work, we present a Bayesian population physiologically-based pharmacokinetic (PBPK) approach for the mechanistically and physiologically realistic identification of interindividual variability. The consideration of a generic and highly detailed mechanistic PBPK model structure enables the integration of large amounts of prior physiological knowledge, which is then updated with new experimental data in a Bayesian framework. A covariate model integrates known relationships of physiological parameters to age, gender and body height. We further provide a framework for estimation of the a posteriori parameter dependency structure at the population level. The approach is demonstrated considering a cohort of healthy individuals and theophylline as an application example. The variability and co-variability of physiological parameters are specified within the population. Significant correlations are identified between population parameters and are applied for individual- and population-specific visual predictive checks of the pharmacokinetic behavior, which leads to improved results compared to present population approaches. In the future, the integration of a generic PBPK model into a hierarchical approach allows for extrapolations to other populations or drugs, while the Bayesian paradigm allows for an iterative application of the approach and thereby a continuous updating of physiological knowledge with new data. This will facilitate decision making e.g. from preclinical to

  5. Estimating mental states of a depressed person with bayesian networks

    NARCIS (Netherlands)

    Klein, Michel C.A.; Modena, Gabriele

    2013-01-01

    In this work in progress paper we present an approach based on Bayesian Networks to model the relationship between mental states and empirical observations in a depressed person. We encode relationships and domain expertise as a Hierarchical Bayesian Network. Mental states are represented as latent

  6. Bayesian models: A statistical primer for ecologists

    Science.gov (United States)

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. It presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  7. A bayesian approach to classification criteria for spectacled eiders

    Science.gov (United States)

    Taylor, B.L.; Wade, P.R.; Stehn, R.A.; Cochrane, J.F.

    1996-01-01

    To facilitate decisions to classify species according to risk of extinction, we used Bayesian methods to analyze trend data for the Spectacled Eider, an arctic sea duck. Trend data from three independent surveys of the Yukon-Kuskokwim Delta were analyzed individually and in combination to yield posterior distributions for population growth rates. We used classification criteria developed by the recovery team for Spectacled Eiders that seek to equalize errors of under- or overprotecting the species. We conducted both a Bayesian decision analysis and a frequentist (classical statistical inference) decision analysis. Bayesian decision analyses are computationally easier, yield basically the same results, and yield results that are easier to explain to nonscientists. With the exception of the aerial survey analysis of the 10 most recent years, both Bayesian and frequentist methods indicated that an endangered classification is warranted. The discrepancy between surveys warrants further research. Although the trend data are abundance indices, we used a preliminary estimate of absolute abundance to demonstrate how to calculate extinction distributions using the joint probability distributions for population growth rate and variance in growth rate generated by the Bayesian analysis. Recent apparent increases in abundance highlight the need for models that apply to declining and then recovering species.

  8. When mechanism matters: Bayesian forecasting using models of ecological diffusion

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.; Russell, Robin E.; Walsh, Daniel P.; Powell, James A.

    2017-01-01

    Ecological diffusion is a theory that can be used to understand and forecast spatio-temporal processes such as dispersal, invasion, and the spread of disease. Hierarchical Bayesian modelling provides a framework to make statistical inference and probabilistic forecasts, using mechanistic ecological models. To illustrate, we show how hierarchical Bayesian models of ecological diffusion can be implemented for large data sets that are distributed densely across space and time. The hierarchical Bayesian approach is used to understand and forecast the growth and geographic spread in the prevalence of chronic wasting disease in white-tailed deer (Odocoileus virginianus). We compare statistical inference and forecasts from our hierarchical Bayesian model to phenomenological regression-based methods that are commonly used to analyse spatial occurrence data. The mechanistic statistical model based on ecological diffusion led to important ecological insights, obviated a commonly ignored type of collinearity, and was the most accurate method for forecasting.

  9. Basics of Bayesian methods.

    Science.gov (United States)

    Ghosh, Sujit K

    2010-01-01

    Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of the Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge that gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
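    The prior-times-likelihood update described above can be illustrated with the simplest conjugate case, a Beta prior on a binomial success probability (the numbers below are illustrative):

    ```python
    # Conjugate Beta-Binomial update: a Beta(a, b) prior on a success
    # probability, combined with k successes in n trials, gives the
    # posterior Beta(a + k, b + n - k) in closed form.

    def posterior_params(a, b, k, n):
        """Return the Beta posterior parameters after observing k/n successes."""
        return a + k, b + (n - k)

    def beta_mean(a, b):
        return a / (a + b)

    # Prior knowledge encoded as roughly 2 successes in 10 pseudo-trials.
    a0, b0 = 2.0, 8.0
    # Current data: 7 successes in 10 trials.
    a1, b1 = posterior_params(a0, b0, 7, 10)

    print(beta_mean(a0, b0))  # prior mean: 0.2
    print(a1, b1)             # posterior: 9.0 11.0
    print(beta_mean(a1, b1))  # posterior mean: 0.45, between prior and data
    ```

    The posterior mean (0.45) lies between the prior mean (0.2) and the observed frequency (0.7), weighted by the relative amounts of prior and observed information.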

  10. Bayesian approach in the power electric systems study of reliability ...

    African Journals Online (AJOL)

    Subsequently, Bayesian methodologies are framed within a broader class of problems, based on the definition of an appropriate "vector of state" and of a vector describing the system performances, aiming at the definition and the calculation or estimation of system reliability. The purpose of our work is to establish a useful model ...

  11. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, limited data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved to be useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  12. Hierarchical EMC analysis approach for power electronics applications

    NARCIS (Netherlands)

    Zhao, D.; Ferreira, B.; Roc'h, A.; Leferink, Frank Bernardus Johannes

    2008-01-01

    In this paper, a novel method for EMI (Electromagnetic Interference) level prediction is proposed. The method is based on the hierarchical structure of the generation of EMI. That is, the determination of EMI level can be divided into three levels, namely the functional level, the transient level

  13. Bayesian probabilistic network approach for managing earthquake risks of cities

    DEFF Research Database (Denmark)

    Bayraktarli, Yahya; Faber, Michael

    2011-01-01

    This paper considers the application of Bayesian probabilistic networks (BPNs) to large-scale risk based decision making in regard to earthquake risks. A recently developed risk management framework is outlined which utilises Bayesian probabilistic modelling, generic indicator based risk models...... and a fourth module on the consequences of an earthquake. Each of these modules is integrated into a BPN. Special attention is given to aggregated risk, i.e. the risk contribution from assets at multiple locations in a city subjected to the same earthquake. The application of the methodology is illustrated...... on an example considering a portfolio of reinforced concrete structures in a city located close to the western part of the North Anatolian Fault in Turkey....

  14. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, boundary conditions, etc. Reliability methods measure the structural safety condition and determine the optimal design parameter combination based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.

  15. Social Influence on Information Technology Adoption and Sustained Use in Healthcare: A Hierarchical Bayesian Learning Method Analysis

    Science.gov (United States)

    Hao, Haijing

    2013-01-01

    Information technology adoption and diffusion is currently a significant challenge in the healthcare delivery setting. This thesis includes three papers that explore social influence on information technology adoption and sustained use in the healthcare delivery environment using conventional regression models and novel hierarchical Bayesian…

  16. Comparison between the basic least squares and the Bayesian approach for elastic constants identification

    Science.gov (United States)

    Gogu, C.; Haftka, R.; LeRiche, R.; Molimard, J.; Vautrin, A.; Sankar, B.

    2008-11-01

    The basic formulation of the least squares method, based on the L2 norm of the misfit, is still widely used today for identifying elastic material properties from experimental data. An alternative statistical approach is the Bayesian method. We seek here situations with significant differences between the material properties found by the two methods. For a simple three-bar truss example we illustrate three such situations in which the Bayesian approach leads to more accurate results: different magnitudes of the measurements, different uncertainties in the measurements, and correlation among measurements. When all three effects add up, the Bayesian approach can have a large advantage. We then compare the two methods for identification of elastic constants from plate vibration natural frequencies.

  17. Bayesian approach for the reliability assessment of corroded interdependent pipe networks

    International Nuclear Information System (INIS)

    Ait Mokhtar, El Hassene; Chateauneuf, Alaa; Laggoune, Radouane

    2016-01-01

    Pipelines under corrosion are subject to various environmental conditions, and consequently it becomes difficult to build realistic corrosion models. In the present work, a Bayesian methodology is proposed to allow for updating the corrosion model parameters according to the evolution of environmental conditions. For reliability assessment of dependent structures, Bayesian networks are used to provide an interesting qualitative and quantitative description of the information in the system. The qualitative contribution lies in the modeling of a complex system, composed of dependent pipelines, as a Bayesian network. The quantitative one lies in the evaluation of the dependencies between pipelines by the use of a new method for the generation of conditional probability tables. The effectiveness of Bayesian updating is illustrated through an application where the new reliability of degraded (corroded) pipe networks is assessed. - Highlights: • A methodology for Bayesian network modeling of pipe networks is proposed. • Bayesian approach based on Metropolis - Hastings algorithm is conducted for corrosion model updating. • The reliability of corroded pipe network is assessed by considering the interdependencies between the pipelines.
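    Metropolis-Hastings updating of a degradation-model parameter can be sketched as follows. The linear corrosion-depth model, the inspection data, and the Exponential prior are made-up stand-ins, not the paper's model:

    ```python
    import math
    import random

    # Minimal Metropolis-Hastings sketch of Bayesian parameter updating.
    # Assumed toy model: pit depth d = r * t with Normal measurement noise
    # and an Exponential(1) prior on the corrosion rate r.
    t_obs = [2.0, 4.0, 6.0, 8.0]      # inspection times (years), hypothetical
    d_obs = [0.9, 2.1, 2.9, 4.2]      # measured pit depths (mm), hypothetical
    sigma = 0.3                       # measurement noise (mm)

    def log_post(r):
        if r <= 0:
            return float("-inf")      # prior support is r > 0
        ll = sum(-0.5 * ((d - r * t) / sigma) ** 2 for t, d in zip(t_obs, d_obs))
        return ll - r                 # Exponential(1) log-prior, up to constants

    random.seed(1)
    r, samples = 0.5, []
    for i in range(20000):
        prop = r + random.gauss(0.0, 0.1)        # symmetric random-walk proposal
        if math.log(random.random()) < log_post(prop) - log_post(r):
            r = prop                             # accept
        if i >= 5000:                            # discard burn-in
            samples.append(r)

    est = sum(samples) / len(samples)
    print(round(est, 2))  # posterior mean, close to the least-squares slope (~0.51)
    ```

    As new inspection data arrive, the same sampler is simply rerun with the extended data set, which is the updating loop the abstract describes.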

  18. An hierarchical approach to performance evaluation of expert systems

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1985-01-01

    The number and size of expert systems is growing rapidly. Formal evaluation of these systems - which is not performed for many systems - increases the acceptability by the user community and hence their success. Hierarchical evaluation that had been conducted for computer systems is applied for expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.

  19. MODELING INFORMATION SYSTEM AVAILABILITY BY USING BAYESIAN BELIEF NETWORK APPROACH

    Directory of Open Access Journals (Sweden)

    Semir Ibrahimović

    2016-03-01

    Modern information systems are expected to be always-on, providing services to end-users regardless of time and location. This is particularly important for organizations and industries where information systems support real-time operations and mission-critical applications that need to be available on a 24/7/365 basis. Examples of such entities include process industries, telecommunications, healthcare, energy, banking, electronic commerce and a variety of cloud services. This article presents a modified Bayesian Belief Network model for predicting information system availability, introduced initially by Franke, U. and Johnson, P. (in the article "Availability of enterprise IT systems – an expert based Bayesian model", Software Quality Journal 20(2), 369-394, 2012). Based on a thorough review of several dimensions of information system availability, we propose a modified set of determinants. The model is parameterized by using a probability elicitation process with the participation of experts from the financial sector of Bosnia and Herzegovina. The model validation was performed using Monte Carlo simulation.

  20. [Bayesian approach for the cost-effectiveness evaluation of healthcare technologies].

    Science.gov (United States)

    Berchialla, Paola; Gregori, Dario; Brunello, Franco; Veltri, Andrea; Petrinco, Michele; Pagano, Eva

    2009-01-01

    The development of Bayesian statistical methods for the assessment of the cost-effectiveness of health care technologies is reviewed. Although many studies adopt a frequentist approach, several authors have advocated the use of Bayesian methods in health economics. Emphasis has been placed on the advantages of the Bayesian approach, which include: (i) the ability to make more intuitive and meaningful inferences; (ii) the ability to tackle complex problems, such as allowing for the inclusion of patients who generate no cost, thanks to the availability of powerful computational algorithms; (iii) the importance of a full use of quantitative and structural prior information to produce realistic inferences. Much literature comparing the cost-effectiveness of two treatments is based on the incremental cost-effectiveness ratio. However, new methods are arising with the purpose of decision making. These methods are based on a net benefits approach. In the present context, the cost-effectiveness acceptability curves have been pointed out to be intrinsically Bayesian in their formulation. They plot the probability of a positive net benefit against the threshold cost of a unit increase in efficacy. A case study is presented in order to illustrate the Bayesian statistics in the cost-effectiveness analysis. Emphasis is placed on the cost-effectiveness acceptability curves. Advantages and disadvantages of the method described in this paper have been compared to frequentist methods and discussed.
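    A cost-effectiveness acceptability curve can be sketched by simulation: for each willingness-to-pay threshold, compute the fraction of posterior draws with positive net benefit. The incremental cost and effect distributions below are assumed stand-ins for a real posterior sample:

    ```python
    import random

    # Sketch of a cost-effectiveness acceptability curve (CEAC). The
    # incremental effect dE (QALYs) and incremental cost dC are drawn
    # from assumed normal distributions standing in for posterior draws.
    random.seed(0)
    n = 10000
    dE = [random.gauss(0.05, 0.03) for _ in range(n)]    # incremental effect
    dC = [random.gauss(800.0, 400.0) for _ in range(n)]  # incremental cost

    def ceac(lam):
        """P(net benefit lam * dE - dC > 0) at willingness-to-pay lam."""
        return sum(lam * e - c > 0 for e, c in zip(dE, dC)) / n

    for lam in (0, 10000, 20000, 50000):
        print(lam, round(ceac(lam), 2))  # acceptance probability rises with lam
    ```

    Plotting `ceac(lam)` against `lam` gives the acceptability curve described in the abstract; the decision maker reads off the probability that the new treatment is cost-effective at their own threshold.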

  1. Hierarchical Swarm Model: A New Approach to Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2010-01-01

    This paper presents a novel optimization model called hierarchical swarm optimization (HSO), which simulates the natural hierarchical complex system from which more complex intelligence can emerge for complex problem solving. This proposed model is intended to suggest ways that the performance of HSO-based algorithms on complex optimization problems can be significantly improved. This performance improvement is obtained by constructing the HSO hierarchies, which means that an agent in a higher-level swarm can be composed of swarms of other agents from lower levels, and different swarms of different levels evolve on different spatiotemporal scales. A novel optimization algorithm (named PS2O), based on the HSO model, is instantiated and tested to illustrate the ideas of the HSO model clearly. Experiments were conducted on a set of 17 benchmark optimization problems including both continuous and discrete cases. The results demonstrate remarkable performance of the PS2O algorithm on all chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms.

  2. A Bayesian approach to meta-analysis of plant pathology studies.

    Science.gov (United States)

    Mila, A L; Ngugi, H K

    2011-01-01

    Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard), which was evaluated in only seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework.
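    The random-effects model described above can be sketched with a grid approximation over the overall effect and the between-study heterogeneity. The effect sizes and variances below are hypothetical, not the fire blight data:

    ```python
    import math

    # Grid-approximation sketch of a Bayesian random-effects meta-analysis:
    # study effects y_i ~ Normal(mu, v_i + tau^2), with flat priors on the
    # overall effect mu and heterogeneity tau. Data are made up.
    y = [-0.4, -0.2, -0.6, -0.1, -0.5]   # per-study log response ratios
    v = [0.04, 0.09, 0.05, 0.12, 0.06]   # within-study variances

    mus  = [i / 100 for i in range(-120, 41)]  # grid for mu: -1.20 .. 0.40
    taus = [i / 100 for i in range(0, 81)]     # grid for tau: 0.00 .. 0.80

    def log_lik(mu, tau):
        return sum(-0.5 * math.log(v_i + tau * tau)
                   - 0.5 * (y_i - mu) ** 2 / (v_i + tau * tau)
                   for y_i, v_i in zip(y, v))

    # Unnormalised joint posterior on the grid, then the marginal mean of mu.
    w = {(m, t): math.exp(log_lik(m, t)) for m in mus for t in taus}
    z = sum(w.values())
    mu_mean = sum(m * p for (m, t), p in w.items()) / z
    print(round(mu_mean, 2))  # pooled effect, near the precision-weighted mean
    ```

    Because tau is integrated over rather than fixed, the pooled estimate automatically reflects the extra uncertainty from between-study heterogeneity, which is the sensitivity issue the abstract highlights for small numbers of studies.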

  3. Air kerma rate estimation by means of in-situ gamma spectrometry: A Bayesian approach

    International Nuclear Information System (INIS)

    Cabal, Gonzalo; Kluson, Jaroslav

    2008-01-01

    Bayesian inference is used to determine the air kerma rate based on a set of in situ environmental gamma spectra measurements performed with a NaI(Tl) scintillation detector. A natural advantage of such an approach is the possibility to quantify uncertainty not only in the air kerma rate estimation but also for the gamma spectra, which are unfolded within the procedure. The measurements were performed using a 3'' x 3'' NaI(Tl) scintillation detector. The response matrices of this detection system were calculated using a Monte Carlo code. For the calculations of the spectra as well as the air kerma rate the WinBUGS program was used. WinBUGS is dedicated software for Bayesian inference using Markov chain Monte Carlo (MCMC) methods. The results of such calculations are shown and compared with other non-Bayesian approaches, such as the Scofield-Gold iterative method and the maximum entropy method.

  4. Predicting forest insect flight activity: A Bayesian network approach.

    Directory of Open Access Journals (Sweden)

    Stephen M Pawson

    Daily flight activity patterns of forest insects are influenced by temporal and meteorological conditions. Temperature and time of day are frequently cited as key drivers of activity; however, complex interactions between multiple contributing factors have also been proposed. Here, we report individual Bayesian network models to assess the probability of flight activity of three exotic insects, Hylurgus ligniperda, Hylastes ater, and Arhopalus ferus in a managed plantation forest context. Models were built from 7,144 individual hours of insect sampling, temperature, wind speed, relative humidity, photon flux density, and temporal data. Discretized meteorological and temporal variables were used to build tree-augmented naïve Bayes networks. Calibration results suggested that the H. ater and A. ferus Bayesian network models had the best fit for low Type I and overall errors, and H. ligniperda had the best fit for low Type II errors. Maximum hourly temperature and time since sunrise had the largest influence on H. ligniperda flight activity predictions, whereas time of day and year had the greatest influence on H. ater and A. ferus activity. Type II model errors for the prediction of no flight activity are reduced by increasing the model's predictive threshold. Improvements in model performance can be made by further sampling, increasing the sensitivity of the flight intercept traps, and replicating sampling in other regions. Predicting insect flight informs an assessment of the potential phytosanitary risks of wood exports. Quantifying this risk allows mitigation treatments to be targeted to prevent the spread of invasive species via international trade pathways.
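    The basic inference step behind such models can be sketched with a plain naïve Bayes classifier over discretized predictors. The counts, feature levels, and species-free setup below are entirely hypothetical, not the paper's fitted networks:

    ```python
    # Tiny discrete naive Bayes sketch for flight / no-flight prediction.
    # Counts are invented: (flight?, temp_band, period) -> sampled hours.
    counts = {
        (1, "warm", "dusk"): 30, (1, "warm", "day"): 10, (1, "cool", "dusk"): 5,
        (0, "warm", "day"): 20, (0, "cool", "day"): 40, (0, "cool", "dusk"): 15,
    }

    def prob_flight(temp, period, alpha=1.0):
        """P(flight | temp, period) with Laplace-smoothed conditionals."""
        def cls_score(c):
            n_c = sum(v for (cc, _, _), v in counts.items() if cc == c)
            n_t = sum(v for (cc, tt, _), v in counts.items() if cc == c and tt == temp)
            n_p = sum(v for (cc, _, pp), v in counts.items() if cc == c and pp == period)
            # class prior * P(temp | class) * P(period | class); 2 levels per feature
            return n_c * (n_t + alpha) / (n_c + 2 * alpha) * (n_p + alpha) / (n_c + 2 * alpha)
        s1, s0 = cls_score(1), cls_score(0)
        return s1 / (s1 + s0)

    print(round(prob_flight("warm", "dusk"), 2))  # high flight probability
    print(round(prob_flight("cool", "day"), 2))   # low flight probability
    ```

    Raising the decision threshold on this probability trades Type II errors for Type I errors, which is the threshold adjustment the abstract mentions; a tree-augmented network additionally allows dependencies between the predictors.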

  5. Doing bayesian data analysis a tutorial with R and BUGS

    CERN Document Server

    Kruschke, John K

    2011-01-01

    There is an explosion of interest in Bayesian statistics, primarily because recently created computational methods have finally made Bayesian analysis accessible to a wide audience. Doing Bayesian Data Analysis, A Tutorial Introduction with R and BUGS provides an accessible approach to Bayesian data analysis, as material is explained clearly with concrete examples. The book begins with the basics, including essential concepts of probability and random sampling, and gradually progresses to advanced hierarchical modeling methods for realistic data. The text delivers comprehensive coverage of all

  6. Bayesian Belief Networks Approach for Modeling Irrigation Behavior

    Science.gov (United States)

    Andriyas, S.; McKee, M.

    2012-12-01

    Canal operators need information to manage water deliveries to irrigators. Short-term irrigation demand forecasts can provide potentially valuable information for a canal operator who must manage an on-demand system. Such forecasts could be generated by using information about the decision-making processes of irrigators. Bayesian models of irrigation behavior can provide insight into the likely criteria which farmers use to make irrigation decisions. This paper develops a Bayesian belief network (BBN) to learn irrigation decision-making behavior of farmers and utilizes the resulting model to make forecasts of future irrigation decisions based on factor interaction and posterior probabilities. Models for studying irrigation behavior have rarely been explored in the past. The model discussed here was built from a combination of data about biotic, climatic, and edaphic conditions under which observed irrigation decisions were made. The paper includes a case study using data collected from the Canal B region of the Sevier River, near Delta, Utah. Alfalfa, barley and corn are the main crops of the location. The model has been tested with a portion of the data to confirm the model's predictive capabilities. Irrigation rules were deduced in the process of learning and verified in the testing phase. It was found that most of the farmers used consistent rules throughout all years and across different types of crops. Soil moisture stress, which indicates the level of water available to the plant in the soil profile, was found to be one of the most significant likely driving forces for irrigation. Irrigations appeared to be triggered by a farmer's perception of soil stress, or by a perception of combined factors such as information about a neighbor irrigating or an apparent preference to irrigate on a weekend. Soil stress resulted in irrigation probabilities of 94.4% for alfalfa. With additional factors like weekend and irrigating when a neighbor irrigates, alfalfa irrigation

  7. A Robust Bayesian Approach for Structural Equation Models with Missing Data

    Science.gov (United States)

    Lee, Sik-Yum; Xia, Ye-Mao

    2008-01-01

    In this paper, normal/independent distributions, including but not limited to the multivariate t distribution, the multivariate contaminated distribution, and the multivariate slash distribution, are used to develop a robust Bayesian approach for analyzing structural equation models with complete or missing data. In the context of a nonlinear…

  8. A multi-agent systems approach to distributed bayesian information fusion

    NARCIS (Netherlands)

    Pavlin, G.; de Oude, P.; Maris, M.; Nunnink, J.; Hood, T.

    2010-01-01

    This paper introduces design principles for modular Bayesian fusion systems which can (i) cope with large quantities of heterogeneous information and (ii) can adapt to changing constellations of information sources on the fly. The presented approach exploits the locality of relations in causal

  9. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry

    NARCIS (Netherlands)

    Barcaru, A.; Mol, H.G.J.; Tienstra, M.; Vivó-Truyols, G.

    2017-01-01

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. Opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number

  10. Predicting Graduation Rates at 4-Year Broad Access Institutions Using a Bayesian Modeling Approach

    Science.gov (United States)

    Crisp, Gloria; Doran, Erin; Salis Reyes, Nicole A.

    2018-01-01

    This study models graduation rates at 4-year broad access institutions (BAIs). We examine the student body, structural-demographic, and financial characteristics that best predict 6-year graduation rates across two time periods (2008-2009 and 2014-2015). A Bayesian model averaging approach is utilized to account for uncertainty in variable…

  11. A non-parametric Bayesian approach to decompounding from high frequency data

    NARCIS (Netherlands)

    Gugushvili, Shota; van der Meulen, F.H.; Spreij, Peter

    2016-01-01

    Given a sample from a discretely observed compound Poisson process, we consider non-parametric estimation of the density f0 of its jump sizes, as well as of its intensity λ0. We take a Bayesian approach to the problem and specify the prior on f0 as the Dirichlet location mixture of normal densities.

  12. Upper limit for Poisson variable incorporating systematic uncertainties by Bayesian approach

    International Nuclear Information System (INIS)

    Zhu, Yongsheng

    2007-01-01

    To calculate the upper limit for a Poisson observable at a given confidence level, with inclusion of systematic uncertainties in the background expectation and signal efficiency, formulations have been established along the lines of the Bayesian approach. A FORTRAN program, BPULE, has been developed to implement the upper limit calculation.
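    For the simplest case without systematic uncertainties, the Bayesian upper limit with a flat prior on the signal reduces to a one-dimensional integral of the Poisson posterior. A numerical sketch (this is not the BPULE implementation, which additionally folds in the systematic uncertainties):

    ```python
    import math

    # Bayesian upper limit for a Poisson signal s with a flat prior on
    # s >= 0, known background b, and n observed events. The posterior is
    # proportional to (s + b)^n * exp(-(s + b)); integrate numerically to
    # find s_up such that P(s <= s_up | n) = cl.
    def upper_limit(n, b, cl=0.90, s_max=50.0, steps=100000):
        ds = s_max / steps
        post = [(i * ds + b) ** n * math.exp(-(i * ds + b)) for i in range(steps + 1)]
        total = sum(post)
        acc = 0.0
        for i, p in enumerate(post):
            acc += p
            if acc / total >= cl:
                return i * ds   # smallest grid point with posterior CDF >= cl
        return s_max

    print(round(upper_limit(n=3, b=0.0, cl=0.90), 2))  # ~6.68, a standard benchmark
    ```

    With a nonzero background the limit tightens, e.g. `upper_limit(n=3, b=1.0)` is below the background-free value, since part of the observed count is attributed to background.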

  13. Subject de-biasing of data sets: A Bayesian approach

    International Nuclear Information System (INIS)

    Pate-Cornell, M.E.

    1994-01-01

In this paper, the authors examine the relevance of data sets (for instance, of past incidents) for risk management decisions when there are reasons to believe that all types of incidents have not been reported at the same rate. Their objective is to infer from the data reports what actually happened in order to assess the potential benefits of different safety measures. The authors use a simple Bayesian model to correct (de-bias) the data sets given the nonreport rates, which are assessed (subjectively) by experts and encoded as the probabilities of reports given different characteristics of the events of interest. They compute a probability distribution for the past number of events given the past number of reports. They illustrate the method by the cases of two data sets: incidents in anesthesia in Australia, and oil spills in the Gulf of Mexico. In the first case, the de-biasing allows correcting for the fact that some types of incidents, such as technical malfunctions, are more likely to be reported when they occur than anesthetist mistakes. In the second case, the authors have to account for the fact that the rates of oil spill reports in different incident categories have increased over the years, perhaps at the same time as the rates of incidents themselves
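The de-biasing step in the record above amounts to a single Bayesian update: given the number of reports and an expert-assessed report probability, compute the posterior over the true number of events. A sketch, assuming independent reporting and a flat prior (both simplifications for illustration):

```python
import math

def debias_count(reports, p_report, n_max=500):
    """Posterior over the true event count n given `reports` filed reports,
    each event reported independently with probability p_report.

    Flat prior on n in [reports, n_max]; binomial likelihood
    C(n, r) p^r (1-p)^(n-r), written up to the constant factor p^r.
    """
    like = [math.comb(n, reports) * (1.0 - p_report) ** (n - reports)
            for n in range(reports, n_max + 1)]
    total = sum(like)
    posterior = [w / total for w in like]
    mean = sum((reports + i) * p for i, p in enumerate(posterior))
    return mean, posterior
```

With a flat prior the posterior is negative-binomial in shape; for example, 10 reports at an assessed 50% report rate give a posterior mean of about 21 actual events.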

  14. A Bayesian Approach to Period Searching in Solar Coronal Loops

    Energy Technology Data Exchange (ETDEWEB)

    Scherrer, Bryan; McKenzie, David [Montana State University, P.O. Box 173840 Bozeman, MT 59717-3840 (United States)

    2017-03-01

We have applied a Bayesian generalized Lomb–Scargle period searching algorithm to movies of coronal loop images obtained with the Hinode X-ray Telescope (XRT) to search for evidence of periodicities that would indicate resonant heating of the loops. The algorithm makes as its only assumption that there is a single sinusoidal signal within each light curve of the data. Both the amplitudes and noise are taken as free parameters. It is argued that this procedure should be used alongside Fourier and wavelet analyses to more accurately extract periodic intensity modulations in coronal loops. The data analyzed are from XRT Observation Program 129C: “MHD Wave Heating (Thin Filters),” which occurred on 2006 November 13 and focused on active region 10293, which included coronal loops. The first data set spans approximately 10 min with an average cadence of 2 s, 2″ per pixel resolution, and used the Al-mesh analysis filter. The second data set spans approximately 4 min with a 3 s average cadence, 1″ per pixel resolution, and used the Al-poly analysis filter. The final data set spans approximately 22 min at a 6 s average cadence, and used the Al-poly analysis filter. In total, 55 periods of sinusoidal coronal loop oscillations between 5.5 and 59.6 s are discussed, supporting proposals in the literature that resonant absorption of magnetic waves is a viable mechanism for depositing energy in the corona.
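The quantity such a period search scores can be illustrated with the classic Lomb–Scargle periodogram for unevenly sampled light curves; the Bayesian generalized form used in the paper adds a floating mean and explicit priors on amplitude and noise, which this sketch omits:

```python
import math

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle periodogram for irregularly sampled data.

    t, y: sample times and values; freqs: trial frequencies (cycles per
    unit time).  Returns the power at each trial frequency.
    """
    ybar = sum(y) / len(y)
    yc = [v - ybar for v in y]
    powers = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # Phase offset tau that decouples the sine and cosine terms.
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        pc = sum(yi * ci for yi, ci in zip(yc, c)) ** 2 / sum(ci * ci for ci in c)
        ps = sum(yi * si for yi, si in zip(yc, s)) ** 2 / sum(si * si for si in s)
        powers.append(0.5 * (pc + ps))
    return powers
```

For a noiseless sinusoid of period 7 time units sampled irregularly, the power spectrum peaks at the corresponding trial frequency.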

  15. A hierarchical structure approach to MultiSensor Information Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Maren, A.J. (Tennessee Univ., Tullahoma, TN (United States). Space Inst.); Pap, R.M.; Harston, C.T. (Accurate Automation Corp., Chattanooga, TN (United States))

    1989-01-01

A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  16. A hierarchical structure approach to MultiSensor Information Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Maren, A.J. [Tennessee Univ., Tullahoma, TN (United States). Space Inst.; Pap, R.M.; Harston, C.T. [Accurate Automation Corp., Chattanooga, TN (United States)

    1989-12-31

A major problem with image-based MultiSensor Information Fusion (MSIF) is establishing the level of processing at which information should be fused. Current methodologies, whether based on fusion at the pixel, segment/feature, or symbolic levels, are each inadequate for robust MSIF. Pixel-level fusion has problems with coregistration of the images or data. Attempts to fuse information using the features of segmented images or data rely on a presumed similarity between the segmentation characteristics of each image or data stream. Symbolic-level fusion requires too much advance processing to be useful, as we have seen in automatic target recognition tasks. Image-based MSIF systems need to operate in real-time, must perform fusion using a variety of sensor types, and should be effective across a wide range of operating conditions or deployment environments. We address this problem by developing a new representation level which facilitates matching and information fusion. The Hierarchical Scene Structure (HSS) representation, created using a multilayer, cooperative/competitive neural network, meets this need. The HSS is intermediate between a pixel-based representation and a scene interpretation representation, and represents the perceptual organization of an image. Fused HSSs will incorporate information from multiple sensors. Their knowledge-rich structure aids top-down scene interpretation via both model matching and knowledge-based region interpretation.

  17. A hierarchical bayesian analysis of parasite prevalence and sociocultural outcomes: The role of structural racism and sanitation infrastructure.

    Science.gov (United States)

    Ross, Cody T; Winterhalder, Bruce

    2016-01-01

We conduct a re-evaluation of the Thornhill and Fincher research project on parasites using finely-resolved geographic data on parasite prevalence, individual-level sociocultural data, and multilevel Bayesian modeling. In contrast to the evolutionary psychological mechanisms linking parasites to human behavior and cultural characteristics proposed by Thornhill and Fincher, we offer an alternative hypothesis that structural racism and differential access to sanitation systems drive both variation in parasite prevalence and differential behaviors and cultural characteristics. We adopt a Bayesian framework to estimate parasite prevalence rates in 51 districts in eight Latin American countries using the disease status of 170,220 individuals tested for infection with the intestinal roundworm Ascaris lumbricoides (Hürlimann et al., []: PLoS Negl Trop Dis 5:e1404). We then use district-level estimates of parasite prevalence and individual-level social data from 5,558 individuals in the same 51 districts (Latinobarómetro, 2008) to assess claims of causal associations between parasite prevalence and sociocultural characteristics. We find, contrary to Thornhill and Fincher, that parasite prevalence is positively associated with preferences for democracy, negatively associated with preferences for collectivism, and not associated with violent crime rates or gender inequality. A positive association between parasite prevalence and religiosity, as in Fincher and Thornhill (: Behav Brain Sci 35:61-79), and a negative association between parasite prevalence and achieved education, as predicted by Eppig et al. (: Proc R S B: Biol Sci 277:3801-3808), become negative and unreliable when reasonable controls are included in the model. We find support for all predictions derived from our hypothesis linking structural racism to both parasite prevalence and cultural outcomes. We conclude that best practices in biocultural modeling require examining more than one hypothesis, retaining

  18. A Bayesian Network approach for flash flood risk assessment

    Science.gov (United States)

    Boutkhamouine, Brahim; Roux, Hélène; Pérès, François

    2017-04-01

Climate change is contributing to the increase of natural disasters such as extreme weather events. Sometimes, these events lead to sudden flash floods causing devastating effects on life and property. Most recently, many regions of the French Mediterranean perimeter have endured such catastrophic flood events; Var (October 2015), Ardèche (November 2014), Nîmes (October 2014), Hérault, Gard and Languedoc (September 2014), and the Pyrenees mountains (June 2013). Altogether, these events resulted in dozens of victims and property damage amounting to millions of euros. With this heavy loss in mind, the development of hydrological forecasting and warning systems is becoming an essential element in regional and national strategies. Both flash flood forecasting and monitoring are difficult tasks because small ungauged catchments (<10 km2) are often the most destructive ones, as for the extreme flash flood event of September 2002 in the Cévennes region (France) (Ruin et al., 2008). The problem of measurement/prediction uncertainty is particularly crucial when attempting to develop operational flash-flood forecasting methods. Taking into account the uncertainty related to the model structure itself, to the model parametrization, or to the model forcing (spatio-temporal rainfall, initial conditions) is crucial in hydrological modelling. Quantifying these uncertainties is of primary importance for risk assessment and decision making. Although significant improvements have been made in computational power and distributed hydrologic modelling, the issue of integrating uncertainties into flood forecasting remains open and challenging. In order to develop a framework which could handle these uncertainties and explain their propagation through the model, we propose to explore the potential of graphical models (GMs) and, more precisely, Bayesian Networks (BNs). These networks are Directed Acyclic Graphs (DAGs) in which knowledge of a certain phenomenon is represented by

  19. A Hierarchical Approach to Persistent Scatterer Network Construction and Deformation Time Series Estimation

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2014-12-01

Full Text Available This paper presents a hierarchical approach to network construction and time series estimation in persistent scatterer interferometry (PSI) for deformation analysis using the time series of high-resolution satellite SAR images. To balance computational efficiency and solution accuracy, a divide-and-conquer algorithm (i.e., two levels of PS networking and solution) is proposed for extracting deformation rates of a study area. The algorithm has been tested using 40 high-resolution TerraSAR-X images collected between 2009 and 2010 over Tianjin in China for subsidence analysis, and validated by using ground-based leveling measurements. The experimental results indicate that the hierarchical approach can remarkably reduce computing time and memory requirements, and the subsidence measurements derived from the hierarchical solution are in good agreement with the leveling data.

  20. A flexible Bayesian hierarchical model of preterm birth risk among US Hispanic subgroups in relation to maternal nativity and education.

    Science.gov (United States)

    Kaufman, Jay S; MacLehose, Richard F; Torrone, Elizabeth A; Savitz, David A

    2011-04-19

Previous research has documented heterogeneity in the effects of maternal education on adverse birth outcomes by nativity and Hispanic subgroup in the United States. In this article, we considered the risk of preterm birth (PTB) using 9 years of vital statistics birth data from New York City. We employed finer categorizations of exposure than used previously and estimated the risk dose-response across the range of education by nativity and ethnicity. Using Bayesian random effects logistic regression models with restricted quadratic spline terms for years of completed maternal education, we calculated and plotted the estimated posterior probabilities of PTB (gestational age <37 weeks) across education by ethnic and nativity subgroups adjusted for only maternal age, as well as with more extensive covariate adjustments. We then estimated the posterior risk difference between native and foreign born mothers by ethnicity over the continuous range of education exposures. The risk of PTB varied substantially by education, nativity and ethnicity. Native born groups showed higher absolute risk of PTB and declining risk associated with higher levels of education beyond about 10 years, as did foreign-born Puerto Ricans. For most other foreign born groups, however, risk of PTB was flatter across the education range. For Mexicans, Central Americans, Dominicans, South Americans and "Others", the protective effect of foreign birth diminished progressively across the educational range. Only for Puerto Ricans was there no nativity advantage for the foreign born, although small numbers of foreign born Cubans limited precision of estimates for that group. Using flexible Bayesian regression models with random effects allowed us to estimate absolute risks without strong modeling assumptions. Risk comparisons for any sub-groups at any exposure level were simple to calculate. Shrinkage of posterior estimates through the use of random effects allowed for finer categorization of exposures without

  1. Quantitative Precipitation Estimation over Ocean Using Bayesian Approach from Microwave Observations during the Typhoon Season

    Directory of Open Access Journals (Sweden)

    Jen-Chi Hu

    2009-01-01

Full Text Available We have developed a new Bayesian approach to retrieve oceanic rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), with an emphasis on typhoon cases in the West Pacific. Retrieved rain rates are validated with measurements of rain gauges located on Japanese islands. To demonstrate improvement, retrievals are also compared with those from the TRMM Precipitation Radar (PR), the Goddard Profiling Algorithm (GPROF), and a multi-channel linear regression statistical method (MLRS). We have found that, qualitatively, all methods retrieved similar horizontal distributions in terms of locations of eyes and rain bands of typhoons. Quantitatively, our new Bayesian retrievals have the best linearity and the smallest root mean square (RMS) error against rain gauge data for 16 typhoon overpasses in 2004. The correlation coefficient and RMS of our retrievals are 0.95 and ~2 mm hr-1, respectively. In particular, at heavy rain rates, our Bayesian retrievals outperform those retrieved from GPROF and MLRS. Overall, the new Bayesian approach accurately retrieves surface rain rate for typhoon cases. Accurate rain rate estimates from this method can be assimilated in models to improve forecasts and prevent potential damage in Taiwan during typhoon seasons.
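Bayesian rain-rate retrieval is commonly implemented as a posterior mean over a database of forward-modeled brightness-temperature/rain-rate pairs weighted by a Gaussian observation likelihood. A sketch of that idea (the database entries, channel count, and noise level below are all hypothetical):

```python
import math

def retrieve_rain_rate(obs_tb, database, sigma=2.0):
    """Posterior-mean rain rate (mm/hr) given observed brightness
    temperatures (K).

    `database` holds (tb_vector, rain_rate) pairs produced by a forward
    model; sigma is the assumed per-channel observation noise.
    """
    weights, rates = [], []
    for tb, rain in database:
        d2 = sum((o - m) ** 2 for o, m in zip(obs_tb, tb))
        weights.append(math.exp(-0.5 * d2 / sigma ** 2))
        rates.append(rain)
    z = sum(weights)
    return sum(w * r for w, r in zip(weights, rates)) / z
```

An observation matching one database entry returns that entry's rain rate; an observation between entries returns a weighted blend.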

  2. A Bayesian approach for parameter estimation and prediction using a computationally intensive model

    International Nuclear Information System (INIS)

    Higdon, Dave; McDonnell, Jordan D; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M

    2015-01-01

Bayesian methods have been successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model η(θ), where θ denotes the uncertain, best input setting. Hence the statistical model is of the form y=η(θ)+ϵ, where ϵ accounts for measurement, and possibly other, error sources. When nonlinearity is present in η(⋅), the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. Although generally applicable, MCMC requires thousands (or even millions) of evaluations of the physics model η(⋅). This requirement is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory.
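The emulator idea above, fit a cheap surrogate to an ensemble of expensive model runs and do posterior inference against the surrogate, can be sketched in one dimension. A piecewise-linear interpolant stands in for the paper's statistical response surface, and a grid posterior stands in for MCMC (all names and numbers are illustrative):

```python
import math

def build_emulator(theta_train, f_train):
    """Piecewise-linear emulator fitted to an ensemble of model runs,
    a cheap stand-in for a statistical response surface."""
    def eta_hat(theta):
        for k in range(len(theta_train) - 1):
            t0, t1 = theta_train[k], theta_train[k + 1]
            if t0 <= theta <= t1:
                w = (theta - t0) / (t1 - t0)
                return (1 - w) * f_train[k] + w * f_train[k + 1]
        raise ValueError("theta outside ensemble design range")
    return eta_hat

def grid_posterior(y_obs, sigma, eta_hat, grid):
    """Normalized posterior on a parameter grid: Gaussian likelihood of
    y_obs evaluated on the emulator, flat prior."""
    logp = [-0.5 * ((y_obs - eta_hat(th)) / sigma) ** 2 for th in grid]
    m = max(logp)
    w = [math.exp(v - m) for v in logp]
    z = sum(w)
    return [wi / z for wi in w]
```

Every posterior evaluation calls the emulator instead of the expensive model, which is the whole point of the construction.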

  3. Identifying food deserts and swamps based on relative healthy food access: a spatio-temporal Bayesian approach.

    Science.gov (United States)

    Luan, Hui; Law, Jane; Quick, Matthew

    2015-12-30

Obesity and other adverse health outcomes are influenced by individual- and neighbourhood-scale risk factors, including the food environment. At the small-area scale, past research has analysed spatial patterns of food environments for one time period, overlooking how food environments change over time. Further, past research has infrequently analysed relative healthy food access (RHFA), a measure that is more representative of food purchasing and consumption behaviours than absolute outlet density. This research applies a Bayesian hierarchical model to analyse the spatio-temporal patterns of RHFA in the Region of Waterloo, Canada, from 2011 to 2014 at the small-area level. RHFA is calculated as the proportion of healthy food outlets (healthy outlets/healthy + unhealthy outlets) within 4 km of each small-area. This model measures spatial autocorrelation of RHFA, temporal trend of RHFA for the study region, and spatio-temporal trends of RHFA for small-areas. For the study region, a significant decreasing trend in RHFA is observed (-0.024), suggesting that food swamps have become more prevalent during the study period. For small-areas, significant decreasing temporal trends in RHFA were observed for all small-areas. Specific small-areas located in south Waterloo, north Kitchener, and southeast Cambridge exhibited the steepest decreasing spatio-temporal trends and are classified as spatio-temporal food swamps. This research demonstrates a Bayesian spatio-temporal modelling approach to analyse RHFA at the small-area scale. Results suggest that food swamps are more prevalent than food deserts in the Region of Waterloo. Analysing spatio-temporal trends of RHFA improves understanding of local food environment, highlighting specific small-areas where policies should be targeted to increase RHFA and reduce risk factors of adverse health outcomes such as obesity.
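The RHFA measure defined in the record, the healthy share of food outlets within 4 km of a small-area, is simple to compute once outlets are classified; the outlet schema and planar-kilometre coordinates below are assumptions for illustration:

```python
import math

def rhfa(outlets, centroid, healthy_types, radius_km=4.0):
    """Relative healthy food access for one small area: the proportion of
    food outlets within `radius_km` of the area centroid that are healthy.

    `outlets` is a list of {"xy": (x_km, y_km), "type": str} records
    (hypothetical layout); planar coordinates in km are assumed.
    """
    near = [o for o in outlets
            if math.hypot(o["xy"][0] - centroid[0],
                          o["xy"][1] - centroid[1]) <= radius_km]
    if not near:
        return float("nan")
    return sum(1 for o in near if o["type"] in healthy_types) / len(near)
```

The paper then models these per-area proportions hierarchically over space and time rather than analysing them one area at a time.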

  4. A Bayesian network approach to the database search problem in criminal proceedings

    Science.gov (United States)

    2012-01-01

    Background The ‘database search problem’, that is, the strengthening of a case - in terms of probative value - against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions
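One much-debated result such Bayesian-network analyses reproduce can be stated in closed form: with a uniform prior over N potential sources, a database search that excludes the other n-1 database members and leaves a single matcher gives that matcher posterior probability 1/(1 + (N - n)γ), where γ is the chance-match probability. A sketch (the uniform prior and single-match scenario are the standard textbook assumptions, not this paper's full network):

```python
def p_matcher_is_source(population, db_size, match_prob):
    """Posterior probability that the lone database matcher is the source
    of the crime stain.

    Assumes a uniform 1/population prior over potential sources, that the
    other db_size - 1 database members are excluded by the search, and a
    chance-match probability match_prob for each untested individual.
    """
    return 1.0 / (1.0 + (population - db_size) * match_prob)
```

Because the search excludes database members as alternative sources, the posterior is slightly higher than for a cold (single-suspect) match, which is the reputedly counter-intuitive conclusion the record mentions.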

  5. A new method for E-government procurement using collaborative filtering and Bayesian approach.

    Science.gov (United States)

    Zhang, Shuai; Xi, Chengyu; Wang, Yan; Zhang, Wenyu; Chen, Yanhong

    2013-01-01

Nowadays, as Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations, such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain and static values but only easily represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.
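The two ingredients named in this record can be sketched together: a trapezoidal-fuzzy-number similarity (one simple distance-based definition; the paper's exact formula is not given in the record) feeding an item-based collaborative-filtering score:

```python
def trapezoid_similarity(a, b):
    """Similarity of two trapezoidal fuzzy numbers, each given as
    (a1, a2, a3, a4) with parameters normalized to [0, 1].
    Illustrative distance-based definition, not the paper's exact formula."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / 4.0

def predict_score(target_item, rated_items, ratings):
    """Item-based collaborative filtering: score a candidate service as
    the similarity-weighted average of ratings on already-evaluated items."""
    num = den = 0.0
    for item, rating in zip(rated_items, ratings):
        s = trapezoid_similarity(target_item, item)
        num += s * rating
        den += s
    return num / den
```

Ranking candidate services by this score and keeping the best M yields the top-M recommendations the abstract describes.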

  6. A Bayesian approach to estimate sensible and latent heat over vegetated land surface

    Directory of Open Access Journals (Sweden)

    C. van der Tol

    2009-06-01

    Full Text Available Sensible and latent heat fluxes are often calculated from bulk transfer equations combined with the energy balance. For spatial estimates of these fluxes, a combination of remotely sensed and standard meteorological data from weather stations is used. The success of this approach depends on the accuracy of the input data and on the accuracy of two variables in particular: aerodynamic and surface conductance. This paper presents a Bayesian approach to improve estimates of sensible and latent heat fluxes by using a priori estimates of aerodynamic and surface conductance alongside remote measurements of surface temperature. The method is validated for time series of half-hourly measurements in a fully grown maize field, a vineyard and a forest. It is shown that the Bayesian approach yields more accurate estimates of sensible and latent heat flux than traditional methods.
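At its core, the Bayesian step this record describes combines an a priori estimate of a conductance with an estimate derived from the remotely sensed surface temperature. For Gaussian prior and observation (an illustrative assumption; the variable names are hypothetical) the update is conjugate:

```python
def gaussian_update(mu_prior, var_prior, mu_obs, var_obs):
    """Conjugate Gaussian update: posterior mean and variance after
    combining a prior estimate with an independent observed estimate."""
    mu_post = (mu_prior * var_obs + mu_obs * var_prior) / (var_prior + var_obs)
    var_post = var_prior * var_obs / (var_prior + var_obs)
    return mu_post, var_post
```

The posterior variance is smaller than either input variance, illustrating why combining the a priori conductances with the temperature observation tightens the flux estimates relative to either source alone.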

  7. A New Method for E-Government Procurement Using Collaborative Filtering and Bayesian Approach

    Directory of Open Access Journals (Sweden)

    Shuai Zhang

    2013-01-01

Full Text Available Nowadays, as Internet services increase faster than ever before, government systems are reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations, such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services’ attributes can hardly be expressed as certain and static values but only easily represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.

  8. Hierarchical brain networks active in approach and avoidance goal pursuit.

    Science.gov (United States)

    Spielberg, Jeffrey M; Heller, Wendy; Miller, Gregory A

    2013-01-01

    Effective approach/avoidance goal pursuit is critical for attaining long-term health and well-being. Research on the neural correlates of key goal-pursuit processes (e.g., motivation) has long been of interest, with lateralization in prefrontal cortex being a particularly fruitful target of investigation. However, this literature has often been limited by a lack of spatial specificity and has not delineated the precise aspects of approach/avoidance motivation involved. Additionally, the relationships among brain regions (i.e., network connectivity) vital to goal-pursuit remain largely unexplored. Specificity in location, process, and network relationship is vital for moving beyond gross characterizations of function and identifying the precise cortical mechanisms involved in motivation. The present paper integrates research using more spatially specific methodologies (e.g., functional magnetic resonance imaging) with the rich psychological literature on approach/avoidance to propose an integrative network model that takes advantage of the strengths of each of these literatures.

  9. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    Science.gov (United States)

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used. A geostatistical autocorrelation function is used to enforce structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed parameter applications.

  10. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid; Quintin, Jean-Noë l; Lastovetsky, Alexey

    2014-01-01

-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel

  11. A Hierarchical FEM approach for Simulation of Geometrical and Material induced Instability of Composite Structures

    DEFF Research Database (Denmark)

    Hansen, Anders L.; Lund, Erik; Pinho, Silvestre T.

    2009-01-01

    In this paper a hierarchical FE approach is utilized to simulate delamination in a composite plate loaded in uni-axial compression. Progressive delamination is modelled by use of cohesive interface elements that are automatically embedded. The non-linear problem is solved quasi-statically in whic...

  12. The Hierarchical Database Decomposition Approach to Database Concurrency Control.

    Science.gov (United States)

    1984-12-01

    approach, we postulate a model of transaction behavior under two phase locking as shown in Figure 39(a) and a model of that under multiversion ...transaction put in the block queue until it is reactivated. Under multiversion timestamping, however, the request is always granted. Once the request

  13. Design of multimodal transport networks : A hierarchical approach

    NARCIS (Netherlands)

    Van Nes, R.

    2002-01-01

    Multimodal transport, that is using two or more transport modes for a trip between which a transfer is necessary, seems an interesting approach to solving today's transportation problems with respect to the deteriorating accessibility of city centres, recurrent congestion, and environmental impact.

  14. Hierarchical brain networks active in approach and avoidance goal pursuit

    Directory of Open Access Journals (Sweden)

    Jeffrey Martin Spielberg

    2013-06-01

Full Text Available Effective approach/avoidance goal pursuit is critical for attaining long-term health and well-being. Research on the neural correlates of key goal pursuit processes (e.g., motivation) has long been of interest, with lateralization in prefrontal cortex being a particularly fruitful target of investigation. However, this literature has often been limited by a lack of spatial specificity and has not delineated the precise aspects of approach/avoidance motivation involved. Additionally, the relationships among brain regions (i.e., network connectivity) vital to goal pursuit remain largely unexplored. Specificity in location, process, and network relationship is vital for moving beyond gross characterizations of function and identifying the precise cortical mechanisms involved in motivation. The present paper integrates research using more spatially specific methodologies (e.g., functional magnetic resonance imaging) with the rich psychological literature on approach/avoidance to propose an integrative network model that takes advantage of the strengths of each of these literatures.

  15. Estimation of the order of an autoregressive time series: a Bayesian approach

    International Nuclear Information System (INIS)

    Robb, L.J.

    1980-01-01

Finite-order autoregressive models for time series are often used for prediction and other inferences. Given the order of the model, the parameters of the model can be estimated by the least-squares, maximum-likelihood, or Yule-Walker methods. The basic problem is estimating the order of the model. The problem of autoregressive order estimation is placed in a Bayesian framework. This approach illustrates how the Bayesian method brings the numerous aspects of the problem together into a coherent structure. A joint prior probability density is proposed for the order, the partial autocorrelation coefficients, and the variance; and the marginal posterior probability distribution for the order, given the data, is obtained. It is noted that the value with maximum posterior probability is the Bayes estimate of the order with respect to a particular loss function. The asymptotic posterior distribution of the order is also given. In conclusion, Wolfer's sunspot data as well as simulated data corresponding to several autoregressive models are analyzed according to Akaike's method and the Bayesian method. Both methods are observed to perform quite well, although the Bayesian method was clearly superior in most cases
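A posterior over the AR order can be sketched by combining the Yule-Walker route mentioned above (via the Levinson-Durbin recursion) with a BIC-style Laplace approximation to the marginal likelihood; this is a sketch of the idea, not the paper's exact joint-prior derivation:

```python
import math
import random

def autocov(x, max_lag):
    """Biased sample autocovariances r[0..max_lag]."""
    n, xbar = len(x), sum(x) / len(x)
    return [sum((x[t] - xbar) * (x[t + k] - xbar) for t in range(n - k)) / n
            for k in range(max_lag + 1)]

def prediction_error_variances(r, max_order):
    """Levinson-Durbin recursion: innovation variance of the best AR(k)
    fit for k = 0..max_order, from autocovariances r."""
    a, err = [], [r[0]]
    for k in range(1, max_order + 1):
        acc = sum(a[j] * r[k - 1 - j] for j in range(k - 1))
        kappa = (r[k] - acc) / err[-1]
        a = [a[j] - kappa * a[k - 2 - j] for j in range(k - 1)] + [kappa]
        err.append(err[-1] * (1.0 - kappa * kappa))
    return err

def order_posterior(x, max_order):
    """Approximate posterior over AR order: flat prior on the order plus a
    BIC-style penalty of (k/2) log n per fitted coefficient."""
    n = len(x)
    err = prediction_error_variances(autocov(x, max_order), max_order)
    logp = [-0.5 * n * math.log(e) - 0.5 * k * math.log(n)
            for k, e in enumerate(err)]
    m = max(logp)
    w = [math.exp(v - m) for v in logp]
    z = sum(w)
    return [wi / z for wi in w]
```

On data simulated from a strong AR(2) process, the posterior mass at order 2 clearly dominates the lower orders.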

  16. SHORT-TERM SOLAR FLARE LEVEL PREDICTION USING A BAYESIAN NETWORK APPROACH

    International Nuclear Information System (INIS)

    Yu Daren; Huang Xin; Hu Qinghua; Zhou Rui; Wang Huaning; Cui Yanmei

    2010-01-01

    A Bayesian network approach for short-term solar flare level prediction has been proposed based on three sequences of photospheric magnetic field parameters extracted from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms. The magnetic measures (the maximum horizontal gradient, the length of the neutral line, and the number of singular points) do not have determinate relationships with solar flares, so solar flare level prediction is considered as an uncertain reasoning process modeled by the Bayesian network. The qualitative network structure, which describes conditional independence relationships among magnetic field parameters, and the quantitative conditional probability tables, which determine the probabilistic values for each variable, are learned from the data set. Seven sequential features (the maximum, the mean, the root mean square, the standard deviation, the shape factor, the crest factor, and the pulse factor) are extracted to reduce the dimensions of the raw sequences. Two Bayesian network models are built using the raw sequential data (BN_R) and the feature-extracted data (BN_F), respectively. The explanations of these models are consistent with physical analyses of experts. The performances of the BN_R and the BN_F appear comparable with those of other methods. More importantly, the comprehensibility of the Bayesian network models is better than that of other methods.
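The seven sequential features used to compress the magnetogram sequences are simple summary statistics. The definitions of the shape, crest, and pulse factors below follow their common signal-processing forms (RMS over mean absolute value, peak over RMS, peak over mean absolute value); the abstract does not spell the formulas out, so these exact definitions are assumptions:

```python
import numpy as np

def sequential_features(x):
    """Seven summary features of a 1-D sequence, as used in signal analysis."""
    x = np.asarray(x, dtype=float)
    abs_mean = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    peak = np.max(np.abs(x))
    return {
        "max": np.max(x),
        "mean": np.mean(x),
        "rms": rms,
        "std": np.std(x),
        "shape_factor": rms / abs_mean,   # RMS over mean absolute value
        "crest_factor": peak / rms,       # peak over RMS
        "pulse_factor": peak / abs_mean,  # peak over mean absolute value
    }

feats = sequential_features([3.0, 4.0])
```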

  17. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model, with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it makes available all the well-developed modelling tools and extensions that come with these models. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  18. A top-down approach for the prediction of hardness and toughness of hierarchical materials

    International Nuclear Information System (INIS)

    Carpinteri, Alberto; Paggi, Marco

    2009-01-01

    Many natural and man-made materials exhibit structure over more than one length scale. In this paper, we deal with hierarchical grained composite materials that have recently been designed to achieve superior hardness and toughness as compared to their traditional counterparts. Their nested structure, where meso-grains are recursively composed of smaller and smaller micro-grains at the different scales with a fractal-like topology, is herein studied from a hierarchical perspective. Considering a top-down approach, i.e. from the largest to the smallest scale, we propose a recursive micromechanical model coupled with a generalized fractal mixture rule for the prediction of hardness and toughness of a grained material with n hierarchical levels. A relationship between hardness and toughness is also derived and the analytical predictions are compared with experimental data.
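A toy version of the top-down recursion can be written as a rule of mixtures applied level by level: the effective hardness at level k is a volume-weighted combination of the matrix and of the already homogenised grains of the level below. The linear mixture rule and the numbers below are purely illustrative and are not the generalized fractal mixture rule of the paper:

```python
def hierarchical_hardness(h_matrix, h_grain, grain_fraction, n_levels):
    """Top-down recursion: homogenise the finest level first, then work upward.

    At each level the grains are themselves the homogenised composite of the
    level below; the deepest grains have hardness h_grain.  A linear rule of
    mixtures (a toy assumption) is applied at every level.
    """
    h_eff = h_grain
    for _ in range(n_levels):
        h_eff = grain_fraction * h_eff + (1.0 - grain_fraction) * h_matrix
    return h_eff

# Two hierarchical levels: 60% grains of 20 GPa in a 5 GPa matrix
h2 = hierarchical_hardness(5.0, 20.0, 0.6, 2)
```

Adding levels pulls the effective hardness toward the matrix value, illustrating why the number of hierarchical levels n enters the prediction.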

  19. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    Science.gov (United States)

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables similar acoustical properties to be produced with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From these, the Bayesian model is constructed in a progressive way. The performance of the Bayesian model is evaluated based on computer simulations and compared to that of the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.

  20. Pedestrian fatality and natural light: Evidence from South Africa using a Bayesian approach

    CSIR Research Space (South Africa)

    Das, Sonali

    2014-02-01

    Full Text Available [Uncorrected-proof fragments only.] In the countries studied, the most vulnerable group is the economically active sector, namely working adults. The analysis draws on the standard relationship between the chi-square and Poisson distributions (Johnson and Kotz, 1969; Stuart and Ord, 1994). Cited as: Econ. Model. (2013), http://dx.doi.org/10.1016/j.econmod.2013.11.037

  1. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    Science.gov (United States)

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Predicting the acoustic performance of an interior requires an accurate description of the surface acoustic impedance of its boundary materials. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the mismatch between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and with a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  2. A Bayesian Approach to Multistage Fitting of the Variation of the Skeletal Age Features

    Directory of Open Access Journals (Sweden)

    Dong Hua

    2009-01-01

    Full Text Available Accurate assessment of skeletal maturity is clinically important. Skeletal age assessment is usually based on features encoded in ossification centers. It is therefore critical to design a mechanism that captures as many characteristics of these features as possible. We have observed that, for a given feature, there exist stages of skeletal age in which the variation pattern of the feature differs. Based on this observation, we propose a Bayesian cut fitting to describe features as a function of skeletal age. With our approach, appropriate positions for stage separation are determined automatically by a Bayesian approach, and a model is used to fit the variation of a feature within each stage. Our experimental results show that the proposed method surpasses traditional fitting using only one line or one curve, not only in the efficiency and accuracy of fitting but also in global and local feature characterization.

  3. Peering through a dirty window: A Bayesian approach to making mine detection decisions from noisy data

    International Nuclear Information System (INIS)

    Kercel, Stephen W.

    1998-01-01

    For several reasons, Bayesian parameter estimation is superior to other methods for extracting features of a weak signal from noise. Since it exploits prior knowledge, the analysis begins from a more advantageous starting point than other methods. Also, since "nuisance parameters" can be dropped out of the Bayesian analysis, the description of the model need not be as complete as is necessary for such methods as matched filtering. In the limit of perfectly random noise and a perfect description of the model, the signal-to-noise ratio improves as the square root of the number of samples in the data. Even with the imperfections of real-world data, Bayesian analysis approaches this ideal limit of performance more closely than other methods. A major unsolved problem in landmine detection is the fusion of data from multiple sensor types. Bayesian data fusion is only beginning to be explored as a solution to the problem. In single-sensor processes, Bayesian analysis can sense multiple parameters from the data stream of the one sensor. It does so by computing a joint probability density function of a set of parameter values from the sensor output. However, there is no inherent requirement that the information must come from a single sensor. If multiple sensors are applied to a single process, where several different parameters are implicit in each sensor output data stream, the joint probability density function of all the parameters of interest can be computed in exactly the same manner as in the single-sensor case. Thus, it is just as practical to base decisions on multiple sensor outputs as it is for single sensors. This should provide a practical way to combine the outputs of dissimilar sensors, such as ground penetrating radar and electromagnetic induction devices, producing a better detection decision than could be provided by either sensor alone.

  4. Peering through a dirty window: A Bayesian approach to making mine detection decisions from noisy data

    Energy Technology Data Exchange (ETDEWEB)

    Kercel, Stephen W.

    1998-10-11

    For several reasons, Bayesian parameter estimation is superior to other methods for extracting features of a weak signal from noise. Since it exploits prior knowledge, the analysis begins from a more advantageous starting point than other methods. Also, since "nuisance parameters" can be dropped out of the Bayesian analysis, the description of the model need not be as complete as is necessary for such methods as matched filtering. In the limit of perfectly random noise and a perfect description of the model, the signal-to-noise ratio improves as the square root of the number of samples in the data. Even with the imperfections of real-world data, Bayesian analysis approaches this ideal limit of performance more closely than other methods. A major unsolved problem in landmine detection is the fusion of data from multiple sensor types. Bayesian data fusion is only beginning to be explored as a solution to the problem. In single-sensor processes, Bayesian analysis can sense multiple parameters from the data stream of the one sensor. It does so by computing a joint probability density function of a set of parameter values from the sensor output. However, there is no inherent requirement that the information must come from a single sensor. If multiple sensors are applied to a single process, where several different parameters are implicit in each sensor output data stream, the joint probability density function of all the parameters of interest can be computed in exactly the same manner as in the single-sensor case. Thus, it is just as practical to base decisions on multiple sensor outputs as it is for single sensors. This should provide a practical way to combine the outputs of dissimilar sensors, such as ground penetrating radar and electromagnetic induction devices, producing a better detection decision than could be provided by either sensor alone.
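The core claim in these two records, that the posterior sharpens as the square root of the number of samples, can be checked with a grid-based posterior for a constant signal amplitude in Gaussian noise. This is a deliberately minimal single-parameter illustration; the true amplitude, noise level, and flat prior are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
true_amp, sigma = 1.0, 1.0
grid = np.linspace(-2.0, 4.0, 2001)

def posterior_std(n):
    """Grid posterior of the amplitude from n noisy samples (flat prior)."""
    data = true_amp + sigma * rng.standard_normal(n)
    # Gaussian log-likelihood of each candidate amplitude on the grid
    loglik = -0.5 * np.sum((data[None, :] - grid[:, None]) ** 2, axis=1) / sigma**2
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    mean = np.sum(grid * post)
    return np.sqrt(np.sum((grid - mean) ** 2 * post))

std_100, std_400 = posterior_std(100), posterior_std(400)
```

Quadrupling the number of samples halves the posterior standard deviation, matching the square-root law quoted in the abstract (here sigma/sqrt(n) exactly, since the noise variance is known).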

  5. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.

  6. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    Science.gov (United States)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
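The transform-then-model idea at the heart of the BJP approach can be sketched with SciPy's Box-Cox utilities: skewed streamflows are mapped to a roughly normal scale, a normal model is fitted there, and forecast samples are mapped back. The log-normal synthetic flows and the plain univariate normal fit stand in for the paper's multivariate, MCMC-based treatment:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(7)
flows = rng.lognormal(mean=3.0, sigma=0.5, size=200)   # skewed, positive

# Transform to near-normality; lmbda estimated by maximum likelihood
z, lmbda = stats.boxcox(flows)

# Fit a normal model in transformed space and sample probabilistic forecasts
mu, sd = z.mean(), z.std(ddof=1)
z_samples = rng.normal(mu, sd, size=1000)
flow_samples = inv_boxcox(z_samples, lmbda)
```

Back-transforming the normal samples yields a positively skewed forecast ensemble on the original flow scale, which is the practical appeal of working in the transformed space.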

  7. A Bayesian approach to estimating variance components within a multivariate generalizability theory framework.

    Science.gov (United States)

    Jiang, Zhehan; Skorupski, William

    2017-12-12

    In many behavioral research areas, multivariate generalizability theory (mG theory) has typically been used to investigate the reliability of certain multidimensional assessments. However, traditional mG-theory estimation, which relies on frequentist approaches, has limits, leading researchers to fail to take full advantage of the information that mG theory can offer regarding the reliability of measurements. Alternatively, Bayesian methods provide more information than frequentist approaches can offer. This article presents instructional guidelines on how to implement mG-theory analyses in a Bayesian framework; in particular, BUGS code is presented to fit commonly seen designs from mG theory, including single-facet designs, two-facet crossed designs, and two-facet nested designs. In addition to concrete examples that are closely related to the selected designs and the corresponding BUGS code, a simulated dataset is provided to demonstrate the utility and advantages of the Bayesian approach. This article is intended to serve as a tutorial reference for applied researchers and methodologists conducting mG-theory studies.

  8. A hybrid deterministic-probabilistic approach to model the mechanical response of helically arranged hierarchical strands

    Science.gov (United States)

    Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.

    2017-09-01

    Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings may arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically arranged and hierarchically organized wires. The analytical model is validated by comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses incorporating tension/torsion coupling are naturally recovered. Once one or more elements break, the competition between the geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires, whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
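The Equal Load Sharing picture described above is easy to reproduce for a single straight (non-helical) bundle: give each wire a Weibull strength and, at every imposed strain, let the load be carried equally by the wires that have not yet failed. Deterministic Weibull quantiles are used instead of random draws so the curve is reproducible; the modulus, Weibull parameters, and wire count are arbitrary:

```python
import numpy as np

E, m, s0, n_wires = 1.0, 5.0, 1.0, 1000   # modulus, Weibull shape/scale, wires

# Deterministic Weibull strengths via the inverse CDF at mid-quantiles
q = (np.arange(n_wires) + 0.5) / n_wires
strengths = s0 * (-np.log(1.0 - q)) ** (1.0 / m)

strains = np.linspace(0.0, 2.0, 400)
# ELS: bundle stress = wire stress times the surviving fraction
surviving = (strengths[None, :] > E * strains[:, None]).mean(axis=1)
stress = E * strains * surviving

peak = stress.max()
```

The resulting stress-strain curve rises, peaks, and then softens to zero as wires fail, which is the nonlinear post-elastic behaviour the abstract refers to; for these Weibull parameters the peak bundle stress is close to the classical ELS value s0 (1/m)^(1/m) e^(-1/m).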

  9. An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit.

    Science.gov (United States)

    Wong, Rowena Syn Yin; Ismail, Noor Azina

    2016-01-01

    There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe application of a Bayesian approach in modeling in-ICU deaths in a Malaysian ICU. This was a prospective study in a mixed medical-surgery ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables that were defined in the Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. A Bayesian Markov Chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models was assessed through overall model fit, discrimination and calibration measures. Results from the Bayesian models were also compared against results obtained using the frequentist maximum-likelihood method. The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with low co-morbidity load and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under receiver operating characteristic curve (AUC) values approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models, except for model M3. Model M1 was identified as the model with the best overall performance in this study. Four prediction models were proposed, where the best model was chosen based on its overall performance in this study. This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of in-ICU mortality outcomes.

  10. An Application of Bayesian Approach in Modeling Risk of Death in an Intensive Care Unit.

    Directory of Open Access Journals (Sweden)

    Rowena Syn Yin Wong

    Full Text Available There are not many studies that attempt to model intensive care unit (ICU) risk of death in developing countries, especially in South East Asia. The aim of this study was to propose and describe application of a Bayesian approach in modeling in-ICU deaths in a Malaysian ICU. This was a prospective study in a mixed medical-surgery ICU in a multidisciplinary tertiary referral hospital in Malaysia. Data collection included variables that were defined in the Acute Physiology and Chronic Health Evaluation IV (APACHE IV) model. A Bayesian Markov Chain Monte Carlo (MCMC) simulation approach was applied in the development of four multivariate logistic regression predictive models for the ICU, where the main outcome measure was in-ICU mortality risk. The performance of the models was assessed through overall model fit, discrimination and calibration measures. Results from the Bayesian models were also compared against results obtained using the frequentist maximum-likelihood method. The study involved 1,286 consecutive ICU admissions between January 1, 2009 and June 30, 2010, of which 1,111 met the inclusion criteria. Patients who were admitted to the ICU were generally younger, predominantly male, with low co-morbidity load and mostly under mechanical ventilation. The overall in-ICU mortality rate was 18.5% and the overall mean Acute Physiology Score (APS) was 68.5. All four models exhibited good discrimination, with area under receiver operating characteristic curve (AUC) values approximately 0.8. Calibration was acceptable (Hosmer-Lemeshow p-values > 0.05) for all models, except for model M3. Model M1 was identified as the model with the best overall performance in this study. Four prediction models were proposed, where the best model was chosen based on its overall performance in this study. This study has also demonstrated the promising potential of the Bayesian MCMC approach as an alternative in the analysis and modeling of in-ICU mortality outcomes.

  11. Location optimization of solar plants by an integrated hierarchical DEA PCA approach

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Maghsoudi, A.

    2008-01-01

    Unique features of renewable energies such as solar energy have caused increasing demand for such resources. In order to use solar energy as a natural resource, environmental circumstances and geographical location related to solar intensity must be considered. Different factors may affect the selection of a suitable location for solar plants; these factors must be considered concurrently for optimum location identification of solar plants. This article presents an integrated hierarchical approach for the location of solar plants by data envelopment analysis (DEA), principal component analysis (PCA) and numerical taxonomy (NT). Furthermore, an integrated hierarchical DEA approach incorporating the most relevant parameters of solar plants is introduced. Moreover, two multivariate methods, namely PCA and NT, are used to validate the results of the DEA model. The prescribed approach is tested for 25 different cities in Iran, with 6 different regions within each city. This is the first study that considers an integrated hierarchical DEA approach for geographical location optimization of solar plants. Implementation of the proposed approach would enable energy policy makers to select the best possible location for construction of a solar power plant at the lowest possible cost.
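The DEA step can be sketched with the standard input-oriented CCR model solved as a linear programme: for each site (decision-making unit), the efficiency theta is the smallest uniform shrinkage of its inputs for which a nonnegative combination of all sites still dominates it. The toy inputs and outputs below are invented and are not the solar-plant parameters used in the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 sites, 2 inputs (e.g. cost, cloud cover), 1 output (insolation)
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0], [5.0, 5.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                       # outputs

def ccr_efficiency(o):
    """Input-oriented CCR efficiency of unit o; variables are [theta, lambdas]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                     # minimise theta
    # sum_j lambda_j x_j - theta x_o <= 0           (input constraints)
    A_in = np.c_[-X[o].reshape(m, 1), X.T]
    # -sum_j lambda_j y_j <= -y_o                   (output constraints)
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]], bounds=(0, None))
    return res.x[0]

eff = np.array([ccr_efficiency(o) for o in range(len(X))])
```

Sites on the efficient frontier get theta = 1, dominated sites get theta < 1; the paper then cross-checks such rankings with PCA and NT.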

  12. Hierarchical time series bottom-up approach for forecast the export value in Central Java

    Science.gov (United States)

    Mahkya, D. A.; Ulama, B. S.; Suhartono

    2017-10-01

    The purpose of this study is to obtain the best model for predicting the export value of Central Java using hierarchical time series. Export value is an injection variable in a country's economy: as a country's export value increases, its economy grows. Appropriate modeling is therefore necessary to predict the export value, especially in Central Java. Export value in Central Java is grouped into 21 commodities, each with a different pattern. One approach that can be applied to such time series is a hierarchical approach; here the bottom-up hierarchical time series approach is used. The individual series at all levels are forecast using Autoregressive Integrated Moving Average (ARIMA), Radial Basis Function Neural Network (RBFNN), and hybrid ARIMA-RBFNN models. The best models are selected using the symmetric mean absolute percentage error (sMAPE). The results show that, for the export value of Central Java, the bottom-up approach with hybrid ARIMA-RBFNN modeling can be used for long-term predictions, while for short- and medium-term predictions the bottom-up approach with RBFNN modeling can be used. Overall, the bottom-up approach with RBFNN modeling gives the best results.
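The bottom-up idea itself is simple enough to show in a few lines: forecast each commodity series separately, then obtain the aggregate forecast by summation, which guarantees the forecasts are coherent across the hierarchy. Naive drift forecasts stand in for the ARIMA/RBFNN models of the paper, and the three toy commodity series are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy monthly export values for 3 commodities (rows: time, cols: series)
bottom = np.cumsum(rng.normal(1.0, 0.5, size=(36, 3)), axis=0) + 10.0

def drift_forecast(y, h):
    """Naive drift: extrapolate the average historical step h periods ahead."""
    slope = (y[-1] - y[0]) / (len(y) - 1)
    return y[-1] + slope * np.arange(1, h + 1)

h = 6
bottom_fc = np.column_stack([drift_forecast(bottom[:, j], h) for j in range(3)])
total_fc = bottom_fc.sum(axis=1)          # bottom-up aggregate forecast
```

Because the drift forecast is linear in the data, summing the bottom-level forecasts gives exactly the forecast of the summed series, which is the coherence property that motivates the bottom-up approach.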

  13. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Abstract Background A common challenge in systems biology is to infer mechanistic descriptions of a biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria pose barriers to implementing an empirical Bayesian approach. The objective of this study was to apply an adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
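The convergence criterion mentioned in this record, the Gelman-Rubin potential scale reduction factor, compares within-chain and between-chain variance; a value near 1 indicates the chains are sampling the same distribution. A minimal implementation of the classic (non-split, non-rank) formula, with synthetic chains standing in for real MCMC output:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor for an (m_chains, n_draws) array."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()          # within-chain variance
    B = n * chain_means.var(ddof=1)                # between-chain variance
    var_hat = (n - 1) / n * W + B / n              # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(3)
mixed = rng.normal(0.0, 1.0, size=(4, 2000))           # chains agree
stuck = mixed + np.array([[0.0], [0.0], [5.0], [5.0]]) # two chains offset
r_mixed, r_stuck = gelman_rubin(mixed), gelman_rubin(stuck)
```

Well-mixed chains give a factor close to 1, while chains stuck in different modes inflate the between-chain variance and push the factor well above 1, which is how the criterion flags non-convergence.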

  14. Bayesian hierarchical modelling of continuous non-negative longitudinal data with a spike at zero: An application to a study of birds visiting gardens in winter.

    Science.gov (United States)

    Swallow, Ben; Buckland, Stephen T; King, Ruth; Toms, Mike P

    2016-03-01

    The development of methods for dealing with continuous data with a spike at zero has lagged behind those for overdispersed or zero-inflated count data. We consider longitudinal ecological data corresponding to an annual average of 26 weekly maximum counts of birds, and are hence effectively continuous, bounded below by zero but also with a discrete mass at zero. We develop a Bayesian hierarchical Tweedie regression model that can directly accommodate the excess number of zeros common to this type of data, whilst accounting for both spatial and temporal correlation. Implementation of the model is conducted in a Markov chain Monte Carlo (MCMC) framework, using reversible jump MCMC to explore uncertainty across both parameter and model spaces. This regression modelling framework is very flexible and removes the need to make strong assumptions about mean-variance relationships a priori. It can also directly account for the spike at zero, whilst being easily applicable to other types of data and other model formulations. Whilst a correlative study such as this cannot prove causation, our results suggest that an increase in an avian predator may have led to an overall decrease in the number of one of its prey species visiting garden feeding stations in the United Kingdom. This may reflect a change in behaviour of house sparrows to avoid feeding stations frequented by sparrowhawks, or a reduction in house sparrow population size as a result of sparrowhawk increase. © 2015 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
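The spike at zero that motivates the Tweedie model arises naturally from its compound Poisson-gamma representation (for power parameter between 1 and 2): a Poisson number of gamma-distributed contributions, so a draw is exactly zero whenever the Poisson count is zero. The parameter values below are illustrative, not fitted to the bird-count data:

```python
import numpy as np

rng = np.random.default_rng(11)

def tweedie_cpg(lam, alpha, beta, size):
    """Compound Poisson-gamma draws: sum of N ~ Poisson(lam) Gamma(alpha, beta)."""
    n_events = rng.poisson(lam, size=size)
    out = np.zeros(size)
    pos = n_events > 0
    # Sum of k iid Gamma(alpha, scale=beta) variables is Gamma(k*alpha, scale=beta)
    out[pos] = rng.gamma(n_events[pos] * alpha, beta)
    return out

draws = tweedie_cpg(lam=1.0, alpha=2.0, beta=0.5, size=10000)
zero_frac = np.mean(draws == 0.0)
```

The probability of an exact zero is exp(-lam), so the model accommodates a genuine point mass at zero alongside a continuous positive part, precisely the data structure described in the abstract.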

  15. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    Full Text Available In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, since it does not need any additional information or assumptions. Finally, two optimization models are presented that can be used to determine the lower and upper bounds of a system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
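The maximum entropy construction mentioned above can be illustrated with Jaynes's classic example: among all distributions on {1,...,6} with a prescribed mean, the maximum entropy one has exponential (Gibbs) form, with the Lagrange multiplier found by a one-dimensional root solve. The mean of 4.5 is the traditional illustrative constraint, not a figure from the paper:

```python
import numpy as np
from scipy.optimize import brentq

support = np.arange(1, 7)
target_mean = 4.5

def mean_at(lam):
    """Mean of the Gibbs distribution p_i proportional to exp(lam * i)."""
    w = np.exp(lam * support)
    return np.sum(support * w) / np.sum(w)

# mean_at is monotonically increasing in lam, so a bracketed root solve works
lam = brentq(lambda l: mean_at(l) - target_mean, -5.0, 5.0)
p = np.exp(lam * support)
p /= p.sum()
```

The resulting probabilities tilt toward the larger faces just enough to satisfy the mean constraint while staying as uniform (maximally entropic) as possible.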

  16. Evaluating a Bayesian approach to improve accuracy of individual photographic identification methods using ecological distribution data

    Directory of Open Access Journals (Sweden)

    Richard Stafford

    2011-04-01

    Full Text Available Photographic identification of individual organisms can be possible from natural body markings. Data from photo-ID can be used to estimate important ecological and conservation metrics such as population sizes, home ranges or territories. However, poor quality photographs or less well-studied individuals can result in a non-unique ID, potentially confounding several similar-looking individuals. Here we present a Bayesian approach that uses known data about previous sightings of individuals at specific sites as priors to help assess the problems of obtaining a non-unique ID. Using a simulation of individuals with different confidence of correct ID, we evaluate the accuracy of the Bayesian-modified (posterior) probabilities. However, in most cases, the accuracy of identification decreases. Although this technique is unsuccessful, it does demonstrate the importance of computer simulations in testing such hypotheses in ecology.
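The update itself is a one-line application of Bayes' theorem: the photo-matching scores act as a likelihood over candidate individuals, and the historical sighting frequencies at the site act as the prior. The candidate names, match scores, and sighting counts below are invented for illustration:

```python
import numpy as np

# Photo-matching confidence for three candidate individuals (likelihood)
match_score = {"A": 0.50, "B": 0.45, "C": 0.05}
# Historical sightings of each individual at this site (prior, unnormalised)
sightings = {"A": 2, "B": 40, "C": 8}

ids = sorted(match_score)
lik = np.array([match_score[i] for i in ids])
prior = np.array([sightings[i] for i in ids], dtype=float)
prior /= prior.sum()

posterior = lik * prior
posterior /= posterior.sum()
best = ids[int(np.argmax(posterior))]
```

Here the site history overturns the raw match: individual A has the highest match score, but the posterior favours B, which illustrates how a strong ecological prior can either rescue or (as the abstract reports) degrade a borderline photographic identification.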

  17. Hierarchical organization of functional connectivity in the mouse brain: a complex network approach.

    Science.gov (United States)

    Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano

    2016-08-18

    This paper represents a contribution to the study of brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and apply it to a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.

  18. Complexity of major UK companies between 2006 and 2010: Hierarchical structure method approach

    Science.gov (United States)

    Ulusoy, Tolga; Keskin, Mustafa; Shirvani, Ayoub; Deviren, Bayram; Kantar, Ersin; Çaǧrı Dönmez, Cem

    2012-11-01

    This study reports on the topology of the top 40 UK companies, analysed for predictive verification of markets over the period 2006-2010 using the concepts of the minimal spanning tree (MST) and the hierarchical tree (HT). Construction of the MST and HT is confined to a brief description of the methodology and a definition of the correlation function between a pair of companies based on the London Stock Exchange (LSE) index, in order to quantify synchronization between the companies. A derivation of hierarchical organization and the construction of minimal spanning and hierarchical trees for the 2006-2008 and 2008-2010 periods have been used, and the results validate the predictive verification of the applied semantics. The trees are useful tools for perceiving and detecting the global structure, taxonomy and hierarchy in financial data. From these trees, two different clusters of companies in 2006 were detected. The trees also show three clusters in 2008 and two between 2008 and 2010, according to their proximity. The clusters match each other as regards their common production activities or their strong interrelationship. The key companies are generally given by major economic activities, as expected. This work gives a comparative approach between the MST and HT methods from statistical physics and information theory, whose analysis of financial markets may give new, valuable and useful information on financial market dynamics.
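MST construction of this kind is typically done on the distance d = sqrt(2(1 - ρ)) derived from each pairwise correlation ρ, then running a standard spanning-tree algorithm. A minimal sketch with hypothetical tickers (not LSE data):

```python
import math

def mantegna_distance(rho):
    # Standard transform mapping correlation to a metric distance
    return math.sqrt(2.0 * (1.0 - rho))

def minimal_spanning_tree(nodes, corr):
    """Kruskal's algorithm on the correlation-derived distance matrix.
    corr: {(i, j): correlation} for each unordered pair."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    # Smallest distance first, i.e. most strongly correlated pairs first
    edges = sorted(corr, key=lambda e: mantegna_distance(corr[e]))
    tree = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy example: three hypothetical companies
corr = {("AAA", "BBB"): 0.9, ("BBB", "CCC"): 0.8, ("AAA", "CCC"): 0.2}
print(minimal_spanning_tree(["AAA", "BBB", "CCC"], corr))
```

The hierarchical tree is then read off the same distances via the subdominant ultrametric, which the MST edges induce.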

  19. On the limitations of standard statistical modeling in biological systems: a full Bayesian approach for biology.

    Science.gov (United States)

    Gomez-Ramirez, Jaime; Sanz, Ricardo

    2013-09-01

    One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Banking Crisis Early Warning Model based on a Bayesian Model Averaging Approach

    Directory of Open Access Journals (Sweden)

    Taha Zaghdoudi

    2016-08-01

    Full Text Available The succession of banking crises, most of which have resulted in huge economic and financial losses, has prompted several authors to study their determinants. These authors constructed early warning models to predict their occurrence. It is in this same vein that our study takes its inspiration. In particular, we have developed a warning model of banking crises based on a Bayesian model averaging approach. The results of this approach have allowed us to identify the involvement of the decline in bank profitability, deterioration of the competitiveness of traditional intermediation, banking concentration and higher real interest rates in triggering banking crises.

  1. A Bayesian inference approach to unveil supply curves in electricity markets

    DEFF Research Database (Denmark)

    Mitridati, Lesia Marie-Jeanne Mariane; Pinson, Pierre

    2017-01-01

    With increased competition in wholesale electricity markets, the need for new decision-making tools for strategic producers has arisen. Optimal bidding strategies have traditionally been modeled as stochastic profit maximization problems. However, for producers with non-negligible market power … in the literature on modeling this uncertainty. In this study we introduce a Bayesian inference approach to reveal the aggregate supply curve in a day-ahead electricity market. The proposed algorithm relies on Markov Chain Monte Carlo and Sequential Monte Carlo methods. The major appeal of this approach …

  2. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
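A back-of-the-envelope illustration of why grouping helps, under the simplifying assumption of a flat (linear-cost) broadcast rather than the paper's full communication model:

```python
import math

# With a flat broadcast, SUMMA on a q x q process grid pays (q - 1) message
# steps per row/column broadcast. A two-level hierarchy with groups of size g
# replaces that by (q/g - 1) steps between group leaders plus (g - 1) steps
# inside each group, which can run concurrently across groups.

def broadcast_cost(procs):
    # Flat broadcast: the root sends to each of the other procs - 1 processes
    return procs - 1

def summa_broadcast_steps(p):
    q = math.isqrt(p)  # process grid is q x q
    return broadcast_cost(q)

def hierarchical_summa_broadcast_steps(p, g):
    q = math.isqrt(p)
    return broadcast_cost(q // g) + broadcast_cost(g)

p = 4096  # a 64 x 64 grid
print(summa_broadcast_steps(p))                   # 63 steps
print(hierarchical_summa_broadcast_steps(p, g=8)) # 7 + 7 = 14 steps
```

The grouped cost (q/g - 1) + (g - 1) is minimized at g ≈ sqrt(q); the paper's analysis covers the full latency-bandwidth model rather than this toy count.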

  3. A Hierarchical Approach Using Machine Learning Methods in Solar Photovoltaic Energy Production Forecasting

    OpenAIRE

    Zhaoxuan Li; SM Mahbobur Rahman; Rolando Vega; Bing Dong

    2016-01-01

    We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy productions from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work corresponds to 15 min averaged power measurements collected from 2014. The accuracy of the model is determined using computing error statisti...

  4. Energy Efficient Hierarchical Clustering Approaches in Wireless Sensor Networks: A Survey

    Directory of Open Access Journals (Sweden)

    Bilal Jan

    2017-01-01

    Full Text Available Wireless sensor networks (WSNs) are one of the significant technologies due to their diverse applications such as health care monitoring, smart phones, military, disaster management, and other surveillance systems. Sensor nodes are usually deployed in large numbers and work independently in unattended harsh environments. Due to constrained resources, typically the scarce battery power, these wireless nodes are grouped into clusters for energy-efficient communication. In clustering, hierarchical schemes have attracted great interest for minimizing energy consumption. Hierarchical schemes are generally categorized as cluster-based and grid-based approaches. In cluster-based approaches, nodes are grouped into clusters and a resourceful sensor node is nominated as a cluster head (CH), while in grid-based approaches the network is divided into confined virtual grids, usually managed by the base station. This paper highlights and discusses the design challenges for cluster-based schemes, the important cluster formation parameters, and the classification of hierarchical clustering protocols. Moreover, existing cluster-based and grid-based techniques are evaluated against certain parameters to help users in selecting an appropriate technique. Furthermore, a detailed summary of these protocols is presented with their advantages, disadvantages, and applicability in particular cases.
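As an illustration of the cluster-based category, a LEACH-style rotating cluster-head election can be sketched as follows (the node layout and parameters are illustrative, not taken from the survey):

```python
import random

def elect_cluster_heads(nodes, p=0.1, round_no=0, rng=random):
    """Each node self-elects as cluster head (CH) with the standard LEACH
    threshold T = p / (1 - p * (round_no mod 1/p)), rotating the role so the
    energy cost of being CH is shared across rounds."""
    period = int(1 / p)
    t = p / (1 - p * (round_no % period))
    return [n for n in nodes if rng.random() < t]

def assign_to_clusters(nodes, heads, pos):
    """Non-CH nodes join the nearest cluster head (positions in pos)."""
    clusters = {h: [] for h in heads}
    for n in nodes:
        if n in clusters:
            continue
        nearest = min(heads, key=lambda h: (pos[n][0] - pos[h][0]) ** 2
                                           + (pos[n][1] - pos[h][1]) ** 2)
        clusters[nearest].append(n)
    return clusters

rng = random.Random(42)
nodes = list(range(10))
pos = {n: (rng.random(), rng.random()) for n in nodes}
heads = elect_cluster_heads(nodes, p=0.2, rng=rng) or [nodes[0]]
print(assign_to_clusters(nodes, heads, pos))
```

Grid-based schemes differ mainly in that the base station, not the nodes, partitions the field into virtual cells.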

  5. An efficient Bayesian meta-analysis approach for studying cross-phenotype genetic associations.

    Directory of Open Access Journals (Sweden)

    Arunabha Majumdar

    2018-02-01

    Full Text Available Simultaneous analysis of genetic associations with multiple phenotypes may reveal shared genetic susceptibility across traits (pleiotropy). For a locus exhibiting overall pleiotropy, it is important to identify which specific traits underlie this association. We propose a Bayesian meta-analysis approach (termed CPBayes) that uses summary-level data across multiple phenotypes to simultaneously measure the evidence of aggregate-level pleiotropic association and estimate an optimal subset of traits associated with the risk locus. This method uses a unified Bayesian statistical framework based on a spike and slab prior. CPBayes performs a fully Bayesian analysis by employing the Markov chain Monte Carlo (MCMC) technique of Gibbs sampling. It takes into account heterogeneity in the size and direction of the genetic effects across traits. It can be applied to both cohort data and separate studies of multiple traits having overlapping or non-overlapping subjects. Simulations show that CPBayes can produce higher accuracy in the selection of associated traits underlying a pleiotropic signal than the subset-based meta-analysis ASSET. We used CPBayes to undertake a genome-wide pleiotropic association study of 22 traits in the large Kaiser GERA cohort and detected six independent pleiotropic loci associated with at least two phenotypes. This includes a locus at chromosomal region 1q24.2 which exhibits an association simultaneously with the risk of five different diseases: Dermatophytosis, Hemorrhoids, Iron Deficiency, Osteoporosis and Peripheral Vascular Disease. We provide an R-package 'CPBayes' implementing the proposed method.
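A much-simplified version of the spike-and-slab selection step (independent traits with known spike and slab variances; the actual CPBayes sampler also models effect heterogeneity and overlapping subjects) could be:

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def inclusion_probability(beta_hat, se, slab_var=0.1, prior_q=0.25):
    """P(trait associated | summary estimate beta_hat with standard error se).

    Spike: effect is exactly zero, so beta_hat ~ N(0, se^2).
    Slab:  effect ~ N(0, slab_var), so marginally beta_hat ~ N(0, se^2 + slab_var).
    slab_var and prior_q are illustrative hyperparameter choices.
    """
    slab = prior_q * normal_pdf(beta_hat, se ** 2 + slab_var)
    spike = (1 - prior_q) * normal_pdf(beta_hat, se ** 2)
    return slab / (slab + spike)

# A strong summary signal versus one consistent with no effect
print(inclusion_probability(0.30, 0.05))  # near 1: trait likely in the subset
print(inclusion_probability(0.01, 0.05))  # small: trait likely excluded
```

In the full method, these per-trait inclusion indicators are resampled within a Gibbs sweep rather than computed once.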

  6. A dynamic Bayesian network based approach to safety decision support in tunnel construction

    International Nuclear Information System (INIS)

    Wu, Xianguo; Liu, Huitao; Zhang, Limao; Skibniewski, Miroslaw J.; Deng, Qianli; Teng, Jiaying

    2015-01-01

    This paper presents a systemic decision approach with step-by-step procedures based on dynamic Bayesian network (DBN), aiming to provide guidelines for dynamic safety analysis of the tunnel-induced road surface damage over time. The proposed DBN-based approach can accurately illustrate the dynamic and updated feature of geological, design and mechanical variables as the construction progress evolves, in order to overcome deficiencies of traditional fault analysis methods. Adopting the predictive, sensitivity and diagnostic analysis techniques in the DBN inference, this approach is able to perform feed-forward, concurrent and back-forward control respectively on a quantitative basis, and provide real-time support before and after an accident. A case study relating to dynamic safety analysis in the construction of the Wuhan Yangtze Metro Tunnel in China is used to verify the feasibility of the proposed approach, as well as its application potential. The relationships between the DBN-based and BN-based approaches are further discussed according to the analysis results. The proposed approach can be used as a decision tool to provide support for safety analysis in tunnel construction, and thus increase the likelihood of a successful project in a dynamic project environment. - Highlights: • A dynamic Bayesian network (DBN) based approach for safety decision support is developed. • This approach is able to perform feed-forward, concurrent and back-forward analysis and control. • A case concerning dynamic safety analysis in Wuhan Yangtze Metro Tunnel in China is presented. • The DBN-based approach achieves higher accuracy than the traditional static BN-based approach

  7. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to get multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  8. Capturing changes in flood risk with Bayesian approaches for flood damage assessment

    Science.gov (United States)

    Vogel, Kristin; Schröter, Kai; Kreibich, Heidi; Thieken, Annegret; Müller, Meike; Sieg, Tobias; Laudan, Jonas; Kienzler, Sarah; Weise, Laura; Merz, Bruno; Scherbaum, Frank

    2016-04-01

    Flood risk is a function of hazard as well as of exposure and vulnerability. All three components change over space and time and have to be considered for reliable damage estimations and risk analyses, since these are the basis for an efficient, adaptable risk management. Hitherto, models for estimating flood damage have been comparatively simple and cannot sufficiently account for changing conditions. The Bayesian network approach allows for a multivariate modeling of complex systems without relying on expert knowledge about physical constraints. In a Bayesian network each model component is considered to be a random variable. The way of interaction between those variables can be learned from observations or be defined by expert knowledge; even a combination of both is possible. Moreover, the probabilistic framework captures uncertainties related to the prediction and provides a probability distribution for the damage instead of a point estimate. The graphical representation of Bayesian networks helps to study the change of probabilities for changing circumstances and may thus simplify the communication between scientists and public authorities. In the framework of the DFG-Research Training Group "NatRiskChange" we aim to develop Bayesian networks for flood damage and vulnerability assessments of residential buildings and companies under changing conditions. A Bayesian network learned from data, collected over the last 15 years in flooded regions in the Elbe and Danube catchments (Germany), reveals the impact of many variables like building characteristics, precaution and warning situation on flood damage to residential buildings. While the handling of incomplete and hybrid (discrete mixed with continuous) data is among the most challenging issues in the study on residential buildings, a similar study, which focuses on the vulnerability of small to medium sized companies, bears new challenges. Relying on a much smaller data set for the determination of the model
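A toy discrete Bayesian network in the spirit described above, with inference by enumeration; the structure and probabilities are illustrative assumptions, not the learned Elbe/Danube model:

```python
# Warning -> Precaution -> Damage: a three-node chain. Each table is a
# conditional probability distribution; numbers are made up for illustration.

P_warning = {"early": 0.3, "late": 0.7}
P_precaution = {  # P(precaution | warning)
    ("yes", "early"): 0.8, ("no", "early"): 0.2,
    ("yes", "late"): 0.2, ("no", "late"): 0.8,
}
P_damage = {  # P(damage | precaution)
    ("high", "yes"): 0.1, ("low", "yes"): 0.9,
    ("high", "no"): 0.5, ("low", "no"): 0.5,
}

def p_damage_given_warning(damage, warning):
    """Inference by enumeration: sum out the precaution variable."""
    return sum(P_precaution[(prec, warning)] * P_damage[(damage, prec)]
               for prec in ("yes", "no"))

# How the damage distribution shifts under changing warning circumstances:
print(p_damage_given_warning("high", "early"))  # 0.8*0.1 + 0.2*0.5 = 0.18
print(p_damage_given_warning("high", "late"))   # 0.2*0.1 + 0.8*0.5 = 0.42
```

Learned networks replace these hand-set tables with distributions estimated from the damage survey data, but the query mechanics are the same.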

  9. Assessing compositional variability through graphical analysis and Bayesian statistical approaches: case studies on transgenic crops.

    Science.gov (United States)

    Harrigan, George G; Harrison, Jay M

    2012-01-01

    New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.

  10. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  11. Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach

    Science.gov (United States)

    Denolle, M.; Van Houtte, C.

    2017-12-01

    Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.

  12. Delineating the Structure of Normal and Abnormal Personality: An Integrative Hierarchical Approach

    Science.gov (United States)

    Markon, Kristian E.; Krueger, Robert F.; Watson, David

    2008-01-01

    Increasing evidence indicates that normal and abnormal personality can be treated within a single structural framework. However, identification of a single integrated structure of normal and abnormal personality has remained elusive. Here, a constructive replication approach was used to delineate an integrative hierarchical account of the structure of normal and abnormal personality. This hierarchical structure, which integrates many Big Trait models proposed in the literature, replicated across a meta-analysis as well as an empirical study, and across samples of participants as well as measures. The proposed structure resembles previously suggested accounts of personality hierarchy and provides insight into the nature of personality hierarchy more generally. Potential directions for future research on personality and psychopathology are discussed. PMID:15631580

  13. Delineating the structure of normal and abnormal personality: an integrative hierarchical approach.

    Science.gov (United States)

    Markon, Kristian E; Krueger, Robert F; Watson, David

    2005-01-01

    Increasing evidence indicates that normal and abnormal personality can be treated within a single structural framework. However, identification of a single integrated structure of normal and abnormal personality has remained elusive. Here, a constructive replication approach was used to delineate an integrative hierarchical account of the structure of normal and abnormal personality. This hierarchical structure, which integrates many Big Trait models proposed in the literature, replicated across a meta-analysis as well as an empirical study, and across samples of participants as well as measures. The proposed structure resembles previously suggested accounts of personality hierarchy and provides insight into the nature of personality hierarchy more generally. Potential directions for future research on personality and psychopathology are discussed.

  14. A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube

    Science.gov (United States)

    Zou, Shuzhi; Zhao, Li; Hu, Kongfa

    The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.

  15. A Bayesian inverse modeling approach to estimate soil hydraulic properties of a toposequence in southeastern Amazonia.

    Science.gov (United States)

    Stucchi Boschi, Raquel; Qin, Mingming; Gimenez, Daniel; Cooper, Miguel

    2016-04-01

    Modeling is an important tool for better understanding and assessing land use impacts on landscape processes. A key point for environmental modeling is knowledge of soil hydraulic properties. However, direct determination of soil hydraulic properties is difficult and costly, particularly in vast and remote regions such as the one constituting the Amazon Biome. One way to overcome this problem is to extrapolate accurately estimated data to pedologically similar sites. The van Genuchten (VG) parametric equation is the most commonly used for modeling soil water retention curves (SWRC). The use of a Bayesian approach in combination with Markov chain Monte Carlo to estimate the VG parameters has several advantages compared to the widely used global optimization techniques. The Bayesian approach provides posterior distributions of parameters that are independent of the initial values and allows for uncertainty analyses. The main objectives of this study were: i) to estimate hydraulic parameters from data of pasture and forest sites by the Bayesian inverse modeling approach; and ii) to investigate the extrapolation of the estimated VG parameters to a nearby toposequence with soils pedologically similar to those used for the estimate. The parameters were estimated from volumetric water content and tension observations obtained after rainfall events during a 207-day period from pasture and forest sites located in the southeastern Amazon region. These data were used to run HYDRUS-1D under a Differential Evolution Adaptive Metropolis (DREAM) scheme 10,000 times, and only the last 2,500 runs were used to calculate the posterior distributions of each hydraulic parameter along with 95% confidence intervals (CI) of the volumetric water content and tension time series. Then, the posterior distributions were used to generate hydraulic parameters for two nearby toposequences composed of six soil profiles: three under forest and three under pasture. The parameters of the nearby site were accepted when
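The forward relationship at the heart of this inverse problem is the van Genuchten retention curve itself; a sketch with illustrative parameter values (not the study's posterior estimates):

```python
# Van Genuchten soil water retention curve with the Mualem constraint
# m = 1 - 1/n. Parameter values below are illustrative only.

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content at tension h (h in the same units as 1/alpha).

    theta_r / theta_s: residual / saturated water content
    alpha, n: shape parameters inferred in the Bayesian inversion
    """
    if h <= 0:
        return theta_s  # at or above saturation
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Example: water content drops from saturation as tension increases
for h in (0, 10, 100, 1000):  # tension in cm of water
    print(h, round(van_genuchten(h, 0.05, 0.43, 0.036, 1.56), 3))
```

In the study's setup, DREAM proposes candidate (theta_r, theta_s, alpha, n) sets and HYDRUS-1D simulates the resulting water content and tension series for comparison with the observations.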

  16. A Bayesian Approach to the Overlap Analysis of Epidemiologically Linked Traits.

    Science.gov (United States)

    Asimit, Jennifer L; Panoutsopoulou, Kalliope; Wheeler, Eleanor; Berndt, Sonja I; Cordell, Heather J; Morris, Andrew P; Zeggini, Eleftheria; Barroso, Inês

    2015-12-01

    Diseases often co-occur in individuals more often than expected by chance, and this may be explained by shared underlying genetic etiology. A common approach to genetic overlap analyses is to use summary genome-wide association study data to identify single-nucleotide polymorphisms (SNPs) that are associated with multiple traits at a selected P-value threshold. However, P-values do not account for differences in power, whereas Bayes' factors (BFs) do, and may be approximated using summary statistics. We use simulation studies to compare the power of frequentist and Bayesian approaches to overlap analyses, and to decide on appropriate thresholds for comparison between the two methods. It is empirically illustrated that BFs have the advantage over P-values of a decreasing type I error rate as study size increases for single-disease associations. Consequently, the overlap analysis of traits from different-sized studies encounters issues in fair P-value threshold selection, whereas BFs are adjusted automatically. Extensive simulations show that Bayesian overlap analyses tend to have higher power than those that assess association strength with P-values, particularly in low-power scenarios. Calibration tables between BFs and P-values are provided for a range of sample sizes, as well as an approximation approach for sample sizes that are not in the calibration table. Although P-values are sometimes thought more intuitive, these tables assist in removing the opaqueness of Bayesian thresholds and may also be used in the selection of a BF threshold to meet a certain type I error rate. An application of our methods is used to identify variants associated with both obesity and osteoarthritis. © 2015 The Authors. Genetic Epidemiology published by Wiley Periodicals, Inc.
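A Wakefield-style approximate Bayes factor computed from summary statistics illustrates how BFs adapt to study size where P-values do not; the prior effect-size variance W is an analyst's choice, and the numbers below are illustrative:

```python
import math

def approx_bayes_factor(beta_hat, se, W=0.04):
    """BF in favour of association from an effect estimate and its s.e.

    BF = N(beta_hat; 0, V + W) / N(beta_hat; 0, V), with V = se**2,
    which simplifies to sqrt(1 - r) * exp(z**2 * r / 2) for r = W / (V + W).
    """
    V = se ** 2
    z2 = (beta_hat / se) ** 2
    r = W / (V + W)
    return math.sqrt(1 - r) * math.exp(z2 * r / 2)

# Same z-score (z = 4), different study sizes: the larger study (smaller se)
# yields a smaller BF, whereas a P-value would treat the two identically.
print(approx_bayes_factor(beta_hat=0.20, se=0.05))
print(approx_bayes_factor(beta_hat=0.02, se=0.005))
```

This size-dependence is exactly the property the abstract exploits: thresholding BFs instead of P-values implicitly tightens the evidential bar as studies grow.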

  17. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach

    International Nuclear Information System (INIS)

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2013-01-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales, unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. - Highlights: ► Water quality data spans short time scales leading to significant model uncertainty. ► Assessment of uncertainty essential for informed decision making in water

  18. [Overcoming the limitations of the descriptive and categorical approaches in psychiatric diagnosis: a proposal based on Bayesian networks].

    Science.gov (United States)

    Sorias, Soli

    2015-01-01

Efforts to overcome the problems of descriptive and categorical approaches have not yielded results. In the present article, psychiatric diagnosis using Bayesian networks is proposed. Instead of a yes/no decision, Bayesian networks give the probability of diagnostic category inclusion, thereby yielding both a graded, i.e., dimensional diagnosis, and a value of the certainty of the diagnosis. With the use of Bayesian networks in the diagnosis of mental disorders, information about etiology, associated features, treatment outcome, and laboratory results may be used in addition to clinical signs and symptoms, with each of these factors contributing proportionally to its own specificity and sensitivity. Furthermore, a diagnosis (albeit one with a lower probability) can be made even with incomplete, uncertain, or partially erroneous information, and patients whose symptoms are below the diagnostic threshold can be evaluated. Lastly, there is no need for NOS or "unspecified" categories, and comorbid disorders become different dimensions of the diagnostic evaluation. Bayesian diagnoses allow the preservation of current categories and assessment methods, and may be used concurrently with criteria-based diagnoses. Users need not put in extra effort except to collect more comprehensive information. Unlike the Research Domain Criteria (RDoC) project, the Bayesian approach neither increases the diagnostic validity of existing categories nor explains the pathophysiological mechanisms of mental disorders. It can, however, be readily integrated into present classification systems. Therefore, the Bayesian approach may be an intermediate phase between criteria-based diagnosis and the RDoC ideal.
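The graded, probabilistic diagnosis the article argues for reduces, in its simplest one-sign form, to Bayes' rule. A minimal sketch with invented prevalence, sensitivity, and specificity (not figures from the article):

```python
# Hypothetical figures: prior prevalence of the disorder, and the
# sensitivity/specificity of a single clinical sign.
prior = 0.10
sens, spec = 0.85, 0.90

def posterior(prior, sens, spec, present=True):
    """P(disorder | sign) via Bayes' rule: a graded diagnosis, not yes/no."""
    if present:
        num = sens * prior
        den = sens * prior + (1 - spec) * (1 - prior)
    else:
        num = (1 - sens) * prior
        den = (1 - sens) * prior + spec * (1 - prior)
    return num / den

p1 = posterior(prior, sens, spec, present=True)    # sign observed
p0 = posterior(prior, sens, spec, present=False)   # sign absent
```

A full Bayesian network chains many such updates over a graph of signs, etiology, and laboratory findings, but the output is the same kind of quantity: a probability of category inclusion rather than a binary verdict.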

  19. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  20. Validation of Sustainable Development Practices Scale Using the Bayesian Approach to Item Response Theory

    Directory of Open Access Journals (Sweden)

    Martin Hernani Merino

    2014-12-01

Full Text Available. There has been growing recognition of the importance of creating performance measurement tools for the economic, social and environmental management of micro and small enterprise (MSE). In this context, this study aims to validate an instrument to assess perceptions of sustainable development practices by MSEs by means of a Graded Response Model (GRM) with a Bayesian approach to Item Response Theory (IRT). The results based on a sample of 506 university students in Peru, suggest that a valid measurement instrument was achieved. At the end of the paper, methodological and managerial contributions are presented.
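A Graded Response Model of the kind used in the validation can be sketched in a few lines: category probabilities are differences of adjacent cumulative logistic curves. The discrimination and thresholds below are hypothetical, not estimates from the study:

```python
import math

def grm_probs(theta, a, thresholds):
    """Graded Response Model: P(response = k | theta) for ordered categories,
    computed as differences of adjacent cumulative logistic curves.
    `thresholds` must be strictly increasing."""
    def cum(b):                       # P(response above threshold b)
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    cums = [1.0] + [cum(b) for b in thresholds] + [0.0]
    return [cums[k] - cums[k + 1] for k in range(len(cums) - 1)]

# Hypothetical 4-category item: discrimination 1.2, three ordered thresholds.
p = grm_probs(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.5])
```

The Bayesian IRT fit in the paper places priors on `a`, the thresholds, and each respondent's `theta`, but the likelihood it evaluates is exactly this category-probability function.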

  1. A population-based Bayesian approach to the minimal model of glucose and insulin homeostasis

    DEFF Research Database (Denmark)

    Andersen, Kim Emil; Højbjerre, Malene

    2005-01-01

    -posed estimation problem, where the reconstruction most often has been done by non-linear least squares techniques separately for each entity. The minmal model was originally specified for a single individual and does not combine several individuals with the advantage of estimating the metabolic portrait...... to a population-based model. The estimation of the parameters are efficiently implemented in a Bayesian approach where posterior inference is made through the use of Markov chain Monte Carlo techniques. Hereby we obtain a powerful and flexible modelling framework for regularizing the ill-posed estimation problem...

  2. BAYESIAN APPROACH TO THE ANALYSIS OF MONETARY POLICY IMPACT ON RUSSIAN MACROECONOMICS INDICATORS

    Directory of Open Access Journals (Sweden)

    Sheveleva O. A.

    2017-12-01

Full Text Available. In this paper, the interaction between the production macroeconomic indicators of the Russian economy and MIBOR (the main operational benchmark of the Bank of Russia), as well as the relationship between the inflation indicators and the money supply, were investigated with a Bayesian approach. A conjugate Normal-Inverse-Wishart prior was used. According to the study, tight monetary policy has a deterrent effect on the Russian economy. Growth of the money market rate causes a reduction in investment and output in the main sectors of the economy, as well as a drop in the income of the population with an increase in the unemployment rate.
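The conjugate-update logic behind a Normal-Inverse-Wishart prior can be illustrated in a much-simplified scalar form: a single regression-type coefficient with known noise variance. All numbers are invented and this is a stand-in for, not a reproduction of, the paper's Bayesian VAR:

```python
# Conjugate normal update for one coefficient (e.g. the response of output
# to the money market rate), with known noise variance. Hypothetical data.
prior_mu, prior_var = 0.0, 1.0        # diffuse-ish prior centred on zero
noise_var = 0.25
data = [-0.6, -0.4, -0.5, -0.3]       # hypothetical estimated responses

n = len(data)
# Precision-weighted combination of prior and data (standard conjugate result).
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mu = post_var * (prior_mu / prior_var + sum(data) / noise_var)
```

The posterior mean shrinks the sample average toward the prior, and the posterior variance is always smaller than the prior variance; the Normal-Inverse-Wishart prior applies the same arithmetic jointly to a coefficient matrix and an unknown error covariance.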

  3. A Bayesian approach to the analysis of quantal bioassay studies using nonparametric mixture models.

    Science.gov (United States)

    Fronczyk, Kassandra; Kottas, Athanasios

    2014-03-01

    We develop a Bayesian nonparametric mixture modeling framework for quantal bioassay settings. The approach is built upon modeling dose-dependent response distributions. We adopt a structured nonparametric prior mixture model, which induces a monotonicity restriction for the dose-response curve. Particular emphasis is placed on the key risk assessment goal of calibration for the dose level that corresponds to a specified response. The proposed methodology yields flexible inference for the dose-response relationship as well as for other inferential objectives, as illustrated with two data sets from the literature. © 2013, The International Biometric Society.

  4. A new approach for supply chain risk management: Mapping SCOR into Bayesian network

    Directory of Open Access Journals (Sweden)

    Mahdi Abolghasemi

    2015-01-01

Full Text Available. Purpose: Rising costs and complexity in organizations, together with increasing uncertainty and risk, have led managers to use risk management to reduce risk-taking and deviation from goals. SCRM has a close relationship with supply chain performance. Over the years, researchers have used different methods to manage supply chain risk, but most of them are either qualitative or quantitative. The supply chain operations reference (SCOR) model is a standard model for SCP evaluation which has uncertainty in its metrics. In this paper, by combining the qualitative and quantitative metrics of SCOR, supply chain performance is measured by Bayesian networks. Design/methodology/approach: First, a qualitative assessment is made by identifying the uncertain metrics of the SCOR model; then, by quantifying them, supply chain performance is measured with Bayesian networks (BNs) and the supply chain operations reference (SCOR) model, in which decisions on uncertain variables are made through predictive and diagnostic capabilities. Findings: After applying the proposed method in one of the biggest automotive companies in Iran, we identified key factors of supply chain performance based on the SCOR model through the predictive and diagnostic capability of Bayesian networks. After sensitivity analysis, we found that ‘Total cost’ and its criteria, which include the costs of labor, warranty, transportation and inventory, have the widest range and the most effect on supply chain performance. So, managers should take their importance into account for decision making. Decisions can be made simply by running the model in different situations. Research limitations/implications: A more precise model would consist of numerous factors, but it is difficult and sometimes impossible to solve big models if we insert all of them in a Bayesian model. We have adapted real-world characteristics to the abilities of our software and method. On the other hand, fewer data exist for some

  5. Life cycle reliability assessment of new products—A Bayesian model updating approach

    International Nuclear Information System (INIS)

    Peng, Weiwen; Huang, Hong-Zhong; Li, Yanfeng; Zuo, Ming J.; Xie, Min

    2013-01-01

The rapidly increasing pace and continuously evolving reliability requirements of new products have made life cycle reliability assessment of new products an imperative yet difficult task. While much work has been done to separately estimate reliability of new products in specific stages, a gap exists in carrying out life cycle reliability assessment throughout all life cycle stages. We present a Bayesian model updating approach (BMUA) for life cycle reliability assessment of new products. Novel features of this approach are the development of Bayesian information toolkits by separately including “reliability improvement factor” and “information fusion factor”, which allow the integration of subjective information in a specific life cycle stage and the transition of integrated information between adjacent life cycle stages. They lead to the unique characteristics of the BMUA, in which information generated throughout life cycle stages is integrated coherently. To illustrate the approach, an application to the life cycle reliability assessment of a newly developed Gantry Machining Center is shown.

  6. Bayesian approach to MSD-based analysis of particle motion in live cells.

    Science.gov (United States)

    Monnier, Nilah; Guo, Syuan-Ming; Mori, Masashi; He, Jun; Lénárt, Péter; Bathe, Mark

    2012-08-08

    Quantitative tracking of particle motion using live-cell imaging is a powerful approach to understanding the mechanism of transport of biological molecules, organelles, and cells. However, inferring complex stochastic motion models from single-particle trajectories in an objective manner is nontrivial due to noise from sampling limitations and biological heterogeneity. Here, we present a systematic Bayesian approach to multiple-hypothesis testing of a general set of competing motion models based on particle mean-square displacements that automatically classifies particle motion, properly accounting for sampling limitations and correlated noise while appropriately penalizing model complexity according to Occam's Razor to avoid over-fitting. We test the procedure rigorously using simulated trajectories for which the underlying physical process is known, demonstrating that it chooses the simplest physical model that explains the observed data. Further, we show that computed model probabilities provide a reliability test for the downstream biological interpretation of associated parameter values. We subsequently illustrate the broad utility of the approach by applying it to disparate biological systems including experimental particle trajectories from chromosomes, kinetochores, and membrane receptors undergoing a variety of complex motions. This automated and objective Bayesian framework easily scales to large numbers of particle trajectories, making it ideal for classifying the complex motion of large numbers of single molecules and cells from high-throughput screens, as well as single-cell-, tissue-, and organism-level studies. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
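The first step of such an MSD-based analysis, computing time-averaged mean-square displacements and fitting a candidate motion model, can be sketched as follows. This is a deliberately simplified pure-diffusion version, not the authors' multiple-hypothesis Bayesian procedure; the trajectory is simulated:

```python
import math
import random

random.seed(1)

# Simulate a 1-D Brownian trajectory with known diffusion coefficient D.
D_true, dt, n = 0.5, 1.0, 2000
x = [0.0]
for _ in range(n):
    x.append(x[-1] + random.gauss(0.0, math.sqrt(2 * D_true * dt)))

def msd(traj, lag):
    """Time-averaged mean-square displacement at a given lag."""
    return sum((traj[i + lag] - traj[i]) ** 2
               for i in range(len(traj) - lag)) / (len(traj) - lag)

lags = list(range(1, 11))
m = [msd(x, L) for L in lags]

# For free diffusion MSD(t) = 2*D*t, so a least-squares fit through the
# origin over short lags recovers D.
D_hat = sum(mi * 2 * L * dt for mi, L in zip(m, lags)) / \
        sum((2 * L * dt) ** 2 for L in lags)
```

The paper's contribution sits on top of this: instead of assuming free diffusion, it scores several competing MSD models against the data and penalizes the more complex ones, so correlated noise and short trajectories do not lead to over-fitting.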

  7. GO-Bayes: Gene Ontology-based overrepresentation analysis using a Bayesian approach.

    Science.gov (United States)

    Zhang, Song; Cao, Jing; Kong, Y Megan; Scheuermann, Richard H

    2010-04-01

    A typical approach for the interpretation of high-throughput experiments, such as gene expression microarrays, is to produce groups of genes based on certain criteria (e.g. genes that are differentially expressed). To gain more mechanistic insights into the underlying biology, overrepresentation analysis (ORA) is often conducted to investigate whether gene sets associated with particular biological functions, for example, as represented by Gene Ontology (GO) annotations, are statistically overrepresented in the identified gene groups. However, the standard ORA, which is based on the hypergeometric test, analyzes each GO term in isolation and does not take into account the dependence structure of the GO-term hierarchy. We have developed a Bayesian approach (GO-Bayes) to measure overrepresentation of GO terms that incorporates the GO dependence structure by taking into account evidence not only from individual GO terms, but also from their related terms (i.e. parents, children, siblings, etc.). The Bayesian framework borrows information across related GO terms to strengthen the detection of overrepresentation signals. As a result, this method tends to identify sets of closely related GO terms rather than individual isolated GO terms. The advantage of the GO-Bayes approach is demonstrated with a simulation study and an application example.
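The standard hypergeometric ORA that GO-Bayes extends is easy to state concretely. A minimal sketch with invented gene counts (the GO-Bayes method replaces this per-term test with an analysis that borrows strength across the GO hierarchy):

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """One-sided overrepresentation p-value P(X >= k): the chance of seeing
    at least k annotated genes when drawing n genes from a universe of N
    that contains K genes annotated with the GO term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical numbers: 10,000 genes, 100 carry the GO term,
# 50 were selected as differentially expressed, 8 of those carry the term.
p = hypergeom_pvalue(10000, 100, 50, 8)
```

Under these numbers the expected overlap is only 0.5 genes, so observing 8 is extremely unlikely by chance; the limitation the abstract points out is that each GO term is tested in isolation like this, ignoring parent and child terms.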

  8. Treatment strategies for coronary in-stent restenosis: systematic review and hierarchical Bayesian network meta-analysis of 24 randomised trials and 4880 patients

    Science.gov (United States)

    Giacoppo, Daniele; Gargiulo, Giuseppe; Aruta, Patrizia; Capranzano, Piera; Tamburino, Corrado

    2015-01-01

Study question What is the safest and most effective interventional treatment for coronary in-stent restenosis? Methods In a hierarchical Bayesian network meta-analysis, PubMed, Embase, Scopus, Cochrane Library, Web of Science, ScienceDirect, and major scientific websites were screened up to 10 August 2015. Randomised controlled trials of patients with any type of coronary in-stent restenosis (either of bare metal stents or drug eluting stents; and either first or recurrent instances) were included. Trials including multiple treatments at the same time in the same group or comparing variants of the same intervention were excluded. Primary endpoints were target lesion revascularisation and late lumen loss, both at six to 12 months. The main analysis was complemented by network subanalyses, standard pairwise comparisons, and subgroup and sensitivity analyses. Study answer and limitations Twenty four trials (4880 patients), including seven interventional treatments, were identified. Compared with plain balloons, bare metal stents, brachytherapy, rotational atherectomy, and cutting balloons, drug coated balloons and drug eluting stents were associated with a reduced risk of target lesion revascularisation and major adverse cardiac events, and with reduced late lumen loss. Treatment ranking indicated that drug eluting stents had the highest probability (61.4%) of being the most effective for target lesion vascularisation; drug coated balloons were similarly indicated as the most effective treatment for late lumen loss (probability 70.3%). The comparative efficacy of drug coated balloons and drug eluting stents was similar for target lesion revascularisation (summary odds ratio 1.10, 95% credible interval 0.59 to 2.01) and late lumen loss reduction (mean difference in minimum lumen diameter 0.04 mm, 95% credible interval −0.20 to 0.10). Risks of death, myocardial infarction, and stent thrombosis were comparable across all treatments, but these analyses were limited by a
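Treatment-ranking probabilities of the kind reported above (e.g. the 61.4% and 70.3% figures) are typically computed by counting how often each treatment wins across posterior draws. A toy sketch with invented posterior summaries, lower late lumen loss being better:

```python
import random

random.seed(2)

# Toy posterior summaries (mean, sd) for late lumen loss in mm; the
# treatment labels are real, the numbers are invented for illustration.
params = {"DCB": (0.20, 0.05), "DES": (0.24, 0.05), "POBA": (0.60, 0.08)}

draws = 10000
wins = {k: 0 for k in params}
for _ in range(draws):
    # One joint posterior draw per treatment, then credit the best performer.
    sample = {k: random.gauss(mu, sd) for k, (mu, sd) in params.items()}
    wins[min(sample, key=sample.get)] += 1

p_best = {k: wins[k] / draws for k in params}   # "probability of being best"
```

In a real network meta-analysis the draws come from the fitted hierarchical model's MCMC chains rather than independent normals, but the counting step is the same.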

  9. Global, regional, and subregional trends in unintended pregnancy and its outcomes from 1990 to 2014: estimates from a Bayesian hierarchical model

    Directory of Open Access Journals (Sweden)

    Jonathan Bearak, PhD

    2018-04-01

Full Text Available. Summary: Background: Estimates of pregnancy incidence by intention status and outcome indicate how effectively women and couples are able to fulfil their childbearing aspirations, and can be used to monitor the impact of family-planning programmes. We estimate global, regional, and subregional pregnancy rates by intention status and outcome for 1990–2014. Methods: We developed a Bayesian hierarchical time series model whereby the unintended pregnancy rate is a function of the distribution of women across subgroups defined by marital status and contraceptive need and use, and of the risk of unintended pregnancy in each subgroup. Data included numbers of births and of women estimated by the UN Population Division, recently published abortion incidence estimates, and findings from surveys of women on the percentage of births or pregnancies that were unintended. Some 298 datapoints on the intention status of births or pregnancies were obtained for 105 countries. Findings: Worldwide, an estimated 44% (90% uncertainty interval [UI] 42–48) of pregnancies were unintended in 2010–14. The unintended pregnancy rate declined by 30% (90% UI 21–39) in developed regions, from 64 (59–81) per 1000 women aged 15–44 years in 1990–94 to 45 (42–56) in 2010–14. In developing regions, the unintended pregnancy rate fell 16% (90% UI 5–24), from 77 (74–88) per 1000 women aged 15–44 years to 65 (62–76). Whereas the decline in the unintended pregnancy rate in developed regions coincided with a declining abortion rate, the decline in developing regions coincided with a declining unintended birth rate. In 2010–14, 59% (90% UI 54–65) of unintended pregnancies ended in abortion in developed regions, as did 55% (52–60) of unintended pregnancies in developing regions. Interpretation: The unintended pregnancy rate remains substantially higher in developing regions than in developed regions. Sexual and reproductive health services are needed to help women

  10. Uncertainty estimation of a complex water quality model: The influence of Box-Cox transformation on Bayesian approaches and comparison with a non-Bayesian method

    Science.gov (United States)

    Freni, Gabriele; Mannina, Giorgio

In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only a few studies have been carried out. Therefore, several methodological aspects still need to be investigated and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding the model residuals. Statistical transformations, such as the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to incorrect uncertainty estimation. The present paper aims to explore the influence of the Box-Cox equation for environmental water quality models. To this end, five cases were considered, one of which was the “real” residual distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), which is an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by the residual transformation and that a wrong assumption may also affect the evaluation of model uncertainty. The use of less formal methods always provides an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the

  11. Empirical Bayesian inference and model uncertainty

    International Nuclear Information System (INIS)

    Poern, K.

    1994-01-01

This paper presents a hierarchical or multistage empirical Bayesian approach for the estimation of uncertainty concerning the intensity of a homogeneous Poisson process. A class of contaminated gamma distributions is considered to describe the uncertainty concerning the intensity. These distributions in turn are defined through a set of secondary parameters, the knowledge of which is also described and updated via Bayes' formula. This two-stage Bayesian approach is an example where the modeling uncertainty is treated in a comprehensive way. Each contaminated gamma distribution, represented by a point in the 3D space of secondary parameters, can be considered as a specific model of the uncertainty about the Poisson intensity. Then, by the empirical Bayesian method, each individual model is assigned a posterior probability.
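The core conjugate update underlying such a gamma prior for a Poisson intensity is compact enough to state directly. Numbers are illustrative only; the paper's contaminated-gamma, multistage machinery adds a second layer of updating on top of this:

```python
# Conjugate gamma-Poisson update for an event intensity: with prior
# Gamma(a, b) in shape/rate form and x events observed over exposure t,
# the posterior is Gamma(a + x, b + t). All numbers are invented.
a0, b0 = 2.0, 10.0          # prior: mean intensity a0/b0 = 0.2 events/unit time
x, t = 3, 40.0              # observed events and exposure time

a1, b1 = a0 + x, b0 + t
post_mean = a1 / b1         # shrinks the raw rate x/t toward the prior mean
```

The posterior mean (0.1 here) lies between the raw rate 0.075 and the prior mean 0.2: exactly the shrinkage behaviour the empirical Bayesian hierarchy exploits when pooling sparse event data.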

  12. Hierarchical Agent-Based Integrated Modelling Approach for Microgrids with Adoption of EVs and HRES

    Directory of Open Access Journals (Sweden)

    Peng Han

    2014-01-01

Full Text Available. The large-scale adoption of electric vehicles (EVs) and hybrid renewable energy systems (HRESs), together with increasing loads, brings significant challenges to the microgrid. A methodology to model microgrids with high EV and HRES penetration is the key to EV adoption assessment and optimized HRES deployment. However, considering the complex interactions of a microgrid containing massive numbers of EVs and HRESs, no previous single modelling approach is sufficient. Therefore, in this paper the methodology named Hierarchical Agent-based Integrated Modelling Approach (HAIMA) is proposed. With the effective integration of agent-based modelling with other advanced modelling approaches, the proposed approach theoretically contributes to a new microgrid model hierarchically constituted by a microgrid management layer, a component layer, and an event layer. HAIMA then links the key parameters and interconnects them to achieve the interactions of the whole model. Furthermore, HAIMA practically contributes to a comprehensive microgrid operation system, through which assessment of the proposed model and of the impact of EV adoption is achieved. Simulations show that the proposed HAIMA methodology will be beneficial for microgrid studies and EV operation assessment, and can be further utilized for energy management, electricity consumption prediction, EV scheduling control, and HRES deployment optimization.

  13. Construction of a Hierarchical Architecture of Covalent Organic Frameworks via a Postsynthetic Approach.

    Science.gov (United States)

    Zhang, Gen; Tsujimoto, Masahiko; Packwood, Daniel; Duong, Nghia Tuan; Nishiyama, Yusuke; Kadota, Kentaro; Kitagawa, Susumu; Horike, Satoshi

    2018-02-21

Covalent organic frameworks (COFs) represent an emerging class of crystalline porous materials that are constructed by the assembly of organic building blocks linked via covalent bonds. Several strategies have been developed for the construction of new COF structures; however, a facile approach to fabricate hierarchical COF architectures with controlled domain structures remains a significant challenge, and has not yet been achieved. In this study, a dynamic covalent chemistry (DCC)-based postsynthetic approach was employed at the solid-liquid interface to construct such structures. Two-dimensional imine-bonded COFs having different aromatic groups were prepared, and a homogeneously mixed-linker structure and a heterogeneously core-shell hollow structure were fabricated by controlling the reactivity of the postsynthetic reactions. Solid-state nuclear magnetic resonance (NMR) spectroscopy and transmission electron microscopy (TEM) confirmed the structures. COFs prepared by a postsynthetic approach exhibit several functional advantages compared with their parent phases. Their Brunauer-Emmett-Teller (BET) surface areas are 2-fold greater than those of their parent phases because of the higher crystallinity. In addition, the hydrophilicity of the material and the stepwise adsorption isotherms of H2O vapor in the hierarchical frameworks were precisely controlled, which was feasible because of the distribution of various domains of the two COFs by controlling the postsynthetic reaction. The approach opens new routes for constructing COF architectures with functionalities that are not possible in a single phase.

  14. A Dynamic BI–Orthogonal Field Equation Approach to Efficient Bayesian Inversion

    Directory of Open Access Journals (Sweden)

    Tagade Piyush M.

    2017-06-01

Full Text Available. This paper proposes a novel computationally efficient stochastic spectral projection based approach to Bayesian inversion of a computer simulator with high dimensional parametric and model structure uncertainty. The proposed method is based on the decomposition of the solution into its mean and a random field using a generic Karhunen-Loève expansion. The random field is represented as a convolution of separable Hilbert spaces in stochastic and spatial dimensions that are spectrally represented using respective orthogonal bases. In particular, the present paper investigates generalized polynomial chaos bases for the stochastic dimension and eigenfunction bases for the spatial dimension. Dynamic orthogonality is used to derive closed-form equations for the time evolution of mean, spatial and the stochastic fields. The resultant system of equations consists of a partial differential equation (PDE) that defines the dynamic evolution of the mean, a set of PDEs to define the time evolution of eigenfunction bases, while a set of ordinary differential equations (ODEs) define dynamics of the stochastic field. This system of dynamic evolution equations efficiently propagates the prior parametric uncertainty to the system response. The resulting bi-orthogonal expansion of the system response is used to reformulate the Bayesian inference for efficient exploration of the posterior distribution. The efficacy of the proposed method is investigated for calibration of a 2D transient diffusion simulator with an uncertain source location and diffusivity. The computational efficiency of the method is demonstrated against a Monte Carlo method and a generalized polynomial chaos approach.
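The Karhunen-Loève step, eigendecomposing a covariance operator and truncating to the leading modes, can be sketched numerically. The kernel, length scale, and grid below are arbitrary choices for illustration, not the paper's diffusion problem:

```python
import numpy as np

# Discretised Karhunen-Loève expansion of a 1-D random field: build a
# covariance matrix, eigendecompose it, and check how much variance the
# leading modes capture. Squared-exponential kernel chosen for illustration.
x = np.linspace(0.0, 1.0, 50)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.3 ** 2))

vals, vecs = np.linalg.eigh(C)            # eigh returns ascending order
vals = vals[::-1]                         # reorder to descending
vecs = vecs[:, ::-1]                      # columns are the KL modes

k = 5
captured = vals[:k].sum() / vals.sum()    # variance fraction in 5 modes
```

For smooth covariance kernels the eigenvalues decay fast, which is why a handful of modes suffices and why the dynamically bi-orthogonal formulation in the paper stays computationally tractable.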

  15. A robust and hierarchical approach for the automatic co-registration of intensity and visible images

    Science.gov (United States)

    González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José

    2012-09-01

    This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
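The RANSAC voting idea used in the co-registration pipeline can be illustrated on the simplest possible problem: 1-D line fitting with a few gross outliers. All points and thresholds are invented:

```python
import random

random.seed(4)

# Inlier points on y = 2x + 1, plus three gross outliers (all invented).
pts = [(x, 2 * x + 1) for x in range(20)] + [(3, 40), (7, -5), (15, 90)]

def ransac_line(pts, iters=200, tol=1.0):
    """Minimal RANSAC: repeatedly fit a line to a random 2-point sample and
    keep the hypothesis that gathers the most inliers (the 'votes')."""
    best, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(pts, 2)
        if x1 == x2:
            continue                       # degenerate sample, skip
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = [(x, y) for x, y in pts if abs(y - (m * x + c)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (m, c), inliers
    return best, best_inliers

(m, c), inliers = ransac_line(pts)
```

In the paper the model hypothesis is a camera-to-scanner registration rather than a line, but the scheme is the same: minimal samples propose a model, and the consensus set votes out the mismatched feature correspondences.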

  16. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    Science.gov (United States)

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) the classical statistical approach, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach is (1) much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome to obtain with classical statistical approaches.
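The Kaplan-Meier estimator named as the classical baseline can be implemented in a few lines. Event times below are made up for illustration and have nothing to do with the screening dataset:

```python
# Minimal Kaplan-Meier estimator. Each record is (time, event) where
# event = 1 means the event occurred and 0 means the subject was censored.
data = [(2, 1), (3, 1), (4, 0), (5, 1), (8, 0), (9, 1)]

def kaplan_meier(data):
    """Return [(time, S(t))]: the survival curve steps down at event times,
    while censored subjects only shrink the risk set."""
    surv, s = [], 1.0
    at_risk = len(data)
    for t, event in sorted(data):
        if event:
            s *= (at_risk - 1) / at_risk
            surv.append((t, s))
        at_risk -= 1
    return surv

curve = kaplan_meier(data)
```

A dynamic Bayesian network replaces this single population-level curve with a joint model over covariates and time slices, which is what enables the individualized risk assessments the abstract highlights.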

  17. Bayesian Mediation Analysis

    OpenAIRE

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...
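In such an analysis the mediated effect is the product of two path coefficients, and its posterior is easily summarised by simulation. A sketch with hypothetical posterior means and standard deviations (not results from the article):

```python
import random

random.seed(3)

# Monte Carlo summary of the mediated effect a*b: draw the two path
# coefficients from (hypothetical) normal posteriors and summarise the
# product, whose interval is asymmetric even when a and b are symmetric.
a_mu, a_sd = 0.40, 0.10     # X -> mediator path
b_mu, b_sd = 0.30, 0.10     # mediator -> Y path

ab = sorted(random.gauss(a_mu, a_sd) * random.gauss(b_mu, b_sd)
            for _ in range(20000))
lo, hi = ab[500], ab[19499]  # approximate central 95% credible interval
```

This simulation-based interval is one reason the Bayesian treatment is attractive for small samples: the product of two normals is not normal, so the exact posterior summary matters.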

  18. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.X.; Wang, X.; Gao, Y.W., E-mail: ywgao@lzu.edu.cn; Zhou, Y.H.

    2013-11-15

Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • The numerical model is set up through ABAQUS to verify and enhance the theoretical model. • We calculate two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. • Elastic–plasticity is the main character of the loading curve, and the friction between adjacent strands plays a significant role in the distribution map. -- Abstract: An unexpected degradation frequently occurs in superconducting cable (CICC) due to the mechanical response (deformation) when suffering from electromagnetic load and thermal load during operation. Because of the cable's hierarchical twisted configuration, it is difficult to quantitatively model the mechanical response. In addition, local mechanical characteristics such as the strain distribution can hardly be monitored experimentally. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. This approach follows an algorithm advancing successively from the n + 1 stage (e.g. 3 × 3 × 5 subcable) to the n stage (e.g. 3 × 3 subcable). No complicated numerical procedures are required in this model. Meanwhile, the numerical model is set up through ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate two mechanical responses of interest: the global displacement–load curve and the local axial strain distribution. We find that in the global displacement–load curve, elastic–plasticity is the main character, and the higher-level cable shows enhanced nonlinear characteristics. As for the local distribution, the friction among adjacent strands plays a significant role in this map. The magnitude of friction strongly influences the regularity of the distribution at different twisted stages. More detailed results are presented in this paper.

  19. Modeling for mechanical response of CICC by hierarchical approach and ABAQUS simulation

    International Nuclear Information System (INIS)

    Li, Y.X.; Wang, X.; Gao, Y.W.; Zhou, Y.H.

    2013-01-01

    Highlights: • We develop an analytical model based on the hierarchical approach of classical wire rope theory. • A numerical model is set up in ABAQUS to verify and enhance the theoretical model. • We calculate two mechanical responses of concern: the global displacement–load curve and the local axial strain distribution. • Elastic–plasticity dominates the loading curve, and the friction between adjacent strands plays a significant role in the strain distribution. -- Abstract: Unexpected degradation frequently occurs in superconducting cable-in-conduit conductors (CICC) due to the mechanical response (deformation) when they are subjected to electromagnetic and thermal loads during operation. Because of the cable's hierarchical twisted configuration, it is difficult to model the mechanical response quantitatively. In addition, local mechanical characteristics such as the strain distribution can hardly be monitored experimentally. To address this issue, we develop an analytical model based on the hierarchical approach of classical wire rope theory. The approach follows an algorithm that advances successively from the n + 1 stage (e.g. 3 × 3 × 5 subcable) to the n stage (e.g. 3 × 3 subcable). No complicated numerical procedures are required in this model. Meanwhile, a numerical model is set up in ABAQUS to verify and enhance the theoretical model. Subsequently, we calculate two mechanical responses of concern: the global displacement–load curve and the local axial strain distribution. We find that elastic–plasticity is the dominant feature of the global displacement–load curve, and that higher-level cables show enhanced nonlinear characteristics. As for the local distribution, the friction among adjacent strands plays a significant role: its magnitude strongly influences the regularity of the distribution at different twisted stages. More detailed results are presented in this paper.

  20. Naturalizing sense of agency with a hierarchical event-control approach.

    Directory of Open Access Journals (Sweden)

    Devpriya Kumar

    Unraveling the mechanisms underlying self and agency has been a difficult scientific problem. We argue for an event-control approach to naturalizing the sense of agency, focusing on the role of perception-action regularities present at different hierarchical levels and contributing to the sense of self as an agent. The amount of control at different levels of the control hierarchy determines the sense of agency. The current study investigates this approach in a set of two experiments using a scenario containing multiple agents sharing a common goal, where one of the agents is partially controlled by the participant. The participant competed with the other agents to achieve the goal and subsequently answered questions on identification (which agent was controlled by the participant), the degree to which they were confident about their identification (sense of identification), and the degree to which the participant believed he/she had control over his/her actions (sense of authorship). Results indicate a hierarchical relationship between goal-level control (higher level) and perceptual-motor control (lower level) for the sense of agency. Sense-of-identification ratings increased with perceptual-motor control when the goal was not completed, but did not vary with perceptual-motor control when the goal was completed. Sense of authorship showed a similar interaction effect only in experiment 2, which had a single competing agent, unlike the larger number of competing agents in experiment 1. The effect of hierarchical control can also be seen in the misidentification pattern: misidentification was greater with the agent affording greater control. Results from the two studies support the event-control approach to understanding the sense of agency as grounded in control. The study also offers a novel paradigm for empirically studying the sense of agency and self.

  1. Bayesian data analysis in population ecology: motivations, methods, and benefits

    Science.gov (United States)

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  2. Précis of bayesian rationality: The probabilistic approach to human reasoning.

    Science.gov (United States)

    Oaksford, Mike; Chater, Nick

    2009-02-01

    According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic--the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards. Bayesian Rationality argues that rationality is defined instead by the ability to reason about uncertainty. Although people are typically poor at numerical reasoning about probability, human thought is sensitive to subtle patterns of qualitative Bayesian, probabilistic reasoning. In Chapters 1-4 of Bayesian Rationality (Oaksford & Chater 2007), the case is made that cognition in general, and human everyday reasoning in particular, is best viewed as solving probabilistic, rather than logical, inference problems. In Chapters 5-7 the psychology of "deductive" reasoning is tackled head-on: It is argued that purportedly "logical" reasoning problems, revealing apparently irrational behaviour, are better understood from a probabilistic point of view. Data from conditional reasoning, Wason's selection task, and syllogistic inference are captured by recasting these problems probabilistically. The probabilistic approach makes a variety of novel predictions which have been experimentally confirmed. The book considers the implications of this work, and the wider "probabilistic turn" in cognitive science and artificial intelligence, for understanding human rationality.

  3. Incorporating historical information in biosimilar trials: Challenges and a hybrid Bayesian-frequentist approach.

    Science.gov (United States)

    Mielke, Johanna; Schmidli, Heinz; Jones, Byron

    2018-05-01

    For the approval of biosimilars, it is, in most cases, necessary to conduct large Phase III clinical trials in patients to convince the regulatory authorities that the product is comparable in terms of efficacy and safety to the originator product. As the originator product has already been studied in several trials beforehand, it seems natural to incorporate this historical information into the demonstration of equivalent efficacy. Since all studies for the regulatory approval of biosimilars are confirmatory studies, it is required that the statistical approach has reasonable frequentist properties; most importantly, the Type I error rate must be controlled, at least in all scenarios that are realistic in practice. However, it is well known that the incorporation of historical information can lead to an inflation of the Type I error rate in the case of a conflict between the distribution of the historical data and the distribution of the trial data. We illustrate this issue and confirm, using the Bayesian robustified meta-analytic-predictive (MAP) approach as an example, that simultaneously controlling the Type I error rate over the complete parameter space and gaining power in comparison to a standard frequentist approach that only considers the data in the new study is not possible. We propose a hybrid Bayesian-frequentist approach for binary endpoints that controls the Type I error rate in the neighborhood of the center of the prior distribution while improving the power. We study the properties of this approach in an extensive simulation study and provide a real-world example.
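
    The robustness-versus-power trade-off described above hinges on how a robust MAP prior reallocates weight between its informative and vague components when trial data conflict with the historical data. The following is a minimal sketch, not the authors' method; a two-component Beta-mixture prior for a binary endpoint is assumed, and all parameter values are hypothetical.

```python
from math import lgamma, exp, log, comb

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def robust_map_posterior(x, n, a, b, w):
    """Posterior of a robust-MAP-style Beta-mixture prior after x events in n trials.

    Prior: w * Beta(a, b) (informative, summarizing historical data)
         + (1 - w) * Beta(1, 1) (vague robustifying component).
    Returns ((weight, params), (weight, params)) for the two posterior components.
    """
    def log_marginal(a0, b0):
        # log Beta-Binomial marginal likelihood of the data under one component
        return log(comb(n, x)) + log_beta(a0 + x, b0 + n - x) - log_beta(a0, b0)

    # Posterior component weights: prior weight times marginal likelihood, renormalized
    u_info = w * exp(log_marginal(a, b))
    u_vague = (1 - w) * exp(log_marginal(1.0, 1.0))
    w_post = u_info / (u_info + u_vague)
    return (w_post, (a + x, b + n - x)), (1 - w_post, (1 + x, 1 + n - x))

# Data consistent with the historical prior keep most weight on the informative
# component; conflicting data shift weight to the vague component.
consistent = robust_map_posterior(x=30, n=100, a=30, b=70, w=0.8)
conflict = robust_map_posterior(x=70, n=100, a=30, b=70, w=0.8)
```

    This weight-shifting is exactly what limits the Type I error inflation under prior-data conflict, at the cost of the power gain near the prior center.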

  4. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    Science.gov (United States)

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in the risk of failure across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243), in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low; however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to penalized likelihood estimators.

  5. A Bayesian belief network approach for assessing uncertainty in conceptual site models at contaminated sites

    DEFF Research Database (Denmark)

    Thomsen, Nanna Isbak; Binning, Philip John; McKnight, Ursula S.

    2016-01-01

    the most important site-specific features and processes that may affect the contaminant transport behavior at the site. However, the development of a CSM will always be associated with uncertainties due to limited data and lack of understanding of the site conditions. CSM uncertainty is often found...... to be a major source of model error and it should therefore be accounted for when evaluating uncertainties in risk assessments. We present a Bayesian belief network (BBN) approach for constructing CSMs and assessing their uncertainty at contaminated sites. BBNs are graphical probabilistic models...... that are effective for integrating quantitative and qualitative information, and thus can strengthen decisions when empirical data are lacking. The proposed BBN approach facilitates a systematic construction of multiple CSMs, and then determines the belief in each CSM using a variety of data types and/or expert...

  6. Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories

    Science.gov (United States)

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33T_C.

  7. A novel critical infrastructure resilience assessment approach using dynamic Bayesian networks

    Science.gov (United States)

    Cai, Baoping; Xie, Min; Liu, Yonghong; Liu, Yiliu; Ji, Renjie; Feng, Qiang

    2017-10-01

    The word resilience originates from the Latin "resilire", meaning "to bounce back". The concept has been used, with different definitions, in various fields such as ecology, economics, psychology, and the social sciences. In the field of critical infrastructure, although several resilience metrics have been proposed, they differ substantially from one another, being determined by the performance of the particular object of evaluation. Here we bridge the gap by developing a universal critical infrastructure resilience metric from the perspective of reliability engineering. A dynamic Bayesian networks-based assessment approach is proposed to calculate the resilience value. Series, parallel, and voting systems are used to demonstrate the application of the developed resilience metric and assessment approach.

  8. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts present only frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for the Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...

  9. New sampling strategy using a Bayesian approach to assess iohexol clearance in kidney transplant recipients.

    Science.gov (United States)

    Benz-de Bretagne, I; Le Guellec, C; Halimi, J M; Gatault, P; Barbet, C; Alnajjar, A; Büchler, M; Lebranchu, Y; Andres, Christian Robert; Vourcʼh, P; Blasco, H

    2012-06-01

    %, and about 6% if the bounds of acceptance were set at ± 15%. This Bayesian approach can help reduce the number of samples required to calculate GFR using the Bröchner-Mortensen formula with good accuracy.

  10. Integrated survival analysis using an event-time approach in a Bayesian framework.

    Science.gov (United States)

    Walsh, Daniel P; Dreitz, Victoria J; Heisey, Dennis M

    2015-02-01

    Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times. This has limited the application of these analyses, particularly within the ecological field, where the fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times, which may be interval-censored, from individuals whose fates are known with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits the necessary parameter estimation. We provide the Bayesian model and its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique, using a piece-wise constant hazard function, to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on mortality hazard rates of endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because the fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed that biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected, with the RMSE of the estimates decreasing as sample size, detection probability, and survival increased. We found that mortality hazard rates were highest for plover chicks with low birth weights and/or whose nests were within agricultural habitats. Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the need for completely known fate data.
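
    A piece-wise constant hazard function, as used above, implies a survival curve S(t) = exp(-Σᵢ hᵢ Δtᵢ), where hᵢ is the hazard on interval i and Δtᵢ the time spent in it. The sketch below illustrates only this relationship; the hazard values and interval edges are hypothetical, not the paper's fitted, covariate-dependent hazards.

```python
from math import exp

def survival(t, breakpoints, hazards):
    """Survival probability S(t) under a piece-wise constant hazard.

    `breakpoints` are the right edges of the first len(hazards) - 1 intervals;
    the final hazard applies beyond the last breakpoint.
    """
    cum, left = 0.0, 0.0
    for edge, h in zip(breakpoints + [float("inf")], hazards):
        right = min(t, edge)
        if right > left:
            cum += h * (right - left)  # accumulate the integrated hazard
        left = edge
        if t <= edge:
            break
    return exp(-cum)

# Hypothetical daily hazards for three age classes of chicks (illustrative only):
# high early mortality, then two progressively safer age classes.
bp, hz = [5.0, 15.0], [0.10, 0.04, 0.01]
s20 = survival(20.0, bp, hz)  # exp(-(0.10*5 + 0.04*10 + 0.01*5)) = exp(-0.95)
```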

  11. Integrated survival analysis using an event-time approach in a Bayesian framework

    Science.gov (United States)

    Walsh, Daniel P.; Dreitz, VJ; Heisey, Dennis M.

    2015-01-01

    Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times. This has limited application of these analyses, particularly, within the ecological field where fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times from individuals whose fates are known and times of which are interval-censored with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits necessary parameter estimation. We provide the Bayesian model, its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique using a piece-wise constant hazard function to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on mortality hazard rates of the endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected with RMSE of the estimates decreasing as sample sizes, detection probability, and survival increased. We determined mortality hazard rates for plover chicks were highest at birth weights and/or whose nest was within agricultural habitats. Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the need for having completely known fate data.

  12. A unifying probabilistic Bayesian approach to derive electron density from MRI for radiation therapy treatment planning

    International Nuclear Information System (INIS)

    Gudur, Madhu Sudhan Reddy; Hara, Wendy; Le, Quynh-Thu; Wang, Lei; Xing, Lei; Li, Ruijiang

    2014-01-01

    MRI significantly improves the accuracy and reliability of target delineation in radiation therapy for certain tumors due to its superior soft tissue contrast compared to CT. A treatment planning process with MRI as the sole imaging modality will eliminate systematic CT/MRI co-registration errors, reduce cost and radiation exposure, and simplify clinical workflow. However, MRI lacks the key electron density information necessary for accurate dose calculation and generating reference images for patient setup. The purpose of this work is to develop a unifying method to derive electron density from standard T1-weighted MRI. We propose to combine both intensity and geometry information into a unifying probabilistic Bayesian framework for electron density mapping. For each voxel, we compute two conditional probability density functions (PDFs) of electron density given its: (1) T1-weighted MRI intensity, and (2) geometry in a reference anatomy, obtained by deformable image registration between the MRI of the atlas and test patient. The two conditional PDFs containing intensity and geometry information are combined into a unifying posterior PDF, whose mean value corresponds to the optimal electron density value under the mean-square error criterion. We evaluated the algorithm's accuracy of electron density mapping and its ability to detect bone in the head for eight patients, using an additional patient as the atlas or template. Mean absolute HU error between the estimated and true CT, as well as receiver operating characteristics for bone detection (HU > 200), were calculated. The performance was compared with a global intensity approach based on T1 and no density correction (set whole head to water). The proposed technique significantly reduced the errors in electron density estimation, with a mean absolute HU error of 126, compared with 139 for deformable registration (p = 2 × 10⁻⁴), 283 for the intensity approach (p = 2 × 10⁻⁶) and 282
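
    If each of the two conditional PDFs is approximated as a Gaussian (an assumption of this sketch, not necessarily the paper's actual PDF estimates), their normalized product is again Gaussian, and the posterior mean, which is the optimal estimate under the mean-square error criterion, is the precision-weighted mean of the two component means. The HU values below are hypothetical.

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Precision-weighted fusion of two independent Gaussian estimates.

    The normalized product of two Gaussian PDFs is Gaussian with
    mean = (mu1/var1 + mu2/var2) / (1/var1 + 1/var2), the MMSE estimate.
    Returns (posterior mean, posterior variance).
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mu, 1.0 / (w1 + w2)

# Hypothetical voxel: intensity-based estimate 300 HU with sd 100,
# atlas/geometry-based estimate 500 HU with sd 50.
mu, var = fuse_gaussians(300.0, 100.0**2, 500.0, 50.0**2)
```

    The more precise (geometry) component dominates the fused estimate, which is the intended behavior: each information source contributes in proportion to its confidence.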

  13. Exploring the Influence of Neighborhood Characteristics on Burglary Risks: A Bayesian Random Effects Modeling Approach

    Directory of Open Access Journals (Sweden)

    Hongqiang Liu

    2016-06-01

    A Bayesian random effects modeling approach was used to examine the influence of neighborhood characteristics on burglary risks in Jianghan District, Wuhan, China. This random effects model is essentially spatial; a spatially structured random effects term and an unstructured random effects term are added to the traditional non-spatial Poisson regression model. Based on social disorganization and routine activity theories, five covariates extracted from the available data at the neighborhood level were used in the modeling. Three regression models were fitted and compared by the deviance information criterion to identify which model best fit our data. A comparison of the results from the three models indicates that the Bayesian random effects model is superior to the non-spatial models in fitting the data and estimating regression coefficients. Our results also show that neighborhoods with above average bar density and department store density have higher burglary risks. Neighborhood-specific burglary risks and posterior probabilities of neighborhoods having a burglary risk greater than 1.0 were mapped, indicating the neighborhoods that should warrant more attention and be prioritized for crime intervention and reduction. Implications and limitations of the study are discussed in our concluding section.

  14. Quantifying uncertainty and resilience on coral reefs using a Bayesian approach

    International Nuclear Information System (INIS)

    Van Woesik, R

    2013-01-01

    Coral reefs are rapidly deteriorating globally. The contemporary management option favors managing for resilience to provide reefs with the capacity to tolerate human-induced disturbances. Yet resilience is most commonly defined as the capacity of a system to absorb disturbances without changing fundamental processes or functionality. Quantifying no change, or the uncertainty of a null hypothesis, is nonsensical using frequentist statistics, but is achievable using a Bayesian approach. This study outlines a practical Bayesian framework that quantifies the resilience of coral reefs using two inter-related models. The first model examines the functionality of coral reefs in the context of their reef-building capacity, whereas the second model examines the recovery rates of coral cover after disturbances. Quantifying intrinsic rates of increase in coral cover and habitat-specific, steady-state equilibria are useful proxies of resilience. A reduction in the intrinsic rate of increase following a disturbance, or the slowing of recovery over time, can be useful indicators of stress; a change in the steady-state equilibrium suggests a phase shift. Quantifying the uncertainty of key reef-building processes and recovery parameters, and comparing these parameters against benchmarks, facilitates the detection of loss of resilience and provides signals of imminent change. (letter)
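
    A common way to formalize the "intrinsic rate of increase" and "steady-state equilibrium" proxies described above is a logistic recovery model, dC/dt = rC(1 - C/K), where C is coral cover, r the intrinsic rate, and K the habitat-specific equilibrium. The sketch below simply integrates this model with hypothetical parameter values; it illustrates the proxies only and is not the paper's fitted Bayesian model.

```python
def logistic_recovery(c0, r, K, t_end, dt=0.01):
    """Euler-integrated logistic recovery of coral cover.

    c0: initial percent cover after a disturbance.
    r:  intrinsic rate of increase (per year); slowing r signals stress.
    K:  habitat-specific steady-state cover; a shift in K suggests a phase shift.
    """
    c, t = c0, 0.0
    while t < t_end:
        c += dt * r * c * (1.0 - c / K)
        t += dt
    return c

# Hypothetical reef: 5% cover recovering at r = 0.3/yr toward K = 40% cover.
after_10yr = logistic_recovery(5.0, 0.3, 40.0, 10.0)
```

    Comparing posterior estimates of r and K before and after disturbances, against benchmarks, is what allows loss of resilience to be detected in this framework.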

  15. Estimation of under-reported visceral leishmaniasis (VL) cases in Bihar: a Bayesian approach

    Directory of Open Access Journals (Sweden)

    A Ranjan

    2013-12-01

    Background: Visceral leishmaniasis (VL) is a major health problem in the state of Bihar and adjoining areas in India. In the absence of any active surveillance mechanism for the disease, there seems to be gross under-reporting of VL cases. Objective: The objective of this study was to estimate the extent of under-reporting of VL cases in Bihar using a pooled analysis of published papers. Method: We calculated the pooled common ratio (RR_MH) based on three studies and combined it with a prior distribution of the ratio using the inverse-variance weighting method. A Bayesian method was used to estimate the posterior distribution of the "under-reporting factor" (the ratio of unreported to reported cases). Results: The posterior distribution of the ratio of unreported to reported cases yielded a mean of 3.558, with 95% posterior limits of 2.81 and 4.50. Conclusion: The Bayesian approach gives evidence that the total number of VL cases in the state may be more than three times the currently reported figures.
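
    The pooling-plus-prior logic described in the Method section can be sketched with conjugate normal updating on the log-ratio scale: inverse-variance pooling of per-study log ratios gives a likelihood summary, which is then combined with a normal prior. All study values below are hypothetical placeholders, not the three studies analyzed in the paper.

```python
from math import exp, log, sqrt

def pool_log_ratios(log_ratios, variances):
    """Fixed-effect inverse-variance pooling of per-study log ratios."""
    ws = [1.0 / v for v in variances]
    mean = sum(w * lr for w, lr in zip(ws, log_ratios)) / sum(ws)
    return mean, 1.0 / sum(ws)

def combine_normal(mean_prior, var_prior, mean_lik, var_lik):
    """Conjugate normal update: prior times likelihood on the log-ratio scale."""
    w0, w1 = 1.0 / var_prior, 1.0 / var_lik
    return (w0 * mean_prior + w1 * mean_lik) / (w0 + w1), 1.0 / (w0 + w1)

# Hypothetical per-study log(unreported/reported) ratios and their variances.
lr, v = pool_log_ratios([log(3.2), log(4.1), log(3.6)], [0.04, 0.09, 0.06])
# Hypothetical prior: ratio centered at 3 with variance 0.25 on the log scale.
post_mean, post_var = combine_normal(log(3.0), 0.25, lr, v)
ratio_est = exp(post_mean)  # posterior under-reporting factor
lo, hi = exp(post_mean - 1.96 * sqrt(post_var)), exp(post_mean + 1.96 * sqrt(post_var))
```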

  16. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    Science.gov (United States)

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
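
    With a uniform Beta(1, 1) prior, the quantities described in (1)-(3) reduce to tail probabilities of a Beta posterior. The sketch below is one minimal implementation consistent with this framework, not necessarily the author's; the threshold and counts are illustrative, and the Beta CDF is computed via the standard binomial-sum identity, which holds for integer shape parameters.

```python
from math import comb

def beta_cdf_int(t, a, b):
    """CDF of Beta(a, b) at t for integer a, b, via the identity
    I_t(a, b) = P(Binomial(a + b - 1, t) >= a)."""
    n = a + b - 1
    return sum(comb(n, k) * t**k * (1 - t) ** (n - k) for k in range(a, n + 1))

def prob_exceeds(x, n, threshold):
    """P(p > threshold | x AEs in n patients) under a uniform Beta(1, 1)
    prior, so the posterior is Beta(x + 1, n - x + 1)."""
    return 1.0 - beta_cdf_int(threshold, x + 1, n - x + 1)

# 2 AEs observed in 20 patients: confidence that the true AE rate exceeds 5%.
confidence = prob_exceeds(2, 20, 0.05)
```

    The same function answers the design questions in (3) by scanning x: the minimum number of AEs after which `prob_exceeds` crosses a required confidence level, or the maximum allowable number before it does.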

  17. Assessment of successful smoking cessation by psychological factors using the Bayesian network approach.

    Science.gov (United States)

    Yang, Xiaorong; Li, Suyun; Pan, Lulu; Wang, Qiang; Li, Huijie; Han, Mingkui; Zhang, Nan; Jiang, Fan; Jia, Chongqi

    2016-07-01

    The association between psychological factors and smoking cessation is complicated and inconsistent in the published research, and the joint effect of psychological factors on smoking cessation is unclear. This study explored how psychological factors jointly affect the success of smoking cessation using a Bayesian network approach. A community-based case-control study was designed with 642 adult males who had successfully quit smoking as cases, and 700 adult males who had failed to quit as controls. General self-efficacy (GSE), trait coping style (positive-trait coping style (PTCS) and negative-trait coping style (NTCS)) and self-rating anxiety (SA) were evaluated by the GSE Scale, the Trait Coping Style Questionnaire and the SA Scale, respectively. A Bayesian network was applied to evaluate the relationship between psychological factors and successful smoking cessation. The local conditional probability table of smoking cessation indicated that different joint conditions of psychological factors led to different outcomes for smoking cessation. Among smokers with high PTCS, high NTCS and low SA, only 36.40% successfully quit smoking. However, among smokers with low pack-years of smoking, high GSE, high PTCS and high SA, 63.64% successfully quit smoking. Our study indicates that psychological factors jointly influence the smoking cessation outcome. According to the different joint situations, different solutions should be developed to control tobacco use in practical interventions.

  18. Dynamic probability evaluation of safety levels of earth-rockfill dams using Bayesian approach

    Directory of Open Access Journals (Sweden)

    Zi-wu Fan

    2009-06-01

    In order to accurately predict and control the aging process of dams, new information should be collected continuously to renew the quantitative evaluation of dam safety levels. Owing to the complex structural characteristics of dams, it is quite difficult to predict the time-varying factors affecting their safety levels. It is not feasible to employ dynamic reliability indices to evaluate the actual safety levels of dams. Based on the relevant regulations for dam safety classification in China, a dynamic probability description of dam safety levels was developed. Using the Bayesian approach and effective information mining, as well as real-time information, this study achieved a more rational evaluation and prediction of dam safety levels. With the Bayesian expression of discrete stochastic variables, the a priori probabilities of the dam safety levels determined by experts were combined with the likelihood of the real-time inspection information, and the probability information for the evaluation of dam safety levels was updated. The probability index was then applied to dam rehabilitation decision-making. This method helps reduce the difficulty and uncertainty of the evaluation of dam safety levels and complies with the current safety decision-making regulations for dams in China. It also enhances the application of current risk analysis methods for dam safety levels.
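
    The renewal of discrete safety-level probabilities described above is a standard discrete Bayes update: multiply the experts' a priori probabilities by the likelihood of the inspection result under each level and renormalize. A minimal sketch with hypothetical numbers (the level definitions and probabilities are illustrative, not taken from the Chinese regulations):

```python
def update_safety_beliefs(prior, likelihood):
    """Discrete Bayes update of dam safety-level probabilities.

    prior:      expert a priori probability for each safety level.
    likelihood: probability of the observed inspection result under each level.
    Returns the renormalized posterior probabilities.
    """
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Hypothetical three levels (safe, deficient, unsafe) and an inspection
# result that is most likely if the dam is deficient.
posterior = update_safety_beliefs([0.7, 0.2, 0.1], [0.2, 0.6, 0.3])
```

    Repeating this update as each new inspection arrives is what keeps the safety-level evaluation current without requiring dynamic reliability indices.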

  19. Quantifying uncertainty and resilience on coral reefs using a Bayesian approach

    Science.gov (United States)

    van Woesik, R.

    2013-12-01

    Coral reefs are rapidly deteriorating globally. The contemporary management option favors managing for resilience to provide reefs with the capacity to tolerate human-induced disturbances. Yet resilience is most commonly defined as the capacity of a system to absorb disturbances without changing fundamental processes or functionality. Quantifying no change, or the uncertainty of a null hypothesis, is nonsensical using frequentist statistics, but is achievable using a Bayesian approach. This study outlines a practical Bayesian framework that quantifies the resilience of coral reefs using two inter-related models. The first model examines the functionality of coral reefs in the context of their reef-building capacity, whereas the second model examines the recovery rates of coral cover after disturbances. Quantifying intrinsic rates of increase in coral cover and habitat-specific, steady-state equilibria are useful proxies of resilience. A reduction in the intrinsic rate of increase following a disturbance, or the slowing of recovery over time, can be useful indicators of stress; a change in the steady-state equilibrium suggests a phase shift. Quantifying the uncertainty of key reef-building processes and recovery parameters, and comparing these parameters against benchmarks, facilitates the detection of loss of resilience and provides signals of imminent change.

  20. Integrating health economics modeling in the product development cycle of medical devices: a Bayesian approach.

    Science.gov (United States)

    Vallejo-Torres, Laura; Steuten, Lotte M G; Buxton, Martin J; Girling, Alan J; Lilford, Richard J; Young, Terry

    2008-01-01

    Medical device companies are under growing pressure to provide health-economic evaluations of their products. Cost-effectiveness analyses are commonly undertaken as a one-off exercise at the late stage of development of new technologies; however, the benefits of an iterative use of economic evaluation during the development process of new products have been acknowledged in the literature. Furthermore, the use of Bayesian methods within health technology assessment has been shown to be of particular value in the dynamic framework of technology appraisal when new information becomes available in the life cycle of technologies. In this study, we set out a methodology to adapt these methods for their application to directly support investment decisions in a commercial setting from early stages of the development of new medical devices. Starting with relatively simple analysis from the very early development phase and proceeding to greater depth of analysis at later stages, a Bayesian approach facilitates the incorporation of all available evidence and would help companies to make better informed choices at each decision point.

  1. Disease Mapping and Regression with Count Data in the Presence of Overdispersion and Spatial Autocorrelation: A Bayesian Model Averaging Approach

    Science.gov (United States)

    Mohebbi, Mohammadreza; Wolfe, Rory; Forbes, Andrew

    2014-01-01

    This paper applies the generalised linear model for modelling geographical variation to esophageal cancer incidence data in the Caspian region of Iran. The data have a complex and hierarchical structure that makes them suitable for hierarchical analysis using Bayesian techniques, but with care required to deal with problems arising from counts of events observed in small geographical areas when overdispersion and residual spatial autocorrelation are present. These considerations lead to nine regression models derived from using three probability distributions for count data: Poisson, generalised Poisson and negative binomial, and three different autocorrelation structures. We employ the framework of Bayesian variable selection and a Gibbs sampling based technique to identify significant cancer risk factors. The framework deals with situations where the number of possible models based on different combinations of candidate explanatory variables is large enough such that calculation of posterior probabilities for all models is difficult or infeasible. The evidence from applying the modelling methodology suggests that modelling strategies based on the use of generalised Poisson and negative binomial with spatial autocorrelation work well and provide a robust basis for inference. PMID:24413702
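    The choice among the Poisson, generalised Poisson, and negative binomial hinges on overdispersion: the Poisson forces variance = mean, while the negative binomial allows variance = mu + mu**2/k. A quick diagnostic on raw small-area counts (the data below are illustrative, not the Caspian incidence data) is the variance-to-mean ratio:

```python
def dispersion_index(counts):
    """Sample variance-to-mean ratio: roughly 1 under a Poisson model;
    substantially greater than 1 signals overdispersion, favouring e.g.
    a negative binomial with variance mu + mu**2 / k."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)
    return var / mean

# Small-area case counts with a few high-incidence areas (illustrative).
area_counts = [0, 1, 0, 2, 9, 0, 1, 12, 0, 3]
```

    A ratio well above 1 on data like these is what motivates the generalised Poisson and negative binomial alternatives considered in the paper, before spatial autocorrelation is even addressed.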

  2. A risk management process for reinforced concrete structures by coupling modelling, monitoring and Bayesian approaches

    International Nuclear Information System (INIS)

    Capra, Bruno; Li, Kefei; Wolff, Valentin; Bernard, Olivier; Gerard, Bruno

    2004-01-01

    The impact of steel corrosion on the durability of reinforced concrete structures has long been a major concern in civil engineering. The main electrochemical mechanisms of steel corrosion are now well known. The material and structure degradation is attributed to the progressive formation of an expansive corrosion product at the steel-concrete interface. To assess structure lifetime quantitatively, a two-stage service life model has been widely accepted. So far, research attention has mainly been given to corrosion in un-cracked concrete. In practice, however, one is often confronted with reinforcement corrosion in already cracked concrete. How to quantify the corrosion risk is of great interest for the long-term durability of these cracked structures. To this end, this paper proposes a service life model for the corrosion process by carbonation in cracked or un-cracked concrete, depending on the observation or monitoring data available. Some recent experimental investigations are used to calibrate the models. The models are then applied to a shell structure to quantify the corrosion process and determine the optimal maintenance strategy. As corrosion processes are very difficult to model and are subject to material and environmental random variations, an example of structure reassessment is presented that takes into account in situ information by means of Bayesian approaches. The coupling of monitoring, modelling and updating leads to a new global maintenance strategy for infrastructure. In conclusion: this paper presents a unified methodology coupling predictive models, observations and Bayesian approaches in order to assess the degradation degree of an ageing structure. The particular case of corrosion is treated in an innovative way through the development of a service life model that takes into account cracking effects on the kinetics of the phenomena. At the material level, the dominant factors are the crack opening and the crack nature.

  3. Assessing offshore emergency evacuation behavior in a virtual environment using a Bayesian Network approach

    International Nuclear Information System (INIS)

    Musharraf, Mashrura; Smith, Jennifer; Khan, Faisal; Veitch, Brian; MacKinnon, Scott

    2016-01-01

    In the performance influencing factor (PIF) hierarchy, person-based influencing factors reside in the top level along with machine-based, team-based, organization-based and situation/stressor-based factors. Though person-based PIFs like morale, motivation, and attitude (MMA) play an important role in shaping performance, it is nearly impossible to assess such PIFs directly. However, it is possible to measure behavioral indicators (e.g. compliance, use of information) that can provide insight regarding the state of the unobservable person-based PIFs. One common approach to measuring these indicators is to carry out a self-reported questionnaire survey. Significant work has been done to make such questionnaires reliable, but the potential validity problem associated with any questionnaire is that the data are subjective and thus may bear a limited relationship to reality. This paper describes the use of a virtual environment to measure behavioral indicators, which in turn can be used as proxies to assess otherwise unobservable PIFs like MMA. A Bayesian Network (BN) model is first developed to define the relationship between person-based PIFs and measurable behavioral indicators. The paper then shows how these indicators can be measured using evidence collected from a virtual environment of an offshore petroleum installation. A study that focused on emergency evacuation scenarios was done with 36 participants. The participants were first assessed using a multiple choice test. They were then assessed based on their observed performance during simulated offshore emergency evacuation conditions. A comparison of the two assessments demonstrates the potential benefits and challenges of using virtual environments to assess behavioral indicators, and thus the person-based PIFs. - Highlights: • New approach to use virtual environment as measure of behavioral indicators. • New model to study morale, motivation, and attitude. • Bayesian Network model to define the

  4. Refining mass formulas for astrophysical applications: A Bayesian neural network approach

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2017-10-01

    Background: Exotic nuclei, particularly those near the drip lines, are at the core of one of the fundamental questions driving nuclear structure and astrophysics today: What are the limits of nuclear binding? Exotic nuclei play a critical role in both informing theoretical models as well as in our understanding of the origin of the heavy elements. Purpose: Our aim is to refine existing mass models through the training of an artificial neural network that will mitigate the large model discrepancies far away from stability. Methods: The basic paradigm of our two-pronged approach is an existing mass model that captures as much as possible of the underlying physics followed by the implementation of a Bayesian neural network (BNN) refinement to account for the missing physics. Bayesian inference is employed to determine the parameters of the neural network so that model predictions may be accompanied by theoretical uncertainties. Results: Despite the undeniable quality of the mass models adopted in this work, we observe a significant improvement (of about 40%) after the BNN refinement is implemented. Indeed, in the specific case of the Duflo-Zuker mass formula, we find that the rms deviation relative to experiment is reduced from σrms=0.503 MeV to σrms=0.286 MeV. These newly refined mass tables are used to map the neutron drip lines (or rather "drip bands") and to study a few critical r -process nuclei. Conclusions: The BNN approach is highly successful in refining the predictions of existing mass models. In particular, the large discrepancy displayed by the original "bare" models in regions where experimental data are unavailable is considerably quenched after the BNN refinement. This lends credence to our approach and has motivated us to publish refined mass tables that we trust will be helpful for future astrophysical applications.

  5. Bayesian benefits with JASP

    NARCIS (Netherlands)

    Marsman, M.; Wagenmakers, E.-J.

    2017-01-01

    We illustrate the Bayesian approach to data analysis using the newly developed statistical software program JASP. With JASP, researchers are able to take advantage of the benefits that the Bayesian framework has to offer in terms of parameter estimation and hypothesis testing. The Bayesian

  6. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...

  7. A hierarchical approach to ecological assessment of contaminated soils at Aberdeen Proving Ground, USA

    Energy Technology Data Exchange (ETDEWEB)

    Kuperman, R.G.

    1995-12-31

    Despite the expansion of environmental toxicology studies over the past decade, soil ecosystems have largely been ignored in ecotoxicological studies in the United States. The objective of this project was to develop and test the efficacy of a comprehensive methodology for assessing ecological impacts of soil contamination. A hierarchical approach that integrates biotic parameters and ecosystem processes was used to give insight into the mechanisms that lead to alterations in the structure and function of soil ecosystems in contaminated areas. This approach involved (1) a thorough survey of the soil biota to determine community structure, (2) laboratory and field tests on critical ecosystem processes, (3) toxicity trials, and (4) the use of spatial analyses to provide input to the decision-making process. This methodology appears to offer an efficient and potentially cost-saving tool for remedial investigations of contaminated sites.

  8. A meta-analysis accounting for sources of variability to estimate heat resistance reference parameters of bacteria using hierarchical Bayesian modeling: Estimation of D at 121.1 °C and pH 7, zT and zpH of Geobacillus stearothermophilus.

    Science.gov (United States)

    Rigaux, Clémence; Denis, Jean-Baptiste; Albert, Isabelle; Carlin, Frédéric

    2013-02-01

    Predicting microbial survival requires reference parameters for each micro-organism of concern. When data are abundant and publicly available, a meta-analysis is a useful approach for assessment of these parameters, which can be performed with hierarchical Bayesian modeling. Geobacillus stearothermophilus is a major agent of microbial spoilage of canned foods and is therefore a persistent problem in the food industry. The thermal inactivation parameters of G. stearothermophilus (D(ref), i.e. the decimal reduction time D at the reference temperature 121.1°C and pH 7.0, z(T) and z(pH)) were estimated from a large set of 430 D values mainly collected from the scientific literature. Between-study variability hypotheses on the inactivation parameters D(ref), z(T) and z(pH) were explored, using three different hierarchical Bayesian models. Parameter estimations were made using Bayesian inference and the models were compared with a graphical and a Bayesian criterion. Results show the necessity to account for random effects associated with between-study variability. Assuming variability on D(ref), z(T) and z(pH), the resulting distributions led to a mean of 3.3 min for D(ref) (95% Credible Interval CI=[0.8; 9.6]), to a mean of 9.1°C for z(T) (CI=[5.4; 13.1]) and to a mean of 4.3 pH units for z(pH) (CI=[2.9; 6.3]), in the range pH 3 to pH 7.5. Results are also given separating variability and uncertainty in these distributions, as well as adjusted parametric distributions to facilitate further use of these results in aqueous canned foods such as canned vegetables. Copyright © 2012 Elsevier B.V. All rights reserved.
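    With the posterior means reported above, the fitted secondary model yields point predictions of D at other temperatures and pH values. The sketch below assumes the Mafart-type extension of the Bigelow model (log-linear in temperature, squared in pH), the usual form for this parameter set; the abstract itself does not state the equation:

```python
import math

def d_value(T, pH, D_ref=3.3, z_T=9.1, z_pH=4.3):
    """Decimal reduction time D(T, pH) in minutes for G. stearothermophilus,
    using the reported posterior means as point estimates:
    log10 D = log10 D_ref - (T - 121.1)/z_T - ((pH - 7)/z_pH)**2."""
    log10_d = math.log10(D_ref) - (T - 121.1) / z_T - ((pH - 7.0) / z_pH) ** 2
    return 10.0 ** log10_d
```

    Raising the temperature by z_T = 9.1°C divides D by ten, and moving z_pH = 4.3 pH units away from pH 7 does the same — which is exactly what the z parameters mean.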

  9. Risk prediction model for knee pain in the Nottingham community: a Bayesian modelling approach.

    Science.gov (United States)

    Fernandes, G S; Bhattacharya, A; McWilliams, D F; Ingham, S L; Doherty, M; Zhang, W

    2017-03-20

    Twenty-five percent of the British population over the age of 50 years experiences knee pain. Knee pain can limit physical ability and cause distress and bears significant socioeconomic costs. The objectives of this study were to develop and validate the first risk prediction model for incident knee pain in the Nottingham community and validate this internally within the Nottingham cohort and externally within the Osteoarthritis Initiative (OAI) cohort. A total of 1822 participants from the Nottingham community who were at risk for knee pain were followed for 12 years. Of this cohort, two-thirds (n = 1203) were used to develop the risk prediction model, and one-third (n = 619) were used to validate the model. Incident knee pain was defined as pain on most days for at least 1 month in the past 12 months. Predictors were age, sex, body mass index, pain elsewhere, prior knee injury and knee alignment. A Bayesian logistic regression model was used to determine the probability of an OR >1. The Hosmer-Lemeshow χ2 statistic (HLS) was used for calibration, and ROC curve analysis was used for discrimination. The OAI cohort from the United States was also used to examine the performance of the model. A risk prediction model for knee pain incidence was developed using a Bayesian approach. The model had good calibration, with an HLS of 7.17 (p = 0.52) and moderate discriminative ability (ROC 0.70) in the community. Individual scenarios are given using the model. However, the model had poor calibration (HLS 5866.28) in the OAI cohort. This is the first risk prediction model for knee pain, regardless of underlying structural changes of knee osteoarthritis, developed in the community using a Bayesian modelling approach. The model appears to work well in a community-based population but not in individuals with a higher risk for knee osteoarthritis, and it may provide a convenient tool for use in primary care to predict the risk of knee pain in the general population.
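    Discrimination in this abstract is reported as the area under the ROC curve (0.70 in the community cohort). The AUC can be computed directly from predicted risks via the rank formulation — the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case (the risks and outcomes below are illustrative):

```python
def roc_auc(risks, outcomes):
    """Mann-Whitney form of the ROC area: the fraction of (case, non-case)
    pairs in which the case has the higher predicted risk (ties count 0.5)."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Predicted knee-pain risks and observed incidence (illustrative).
auc = roc_auc([0.8, 0.6, 0.6, 0.2], [1, 0, 1, 0])
```

    An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, so the reported 0.70 reflects moderate discrimination.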

  10. An efficient multiple particle filter based on the variational Bayesian approach

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2015-12-07

    This paper addresses the filtering problem in large-dimensional systems, in which conventional particle filters (PFs) remain computationally prohibitive owing to the large number of particles needed to obtain reasonable performance. To overcome this drawback, a class of multiple particle filters (MPFs) has been recently introduced in which the state-space is split into low-dimensional subspaces, and then a separate PF is applied to each subspace. In this paper, we adopt the variational Bayesian (VB) approach to propose a new MPF, the VBMPF. The proposed filter is computationally more efficient since the propagation of each particle requires generating one (new) particle only, while in the standard MPFs a set of (children) particles needs to be generated. In a numerical test, the proposed VBMPF performs better than the PF and MPF.
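    For contrast with the VBMPF, a single cycle of the standard bootstrap particle filter — propagate, weight by the observation likelihood, resample — can be sketched as follows. The scalar state and Gaussian likelihood are illustrative assumptions; the paper's setting is large-dimensional, which is precisely where this scheme needs prohibitively many particles:

```python
import math
import random

def bootstrap_pf_step(particles, y, transition, log_lik):
    """One bootstrap-PF cycle: move each particle through the transition,
    weight it by the likelihood of observation y, then resample."""
    moved = [transition(x) for x in particles]
    w = [math.exp(log_lik(y, x)) for x in moved]
    total = sum(w)
    return random.choices(moved, weights=[wi / total for wi in w], k=len(moved))

random.seed(0)
particles = [-2.0, 0.0, 2.0, 4.0, 6.0] * 40            # initial cloud
posterior = bootstrap_pf_step(
    particles, y=5.0,
    transition=lambda x: x,                             # static state, for the demo
    log_lik=lambda y, x: -0.5 * (y - x) ** 2,           # unit-variance Gaussian
)
estimate = sum(posterior) / len(posterior)
```

    After one observation at y = 5, resampling concentrates the cloud on the particles near 5 — the weighting-plus-resampling step whose per-particle cost the VBMPF is designed to reduce.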

  11. A Bayesian approach to quantify the contribution of animal-food sources to human salmonellosis

    DEFF Research Database (Denmark)

    Hald, Tine; Vose, D.; Wegener, Henrik Caspar

    2004-01-01

    Based on the data from the integrated Danish Salmonella surveillance in 1999, we developed a mathematical model for quantifying the contribution of each of the major animal-food sources to human salmonellosis. The model was set up to calculate the number of domestic and sporadic cases caused...... salmonellosis was also included. The joint posterior distribution was estimated by fitting the model to the reported number of domestic and sporadic cases per Salmonella type in a Bayesian framework using Markov Chain Monte Carlo simulation. The number of domestic and sporadic cases was obtained by subtracting.......8-10.4%) of the cases, respectively. Taken together, imported foods were estimated to account for 11.8% (95% CI: 5.0-19.0%) of the cases. Other food sources considered had only a minor impact, whereas 25% of the cases could not be associated with any source. This approach of quantifying the contribution of the various...

  12. Analysing the length of care episode after hip fracture: a nonparametric and a parametric Bayesian approach.

    Science.gov (United States)

    Riihimäki, Jaakko; Sund, Reijo; Vehtari, Aki

    2010-06-01

    Effective utilisation of limited resources is a challenge for health care providers. Accurate and relevant information extracted from the length of stay distributions is useful for management purposes. Patient care episodes can be reconstructed from the comprehensive health registers, and in this paper we develop a Bayesian approach to analyse the length of care episode after a fractured hip. We model the large scale data with a flexible nonparametric multilayer perceptron network and with a parametric Weibull mixture model. To assess the performances of the models, we estimate expected utilities using predictive density as a utility measure. Since the model parameters cannot be directly compared, we focus on observables, and estimate the relevances of patient explanatory variables in predicting the length of stay. To demonstrate how the use of the nonparametric flexible model is advantageous for this complex health care data, we also study joint effects of variables in predictions, and visualise nonlinearities and interactions found in the data.

  13. A Bayesian approach to assess data from radionuclide activity analyses in environmental samples

    International Nuclear Information System (INIS)

    Barrera, Manuel; Lourdes Romero, M.; Nunez-Lagos, Rafael; Bernardo, Jose M.

    2007-01-01

    A Bayesian statistical approach is introduced to assess experimental data from the analyses of radionuclide activity concentration in environmental samples (low activities). A theoretical model has been developed that allows the use of known prior information about the value of the measurand (activity), together with the experimental value determined through the measurement. The model has been applied to data of the Inter-laboratory Proficiency Test organised periodically among Spanish environmental radioactivity laboratories that are producing the radiochemical results for the Spanish radioactive monitoring network. A global improvement of laboratories performance is produced when this prior information is taken into account. The prior information used in this methodology is an interval within which the activity is known to be contained, but it could be extended to any other experimental quantity with a different type of prior information available
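    The prior used here — an interval known to contain the activity — combined with a Gaussian measurement yields a truncated-normal posterior. A Monte Carlo sketch (the measurement value, uncertainty, and interval below are illustrative):

```python
import math
import random

def posterior_mean_activity(y, sigma, lo, hi, n=200_000, seed=1):
    """Posterior mean of activity a, given a measurement y ~ N(a, sigma**2)
    and a uniform prior on [lo, hi] (a truncated-normal posterior),
    estimated by likelihood-weighting draws from the prior interval."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = rng.uniform(lo, hi)
        w = math.exp(-0.5 * ((y - a) / sigma) ** 2)   # Gaussian likelihood
        num += w * a
        den += w
    return num / den

# A measurement that overshoots the physically known range [0, 1] (illustrative units).
estimate = posterior_mean_activity(y=1.2, sigma=0.3, lo=0.0, hi=1.0)
```

    The prior pulls an out-of-range measurement back inside the feasible interval, which is the mechanism behind the reported improvement in laboratory performance when prior information is taken into account.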

  14. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    Directory of Open Access Journals (Sweden)

    Le Riche R.

    2010-06-01

    Full Text Available A major challenge in the identification of material properties is handling different sources of uncertainty in the experiment and the modelling of the experiment for estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence. Typically the highest uncertainty is associated with properties to which the experiment is the most insensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has been applied since to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration test), which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements represents also a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their

  15. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    Science.gov (United States)

    Gogu, C.; Yin, W.; Haftka, R.; Ifju, P.; Molimard, J.; Le Riche, R.; Vautrin, A.

    2010-06-01

    A major challenge in the identification of material properties is handling different sources of uncertainty in the experiment and the modelling of the experiment for estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence. Typically the highest uncertainty is associated with respect to properties to which the experiment is the most insensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has been applied since to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration test) which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements represents also a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their dimensionality. POD is

  16. A Bayesian approach for predicting risk of autonomous underwater vehicle loss during their missions

    International Nuclear Information System (INIS)

    Brito, Mario; Griffiths, Gwyn

    2016-01-01

    Autonomous Underwater Vehicles (AUVs) are effective platforms for science research and monitoring, and for military and commercial data-gathering purposes. However, there is an inevitable risk of loss during any mission. Quantifying the risk of loss is complex, due to the combination of vehicle reliability and environmental factors, and cannot be determined through analytical means alone. An alternative approach – formal expert judgment – is a time-consuming process; consequently a method is needed to broaden the applicability of judgments beyond the narrow confines of an elicitation for a defined environment. We propose and explore a solution founded on a Bayesian Belief Network (BBN), where the results of the expert judgment elicitation are taken as the initial prior probability of loss due to failure. The network topology captures the causal effects of the environment separately on the vehicle and on the support platform, and combines these to produce an updated probability of loss due to failure. An extended version of the Kaplan–Meier estimator is then used to update the mission risk profile with travelled distance. Sensitivity analysis of the BBN is presented and a case study of Autosub3 AUV deployment in the Amundsen Sea is discussed in detail. - Highlights: • Novel method to estimate risk of autonomous vehicle loss in uncertain environments. • A framework to integrate frequentist and subjective probability modelling. • A Bayesian belief updating method for capturing variation in operating environment. • Graphical approach for sensitivity analysis, applicable to any BBN model validation. • Pragmatic case studies showing the application of the proposed framework.
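    The risk-with-distance profile here is built on the Kaplan-Meier estimator, with distance travelled playing the role of time. The plain estimator that the paper extends can be sketched as follows (mission data are illustrative, and distinct event distances are assumed for simplicity):

```python
def kaplan_meier(distances, failed):
    """Kaplan-Meier survival steps over travelled distance.
    failed[i] is True for an observed loss/failure, False when the
    mission ended without failure (a censored record)."""
    survival, at_risk, curve = 1.0, len(distances), []
    for d, event in sorted(zip(distances, failed)):
        if event:
            survival *= 1.0 - 1.0 / at_risk
            curve.append((d, survival))
        at_risk -= 1
    return curve

# Four missions: failures at 10 km and 25 km, censored at 40 km and 60 km.
curve = kaplan_meier([10.0, 25.0, 40.0, 60.0], [True, True, False, False])
```

    Censored missions still count in the at-risk set until the distance at which they ended, which is how successful deployments lower the estimated risk of loss; the BBN then supplies the environment-updated failure probability feeding this curve.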

  17. Faculty Development for Fostering Clinical Reasoning Skills in Early Medical Students Using a Modified Bayesian Approach.

    Science.gov (United States)

    Addy, Tracie Marcella; Hafler, Janet; Galerneau, France

    2016-01-01

    Clinical reasoning is a necessary skill for medical students to acquire in the course of their education, and there is evidence that they can start this process at the undergraduate level. However, physician educators who are experts in their given fields may have difficulty conveying their complex thought processes to students. Providing faculty development that equips educators with tools to teach clinical reasoning may support skill development in early medical students. We provided faculty development on a modified Bayesian method of teaching clinical reasoning to clinician educators who facilitated small-group, case-based workshops with 2nd-year medical students. We interviewed them before and after the module regarding their perceptions on teaching clinical reasoning. We solicited feedback from the students about the effectiveness of the method in developing their clinical reasoning skills. We carried out this project during an institutional curriculum rebuild where clinical reasoning was a defined goal. At the time of the intervention, there was also increased involvement of the Teaching and Learning Center in elevating the status of teaching and learning. There was high overall satisfaction with the faculty development program. Both the faculty and the students described the modified Bayesian approach as effective in fostering the development of clinical reasoning skills. Through this work, we learned how to form a beneficial partnership between a clinician educator and Teaching and Learning Center to promote faculty development on a clinical reasoning teaching method for early medical students. We uncovered challenges faced by both faculty and early learners in this study. We observed that our faculty chose to utilize the method of teaching clinical reasoning in a variety of manners in the classroom. Despite obstacles and differing approaches utilized, we believe that this model can be emulated at other institutions to foster the development of clinical

  18. A Bayesian network approach for modeling local failure in lung cancer

    International Nuclear Information System (INIS)

    Oh, Jung Hun; Craft, Jeffrey; Al Lozi, Rawan; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O; Bradley, Jeffrey D; El Naqa, Issam

    2011-01-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, there was no reported significant improvement in their application prospectively. Based on recent studies of biomarker proteins' role in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors with a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively, which comprises clinical and dosimetric variables only. The second dataset was collected prospectively in which in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient method to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.

  19. Evaluating Vegetation Potential for Wildfire Impacted Watershed Using a Bayesian Network Modeling Approach

    Science.gov (United States)

    Jaramillo, L. V.; Stone, M. C.; Morrison, R. R.

    2017-12-01

    Decision-making for natural resource management is complex, especially for fire-impacted watersheds in the Southwestern US, because of the vital importance of water resources, the exorbitant cost of fire management and restoration, and the risks of the wildland-urban interface (WUI). While riparian and terrestrial vegetation are extremely important to ecosystem health and provide ecosystem services, loss of vegetation due to wildfire, post-fire flooding, and debris flows can lead to further degradation of the watershed and increased vulnerability to erosion and debris flow. Land managers are charged with taking measures to mitigate degradation of the watershed effectively and efficiently with limited time, money, and data. In our study, a Bayesian network (BN) approach is implemented to understand vegetation potential for Kasha-Katuwe Tent Rocks National Monument in the fire-impacted Peralta Canyon Watershed, New Mexico, USA. We implement both two-dimensional hydrodynamic and Bayesian network modeling to incorporate spatial variability in the system. Our coupled modeling framework presents vegetation recruitment and succession potential for three representative plant types (native riparian, native terrestrial, and non-native) under several hydrologic scenarios and management actions. In our BN model, we use variables that address timing, hydrologic, and groundwater conditions as well as recruitment and succession constraints for the plant types, based on expert knowledge and literature. Our approach allows us to utilize small and incomplete datasets, incorporate expert knowledge, and explicitly account for uncertainty in the system. Our findings can help land managers and local decision-makers determine their plan of action to increase watershed health and resilience.

  20. Inference of reactive transport model parameters using a Bayesian multivariate approach

    Science.gov (United States)

    Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick

    2014-08-01

    Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)), where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation at high residual correlation levels, whereas its influence on predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
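
The WLS(we) idea of estimating weights jointly with the parameters can be sketched with a minimal alternating scheme: fit the parameters by weighted least squares, then re-estimate each measurement group's error variance from its residuals. The one-parameter model (a single slope), the data, and the group names below are hypothetical stand-ins for the multispecies setting.

```python
def wls_with_weight_estimation(groups, n_iter=20):
    """Alternate between (1) a weighted LS fit of the slope a in y = a*x and
    (2) per-group variance estimates from the residuals (a WLS(we)-style scheme)."""
    variances = {g: 1.0 for g in groups}            # start with equal weights
    a = 0.0
    for _ in range(n_iter):
        num = sum(x * y / variances[g] for g, pts in groups.items() for x, y in pts)
        den = sum(x * x / variances[g] for g, pts in groups.items() for x, y in pts)
        a = num / den
        for g, pts in groups.items():               # update the group variances
            variances[g] = max(sum((y - a * x) ** 2 for x, y in pts) / len(pts), 1e-12)
    return a, variances

# Two "species" with different noise levels (hypothetical data).
data = {
    "species_A": [(1, 2.1), (2, 3.9), (3, 6.2)],
    "species_B": [(1, 1.0), (2, 5.0), (3, 5.5)],
}
a, var = wls_with_weight_estimation(data)
print(round(a, 2), var["species_A"] < var["species_B"])
```

The noisier group ends up with the larger estimated variance and hence the smaller weight; the Bayesian MV approach of the paper instead integrates these (co)variances out analytically.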

  1. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion.

  2. A hierarchical clustering scheme approach to assessment of IP-network traffic using detrended fluctuation analysis

    Science.gov (United States)

    Takuma, Takehisa; Masugi, Masao

    2009-03-01

    This paper presents an approach to the assessment of IP-network traffic in terms of the time variation of self-similarity. To get a comprehensive view in analyzing the degree of long-range dependence (LRD) of IP-network traffic, we use a hierarchical clustering scheme, which provides a way to classify high-dimensional data with a tree-like structure. Also, in the LRD-based analysis, we employ detrended fluctuation analysis (DFA), which is applicable to the analysis of long-range power-law correlations or LRD in non-stationary time-series signals. Based on sequential measurements of IP-network traffic at two locations, this paper derives the corresponding values of the LRD-related parameter α, which reflects the degree of LRD of the measured data. In performing the hierarchical clustering, we use three parameters: the α value, the average throughput, and the proportion of network traffic that exceeds 80% of the network bandwidth for each measured data set. We visually confirm that the traffic data can be classified in accordance with the network traffic properties, showing that the combined depiction of LRD and other factors can give an effective assessment of network conditions at different times.
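
A minimal DFA-1 implementation illustrates how the α parameter is obtained: integrate the demeaned series, detrend it linearly in non-overlapping boxes of several sizes, and take the slope of log F(n) versus log n. This is a generic sketch of the standard algorithm, not the paper's measurement pipeline.

```python
import math
import random

def dfa_alpha(series, box_sizes):
    """Detrended fluctuation analysis (DFA-1): slope of log F(n) vs log n."""
    mean = sum(series) / len(series)
    profile, s = [], 0.0
    for x in series:                       # integrated, demeaned profile
        s += x - mean
        profile.append(s)
    logs_n, logs_f = [], []
    for n in box_sizes:
        fluctuations = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = list(range(n))
            tm, sm = (n - 1) / 2, sum(seg) / n
            slope = (sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
                     / sum((ti - tm) ** 2 for ti in t))
            intercept = sm - slope * tm    # per-box linear detrend
            fluctuations.append(sum((si - (slope * ti + intercept)) ** 2
                                    for ti, si in zip(t, seg)) / n)
        logs_n.append(math.log(n))
        logs_f.append(math.log(math.sqrt(sum(fluctuations) / len(fluctuations))))
    lm, fm = sum(logs_n) / len(logs_n), sum(logs_f) / len(logs_f)
    return (sum((l - lm) * (f - fm) for l, f in zip(logs_n, logs_f))
            / sum((l - lm) ** 2 for l in logs_n))

random.seed(0)
white = [random.gauss(0, 1) for _ in range(2048)]
print(round(dfa_alpha(white, [8, 16, 32, 64, 128]), 2))  # close to 0.5 for uncorrelated data
```

Values of α near 0.5 indicate uncorrelated traffic, while α approaching 1 indicates strong LRD.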

  3. A study of finite mixture model: Bayesian approach on financial time series data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model combines several distributions to model a statistical population, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because its asymptotic properties provide remarkable results. In addition, the Bayesian method shows a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion. Identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
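
Selecting the number of mixture components with BIC can be sketched as follows: fit a k-component mixture by EM for each candidate k and keep the k with the lowest BIC. The tiny 1-D Gaussian mixture below, with made-up data, is a generic illustration rather than the paper's financial model.

```python
import math

def em_gmm_1d(data, k, n_iter=200):
    """Fit a k-component 1-D Gaussian mixture by EM; return the log-likelihood."""
    data = sorted(data)
    n = len(data)
    means = [data[(2 * j + 1) * n // (2 * k)] for j in range(k)]   # quantile init
    variances = [1.0] * k
    weights = [1.0 / k] * k

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    ll = 0.0
    for _ in range(n_iter):
        resp, ll = [], 0.0                      # E-step: responsibilities
        for x in data:
            comp = [w * pdf(x, m, v) for w, m, v in zip(weights, means, variances)]
            tot = sum(comp)
            ll += math.log(tot)
            resp.append([c / tot for c in comp])
        for j in range(k):                      # M-step: update parameters
            nj = sum(r[j] for r in resp)
            weights[j] = nj / n
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            variances[j] = max(sum(r[j] * (x - means[j]) ** 2
                                   for r, x in zip(resp, data)) / nj, 1e-6)
    return ll

def bic(ll, k, n):
    # a k-component 1-D mixture has 3k - 1 free parameters
    return (3 * k - 1) * math.log(n) - 2 * ll

data = [0.1, -0.2, 0.3, 0.0, -0.1, 5.0, 5.2, 4.9, 5.1, 4.8]
bic1 = bic(em_gmm_1d(data, 1), 1, len(data))
bic2 = bic(em_gmm_1d(data, 2), 2, len(data))
print(bic2 < bic1)  # → True: BIC selects the two-component model
```

For these two well-separated clusters the likelihood gain of k = 2 easily outweighs the BIC penalty for the extra parameters.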

  4. A Bayesian Approach to Excess Volatility, Short-term Underreaction and Long-term Overreaction during Financial Crises

    NARCIS (Netherlands)

    X. Guo (Xu); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung); L. Zhu (Lixing)

    2016-01-01

    textabstractIn this paper, we introduce a new Bayesian approach to explain some market anomalies during financial crises and subsequent recovery. We assume that the earnings shock of an asset follows a random walk model with and without drift to incorporate the impact of financial crises. We further

  5. Psychological Needs, Engagement, and Work Intentions: A Bayesian Multi-Measurement Mediation Approach and Implications for HRD

    Science.gov (United States)

    Shuck, Brad; Zigarmi, Drea; Owen, Jesse

    2015-01-01

    Purpose: The purpose of this study was to empirically examine the utility of self-determination theory (SDT) within the engagement-performance linkage. Design/methodology/approach: Bayesian multi-measurement mediation modeling was used to estimate the relation between SDT, engagement and a proxy measure of performance (e.g. work intentions) (N =…

  6. A Bayesian Network approach to the evaluation of building design and its consequences for employee performance and operational costs

    DEFF Research Database (Denmark)

    Jensen, Kasper Lynge; Toftum, Jørn; Friis-Hansen, Peter

    2009-01-01

    A Bayesian Network approach has been developed that can compare different building designs by estimating the effects of the thermal indoor environment on the mental performance of office workers. A part of this network is based on the compilation of subjective thermal sensation data and the assoc...

  7. A Semiparametric Bayesian Approach for Analyzing Longitudinal Data from Multiple Related Groups.

    Science.gov (United States)

    Das, Kiranmoy; Afriyie, Prince; Spirko, Lauren

    2015-11-01

    Biological and clinical experiments often result in longitudinal data from multiple related groups. The analysis of such data is quite challenging because the groups might share information on the mean and/or covariance functions. In this article, we consider a Bayesian semiparametric approach to modeling the mean trajectories of longitudinal responses coming from multiple related groups. We place matrix stick-breaking process priors on the group mean parameters, which allows information sharing on the mean trajectories across the groups. Simulation studies are performed to demonstrate the effectiveness of the proposed approach compared to more traditional approaches. We analyze data from a one-year follow-up of nutrition education for hypercholesterolemic children with three different treatments, where the children are from different age-groups. Our analysis provides more clinically useful information than the previous analysis of the same dataset. The proposed approach will be a very powerful tool for analyzing data from clinical trials and other medical experiments.
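
The matrix stick-breaking process builds on the basic stick-breaking construction, in which a unit "stick" is repeatedly broken by Beta-distributed fractions to produce mixture weights. A minimal sketch of that underlying construction, truncated to k atoms, is:

```python
import random

def stick_breaking_weights(alpha, k, rng=random):
    """Draw k truncated stick-breaking weights: v_j ~ Beta(1, alpha),
    w_j = v_j * prod_{i<j}(1 - v_i); the last weight takes the remainder."""
    weights, remaining = [], 1.0
    for _ in range(k - 1):
        v = rng.betavariate(1, alpha)
        weights.append(v * remaining)
        remaining *= 1 - v
    weights.append(remaining)       # telescoping sum makes the weights sum to 1
    return weights

random.seed(1)
w = stick_breaking_weights(alpha=2.0, k=5)
print(all(x > 0 for x in w), abs(sum(w) - 1.0) < 1e-12)
```

The matrix variant used in the paper extends this by giving each group a row/column-structured set of such weights, which is what enables borrowing of strength across related groups.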

  8. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    Directory of Open Access Journals (Sweden)

    Michael Jae-Yoon Chung

    Full Text Available A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.

  9. Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach

    KAUST Repository

    Al-Rabah, Abdullatif R.

    2013-05-01

    Recently, orthogonal frequency-division multiplexing (OFDM) has been adopted for high-speed wireless communications due to its robustness against multipath fading. However, one of the main fundamental drawbacks of OFDM systems is the high peak-to-average-power ratio (PAPR). Several techniques have been proposed for PAPR reduction. Most of these techniques require transmitter-based (pre-compensated) processing. On the other hand, receiver-based alternatives would save power and reduce the transmitter complexity. With this in mind, a possible approach is to limit the amplitude of the OFDM signal to a predetermined threshold, which is equivalent to adding a sparse clipping signal, and then to estimate this clipping signal at the receiver to recover the original signal. In this work, we propose a Bayesian receiver-based low-complexity clipping signal recovery method for PAPR reduction. The method is able to (i) effectively reduce the PAPR via a simple clipping scheme at the transmitter side, (ii) use a Bayesian recovery algorithm to reconstruct the clipping signal at the receiver side by measuring part of the subcarriers, (iii) perform well in the absence of statistical information about the signal (e.g. clipping level) and the noise (e.g. noise variance), and at the same time (iv) be energy efficient due to its low complexity. Specifically, the proposed recovery technique is implemented in a data-aided manner. The data-aided method collects clipping information by measuring reliable data subcarriers, thus making full use of the spectrum for data transmission without the need for tone reservation. The study is extended further to discuss how to improve the recovery of the clipping signal by utilizing features of practical OFDM systems, i.e., oversampling and the presence of multiple receivers. Simulation results demonstrate the superiority of the proposed technique over other recovery algorithms. The overall objective is to show that the receiver-based Bayesian technique is highly
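
The transmitter-side clipping step can be sketched directly: cap each sample's magnitude at a threshold while keeping its phase, so the original signal equals the clipped signal plus a sparse clipping signal. The eight samples below are a hypothetical stand-in for an OFDM time-domain block, chosen so only one sample exceeds the threshold.

```python
import math

def papr_db(signal):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip(signal, threshold):
    """Cap magnitudes at the threshold; return (clipped, sparse clipping signal),
    so that clipped[i] + clip_sig[i] == signal[i]."""
    clipped, clip_sig = [], []
    for s in signal:
        c = s * threshold / abs(s) if abs(s) > threshold else s   # keep phase
        clipped.append(c)
        clip_sig.append(s - c)
    return clipped, clip_sig

x = [3 + 0j, 1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j, 1 + 0j]
xc, e = clip(x, threshold=2.0)
nonzero = [v for v in e if abs(v) > 1e-12]
print(len(nonzero), round(papr_db(x) - papr_db(xc), 2))  # → 1 1.89
```

The clipping signal `e` is sparse (one nonzero entry here), which is exactly the structure the Bayesian sparse-recovery step at the receiver exploits.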

  10. A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets

    Science.gov (United States)

    JafarGandomi, Arash; Binley, Andrew

    2013-09-01

    We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. Different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth-models. Accordingly, the ERT data that are collected by using two-dimensional acquisition geometry are recast as a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is

  11. Quantification of uncertainty in flood risk assessment for flood protection planning: a Bayesian approach

    Science.gov (United States)

    Dittes, Beatrice; Špačková, Olga; Ebrahimian, Negin; Kaiser, Maria; Rieger, Wolfgang; Disse, Markus; Straub, Daniel

    2017-04-01

    Flood risk estimates are subject to significant uncertainties, e.g. due to limited records of historic flood events, uncertainty in flood modeling, the uncertain impact of climate change, and uncertainty in exposure and loss estimates. In traditional design of flood protection systems, these uncertainties are typically accounted for only implicitly, based on engineering judgment. In the AdaptRisk project, we develop a fully quantitative framework for planning flood protection systems under current and future uncertainties using quantitative pre-posterior Bayesian decision analysis. In this contribution, we focus on the quantification of the uncertainties and study their relative influence on the flood risk estimate and on the planning of flood protection systems. The following uncertainty components are included using a Bayesian approach: 1) inherent and statistical (i.e. limited record length) uncertainty; 2) climate uncertainty that can be learned from an ensemble of GCM-RCM models; 3) estimates of climate uncertainty components not covered in 2), such as bias correction, incomplete ensembles, and local specifics not captured by the GCM-RCM models; 4) uncertainty in the inundation modelling; 5) uncertainty in damage estimation. We also investigate how these uncertainties may be reduced in the future when new evidence - such as new climate models, observed extreme events, and socio-economic data - becomes available. Finally, we look into how this new evidence influences the risk assessment and the effectiveness of flood protection systems. We demonstrate our methodology for a pre-alpine catchment in southern Germany: the Mangfall catchment in Bavaria, which includes the city of Rosenheim, which suffered significant losses during the 2013 flood event.

  12. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry.

    Science.gov (United States)

    Barcaru, A; Mol, H G J; Tienstra, M; Vivó-Truyols, G

    2017-08-29

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number of compounds for each mass channel are estimated. The statistical dependency between m/z channels was implied by including penalties in the model objective function. Second, the Bayesian Information Criterion (BIC) is used as Occam's razor for the probabilistic assessment of the number of components. Third, a probabilistic set of resolved spectra and their associated retention times are estimated. Finally, a probabilistic library search is proposed, computing the spectral match with a high-resolution library. More specifically, a correlative measure was used that included the uncertainties in the least-squares fitting, as well as the probability of different proposals for the number of compounds in the mixture. The method was tested on simulated high-resolution data, as well as on a set of pesticides injected into a GC-Orbitrap with high coelution. The proposed pipeline was able to accurately detect the retention times and the spectra of the peaks. In our case, an extremely high coelution situation, 5 of the 7 compounds present in the selected region of interest were correctly assessed. Finally, comparison with the classical methods of deconvolution (i.e., MCR and AMDIS) indicates a better performance of the proposed algorithm in terms of the number of correctly resolved compounds. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. A facile approach to fabricate hierarchically structured poly(3-hexylthiophene-2,5-diyl) films

    DEFF Research Database (Denmark)

    Zhang, Weihua; Zong, Chuanyong; Xie, Jixun

    2017-01-01

    Microstructured surfaces have great potential to improve the performance and efficiency of optoelectronic devices. In this work, a simple, robust approach based on surface instabilities was presented to fabricate poly(3-hexylthiophene-2,5-diyl) (P3HT) films with ridge-like/wrinkled composite microstructures. Namely, the hierarchically patterned films were prepared by spin coating the P3HT/tetrahydrofuran (THF) solution on a polydimethylsiloxane (PDMS) substrate to form stable ridge-like structures, followed by solvent vapor swelling to create surface wrinkles with the orientation guided by the ridge-like structures. During spin coating of the P3HT/THF solution, the ridge-like structures were generated by the in-situ template of the THF swelling-induced creasing structures on the PDMS substrate. To our knowledge, it is the first report that the creasing structures are used as a recoverable template...

  14. Minimax terminal approach problem in two-level hierarchical nonlinear discrete-time dynamical system

    Energy Technology Data Exchange (ETDEWEB)

    Shorikov, A. F., E-mail: afshorikov@mail.ru [Ural Federal University, 19 S. Mira, Ekaterinburg, 620002, Russia Institute of Mathematics and Mechanics, Ural Branch of Russian Academy of Sciences, 16 S. Kovalevskaya, Ekaterinburg, 620990 (Russian Federation)

    2015-11-30

    We consider a discrete-time dynamical system consisting of three controllable objects. The motions of all objects are given by corresponding nonlinear or linear discrete-time recurrent vector relations, and the control system has two levels: a basic (first, or I) level that is dominant, and a subordinate (second, or II) level. The two levels have different criteria of functioning and are united a priori by predetermined informational and control connections. For the dynamical system in question, we propose a mathematical formalization in the form of a multistep problem of two-level hierarchical minimax program control over the terminal approach process with incomplete information, and give a general scheme for its solution.

  15. A Hierarchical Approach Using Machine Learning Methods in Solar Photovoltaic Energy Production Forecasting

    Directory of Open Access Journals (Sweden)

    Zhaoxuan Li

    2016-01-01

    Full Text Available We evaluate and compare two common methods, artificial neural networks (ANN) and support vector regression (SVR), for predicting energy production from a solar photovoltaic (PV) system in Florida 15 min, 1 h and 24 h ahead of time. A hierarchical approach is proposed based on the machine learning algorithms tested. The production data used in this work correspond to 15 min averaged power measurements collected in 2014. The accuracy of the models is determined using error statistics such as mean bias error (MBE), mean absolute error (MAE), root mean square error (RMSE), relative MBE (rMBE), mean percentage error (MPE) and relative RMSE (rRMSE). This work provides findings on how forecasts from individual inverters improve the total solar power generation forecast of the PV system.
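
The error statistics listed above are straightforward to compute; a minimal sketch follows. Note that sign and normalization conventions for MBE, MPE and the relative measures vary across papers, so the definitions below follow one common convention and are not necessarily the ones used in this study.

```python
import math

def forecast_errors(predicted, observed):
    """Common forecast error statistics for PV power predictions."""
    n = len(observed)
    mean_obs = sum(observed) / n
    errors = [p - o for p, o in zip(predicted, observed)]
    mbe = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return {
        "MBE": mbe, "MAE": mae, "RMSE": rmse,
        "rMBE": mbe / mean_obs,
        "MPE": sum(e / o for e, o in zip(errors, observed)) / n,
        "rRMSE": rmse / mean_obs,
    }

# Hypothetical forecast vs. observed power values (kW).
stats = forecast_errors([10.0, 12.0, 9.0], [11.0, 12.0, 10.0])
print(round(stats["MAE"], 3), round(stats["RMSE"], 3))  # → 0.667 0.816
```

RMSE penalizes large errors more strongly than MAE, which is why both are usually reported together.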

  16. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    Science.gov (United States)

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞)s and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application to the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
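
For reference, the deterministic point estimate that such Bayesian NCA methods build on is the classic trapezoidal AUC(0-∞): trapezoids to the last sample plus a C_last/λz tail, with λz from a log-linear fit of the terminal points. The sketch below uses a hypothetical mono-exponential profile; it is the standard NCA calculation, not the paper's Bayesian estimator.

```python
import math

def auc_0_inf(times, concs, n_tail=3):
    """Non-compartmental AUC(0-inf): linear trapezoids to the last sample,
    plus C_last / lambda_z extrapolation, with lambda_z from a log-linear
    fit of the last n_tail points."""
    auc = sum((t2 - t1) * (c1 + c2) / 2
              for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))
    t = times[-n_tail:]
    logc = [math.log(c) for c in concs[-n_tail:]]
    tm, lm = sum(t) / n_tail, sum(logc) / n_tail
    slope = (sum((ti - tm) * (li - lm) for ti, li in zip(t, logc))
             / sum((ti - tm) ** 2 for ti in t))
    lambda_z = -slope
    return auc + concs[-1] / lambda_z

# Mono-exponential decline C = 100 * exp(-0.5 t); true AUC(0-inf) = 100 / 0.5 = 200.
times = [0, 1, 2, 4, 8, 12]
concs = [100 * math.exp(-0.5 * t) for t in times]
print(round(auc_0_inf(times, concs), 1))  # → 214.8 (linear trapezoids overestimate a convex profile)
```

The overestimate relative to the true value of 200 is the usual linear-trapezoid bias on a convex concentration profile; the Bayesian approach adds uncertainty quantification around such point estimates under sparse, destructive sampling.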

  17. Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.

    Science.gov (United States)

    Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017), doi:10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.

  18. A Bayesian approach to PET reconstruction using image-modeling Gibbs priors: Implementation and comparison

    International Nuclear Information System (INIS)

    Chan, M.T.; Herman, G.T.; Levitan, E.

    1996-01-01

    We demonstrate that (i) classical methods of image reconstruction from projections can be improved upon by considering the output of such a method as a distorted version of the original image and applying a Bayesian approach to estimate the original image from it (based on a model of the distortion and on a Gibbs distribution as the prior), and (ii) by selecting an "image-modeling" prior distribution (i.e., one such that a random sample from it is likely to share important characteristics of the images of the application area) one can improve over another Gibbs prior formulated using only pairwise interactions. We illustrate our approach using simulated Positron Emission Tomography (PET) data from realistic brain phantoms. Since algorithm performance ultimately depends on the diagnostic task being performed, we examine a number of different medically relevant figures of merit to give a fair comparison. Based on a training-and-testing evaluation strategy, we demonstrate that statistically significant improvements can be obtained using the proposed approach.

  19. A Bayesian approach to landscape ecological risk assessment applied to the upper Grande Ronde watershed, Oregon

    Science.gov (United States)

    Kimberley K. Ayre; Wayne G. Landis

    2012-01-01

    We present a Bayesian network model based on the ecological risk assessment framework to evaluate potential impacts to habitats and resources resulting from wildfire, grazing, forest management activities, and insect outbreaks in a forested landscape in northeastern Oregon. The Bayesian network structure consisted of three tiers of nodes: landscape disturbances,...

  20. A bayesian approach for learning and tracking switching, non-stationary opponents

    CSIR Research Space (South Africa)

    Hernandez-Leal, P

    2016-02-01

    Full Text Available of interactions. We propose using a Bayesian framework to address this problem. Bayesian policy reuse (BPR) has been empirically shown to be efficient at correctly detecting the best policy to use from a library in sequential decision tasks. In this paper we...

  1. A Bayesian Belief Network Approach to Predict Damages Caused by Disturbance Agents

    Directory of Open Access Journals (Sweden)

    Alfred Radl

    2017-12-01

    Full Text Available In mountain forests of Central Europe, storm and snow breakage as well as bark beetles are the prevailing major disturbances. The complex interrelatedness between climate, disturbance agents, and forest management increases the need for an integrative approach that explicitly addresses the multiple interactions between environmental changes, forest management, and disturbance agents to support forest resource managers in adaptive management. Empirical data with comprehensive coverage for modelling the susceptibility of forests and the impact of disturbance agents are rare, making probabilistic models based on expert knowledge one of the few modelling approaches able to handle the uncertainties in the available information. Bayesian belief networks (BBNs) are a kind of probabilistic graphical model that has become very popular with practitioners and scientists, mainly due to its treatment of risk and uncertainty. In this contribution, we present a development methodology to define and parameterize BBNs based on expert elicitation and approximation. We modelled storm and bark beetle disturbance agents, analyzed the effects of the development methodology on model structure, and evaluated model behavior with stand data from Norway spruce (Picea abies (L.) Karst.) forests in southern Austria. The high vulnerability of the case study area to different disturbance agents makes it particularly suitable for testing the BBN model.

  2. Using Bayesian network and AHP method as a marketing approach tools in defining tourists’ preferences

    Directory of Open Access Journals (Sweden)

    Nataša Papić-Blagojević

    2012-04-01

    Full Text Available The marketing approach is tied to market conditions and to achieving a company's long-term profitability by satisfying consumers' needs. In tourism, this approach need not be limited to promoting a single destination; it also covers the relationship between a travel agency and its clients, with agencies adjusting their offers to their clients' needs. In that sense, it is important to analyze tourists' behavior in earlier periods, taking their preferences into account. A Bayesian network can graphically display the connections between tourists with similar tastes and the relationships among them. The analytic hierarchy process (AHP), on the other hand, is used to rank tourist attractions, also relying on past experience. In this paper we examine possible applications of these two models in tourism in Serbia. The example is hypothetical, but it will serve as a base for future research. Three types of tourism were chosen as representative of Vojvodina: cultural, rural, and business tourism, because they are the bright spots of touristic development in this area. Applied to these forms, the analytic hierarchy process has shown its strength in predicting tourists' preferences.
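The AHP ranking step described above can be illustrated with a short sketch. The pairwise comparison values for the three tourism types are hypothetical, and the geometric-mean approximation of the principal eigenvector is a standard AHP shortcut, not necessarily the exact computation used in the paper:

```python
# Sketch of AHP priority weights via the geometric-mean approximation.
# The pairwise comparison matrix (Saaty 1-9 scale) is invented for illustration.
import math

A = [
    [1.0, 3.0, 5.0],    # cultural vs (cultural, rural, business)
    [1 / 3, 1.0, 2.0],  # rural
    [1 / 5, 1 / 2, 1.0],  # business
]

# Geometric mean of each row approximates the principal eigenvector.
geo = [math.prod(row) ** (1.0 / len(row)) for row in A]
total = sum(geo)
weights = [g / total for g in geo]  # normalized priority of each alternative

print([round(w, 3) for w in weights])
```

The row geometric means are normalized so the priorities sum to one; the alternative with the largest weight ranks first.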

  3. Applications of Bayesian approach in modelling risk of malaria-related hospital mortality

    Directory of Open Access Journals (Sweden)

    Simbeye Jupiter S

    2008-02-01

    Full Text Available Abstract Background Malaria is a major public health problem in Malawi; however, quantifying its burden in a population is a challenge. Routine hospital data provide a proxy for measuring the incidence of severe malaria and for crudely estimating morbidity rates. Using such data, this paper proposes a method to describe trends, patterns and factors associated with in-hospital mortality attributed to the disease. Methods We develop semiparametric regression models which allow joint analysis of nonlinear effects of calendar time and continuous covariates, spatially structured variation, unstructured heterogeneity, and other fixed covariates. Modelling and inference use the fully Bayesian approach via Markov chain Monte Carlo (MCMC) simulation techniques. The methodology is applied to analyse data arising from paediatric wards in Zomba district, Malawi, between 2002 and 2003. Results and Conclusion We observe that the risk of dying in hospital is lower in the dry season and for children who travel a distance of less than 5 km to the hospital, but increases for those who are referred to the hospital. The results also indicate significant differences in both structured and unstructured spatial effects, and the health facility effects reveal considerable differences by type of facility or practice. More importantly, our approach shows non-linearities in the effect of metrical covariates on the probability of dying in hospital. The study emphasizes that the methodological framework used provides a useful tool for analysing the data at hand and data of similar structure.

  4. A Bayesian reliability approach to the performance assessment of a geological waste repository

    International Nuclear Information System (INIS)

    Flueck, J.A.; Singh, A.K.

    1992-01-01

    This paper discusses the task of selecting a suitable site for a high-level waste disposal repository (HLWR), a complex task in that one must address both engineering and economic factors of the proposed facility and site as well as environmental, public health, safety, and sociopolitical factors. Acknowledging the complexity of the siting problem for a HLWR leads one to conclude that a formal analysis, including the use of a performance assessment model (PAM), is needed to assist the designated decision makers in selecting a suitable site. The overall goal of a PAM is to aid the decision makers in making the best possible technical siting decision. For a number of reasons, the authors believe that combining Bayesian decision theory and reliability methodology provides the best approach to constructing a useful PAM for assisting in the siting of a HLWR. This combination allows one to formally integrate existing relevant information, professional judgement, and component model outputs to produce conditionally estimated probabilities for a decision tree approach to the radionuclide release problem of a proposed HLWR. If loss functions are available, this also allows one to calculate the expected costs or losses from possible radionuclide releases. This latter calculation may be very important in selecting the final site from among a number of alternative sites.
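The expected-loss calculation the abstract describes for comparing alternative sites can be sketched in a few lines. All probabilities and loss values below are invented placeholders, not figures from any real performance assessment model:

```python
# Toy expected-loss comparison for a decision-tree siting problem.
# Release probabilities and losses are hypothetical, for illustration only.
sites = {
    "site_A": {"p_release": 0.02, "loss_release": 500.0, "loss_no_release": 10.0},
    "site_B": {"p_release": 0.01, "loss_release": 900.0, "loss_no_release": 12.0},
}

def expected_loss(s):
    # E[loss] = P(release) * loss(release) + P(no release) * loss(no release)
    p = s["p_release"]
    return p * s["loss_release"] + (1 - p) * s["loss_no_release"]

ranked = sorted(sites, key=lambda name: expected_loss(sites[name]))
print(ranked[0])  # the site with the lowest expected loss
```

With loss functions attached to each branch of the decision tree, the preferred site is simply the one minimizing expected loss.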

  5. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    Science.gov (United States)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.

  6. A 3D model retrieval approach based on Bayesian networks lightfield descriptor

    Science.gov (United States)

    Xiao, Qinhan; Li, Yanjun

    2009-12-01

    A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of the existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. First, the 3D model is placed in a lightfield and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moment descriptors are then computed from these binary views, and the resulting shape feature sequence is learned into a BN model with a BN learning algorithm. Second, we propose a new 3D model retrieval method that calculates the Kullback-Leibler divergence (KLD) between BNLDs. Benefiting from the statistical learning, our BNLD is more robust to noise than the existing methods. A comparison between our method and the lightfield descriptor-based approach is conducted to demonstrate the effectiveness of the proposed methodology.
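The retrieval step compares descriptors via Kullback-Leibler divergence. A minimal sketch of the discrete KLD, with toy distributions standing in for the BNLD outputs, is:

```python
# Minimal Kullback-Leibler divergence between two discrete distributions.
# The distributions p and q are toy stand-ins for descriptor outputs.
import math

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
d = kl_divergence(p, q)
print(round(d, 4))
```

KLD is zero only when the two distributions coincide, so a smaller divergence indicates a closer match between the query model's descriptor and a candidate's.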

  7. Modelling multi-hazard hurricane damages on an urbanized coast with a Bayesian Network approach

    Science.gov (United States)

    van Verseveld, H.C.W.; Van Dongeren, A. R.; Plant, Nathaniel G.; Jäger, W.S.; den Heijer, C.

    2015-01-01

    Hurricane flood impacts to residential buildings in coastal zones are caused by a number of hazards, such as inundation, overflow currents, erosion, and wave attack. However, traditional hurricane damage models typically make use of stage-damage functions, where the stage is related to flooding depth only. Moreover, these models are deterministic and do not consider the large amount of uncertainty associated with both the processes themselves and with the predictions. This uncertainty becomes increasingly important when multiple hazards (flooding, wave attack, erosion, etc.) are considered simultaneously. This paper focuses on establishing relationships between observed damage and multiple hazard indicators in order to make better probabilistic predictions. The concept consists of (1) determining Local Hazard Indicators (LHIs) from a hindcasted storm with use of a nearshore morphodynamic model, XBeach, and (2) coupling these LHIs and building characteristics to the observed damages. We chose a Bayesian Network approach in order to make this coupling and used the LHIs ‘Inundation depth’, ‘Flow velocity’, ‘Wave attack’, and ‘Scour depth’ to represent flooding, current, wave impacts, and erosion related hazards. The coupled hazard model was tested against four thousand damage observations from a case site at the Rockaway Peninsula, NY, that was impacted by Hurricane Sandy in late October, 2012. The model was able to accurately distinguish ‘Minor damage’ from all other outcomes 95% of the time and could distinguish areas that were affected by the storm, but not severely damaged, 68% of the time. For the most heavily damaged buildings (‘Major Damage’ and ‘Destroyed’), projections of the expected damage underestimated the observed damage. The model demonstrated that including multiple hazards doubled the prediction skill, with Log-Likelihood Ratio test (a measure of improved accuracy and reduction in uncertainty) scores between 0.02 and 0

  8. A Bayesian geostatistical approach for evaluating the uncertainty of contaminant mass discharges from point sources

    Science.gov (United States)

    Troldborg, M.; Nowak, W.; Binning, P. J.; Bjerg, P. L.

    2012-12-01

    Estimates of mass discharge (mass/time) are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Mass discharge estimates are, however, prone to rather large uncertainties as they integrate uncertain spatial distributions of both concentration and groundwater flow velocities. For risk assessments or any other decisions that are being based on mass discharge estimates, it is essential to address these uncertainties. We present a novel Bayesian geostatistical approach for quantifying the uncertainty of the mass discharge across a multilevel control plane. The method decouples the flow and transport simulation and has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is based on conditional geostatistical simulation and accounts for i) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics (including the uncertainty in covariance functions), ii) measurement uncertainty, and iii) uncertain source zone geometry and transport parameters. The method generates multiple equally likely realizations of the spatial flow and concentration distribution, which all honour the measured data at the control plane. The flow realizations are generated by analytical co-simulation of the hydraulic conductivity and the hydraulic gradient across the control plane. These realizations are made consistent with measurements of both hydraulic conductivity and head at the site. An analytical macro-dispersive transport solution is employed to simulate the mean concentration distribution across the control plane, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed
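The core idea of propagating concentration and flow uncertainty into a mass-discharge distribution can be sketched with a toy Monte Carlo. This is not the paper's geostatistical co-simulation; it simply sums concentration times Darcy flux over control-plane cells with made-up lognormal uncertainty:

```python
# Toy Monte Carlo for mass-discharge uncertainty across a control plane:
# discharge = sum over cells of concentration * Darcy flux * cell area.
# All distributions and dimensions are invented for illustration.
import random

random.seed(1)
cell_area = 0.25       # m^2 per control-plane cell (hypothetical)
n_cells = 20
n_realizations = 2000

totals = []
for _ in range(n_realizations):
    md = 0.0
    for _ in range(n_cells):
        conc = random.lognormvariate(0.0, 1.0)   # mg/m^3, uncertain
        flux = random.lognormvariate(-2.0, 0.5)  # m/day, uncertain
        md += conc * flux * cell_area            # mg/day through this cell
    totals.append(md)

totals.sort()
median = totals[len(totals) // 2]
p95 = totals[int(0.95 * len(totals))]
print(round(median, 3), round(p95, 3))
```

Reporting quantiles of the realization ensemble, rather than a single number, is what lets a risk assessment account for the mass-discharge uncertainty.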

  9. Predicting allergic contact dermatitis: a hierarchical structure activity relationship (SAR) approach to chemical classification using topological and quantum chemical descriptors

    Science.gov (United States)

    Basak, Subhash C.; Mills, Denise; Hawkins, Douglas M.

    2008-06-01

    A hierarchical classification study was carried out based on a set of 70 chemicals—35 which produce allergic contact dermatitis (ACD) and 35 which do not. This approach was implemented using a regular ridge regression computer code, followed by conversion of regression output to binary data values. The hierarchical descriptor classes used in the modeling include topostructural (TS), topochemical (TC), and quantum chemical (QC), all of which are based solely on chemical structure. The concordance, sensitivity, and specificity are reported. The model based on the TC descriptors was found to be the best, while the TS model was extremely poor.
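The two-stage scheme described above, ridge regression followed by conversion of the regression output to a binary class, can be sketched on a toy descriptor set. The data, the ridge penalty, and the 0.5 threshold below are all invented for illustration:

```python
# Sketch of ridge regression followed by thresholding to a binary label,
# mirroring the "regression output converted to binary" step. The tiny
# two-descriptor data set is invented; real models use many descriptors.
# Closed form: w = (X^T X + lam*I)^-1 X^T y, solved here for 2 features.

X = [(0.1, 1.0), (0.4, 0.8), (0.9, 0.2), (0.8, 0.1)]  # descriptor values
y = [0.0, 0.0, 1.0, 1.0]                              # 1 = sensitizer (ACD)
lam = 0.1                                             # ridge penalty

# Build the 2x2 normal-equations system (X^T X + lam*I) w = X^T y.
a = sum(x1 * x1 for x1, _ in X) + lam
b = sum(x1 * x2 for x1, x2 in X)
d = sum(x2 * x2 for _, x2 in X) + lam
r1 = sum(x1 * yi for (x1, _), yi in zip(X, y))
r2 = sum(x2 * yi for (_, x2), yi in zip(X, y))

det = a * d - b * b
w1 = (d * r1 - b * r2) / det
w2 = (a * r2 - b * r1) / det

# Convert the continuous regression output to a 0/1 class at 0.5.
preds = [1 if w1 * x1 + w2 * x2 >= 0.5 else 0 for x1, x2 in X]
print(preds)
```

Concordance, sensitivity, and specificity can then be computed by comparing `preds` against the known labels.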

  10. A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models

    International Nuclear Information System (INIS)

    Troffaes, Matthias C.M.; Walter, Gero; Kelly, Dana

    2014-01-01

    In a standard Bayesian approach to the alpha-factor model for common-cause failure, a precise Dirichlet prior distribution models epistemic uncertainty in the alpha-factors. This Dirichlet prior is then updated with observed data to obtain a posterior distribution, which forms the basis for further inferences. In this paper, we adapt the imprecise Dirichlet model of Walley to represent epistemic uncertainty in the alpha-factors. In this approach, epistemic uncertainty is expressed more cautiously via lower and upper expectations for each alpha-factor, along with a learning parameter which determines how quickly the model learns from observed data. For this application, we focus on elicitation of the learning parameter, and find that values in the range of 1 to 10 seem reasonable. The approach is compared with Kelly and Atwood's minimally informative Dirichlet prior for the alpha-factor model, which incorporated precise mean values for the alpha-factors, but which was otherwise quite diffuse. Next, we explore the use of a set of Gamma priors to model epistemic uncertainty in the marginal failure rate, expressed via a lower and upper expectation for this rate, again along with a learning parameter. As zero counts are generally less of an issue here, we find that the choice of this learning parameter is less crucial. Finally, we demonstrate how both epistemic uncertainty models can be combined to arrive at lower and upper expectations for all common-cause failure rates. Thereby, we effectively provide a full sensitivity analysis of common-cause failure rates, properly reflecting epistemic uncertainty of the analyst on all levels of the common-cause failure model
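The lower and upper posterior expectations of the imprecise Dirichlet model have a simple closed form: with learning parameter s, prior mean t in [0, 1], and n_i of n observed events in category i, the posterior expectation is (n_i + s*t)/(n + s). A sketch with made-up counts:

```python
# Posterior expectation bounds under the imprecise Dirichlet model:
# E[alpha_i | data] = (n_i + s*t) / (n + s) ranges over t in [0, 1].
# The counts below are illustrative, not data from the paper.

def posterior_bounds(n_i, n, s):
    lower = n_i / (n + s)        # prior mean t -> 0
    upper = (n_i + s) / (n + s)  # prior mean t -> 1
    return lower, upper

n_i, n = 3, 50        # e.g. 3 multi-component failures in 50 events (made up)
for s in (1, 10):     # the paper found values of s in roughly 1..10 reasonable
    lo, hi = posterior_bounds(n_i, n, s)
    print(s, round(lo, 4), round(hi, 4))
```

A larger learning parameter s widens the interval between lower and upper expectations, i.e. the model learns more slowly from the observed counts.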

  11. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    Science.gov (United States)

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the numbers of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donation" and "number of blood deferral": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and to explain the excess zero frequency, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors in the presence and absence of covariates. The parameters of the model, that is, the correlation, the zero-inflation parameter, and the regression coefficients, were estimated through MCMC simulation. Finally, the double-Poisson model, the bivariate Poisson model, and the bivariate zero-inflated Poisson model were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
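The zero-inflated Poisson building block of the bivariate model has a simple closed-form pmf: extra probability mass pi_0 at zero on top of a Poisson(lambda). A sketch with arbitrary parameters:

```python
# Minimal zero-inflated Poisson pmf, the building block of the bivariate model.
# The rate and zero-inflation parameters are arbitrary examples.
import math

def zip_pmf(k, lam, pi_zero):
    """P(X = k) under ZIP(lam, pi_zero): extra mass pi_zero at zero."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi_zero + (1 - pi_zero) * poisson
    return (1 - pi_zero) * poisson

lam, pi_zero = 2.0, 0.4  # hypothetical donation rate and never-return fraction
probs = [zip_pmf(k, lam, pi_zero) for k in range(20)]
print(round(probs[0], 4), round(sum(probs), 4))
```

The inflated zero probability (here pi_zero plus the deflated Poisson zero mass) is what captures the donors who never return.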

  12. A Bayesian Inferential Approach to Quantify the Transmission Intensity of Disease Outbreak

    Directory of Open Access Journals (Sweden)

    Adiveppa S. Kadi

    2015-01-01

    Full Text Available Background. The emergence of infectious diseases like the influenza pandemic (H1N1) 2009 has become a great concern and has posed new challenges to health authorities worldwide. To control these diseases, various studies have been developed in the field of mathematical modelling, a useful tool for understanding epidemiological dynamics and their dependence on social mixing patterns. Method. We used a Bayesian approach to quantify the disease outbreak through the key epidemiological parameter, the basic reproduction number (R0), using effective contacts, defined as the sum of the products of incidence cases and the probabilities of the generation time distribution. We estimated R0 from daily case incidence data for pandemic influenza A/H1N1 2009 in India, for the initial phase. Result. The estimated R0 with 95% credible interval is consistent with several other studies on the same strain. Through sensitivity analysis, our study indicates that infectiousness affects the estimate of R0. Conclusion. The basic reproduction number R0 provides useful information to the public health system for controlling the disease with mitigation strategies such as vaccination, quarantine, and so forth.
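The "effective contacts" construction described above, incidence divided by the generation-time-weighted sum of past incidence, can be sketched directly. The incidence series and generation-time weights below are invented, not the Indian H1N1 data:

```python
# Sketch of a reproduction-number estimate from effective contacts:
# R(t) = I(t) / sum_s w(s) * I(t - s), where w is the generation-time pmf.
# The incidence series and weights are toy data, not the study's.

incidence = [1, 2, 3, 5, 8, 12, 18, 27]  # daily case counts (invented)
w = [0.2, 0.5, 0.3]                      # P(generation time = 1, 2, 3 days)

def r_estimate(t):
    effective_contacts = sum(
        w[s - 1] * incidence[t - s] for s in range(1, len(w) + 1) if t - s >= 0
    )
    return incidence[t] / effective_contacts

# Start at t = 3 so the full generation-time window is available.
estimates = [r_estimate(t) for t in range(3, len(incidence))]
print([round(r, 2) for r in estimates])
```

Values above 1 indicate a growing outbreak; the Bayesian treatment in the paper additionally yields a credible interval for R0 rather than point estimates.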

  13. A Robust Bayesian Approach to an Optimal Replacement Policy for Gas Pipelines

    Directory of Open Access Journals (Sweden)

    José Pablo Arias-Nicolás

    2015-06-01

    Full Text Available In the paper, we address Bayesian sensitivity issues when integrating experts’ judgments with available historical data in a case study about strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions, made in previous papers, about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions, and among them, we propose the least sensitive action as the optimal choice to rank different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold determining no need for replacement. Graphical tools are introduced to help decision-makers determine if pipelines are to be replaced and the corresponding priorities.

  14. A Hybrid Approach for Reliability Analysis Based on Analytic Hierarchy Process and Bayesian Network

    International Nuclear Information System (INIS)

    Zubair, Muhammad

    2014-01-01

    Using the analytic hierarchy process (AHP) and Bayesian networks (BNs), the present research examines the technical and non-technical issues of nuclear accidents. The study showed that technical faults were one major cause of these accidents. From another point of view, it becomes clear that human factors such as dishonesty, insufficient training, and selfishness also play a key role in causing them. In this study, a hybrid approach for reliability analysis based on AHP and BN to increase nuclear power plant (NPP) safety has been developed. Using AHP, the best alternative to improve safety, design, and operation, and to allocate budget for all technical and non-technical factors related to nuclear safety, has been investigated. We use a special structure of BN based on the AHP method. The graphs of the BN and the probabilities associated with nodes are designed to translate the knowledge of experts on the selection of the best alternative. The results show that improvement in regulatory authorities will decrease failure probabilities and increase safety and reliability in the industrial area.

  15. Identification of failure type in corroded pipelines: a Bayesian probabilistic approach.

    Science.gov (United States)

    Breton, T; Sanchez-Gheno, J C; Alamilla, J L; Alvarez-Ramirez, J

    2010-07-15

    Spillover of hazardous materials from transport pipelines can lead to catastrophic events with serious and dangerous environmental impact, potential fire events and human fatalities. The problem is more serious for large pipelines when the construction material is under environmental corrosion conditions, as in the petroleum and gas industries. In this way, predictive models can provide a suitable framework for risk evaluation, maintenance policies and substitution procedure design that should be oriented to reduce increased hazards. This work proposes a Bayesian probabilistic approach to identify and predict the type of failure (leakage or rupture) for steel pipelines under realistic corroding conditions. In the first step of the modeling process, the mechanical performance of the pipe is considered for establishing conditions under which either leakage or rupture failure can occur. In the second step, experimental burst tests are used to introduce a mean probabilistic boundary defining a region where the type of failure is uncertain. In the boundary vicinity, the failure discrimination is carried out with a probabilistic model where the events are considered as random variables. In turn, the model parameters are estimated with available experimental data and contrasted with a real catastrophic event, showing good discrimination capacity. The results are discussed in terms of policies oriented to inspection and maintenance of large-size pipelines in the oil and gas industry. 2010 Elsevier B.V. All rights reserved.

  16. Detection of the spatiotemporal trends of mercury in Lake Erie fish communities: a Bayesian approach.

    Science.gov (United States)

    Azim, M Ekram; Kumarappah, Ananthavalli; Bhavsar, Satyendra P; Backus, Sean M; Arhonditsis, George

    2011-03-15

    The temporal trends of total mercury (THg) in four fish species in Lake Erie were evaluated based on 35 years of fish contaminant data. Our Bayesian statistical approach consists of three steps aiming to address different questions. First, we used the exponential and mixed-order decay models to assess the declining rates in four intensively sampled fish species, i.e., walleye (Stizostedion vitreum), yellow perch (Perca flavescens), smallmouth bass (Micropterus dolomieui), and white bass (Morone chrysops). Because the two models postulate monotonic decrease of the THg levels, we included first- and second-order random walk terms in our statistical formulations to accommodate nonmonotonic patterns in the data time series. Our analysis identified a recent increase in the THg concentrations, particularly after the mid-1990s. In the second step, we used double exponential models to quantify the relative magnitude of the THg trends depending on the type of data used (skinless-boneless fillet versus whole fish data) and the fish species examined. The observed THg concentrations were significantly higher in skinless boneless fillet than in whole fish portions, while the whole fish portions of walleye exhibited faster decline rates and slower rates of increase relative to the skinless boneless fillet data. Our analysis also shows lower decline rates and higher rates of increase in walleye relative to the other three fish species examined. The food web structural shifts induced by the invasive species (dreissenid mussels and round goby) may be associated with the recent THg trends in Lake Erie fish.
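The first-order exponential decay model used in the first step can be fitted by ordinary least squares on log-transformed concentrations (a non-Bayesian shortcut, unlike the paper's MCMC fit). The THg series below is synthetic:

```python
# Sketch of fitting the exponential decay model C(t) = C0 * exp(-k * t)
# by ordinary least squares on log concentrations. Data are synthetic,
# not the Lake Erie measurements; the paper's fit is Bayesian (MCMC).
import math

years = [0, 5, 10, 15, 20, 25, 30]
thg = [0.80, 0.62, 0.50, 0.38, 0.31, 0.24, 0.19]  # mg/kg (made up)

x = years
yl = [math.log(c) for c in thg]  # log-transform linearizes the decay
n = len(x)
xbar = sum(x) / n
ybar = sum(yl) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, yl)) / sum(
    (xi - xbar) ** 2 for xi in x
)
k = -slope                        # decay rate per year
c0 = math.exp(ybar - slope * xbar)  # fitted initial concentration
print(round(k, 4), round(c0, 3))
```

A monotonic fit like this is exactly what the paper's random-walk terms relax, so that a nonmonotonic pattern such as the post-1990s increase can be captured.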

  17. Bayesian `hyper-parameters' approach to joint estimation: the Hubble constant from CMB measurements

    Science.gov (United States)

    Lahav, O.; Bridle, S. L.; Hobson, M. P.; Lasenby, A. N.; Sodré, L.

    2000-07-01

    Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalize this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint χ² function a set of `hyper-parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the hyper-parameters, is very simple: instead of minimizing Σ_j χ_j² (where χ_j² is the chi-squared of data set j) we propose to minimize Σ_j N_j ln(χ_j²) (where N_j is the number of data points in data set j). We illustrate the method by estimating the Hubble constant H0 from different sets of recent cosmic microwave background (CMB) experiments (including Saskatoon, Python V, MSAM1, TOCO and Boomerang). The approach can be generalized for combinations of cosmic probes, and for other priors on the hyper-parameters.
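The hyper-parameter objective can be demonstrated with two toy "data sets" whose chi-squared curves are quadratic in H0. The curvatures, sizes, and best-fit values below are invented; the real analysis evaluates chi-squared against actual CMB band powers:

```python
# Sketch of the hyper-parameter objective: minimize sum_j N_j * ln(chi_j^2)
# rather than the plain sum of chi-squared values. The two "data sets"
# below are toy quadratics in H0, not real CMB likelihoods.
import math

# (N_j, best-fit H0, curvature) for two hypothetical experiments.
datasets = [(24, 65.0, 0.02), (16, 75.0, 0.05)]

def chi2(h0, best, curv, n):
    # ~N_j at its minimum, growing quadratically away from the best fit.
    return n * (1.0 + curv * (h0 - best) ** 2)

def objective(h0):
    return sum(n * math.log(chi2(h0, best, curv, n)) for n, best, curv in datasets)

grid = [60 + 0.1 * i for i in range(201)]  # H0 candidates from 60 to 80
h0_best = min(grid, key=objective)
print(round(h0_best, 1))
```

The log inside the sum effectively down-weights a data set whose chi-squared is large (i.e. one the model fits poorly), which is exactly the freedom in relative weights the hyper-parameters provide.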

  18. Integrative Analysis of Transcription Factor Combinatorial Interactions Using a Bayesian Tensor Factorization Approach

    Science.gov (United States)

    Ye, Yusen; Gao, Lin; Zhang, Shihua

    2017-01-01

    Transcription factors (TFs) play a key role in the transcriptional regulation of genes and the determination of cellular identity through combinatorial interactions. However, current studies of combinatorial regulation are limited by the lack of experimental data from the same cellular environment and by extensive data noise. Here, we adopt a Bayesian CANDECOMP/PARAFAC (CP) factorization approach (BCPF) to integrate multiple datasets in a network paradigm for determining precise TF interaction landscapes. In our first application, we apply BCPF to integrate three networks built from diverse ENCODE datasets of multiple cell lines to predict a global and precise TF interaction network. This network yields 38 novel TF interactions with distinct biological functions. In our second application, we apply BCPF to seven cell-type TF regulatory networks and predict seven cell-lineage TF interaction networks, respectively. By further exploring their dynamics and modularity, we find that cell lineage-specific hub TFs participate in cell type- or lineage-specific regulation by interacting with non-specific TFs. Furthermore, we illustrate the biological function of hub TFs by taking those of the cancer and blood lineages as examples. Taken together, our integrative analysis reveals a more precise and extensive description of human TF combinatorial interactions. PMID:29033978

  19. Risk assessment of pre-hospital trauma airway management by anaesthesiologists using the predictive Bayesian approach

    Directory of Open Access Journals (Sweden)

    Nakstad Anders R

    2010-04-01

    Full Text Available Abstract Introduction Endotracheal intubation (ETI) has been considered an essential part of pre-hospital advanced life support. Pre-hospital ETI, however, is a complex intervention even for airway specialists such as anaesthesiologists working as pre-hospital emergency physicians. We therefore wanted to investigate the quality of pre-hospital airway management by anaesthesiologists in severely traumatised patients and identify possible areas for improvement. Method We performed a risk assessment according to the predictive Bayesian approach in a typical anaesthesiologist-manned Norwegian helicopter emergency medical service (HEMS). The main focus of the risk assessment was the event in which a patient arrives in the emergency department without ETI despite a pre-hospital indication for it. Results In the risk assessment, we assigned a high probability (29%) to the event assessed, that a patient arrives without ETI despite a pre-hospital indication. However, several uncertainty factors in the risk assessment were identified, related to data quality, indications for use of ETI, patient outcome and the need for special training of ETI providers. Conclusion Our risk assessment indicated a high probability that trauma patients with an indication for pre-hospital ETI do not receive it in the studied HEMS. The uncertainty factors identified in the assessment should be further investigated to better understand the problem assessed and its consequences for patients. Better quality of pre-hospital airway management data could contribute to a reduction of these uncertainties.

  20. Receiver-based recovery of clipped OFDM signals for PAPR reduction: A Bayesian approach

    KAUST Repository

    Ali, Anum

    2014-01-01

    Clipping is one of the simplest peak-to-average power ratio reduction schemes for orthogonal frequency division multiplexing (OFDM). Deliberately clipping the transmission signal degrades system performance, and clipping mitigation is required at the receiver for information restoration. In this paper, we acknowledge the sparse nature of the clipping signal and propose a low-complexity Bayesian clipping estimation scheme. The proposed scheme utilizes a priori information about the sparsity rate and noise variance for enhanced recovery. At the same time, the proposed scheme is robust against inaccurate estimates of the clipping signal statistics. The undistorted phase property of the clipped signal, as well as the clipping likelihood, is utilized for enhanced reconstruction. Furthermore, motivated by the nature of modern OFDM-based communication systems, we extend our clipping reconstruction approach to multiple antenna receivers and multi-user OFDM. We also address the problem of channel estimation from pilots contaminated by the clipping distortion. Numerical findings are presented that depict favorable results for the proposed scheme compared to the established sparse reconstruction schemes.

  1. Characterization of a Saccharomyces cerevisiae fermentation process for production of a therapeutic recombinant protein using a multivariate Bayesian approach.

    Science.gov (United States)

    Fu, Zhibiao; Baker, Daniel; Cheng, Aili; Leighton, Julie; Appelbaum, Edward; Aon, Juan

    2016-05-01

    The principle of quality by design (QbD) has been widely applied to biopharmaceutical manufacturing processes. Process characterization is an essential step to implement the QbD concept to establish the design space and to define the proven acceptable ranges (PAR) for critical process parameters (CPPs). In this study, we present characterization of a Saccharomyces cerevisiae fermentation process using risk assessment analysis, statistical design of experiments (DoE), and the multivariate Bayesian predictive approach. The critical quality attributes (CQAs) and CPPs were identified with a risk assessment. The statistical model for each attribute was established using the results from the DoE study with consideration given to interactions between CPPs. Both the conventional overlapping contour plot and the multivariate Bayesian predictive approaches were used to establish the region of process operating conditions where all attributes met their specifications simultaneously. The quantitative Bayesian predictive approach was chosen to define the PARs for the CPPs, which apply to the manufacturing control strategy. Experience from the 10,000 L manufacturing scale process validation, including 64 continued process verification batches, indicates that the CPPs remain under a state of control and within the established PARs. The end product quality attributes were within their drug substance specifications. The probability generated with the Bayesian approach was also used as a tool to assess CPP deviations. This approach can be extended to characterize other production processes and to quantify a reliable operating region. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:799-812, 2016. © 2016 American Institute of Chemical Engineers.
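The multivariate Bayesian predictive criterion amounts to estimating, at each candidate operating point, the probability that all quality attributes meet specification simultaneously. The toy Monte Carlo below uses invented response models and spec limits in place of real posterior-predictive draws:

```python
# Toy Monte Carlo version of the Bayesian predictive criterion: estimate the
# probability that every quality attribute is in spec at an operating point.
# The response models, noise levels, and spec limits are all invented.
import random

random.seed(7)

def simulate_attributes(temp, ph):
    # Hypothetical posterior-predictive draws for two quality attributes.
    titer = 5.0 + 0.10 * (temp - 30) - 0.30 * abs(ph - 6.0) + random.gauss(0, 0.2)
    purity = 98.0 - 0.20 * abs(temp - 30) - 1.0 * abs(ph - 6.0) + random.gauss(0, 0.3)
    return titer, purity

def p_in_spec(temp, ph, n=5000):
    # Fraction of draws where ALL attributes meet their (invented) specs.
    ok = 0
    for _ in range(n):
        titer, purity = simulate_attributes(temp, ph)
        ok += (titer >= 4.5) and (purity >= 97.0)
    return ok / n

print(round(p_in_spec(30.0, 6.0), 3), round(p_in_spec(34.0, 6.8), 3))
```

A proven acceptable range would then be defined as the set of operating points where this joint in-spec probability exceeds a chosen reliability level.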

  2. Equifinality of formal (DREAM) and informal (GLUE) Bayesian approaches in hydrologic modeling?

    NARCIS (Netherlands)

    Vrugt, J.A.; ter Braak, C.J.F.; Gupta, H.V.; Robinson, B.A.

    2009-01-01

    In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. Particularly, there is strong disagreement whether an uncertainty framework should have its roots within a proper statistical (Bayesian) context,

  3. Intelligent condition monitoring of railway catenary systems : A Bayesian Network approach

    NARCIS (Netherlands)

    Wang, H.; Nunez Vicencio, Alfredo; Dollevoet, R.P.B.J.; Liu, Zhigang; Chen, Junwen; Spiryagin, Maksym; Gordon, Timothy; Cole, Colin; McSweeney, Tim

    2017-01-01

    This study proposes a Bayesian network (BN) dedicated for the intelligent condition monitoring of railway catenary systems. It combines five types of measurements related to catenary condition, namely the contact wire stagger, contact wire height, pantograph head displacement, pantograph head

  4. An Integrated Approach to Battery Health Monitoring using Bayesian Regression, Classification and State Estimation

    Data.gov (United States)

    National Aeronautics and Space Administration — The application of the Bayesian theory of managing uncertainty and complexity to regression and classification in the form of Relevance Vector Machine (RVM), and to...

  5. Receiver-based recovery of clipped ofdm signals for papr reduction: A bayesian approach

    KAUST Repository

    Ali, Anum; Al-Rabah, Abdullatif R.; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2014-01-01

    at the receiver for information restoration. In this paper, we acknowledge the sparse nature of the clipping signal and propose a low-complexity Bayesian clipping estimation scheme. The proposed scheme utilizes a priori information about the sparsity rate

  6. Robust Determinants of Growth in Asian Developing Economies: A Bayesian Panel Data Model Averaging Approach

    OpenAIRE

    LEON-GONZALEZ, Roberto; VINAYAGATHASAN, Thanabalasingam

    2013-01-01

    This paper investigates the determinants of growth in the Asian developing economies. We use Bayesian model averaging (BMA) in the context of a dynamic panel data growth regression to overcome the uncertainty over the choice of control variables. In addition, we use a Bayesian algorithm to analyze a large number of competing models. Among the explanatory variables, we include a non-linear function of inflation that allows for threshold effects. We use an unbalanced panel data set of 27 Asian ...
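The BMA step can be sketched as weighting each candidate model's estimate by its posterior model probability; the marginal likelihoods and coefficients below are invented for illustration, not results from the paper's panel of Asian economies:

```python
import numpy as np

# Three candidate growth regressions with different control-variable sets.
# Hypothetical log marginal likelihoods and per-model estimates of one effect.
log_ml = np.array([-120.3, -118.7, -119.5])
coef = np.array([0.8, 1.1, 0.9])

# Posterior model probabilities (equal prior model probabilities assumed),
# computed stably by subtracting the maximum before exponentiating.
w = np.exp(log_ml - log_ml.max())
w /= w.sum()

# The BMA estimate averages over model uncertainty instead of picking one model.
bma_estimate = float(w @ coef)
```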

  7. Evaluating impacts using a BACI design, ratios, and a Bayesian approach with a focus on restoration

    OpenAIRE

    Conner, Mary M.; Saunders, W. Carl; Bouwes, Nicolaas; Jordan, Chris

    2016-01-01

Before-after-control-impact (BACI) designs are an effective method to evaluate natural and human-induced perturbations on ecological variables when treatment sites cannot be randomly chosen. While effect sizes of interest can be tested with frequentist methods, using Bayesian Markov chain Monte Carlo (MCMC) sampling methods, probabilities of effect sizes, such as a ≥20 % increase in density after restoration, can be directly estimated. Although BACI and Bayesian methods are used widely for as...
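The kind of direct probability statement mentioned above falls out of MCMC output almost immediately. A minimal sketch, using synthetic normal draws in place of real posterior samples of site densities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for MCMC draws of mean density at impact/control sites, before/after.
before_impact  = rng.normal(10.0, 0.8, 50000)
after_impact   = rng.normal(13.0, 0.8, 50000)
before_control = rng.normal(10.0, 0.8, 50000)
after_control  = rng.normal(10.2, 0.8, 50000)

# BACI effect on the ratio scale: change at impact relative to change at control.
baci_ratio = (after_impact / before_impact) / (after_control / before_control)

# A direct probability statement about the effect size of interest,
# unavailable from a frequentist p-value:
p_increase_20 = float(np.mean(baci_ratio >= 1.20))
```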

  8. National evaluation of Chinese coastal erosion to sea level rise using a Bayesian approach

    International Nuclear Information System (INIS)

    Zhan, Q; Fan, X; Du, X; Zhu, J

    2014-01-01

In this paper a causal Bayesian network is developed to predict the decadal-scale shoreline evolution of China in response to sea-level rise. The Bayesian model defines relationships between six factors of the Chinese coastal system: coastal geomorphology, mean tide range, mean wave height, coastal slope, relative sea-level rise rate and shoreline erosion rate. Using the Bayesian probabilistic model, we make a quantitative assessment of China's shoreline evolution in response to different future sea-level rise rates. Results indicate that the probability of coastal erosion at high and very high rates increases from 28% to 32.3% when the relative sea-level rise rate is 4∼6 mm/a, and to 44.9% when the relative sea-level rise rate is more than 6 mm/a. A hindcast evaluation of the Bayesian model shows that the model correctly predicts 79.3% of the cases. Model testing indicates that the Bayesian model shows higher predictive capability for stable coasts and very highly eroding coasts than for moderately and highly eroding coasts. This study demonstrates that the Bayesian model is suited to predicting decadal-scale Chinese coastal erosion associated with sea-level rise
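A toy discrete version of such a network illustrates the two directions of reasoning it supports: predicting erosion from a sea-level-rise scenario, and inverting via Bayes' rule. All probabilities below are illustrative assumptions, not the paper's calibrated values:

```python
# Toy discrete network: shoreline erosion depends on relative sea-level rise (SLR).
prior_slr = {"low": 0.3, "mid": 0.4, "high": 0.3}
p_erosion_given_slr = {
    "low":  {"stable": 0.50, "moderate": 0.22, "high_or_very_high": 0.28},
    "mid":  {"stable": 0.42, "moderate": 0.25, "high_or_very_high": 0.33},
    "high": {"stable": 0.30, "moderate": 0.25, "high_or_very_high": 0.45},
}

def predict(slr):
    """Forward query: P(erosion class | SLR scenario)."""
    return p_erosion_given_slr[slr]

def posterior_slr(erosion):
    """Diagnostic query via Bayes' rule: P(SLR scenario | observed erosion class)."""
    unnorm = {s: prior_slr[s] * p_erosion_given_slr[s][erosion] for s in prior_slr}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}
```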

  9. El Niño, Sea Surface Temperature Anomaly and Coral Bleaching in the South Atlantic: A Chain of Events Modeled With a Bayesian Approach

    Science.gov (United States)

    Lisboa, D. S.; Kikuchi, R. K. P.; Leão, Zelinda M. A. N.

    2018-04-01

Coral bleaching represents one of the main climate-change-related threats to reef ecosystems. This research represents a methodological alternative for modeling this phenomenon, focused on assessing uncertainties and complexities with a low number of observations. To develop this model, intermittent reef monitoring data from the largest reef complex in the South Atlantic, collected over nine summers between 2000 and 2014, were used with remote sensing data to construct and train a seasonal bleaching prediction model. The Bayesian approach was used to construct the network, as it is suitable for hierarchically organizing local thermal variables and combining them with El Niño indicators from the preceding winter to generate accurate bleaching predictions for the coming season. Network count information from six environmental indicators was used to calculate the probability of bleaching, which is mainly influenced by the combined information of two thermal indices: one index is designed to track short-period anomalies in the early summer that are capable of triggering bleaching (SST of five consecutive days), and the other is responsible for tracking the accumulation of thermal stress over time, an index called the degree heating trimester (DHT). In addition to developing the network, this study conducted three tests of the model's applicability: (1) forecasting coral bleaching for the summer of 2016; (2) investigating the role of turbidity during bleaching episodes; and (3) using the model information to identify areas with a lower predisposition to bleaching events.

  10. Bayesian data analysis for newcomers.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    This article explains the foundational concepts of Bayesian data analysis using virtually no mathematical notation. Bayesian ideas already match your intuitions from everyday reasoning and from traditional data analysis. Simple examples of Bayesian data analysis are presented that illustrate how the information delivered by a Bayesian analysis can be directly interpreted. Bayesian approaches to null-value assessment are discussed. The article clarifies misconceptions about Bayesian methods that newcomers might have acquired elsewhere. We discuss prior distributions and explain how they are not a liability but an important asset. We discuss the relation of Bayesian data analysis to Bayesian models of mind, and we briefly discuss what methodological problems Bayesian data analysis is not meant to solve. After you have read this article, you should have a clear sense of how Bayesian data analysis works and the sort of information it delivers, and why that information is so intuitive and useful for drawing conclusions from data.
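The article's central point, that Bayesian output is directly interpretable, is easiest to see in the simplest conjugate case: a Beta prior on a proportion updated by binomial data. A minimal sketch with hypothetical counts:

```python
# Conjugate Beta-Binomial updating: prior Beta(a, b), then k successes in n trials.
def beta_binomial_update(a, b, k, n):
    return a + k, b + (n - k)

def beta_mean(a, b):
    return a / (a + b)

# Flat prior Beta(1, 1); observe 7 successes in 10 trials (hypothetical data).
a_post, b_post = beta_binomial_update(1, 1, 7, 10)
post_mean = beta_mean(a_post, b_post)   # posterior mean 8/12 ≈ 0.667
```

The posterior Beta(8, 4) can be read directly as a statement of belief about the proportion, which is the intuition the article appeals to.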

  11. A top-down approach for fabricating free-standing bio-carbon supercapacitor electrodes with a hierarchical structure

    OpenAIRE

    Yingzhi Li; Qinghua Zhang; Junxian Zhang; Lei Jin; Xin Zhao; Ting Xu

    2015-01-01

    Biomass has delicate hierarchical structures, which inspired us to develop a cost-effective route to prepare electrode materials with rational nanostructures for use in high-performance storage devices. Here, we demonstrate a novel top-down approach for fabricating bio-carbon materials with stable structures and excellent diffusion pathways; this approach is based on carbonization with controlled chemical activation. The developed free-standing bio-carbon electrode exhibits a high specific ca...

  12. A novel approach for choosing summary statistics in approximate Bayesian computation.

    Science.gov (United States)

    Aeschbacher, Simon; Beaumont, Mark A; Futschik, Andreas

    2012-11-01

The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ_anc = 4N_e u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ_anc ≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10⁻⁴ and 3.5 × 10⁻³ per locus per generation. The proportion of males with access to matings is estimated as ω ≈ 0.21, which is in good agreement with recent independent estimates.
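The ABC mechanics underlying the paper can be sketched with plain rejection sampling on a toy model (normal data with an unknown mean, where the sample mean happens to be sufficient); the boosting-based choice of statistics is the paper's contribution and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=100):
    """Toy model: n observations from Normal(theta, 1)."""
    return rng.normal(theta, 1.0, n)

def summary(x):
    return x.mean()   # the sample mean; sufficient in this toy model

observed = simulate(2.0)          # pretend these are the field data
s_obs = summary(observed)

# Rejection ABC: draw theta from the prior, simulate, keep theta when the
# simulated summary lands within epsilon of the observed summary.
epsilon = 0.1
prior_draws = rng.uniform(-5.0, 5.0, 50000)
accepted = [t for t in prior_draws
            if abs(summary(simulate(t)) - s_obs) < epsilon]
posterior_mean = float(np.mean(accepted))
```

The accepted draws approximate the posterior; shrinking epsilon trades acceptance rate for accuracy, which is where a good choice of summary statistic matters.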

  13. Bayesian approach to the analysis of neutron Brillouin scattering data on liquid metals

    Science.gov (United States)

    De Francesco, A.; Guarini, E.; Bafile, U.; Formisano, F.; Scaccia, L.

    2016-08-01

When the dynamics of liquids and disordered systems at mesoscopic level is investigated by means of inelastic scattering (e.g., neutron or x ray), spectra are often characterized by a poor definition of the excitation lines and spectroscopic features in general, and one important issue is to establish how many of these lines need to be included in the modeling function and to estimate their parameters. Furthermore, when strongly damped excitations are present, commonly used and widespread fitting algorithms are particularly affected by the choice of initial values of the parameters. An inadequate choice may lead to an inefficient exploration of the parameter space, resulting in the algorithm getting stuck in a local minimum. In this paper, we present a Bayesian approach to the analysis of neutron Brillouin scattering data in which the number of excitation lines is treated as unknown and estimated along with the other model parameters. We propose a joint estimation procedure based on a reversible-jump Markov chain Monte Carlo algorithm, which efficiently explores the parameter space, producing a probabilistic measure to quantify the uncertainty on the number of excitation lines as well as reliable parameter estimates. The method proposed could prove of great importance in extracting physical information from experimental data, especially when the detection of spectral features is complicated not only because of the properties of the sample, but also because of the limited instrumental resolution and count statistics. The approach is tested on a generated data set and then applied to real experimental spectra of neutron Brillouin scattering from a liquid metal, previously analyzed in a more traditional way.

  14. Improved Membership Probability for Moving Groups: Bayesian and Machine Learning Approaches

    Science.gov (United States)

    Lee, Jinhee; Song, Inseok

    2018-01-01

Gravitationally unbound loose stellar associations (i.e., young nearby moving groups: moving groups hereafter) have been intensively explored because they are important in planet and disk formation studies, exoplanet imaging, and age calibration. Among the many efforts devoted to the search for moving group members, a Bayesian approach (e.g., using the code BANYAN) has become popular recently because of the many advantages it offers. However, the resultant membership probability needs to be carefully adopted because of its sensitive dependence on input models. In this study, we have developed an improved membership calculation tool focusing on the beta-Pic moving group. We made three improvements for building the models used in BANYAN II: (1) updating the list of accepted members by re-assessing memberships in terms of position, motion, and age; (2) investigating member distribution functions in XYZ; and (3) exploring field star distribution functions in XYZUVW. Our improved tool can change membership probability by up to 70%. Membership probability is critical and must be better defined. For example, our code identifies only one third of the candidate members in SIMBAD that are believed to be kinematically associated with the beta-Pic moving group. Additionally, we performed a cluster analysis of young nearby stars using an unsupervised machine learning approach. As more moving groups and their members are identified, the complexity and ambiguity in moving group configuration have increased. To clarify this issue, we analyzed ~4,000 X-ray bright young stellar candidates. Here, we present the preliminary results. By re-identifying moving groups with the least human intervention, we expect to understand the composition of the solar neighborhood. Moreover, better-defined moving group membership will help us understand star formation and evolution in relatively low-density environments, especially for the low-mass stars that will be identified in the coming Gaia release.
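Kinematic membership probability of this kind is, at heart, a Bayes-rule ratio of two class-conditional densities. A one-dimensional sketch with invented velocity distributions (not BANYAN II's actual models) shows why the result depends so sensitively on the input models:

```python
from math import exp, pi, sqrt

def gauss(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# One-dimensional stand-in for kinematic membership: a single velocity component.
# All numbers are invented for illustration.
mu_group, sd_group = -10.0, 2.0   # tight moving-group velocity distribution
mu_field, sd_field = 0.0, 25.0    # broad field-star velocity distribution
prior_group = 0.01                # moving-group members are rare a priori

def membership_prob(v):
    """P(member | v) by Bayes' rule over the two populations."""
    num = prior_group * gauss(v, mu_group, sd_group)
    den = num + (1 - prior_group) * gauss(v, mu_field, sd_field)
    return num / den
```

Changing the assumed group or field distributions (the paper's improvements 2 and 3) directly reshapes this ratio, which is how model updates can shift membership probabilities so strongly.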

  15. A Bayesian belief network approach for assessing uncertainty in conceptual site models at contaminated sites

    Science.gov (United States)

    Thomsen, Nanna I.; Binning, Philip J.; McKnight, Ursula S.; Tuxen, Nina; Bjerg, Poul L.; Troldborg, Mads

    2016-05-01

A key component in the risk assessment of contaminated sites is the formulation of a conceptual site model (CSM). A CSM is a simplified representation of reality and forms the basis for the mathematical modeling of contaminant fate and transport at the site. The CSM should therefore identify the most important site-specific features and processes that may affect the contaminant transport behavior at the site. However, the development of a CSM will always be associated with uncertainties due to limited data and lack of understanding of the site conditions. CSM uncertainty is often found to be a major source of model error and it should therefore be accounted for when evaluating uncertainties in risk assessments. We present a Bayesian belief network (BBN) approach for constructing CSMs and assessing their uncertainty at contaminated sites. BBNs are graphical probabilistic models that are effective for integrating quantitative and qualitative information, and thus can strengthen decisions when empirical data are lacking. The proposed BBN approach facilitates a systematic construction of multiple CSMs, and then determines the belief in each CSM using a variety of data types and/or expert opinion at different knowledge levels. The developed BBNs combine data from desktop studies and initial site investigations with expert opinion to assess which of the CSMs are more likely to reflect the actual site conditions. The method is demonstrated on a Danish field site, contaminated with chlorinated ethenes. Four different CSMs are developed by combining two contaminant source zone interpretations (presence or absence of a separate phase contamination) and two geological interpretations (fractured or unfractured clay till). The beliefs in each of the CSMs are assessed sequentially based on data from three investigation stages (a screening investigation, a more detailed investigation, and an expert consultation) to demonstrate that the belief can be updated as more information
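The sequential belief updating over candidate CSMs can be sketched as repeated application of Bayes' rule, with the posterior after each investigation stage serving as the prior for the next. The stage likelihoods below are hypothetical, not the Danish field site's elicited values:

```python
import numpy as np

# Four candidate CSMs from two source and two geology interpretations.
csms = ["separate-phase / fractured till", "separate-phase / unfractured till",
        "dissolved / fractured till", "dissolved / unfractured till"]
belief = np.full(4, 0.25)                 # uniform prior belief over CSMs

# Hypothetical stage likelihoods P(stage evidence | CSM), one row per stage
# (screening, detailed investigation, expert consultation).
stage_likelihoods = [
    np.array([0.6, 0.4, 0.3, 0.2]),
    np.array([0.7, 0.3, 0.2, 0.1]),
    np.array([0.8, 0.4, 0.1, 0.1]),
]

for lik in stage_likelihoods:
    belief = belief * lik
    belief /= belief.sum()                # posterior becomes next stage's prior

best_csm = csms[int(belief.argmax())]
```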

  16. Impact of food, housing, and transportation insecurity on ART adherence: a hierarchical resources approach.

    Science.gov (United States)

    Cornelius, Talea; Jones, Maranda; Merly, Cynthia; Welles, Brandi; Kalichman, Moira O; Kalichman, Seth C

    2017-04-01

    Antiretroviral therapy (ART) has transformed HIV into a manageable illness. However, high levels of adherence must be maintained. Lack of access to basic resources (food, transportation, and housing) has been consistently associated with suboptimal ART adherence. Moving beyond such direct effects, this study takes a hierarchical resources approach in which the effects of access to basic resources on ART adherence are mediated through interpersonal resources (social support and care services) and personal resources (self-efficacy). Participants were 915 HIV-positive men and women living in Atlanta, GA, recruited from community centers and infectious disease clinics. Participants answered baseline questionnaires, and provided prospective data on ART adherence. Across a series of nested models, a consistent pattern emerged whereby lack of access to basic resources had indirect, negative effects on adherence, mediated through both lack of access to social support and services, and through lower treatment self-efficacy. There was also a significant direct effect of lack of access to transportation on adherence. Lack of access to basic resources negatively impacts ART adherence. Effects for housing instability and food insecurity were fully mediated through social support, access to services, and self-efficacy, highlighting these as important targets for intervention. Targeting service supports could be especially beneficial due to the potential to both promote adherence and to link clients with other services to supplement food, housing, and transportation. Inability to access transportation had a direct negative effect on adherence, suggesting that free or reduced cost transportation could positively impact ART adherence among disadvantaged populations.

  17. Association between prolonged breast-feeding and early childhood caries: a hierarchical approach.

    Science.gov (United States)

    Nunes, Ana Margarida Melo; Alves, Claudia Maria Coelho; Borba de Araújo, Fernando; Ortiz, Tânia Mara Lopes; Ribeiro, Marizélia Rodrigues Costa; Silva, Antônio Augusto Moura da; Ribeiro, Cecília Claudia Costa

    2012-12-01

This study was conducted to investigate the association between prolonged breast-feeding and early childhood caries (ECC), with adjustment for important confounders, using a hierarchical approach. This retrospective cohort study involved 260 low-income children (18-42 months). The number of decayed teeth was used as a measure of caries. Following a theoretical framework, the hierarchical model was built in a forward fashion, by adding the following levels in succession: level 1: age; level 2: social variables; level 3: health variables; level 4: behavioral variables; level 5: oral hygiene-related variables; level 6: oral hygiene quality measured by visible plaque; and level 7: contamination by mutans streptococci. Sequential forward multiple Poisson regression analysis was employed. Breast-feeding was not a risk factor for ECC after adjustment for some confounders (incidence density ratio, 1.15; 95% confidence interval, 0.84-1.59; P = 0.363). Prolonged breast-feeding was not a risk factor for ECC, while age, high sucrose consumption between main meals and the quality of oral hygiene were associated with the disease in children. © 2012 John Wiley & Sons A/S.
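A reduced sketch of the modelling step: a Poisson regression (fit here by iteratively reweighted least squares) in which the exposure's incidence density ratio is read off as exp(coefficient) once a confounder is adjusted for. The data are synthetic, with a null exposure effect built in, echoing the study's finding:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression with a log link, fit by iteratively reweighted
    least squares (the standard GLM maximum-likelihood algorithm)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu          # working response
        W = mu                                # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(3)
n = 2000
breastfed = rng.integers(0, 2, n)             # hypothetical exposure
sugar = rng.integers(0, 2, n)                 # confounder from a more distal level
y = rng.poisson(np.exp(0.2 + 0.8 * sugar))    # true model: sugar only, null exposure

X = np.column_stack([np.ones(n), breastfed, sugar])
beta = poisson_irls(X, y)
idr_breastfed = float(np.exp(beta[1]))        # incidence density ratio, near 1
idr_sugar = float(np.exp(beta[2]))            # near exp(0.8)
```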

  18. Comparing hierarchical models via the marginalized deviance information criterion.

    Science.gov (United States)

    Quintero, Adrian; Lesaffre, Emmanuel

    2018-07-20

    Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Regardless of the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
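The core of the proposed mDIC computation, integrating latent variables out of the likelihood by averaging over replicate draws, can be sketched for a random-intercept normal model (chosen here because the marginal likelihood is also available in closed form for checking; the paper's method applies the same idea to MCMC output generally):

```python
import numpy as np

rng = np.random.default_rng(4)

def marginal_loglik(y, mu, sigma_b, sigma_e, n_rep=5000):
    """log p(y | mu, sigma_b, sigma_e) for one subject of a random-intercept
    normal model, with the latent intercept b ~ N(0, sigma_b^2) integrated
    out by Monte Carlo over replicate draws."""
    b = rng.normal(0.0, sigma_b, n_rep)                  # replicate latents
    ll = np.array([np.sum(-0.5 * np.log(2 * np.pi * sigma_e**2)
                          - 0.5 * ((y - mu - bj) / sigma_e)**2)
                   for bj in b])
    m = ll.max()
    return float(m + np.log(np.mean(np.exp(ll - m))))    # stable log-mean-exp

y_i = np.array([1.2, 0.8, 1.1])                          # one subject's data
# Marginalized deviance contribution at one parameter draw; averaging such
# contributions over MCMC draws is the expensive ingredient of mDIC.
dev = -2 * marginal_loglik(y_i, mu=1.0, sigma_b=0.5, sigma_e=0.3)
```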

  19. Association between parental guilt and oral health problems in preschool children: a hierarchical approach.

    Science.gov (United States)

    Gomes, Monalisa Cesarino; Clementino, Marayza Alves; Pinto-Sarmento, Tassia Cristina de Almeida; Martins, Carolina Castro; Granville-Garcia, Ana Flávia; Paiva, Saul Martins

    2014-08-16

    Dental caries and traumatic dental injury (TDI) can play an important role in the emergence of parental guilt, since parents feel responsible for their child's health. The aim of the present study was to evaluate the influence of oral health problems among preschool children on parental guilt. A preschool-based, cross-sectional study was carried out with 832 preschool children between three and five years of age in the city of Campina Grande, Brazil. Parents/caregivers answered the Brazilian version of the Early Childhood Oral Health Impact Scale (B-ECOHIS). The item "parental guilt" was the dependent variable. Questionnaires addressing socio-demographic variables (child's sex, child's age, parent's/caregiver's age, mother's schooling, type of preschool and household income), history of toothache and health perceptions (general and oral) were also administered. Clinical exams for dental caries and TDI were performed by three dentists who had undergone a training and calibration exercise (Kappa: 0.85-0.90). Poisson hierarchical regression was used to determine the significance of associations between parental guilt and oral health problems (α = 5%). The multivariate model was carried out on three levels using a hierarchical approach from distal to proximal determinants: 1) socio-demographic aspects; 2) health perceptions; and 3) oral health problems. The frequency of parental guilt was 22.8%. The following variables were significantly associated with parental guilt: parental perception of child's oral health as poor (PR = 2.010; 95% CI: 1.502-2.688), history of toothache (PR = 2.344; 95% CI: 1.755-3.130), cavitated lesions (PR = 2.002; 95% CI: 1.388-2.887), avulsion/luxation (PR = 2.029; 95% CI: 1.141-3.610) and tooth discoloration (PR = 1.540; 95% CI: 1.169-2.028). Based on the present findings, parental guilt increases with the occurrence of oral health problems that require treatment, such as dental caries and TDI of greater severity. Parental perceptions of

  20. Hierarchical eco-restoration: A systematical approach to removal of COD and dissolved nutrients from an intensive agricultural area

    Energy Technology Data Exchange (ETDEWEB)

    Wu Yonghong, E-mail: yhwu@issas.ac.c [State Key Laboratory of Soil and Sustainable Agriculture, Institute of Soil Science, Chinese Academy of Sciences, 71 Beijing East Road, Nanjing 210008 (China); Graduate Schools, Chinese Academy of Sciences, Beijing 100049 (China); Hu Zhengyi [Graduate Schools, Chinese Academy of Sciences, Beijing 100049 (China); Yang Linzhang, E-mail: lzyang@issas.ac.c [State Key Laboratory of Soil and Sustainable Agriculture, Institute of Soil Science, Chinese Academy of Sciences, 71 Beijing East Road, Nanjing 210008 (China)

    2010-10-15

    A systematical approach based on hierarchical eco-restoration system for the simultaneous removal of COD and dissolved nutrients was proposed and applied in a complex residential-cropland area in Kunming, China from August 2006 to August 2008, where the self-purifying capacity of the agricultural ecosystem had been lost. The system includes four main parts: (1) fertilizer management and agricultural structure optimization, (2) nutrients reuse, (3) wastewater treatment, and (4) catchment restoration. The results showed that the average removal efficiencies were 90% for COD, 93% for ammonia, 94% for nitrate and 71% for total dissolved phosphorus (TDP) when the hierarchical eco-restoration agricultural system was in a relatively steady-state condition. The emergence of 14 species of macrophytes and 4 species of zoobenthos indicated that the growth conditions for the plankton were improved. The results demonstrated that this promising and environmentally benign hierarchical eco-restoration system could decrease the output of nutrients and reduce downstream eutrophication risk. - A systematical approach based on hierarchical eco-restoration system has proven highly effective for simultaneously removing COD and dissolved nutrients, decreasing the output of nutrients, and reducing the eutrophic risk of downstream surface waters.

  1. Hierarchical eco-restoration: A systematical approach to removal of COD and dissolved nutrients from an intensive agricultural area

    International Nuclear Information System (INIS)

    Wu Yonghong; Hu Zhengyi; Yang Linzhang

    2010-01-01

    A systematical approach based on hierarchical eco-restoration system for the simultaneous removal of COD and dissolved nutrients was proposed and applied in a complex residential-cropland area in Kunming, China from August 2006 to August 2008, where the self-purifying capacity of the agricultural ecosystem had been lost. The system includes four main parts: (1) fertilizer management and agricultural structure optimization, (2) nutrients reuse, (3) wastewater treatment, and (4) catchment restoration. The results showed that the average removal efficiencies were 90% for COD, 93% for ammonia, 94% for nitrate and 71% for total dissolved phosphorus (TDP) when the hierarchical eco-restoration agricultural system was in a relatively steady-state condition. The emergence of 14 species of macrophytes and 4 species of zoobenthos indicated that the growth conditions for the plankton were improved. The results demonstrated that this promising and environmentally benign hierarchical eco-restoration system could decrease the output of nutrients and reduce downstream eutrophication risk. - A systematical approach based on hierarchical eco-restoration system has proven highly effective for simultaneously removing COD and dissolved nutrients, decreasing the output of nutrients, and reducing the eutrophic risk of downstream surface waters.

  2. An Evolutionary Approach for Optimizing Hierarchical Multi-Agent System Organization

    OpenAIRE

    Shen, Zhiqi; Yu, Ling; Yu, Han

    2014-01-01

    It has been widely recognized that the performance of a multi-agent system is highly affected by its organization. A large scale system may have billions of possible ways of organization, which makes it impractical to find an optimal choice of organization using exhaustive search methods. In this paper, we propose a genetic algorithm aided optimization scheme for designing hierarchical structures of multi-agent systems. We introduce a novel algorithm, called the hierarchical genetic algorithm...
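A minimal genetic algorithm of the kind described, on an invented toy encoding (one bit per agent assigning it to one of two managers, with a fitness that rewards balanced hierarchies); the paper's hierarchical genetic algorithm and real fitness function are not reproduced here:

```python
import random

random.seed(5)

N_AGENTS, POP, GENS = 8, 40, 60

def fitness(genome):
    # Invented objective: a balanced hierarchy (equal spans of control) is best.
    return -abs(sum(genome) - N_AGENTS // 2)

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, N_AGENTS)   # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_AGENTS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]              # truncation selection keeps the elite
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
```

This avoids exhaustive search over the exponentially many organizations, which is the motivation the abstract gives.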

  3. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-01-01

Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model.Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates.Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.

  4. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data.

    Science.gov (United States)

    Sepúlveda, Nuno; Campino, Susana G; Assefa, Samuel A; Sutherland, Colin J; Pain, Arnab; Clark, Taane G

    2013-02-26

    The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data.
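The motivation for the Poisson-Gamma (negative binomial) model, coverage counts far more dispersed than a Poisson allows, can be sketched on synthetic data with one amplified region; the moment estimator below stands in for the paper's full hierarchical fit:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic window coverage: baseline depth ~50x, one region amplified to ~150x.
# numpy's negative_binomial(n, p) has mean n*(1-p)/p, so p = n/(n + mean).
baseline = rng.negative_binomial(10, 10 / 60, size=990)     # mean 50
amplified = rng.negative_binomial(10, 10 / 160, size=10)    # mean 150
coverage = np.concatenate([baseline, amplified])

mean, var = coverage.mean(), coverage.var()
overdispersion = var / mean          # ~1 under a Poisson model; far larger here

# Poisson-Gamma mixture = negative binomial: var = mean + mean^2 / r
r_hat = mean**2 / (var - mean)       # method-of-moments size parameter

# Flag upper-tail windows as candidate amplifications.
threshold = np.quantile(coverage, 0.99)
candidates = np.where(coverage > threshold)[0]
```

A Poisson fit to such data would treat the genuine baseline spread as signal, inflating false positives, which is the failure mode the hierarchical models guard against.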

  5. A Poisson hierarchical modelling approach to detecting copy number variation in sequence coverage data

    KAUST Repository

    Sepúlveda, Nuno

    2013-02-26

    Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.
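    To make the overdispersion point concrete, the following is a minimal sketch of negative binomial (i.e., Poisson-Gamma) CNV calling from per-window coverage. The simulated counts, the moment-based parameter fit, and the 1e-3 tail cutoff are illustrative assumptions, not the authors' published algorithm.

```python
# Minimal Poisson-Gamma (negative binomial) CNV caller on per-window
# coverage counts. Illustrative only: real pipelines also handle GC bias,
# mappability, and multiple testing.
import numpy as np
from scipy.stats import nbinom

def call_cnv(coverage, alpha=1e-3):
    """Flag windows whose read count falls in the extreme tails of a
    negative binomial fitted to genome-wide coverage by moments."""
    coverage = np.asarray(coverage, dtype=float)
    mu, var = coverage.mean(), coverage.var(ddof=1)
    if var <= mu:                       # no overdispersion: near-Poisson case
        var = mu * 1.001
    # Method of moments: var = mu + mu**2 / r  =>  r = mu**2 / (var - mu)
    r = mu ** 2 / (var - mu)
    p = r / (r + mu)
    lo, hi = nbinom.ppf(alpha, r, p), nbinom.ppf(1 - alpha, r, p)
    return np.where(coverage < lo, "deletion",
                    np.where(coverage > hi, "amplification", "normal"))

rng = np.random.default_rng(0)
cov = rng.negative_binomial(20, 0.4, size=1000)   # overdispersed baseline
cov[100] = 0                                      # simulated deletion
cov[500] = 200                                    # simulated amplification
calls = call_cnv(cov)
print(calls[100], calls[500])                     # deletion amplification
```

    A plain Poisson fit to the same data would set both cutoffs far too tightly, which is exactly the failure mode the abstract describes.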

  6. Stakeholder perceptions of soil managements in the Canyoles watershed. A Bayesian Belief Network approach

    Science.gov (United States)

    Burguet Marimón, Maria; Quinn, Claire; Stringer, Lindsay; Cerdà, Artemi

    2017-04-01

    not fight against these problems as, on the one hand, they do not realize that non-sustainable soil erosion rates reduce soil fertility, and, on the other hand, there are several cultural issues that lead them towards bare soil, which they see as a tidy way to keep their properties. However, more research on the BBN approach is needed to obtain a holistic view of how farmers perceive the different soil conservation strategies. Acknowledgements. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 603498 (RECARE project). References Cain, J. 2001. Planning improvements in natural resources management: Guidelines for using Bayesian networks to support the planning and management of development programmes in the water sector and beyond. Centre for Ecology & Hydrology, Wallingford, UK. Marques, M. J., R. Bienes, J. Cuadrado, M. Ruiz-Colmenero, C. Barbero-Sierra, and A. Velasco. 2015. Analysing Perceptions, Attitudes and Responses of Winegrowers about Sustainable Land Management in Central Spain. Land Degradation and Development 26 (5): 458-467. doi:10.1002/ldr.2355. Tengberg, A., F. Radstake, K. Zhang, and B. Dunn. 2016. Scaling Up of Sustainable Land Management in the Western People's Republic of China: Evaluation of a 10-Year Partnership. Land Degradation and Development 27 (2): 134-144. doi:10.1002/ldr.2270. Teshome, A., J. de Graaff, C. Ritsema, and M. Kassie. 2016. Farmers' Perceptions about the Influence of Land Quality, Land Fragmentation and Tenure Systems on Sustainable Land Management in the North Western Ethiopian Highlands. Land Degradation and Development 27 (4): 884-898. doi:10.1002/ldr.2298.
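    The mechanics of a Bayesian Belief Network like the one used in this study can be sketched with a toy two-node network. The structure (perceived erosion risk influencing adoption of conservation measures) and all probability values below are made-up illustrative numbers, not values elicited from the Canyoles stakeholders.

```python
# Toy two-node Bayesian Belief Network: "perceives erosion risk" -> "adopts
# conservation measures". Inference by exact enumeration (Bayes' rule).
p_risk = {True: 0.3, False: 0.7}                  # P(perceives risk)
p_adopt = {True: {True: 0.6, False: 0.4},         # P(adopt | risk=True)
           False: {True: 0.1, False: 0.9}}        # P(adopt | risk=False)

def posterior_risk_given_adopt(adopt):
    """P(perceives risk | adoption observed), by enumerating the joint."""
    joint = {r: p_risk[r] * p_adopt[r][adopt] for r in (True, False)}
    return joint[True] / sum(joint.values())

# Observing adoption raises the belief that the farmer perceives the risk
# from the 0.3 prior to 0.72.
print(round(posterior_risk_given_adopt(True), 3))   # 0.72
```

    Real BBN tools scale this enumeration to many nodes, but the update rule is the same.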

  7. Integrating Crowdsourced Data with a Land Cover Product: A Bayesian Data Fusion Approach

    Directory of Open Access Journals (Sweden)

    Sarah Gengler

    2016-06-01

    Full Text Available For many environmental applications, an accurate spatial mapping of land cover is a major concern. Currently, land cover products derived from satellite data are expected to offer a fast and inexpensive way of mapping large areas. However, the quality of these products may also largely depend on the area under study. As a result, it is common that various products disagree with each other, and the assessment of their respective quality still relies on ground validation datasets. Recently, crowdsourced data have been suggested as an alternative source of information that might help overcome this problem. However, crowdsourced data remain largely discarded in scientific studies because they typically come without quality assurance. The aim of this paper is to present an efficient methodology that allows the user to encode the information brought by crowdsourced data even if no prior quality estimation is at hand, and possibly to fuse this information with existing land cover products in order to improve their accuracy. It is first suggested that information brought by volunteers can be coded as a set of inequality constraints on the probabilities of the various land use classes at the visited places. This in turn allows estimating optimal probabilities based on a maximum entropy principle and proceeding afterwards with a spatial interpolation of the volunteers’ information. Finally, a Bayesian data fusion approach can be used to fuse multiple volunteers’ contributions with a remotely-sensed land cover product. This methodology is illustrated here by focusing on the mapping of croplands in Ethiopia, where the aim is to improve a cropland map derived from a land cover product with mixed performance. It is shown how crowdsourced information can substantially improve the quality of the final product. The corresponding results also suggest that a prior assessment of remotely-sensed data quality can seriously improve the benefit
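    The fusion step can be sketched as a normalized product of class probabilities, assuming the satellite product and the volunteers' interpolated information are conditionally independent given the true class. This naive-Bayes-style combination is a simplified stand-in for the paper's full maximum-entropy pipeline, and the class names and numbers are invented for illustration.

```python
# Bayesian fusion of two probability assessments over land cover classes:
# posterior(c) is proportional to prior(c) * product_lik(c) * volunteer_lik(c).
def fuse(prior, product_lik, volunteer_lik):
    post = {c: prior[c] * product_lik[c] * volunteer_lik[c] for c in prior}
    z = sum(post.values())                      # normalizing constant
    return {c: v / z for c, v in post.items()}

prior = {"cropland": 0.5, "grassland": 0.3, "forest": 0.2}
product_lik = {"cropland": 0.4, "grassland": 0.4, "forest": 0.2}   # satellite
volunteer_lik = {"cropland": 0.7, "grassland": 0.2, "forest": 0.1} # crowd

posterior = fuse(prior, product_lik, volunteer_lik)
print(max(posterior, key=posterior.get))        # cropland
```

    When the two sources agree, the posterior sharpens; when they conflict, it hedges between them, which is the behaviour the paper exploits at pixels where the satellite product is unreliable.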

  8. Dissection of a Complex Disease Susceptibility Region Using a Bayesian Stochastic Search Approach to Fine Mapping.

    Directory of Open Access Journals (Sweden)

    Chris Wallace

    2015-06-01

    Full Text Available Identification of candidate causal variants in regions associated with risk of common diseases is complicated by linkage disequilibrium (LD) and multiple association signals. Nonetheless, accurate maps of these variants are needed, both to fully exploit detailed cell-specific chromatin annotation data to highlight disease causal mechanisms and cells, and for design of the functional studies that will ultimately be required to confirm causal mechanisms. We adapted a Bayesian evolutionary stochastic search algorithm to the fine mapping problem, and demonstrated its improved performance over conventional stepwise and regularised regression through simulation studies. We then applied it to fine map the established multiple sclerosis (MS) and type 1 diabetes (T1D) associations in the IL-2RA (CD25) gene region. For T1D, both stepwise and stochastic search approaches identified four T1D association signals, with the major effect tagged by the single nucleotide polymorphism rs12722496. In contrast, for MS, the stochastic search found two distinct competing models: a single candidate causal variant, tagged by rs2104286 and reported previously using stepwise analysis; and a more complex model with two association signals, one of which was tagged by the major T1D-associated rs12722496 and the other by rs56382813. There is low to moderate LD between rs2104286 and both rs12722496 and rs56382813 (r² ≈ 0.3) and our two-SNP model could not be recovered through a forward stepwise search after conditioning on rs2104286. Both signals in the two-variant model for MS affect CD25 expression on distinct subpopulations of CD4+ T cells, which are key cells in the autoimmune process. The results support a shared causal variant for T1D and MS. Our study illustrates the benefit of using a purposely designed model search strategy for fine mapping and the advantage of combining disease and protein expression data.
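    The idea of searching model space rather than adding variants one at a time can be sketched with a plain random add/drop walk over SNP subsets scored by BIC on simulated data. This is an illustration of stochastic model search in general, not the authors' evolutionary stochastic search algorithm, and the data, effect sizes, and move rule are all invented.

```python
# Stochastic search over SNP subsets: propose toggling a random SNP in or
# out of the model, keep the move if it improves BIC. Simulated data with
# two causal SNPs (columns 2 and 7).
import numpy as np

rng = np.random.default_rng(1)
n, p = 400, 10
X = rng.normal(size=(n, p))                       # 10 candidate SNP dosages
y = 1.5 * X[:, 2] - 1.0 * X[:, 7] + rng.normal(size=n)

def bic(subset):
    """BIC of the least-squares fit using the given SNP columns."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

model = set()
score = bic(model)
for _ in range(2000):
    j = int(rng.integers(p))
    cand = model ^ {j}                            # toggle SNP j in/out
    s = bic(cand)
    if s < score:                                 # keep only improving moves
        model, score = cand, s

print(sorted(model))                              # causal SNPs should appear
```

    Unlike forward stepwise selection, such a search can revisit and drop earlier choices, which is why it can surface competing multi-SNP models that conditioning on a single lead SNP would hide.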

  9. A Bayesian approach to in silico blood-brain barrier penetration modeling.

    Science.gov (United States)

    Martins, Ines Filipa; Teixeira, Ana L; Pinheiro, Luis; Falcao, Andre O

    2012-06-25

    The human blood-brain barrier (BBB) is a membrane that protects the central nervous system (CNS) by restricting the passage of solutes. The development of any new drug must take into account its existence whether for designing new molecules that target components of the CNS or, on the other hand, to find new substances that should not penetrate the barrier. Several studies in the literature have attempted to predict BBB pene