Applying Hierarchical Bayesian Neural Network in Failure Time Prediction
Directory of Open Access Journals (Sweden)
Ling-Jing Kao
2012-01-01
With rapid technological development, product failure time prediction has become an even harder task because only a few failures are recorded in product life tests. Classical statistical models rely on asymptotic theory and cannot guarantee that the estimator has finite-sample properties. To solve this problem, we apply the hierarchical Bayesian neural network (HBNN) approach to predict failure time and use the Gibbs sampler, a Markov chain Monte Carlo (MCMC) method, to estimate model parameters. In the proposed method, a hierarchical structure is specified to study the heterogeneity among products. Engineers can use the heterogeneity estimates to identify the causes of quality differences and further enhance product quality. To demonstrate the effectiveness of the proposed hierarchical Bayesian neural network model, its prediction performance is evaluated using multiple performance measurement criteria. A sensitivity analysis of the proposed model is also conducted using different numbers of hidden nodes and training sample sizes. The results show that HBNN can provide not only the predictive distribution but also heterogeneous parameter estimates for each path.
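As an illustrative sketch of the hierarchical idea in this record (not the authors' HBNN), the two Gibbs updates for a simple hierarchical Gaussian model — path-level means drawn around a population mean — can be written directly; the variances, priors, and toy data here are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: log failure times for 3 product "paths", each path has
# its own mean theta_j drawn around a shared population mean mu.
data = [rng.normal(5.0, 0.3, 8), rng.normal(5.5, 0.3, 8), rng.normal(4.8, 0.3, 8)]

sigma2 = 0.09          # known observation variance (assumption)
tau2 = 1.0             # known between-path variance (assumption)
mu0, s02 = 5.0, 10.0   # vague prior on the population mean

n_iter, burn = 2000, 500
mu = 5.0
thetas = np.zeros(len(data))
draws = []

for it in range(n_iter):
    # Gibbs step 1: each path mean given the population mean
    for j, y in enumerate(data):
        prec = len(y) / sigma2 + 1.0 / tau2
        mean = (y.sum() / sigma2 + mu / tau2) / prec
        thetas[j] = rng.normal(mean, prec ** -0.5)
    # Gibbs step 2: population mean given the path means
    prec = len(data) / tau2 + 1.0 / s02
    mean = (thetas.sum() / tau2 + mu0 / s02) / prec
    mu = rng.normal(mean, prec ** -0.5)
    if it >= burn:
        draws.append((mu, thetas.copy()))

mus = np.array([d[0] for d in draws])
# Posterior mean of the population mean, plus per-path heterogeneity
print(round(float(mus.mean()), 2))
```

The per-path `thetas` draws are the heterogeneity estimates the abstract refers to: their posterior spread shows how much paths differ from the population.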
Bayesian model selection applied to artificial neural networks used for water resources modeling
Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.
2008-04-01
Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.
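The record above estimates model evidence by MCMC; as a much lighter stand-in for the same model-selection logic, the Bayesian information criterion (BIC) approximates −2·log-evidence for nested regression models. This sketch (polynomial fits rather than ANNs, an assumption for brevity) picks a complexity level by penalized fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a quadratic; candidates are polynomial degrees
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)

def bic(deg):
    """BIC = -2*loglik + k*log(n); lower is better.  Acts as a crude
    Laplace approximation to the negative log model evidence."""
    coeffs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, deg + 1
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + k * np.log(n)

scores = {d: bic(d) for d in range(5)}
best = min(scores, key=scores.get)
print(best)  # the quadratic should typically score best
```

The same trade-off — fit improvement versus a complexity penalty tied to the evidence — is what the full MCMC evidence estimation in the paper resolves more rigorously.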
Bayesian modeling and classification of neural signals
Lewicki, Michael S.
1994-01-01
Signal processing and classification algorithms often have limited applicability resulting from an inaccurate model of the signal's underlying structure. We present here an efficient, Bayesian algorithm for modeling a signal composed of the superposition of brief, Poisson-distributed functions. This methodology is applied to the specific problem of modeling and classifying extracellular neural waveforms, which are composed of a superposition of an unknown number of action potentials (APs). ...
Congdon, Peter
2014-01-01
This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU...
Bayesian Recurrent Neural Network for Language Modeling.
Chien, Jen-Tzung; Ku, Yuan-Chu
2016-02-01
A language model (LM) calculates the probability of a word sequence and provides the solution to word prediction for a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the many parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter through maximization of the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance by applying the rapid BRNN-LM under different conditions.
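The "regularized cross-entropy" objective in this record is just MAP estimation: cross-entropy plus a Gaussian-prior penalty on the weights. A minimal sketch (a single softmax layer rather than an RNN, an assumption to keep it short):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def regularized_xent(W, X, labels, alpha):
    """Cross-entropy plus the Gaussian-prior penalty (alpha/2)||W||^2.

    Minimizing this is MAP estimation under a zero-mean Gaussian prior
    with precision alpha on the weights -- the regularizer the
    Bayesian treatment tunes via the marginal likelihood."""
    probs = softmax(X @ W)
    n = X.shape[0]
    xent = -np.log(probs[np.arange(n), labels]).mean()
    return xent + 0.5 * alpha * np.sum(W * W)

X = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
W = np.array([[2.0, -2.0], [-2.0, 2.0]])
# A nonzero alpha strictly increases the objective for nonzero weights
print(regularized_xent(W, X, labels, 0.0) < regularized_xent(W, X, labels, 0.1))  # True
```

In the paper, `alpha` is not hand-picked but estimated by maximizing the marginal likelihood.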
Option Pricing Using Bayesian Neural Networks
Pires, Michael Maio
2007-01-01
Options have been a field of much study because of the complexity involved in pricing them. The Black-Scholes equations were developed to price options, but they are valid only for European-style options. Pricing American-style options adds complexity, which is why the use of neural networks has been proposed. Neural networks are able to predict outcomes based on past data. The inputs to the networks here are stock volatility, strike price, and time to maturity, with the output of the network being the call option price. Two Bayesian neural network techniques are used: automatic relevance determination (with a Gaussian approximation) and a hybrid Monte Carlo method, both applied to multi-layer perceptrons.
Email Spam Filter using Bayesian Neural Networks
Directory of Open Access Journals (Sweden)
Nibedita Chakraborty
2012-03-01
Nowadays, e-mail is becoming one of the fastest and most economical forms of communication, but it is prone to misuse. One such misuse is the posting of unsolicited, unwanted e-mails known as spam or junk e-mails. This paper presents and discusses an implementation of a spam filtering system. The idea is to use a neural network trained to recognize different forms of frequently used words in spam mails. The Bayesian ANN is trained with finite sample sizes to approximate the ideal observer. This strategy can provide better spam filtering than existing static spam filters.
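The record above uses a Bayesian ANN; as a minimal illustration of the underlying word-statistics idea, here is a classical naive Bayes filter instead (a simpler stand-in, not the paper's network), with a tiny hand-made corpus as the assumed training data:

```python
from collections import Counter
import math

spam = ["win money now", "free money offer", "win free prize"]
ham = ["meeting at noon", "project status update", "lunch at noon"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_c, ham_c = word_counts(spam), word_counts(ham)
vocab = set(spam_c) | set(ham_c)

def log_score(msg, counts, class_docs):
    # Laplace-smoothed log P(words | class) + log P(class)
    total = sum(counts.values())
    score = math.log(class_docs / (len(spam) + len(ham)))
    for w in msg.split():
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def is_spam(msg):
    return log_score(msg, spam_c, len(spam)) > log_score(msg, ham_c, len(ham))

print(is_spam("free money"))       # True
print(is_spam("project meeting"))  # False
```

A trained network replaces these per-word probability tables with learned weights, but the decision — compare class-conditional evidence for the words present — is the same in spirit.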
Parameter estimation of general regression neural network using Bayesian approach
Choir, Achmad Syahrul; Prasetyo, Rindang Bangun; Ulama, Brodjol Sutijo Suprih; Iriawan, Nur; Fitriasari, Kartika; Dokhi, Mohammad
2016-02-01
The general regression neural network (GRNN) has been applied to a large number of forecasting/prediction problems. Generally, there are two types of GRNN: the GRNN based on kernel density estimation, and the mixture-based GRNN (MBGRNN) based on an adaptive mixture model. The main problem in GRNN modeling lies in how its parameters are estimated. In this paper, we propose a Bayesian approach, with computation via Markov chain Monte Carlo (MCMC) algorithms, for estimating the MBGRNN parameters. The method is applied in a simulation study, and its performance is measured using MAPE, MAE, and RMSE. The application of the Bayesian method to estimate MBGRNN parameters using MCMC is straightforward, but it needs many iterations to achieve convergence.
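The kernel-density GRNN mentioned first in this record has a closed form: the prediction is a Gaussian-kernel-weighted average of the training targets (the Nadaraya-Watson estimator). A sketch, with the bandwidth chosen arbitrarily for the example:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.3):
    """Kernel-density GRNN: the prediction at each query point is a
    Gaussian-kernel-weighted average of the training targets."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(x)
xq = np.array([np.pi / 2])
pred = grnn_predict(x, y, xq)[0]
print(round(float(pred), 3))  # close to sin(pi/2) = 1
```

The only free parameter here is the bandwidth `sigma`; the MBGRNN of the paper replaces the fixed kernel with an adaptive mixture whose parameters are what the Bayesian/MCMC machinery estimates.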
Directory of Open Access Journals (Sweden)
Benjamin W. Y. Lo
2013-01-01
Objective. The novel clinical prediction approach of Bayesian neural networks with fuzzy logic inferences is created and applied to derive prognostic decision rules in cerebral aneurysmal subarachnoid hemorrhage (aSAH). Methods. The approach of Bayesian neural networks with fuzzy logic inferences was applied to data from five trials of tirilazad for aneurysmal subarachnoid hemorrhage (3551 patients). Results. Bayesian meta-analyses of observational studies on aSAH prognostic factors gave generalizable posterior distributions of population mean log odds ratios (ORs). Similar trends were noted in Bayesian and linear regression ORs. Significant outcome predictors include normal motor response, cerebral infarction, history of myocardial infarction, cerebral edema, history of diabetes mellitus, fever on day 8, prior subarachnoid hemorrhage, admission angiographic vasospasm, neurological grade, intraventricular hemorrhage, ruptured aneurysm size, history of hypertension, vasospasm day, age, and mean arterial pressure. Heteroscedasticity was present in the nontransformed dataset. Artificial neural networks found nonlinear relationships with 11 hidden nodes in one layer, using the multilayer perceptron model. Fuzzy logic decision rules (centroid defuzzification technique) denoted cut-off points for poor prognosis at greater than 2.5 clusters. Discussion. This aSAH prognostic system makes use of existing knowledge, recognizes unknown areas, incorporates one's clinical reasoning, and compensates for uncertainty in prognostication.
Nuclear charge radii: Density functional theory meets Bayesian neural networks
Utama, Raditya; Piekarewicz, Jorge
2016-01-01
The distribution of electric charge in atomic nuclei is fundamental to our understanding of the complex nuclear dynamics and a quintessential observable to validate nuclear structure models. We explore a novel approach that combines sophisticated models of nuclear structure with Bayesian neural networks (BNN) to generate predictions for the charge radii of thousands of nuclei throughout the nuclear chart. A class of relativistic energy density functionals is used to provide robust predictions for nuclear charge radii. In turn, these predictions are refined through Bayesian learning for a neural network that is trained using residuals between theoretical predictions and the experimental data. Although predictions obtained with density functional theory provide a fairly good description of experiment, our results show significant improvement (better than 40%) after BNN refinement. Moreover, these improved results for nuclear charge radii are supplemented with theoretical error bars. We have successfully demonst...
Introduction to applied Bayesian statistics and estimation for social scientists
Lynch, Scott M
2007-01-01
""Introduction to Applied Bayesian Statistics and Estimation for Social Scientists"" covers the complete process of Bayesian statistical analysis in great detail from the development of a model through the process of making statistical inference. The key feature of this book is that it covers models that are most commonly used in social science research - including the linear regression model, generalized linear models, hierarchical models, and multivariate regression models - and it thoroughly develops each real-data example in painstaking detail.The first part of the book provides a detailed
D-optimal Bayesian Interrogation for Parameter and Noise Identification of Recurrent Neural Networks
Poczos, Barnabas
2008-01-01
We introduce a novel online Bayesian method for the identification of a family of noisy recurrent neural networks (RNNs). We develop a Bayesian active learning technique to optimize the interrogating stimuli given past experiences. In particular, we consider the unknown parameters as stochastic variables and use the D-optimality principle, also known as the 'infomax method', to choose optimal stimuli. We apply a greedy technique to maximize the information gain concerning network parameters at each time step. We also derive the D-optimal estimation of the additive noise that perturbs the dynamical system of the RNN. Our analytical results are approximation-free. The analytic derivation gives rise to attractive quadratic update rules.
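For a linear-Gaussian model, the greedy D-optimal choice described in this record reduces to picking the stimulus that maximizes the log-determinant of the updated posterior precision — a sketch of that criterion (linear regression rather than an RNN, an assumption for brevity):

```python
import numpy as np

def d_optimal_choice(P, candidates, noise_var=1.0):
    """Greedy D-optimality for linear-Gaussian parameters: pick the
    stimulus x maximizing log det(P + x x^T / noise_var), i.e. the
    largest expected information gain from the next observation."""
    gains = [np.linalg.slogdet(P + np.outer(x, x) / noise_var)[1]
             for x in candidates]
    return int(np.argmax(gains))

P = np.eye(2)  # current posterior precision
candidates = [np.array([1.0, 0.0]), np.array([0.0, 3.0]), np.array([1.0, 1.0])]
print(d_optimal_choice(P, candidates))  # 1: the large-norm stimulus is most informative
```

Each observation adds `x x^T / noise_var` to the precision, so the determinant gain directly measures how much the posterior will tighten — the "infomax" logic applied one step at a time.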
Markov Chain Monte Carlo Bayesian Learning for Neural Networks
Goodrich, Michael S.
2011-01-01
Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is typically also necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a powerful methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
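A hedged sketch of the core idea in this record — sampling network weights with Metropolis MCMC to obtain predictive uncertainty — using a plain Gaussian prior rather than the authors' modified Jeffreys prior; the network size, noise level, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data
X = np.linspace(-1, 1, 30)
Y = np.tanh(2 * X) + rng.normal(0, 0.1, X.size)

def predict(w, x):
    # 1-input, 3-hidden-unit, 1-output tanh network
    w1, b1, w2, b2 = w[:3], w[3:6], w[6:9], w[9]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def log_post(w):
    # Gaussian likelihood (sigma = 0.1) + unit Gaussian prior on weights
    resid = Y - predict(w, X)
    return -0.5 * resid @ resid / 0.01 - 0.5 * w @ w

w = np.zeros(10)
lp = log_post(w)
samples = []
for it in range(5000):
    prop = w + rng.normal(0, 0.05, w.size)    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        w, lp = prop, lp_prop
    if it >= 2500:                            # keep post-burn-in samples
        samples.append(w.copy())

samples = np.array(samples)
# Posterior predictive mean and spread at x = 0.5
preds = np.array([predict(s, np.array([0.5]))[0] for s in samples])
print(round(float(preds.mean()), 2), round(float(preds.std()), 3))
```

The spread of `preds` is the point of the exercise: instead of a single trained weight vector, the sampler yields a distribution over networks, and hence error bars on every prediction.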
A novel Bayesian learning method for information aggregation in modular neural networks
DEFF Research Database (Denmark)
Wang, Pan; Xu, Lida; Zhou, Shang-Ming
2010-01-01
The modular neural network is a popular neural network model with many successful applications. In this paper, a sequential Bayesian learning (SBL) approach is proposed for modular neural networks, aiming at efficiently aggregating the outputs of ensemble members. The experimental results on eight ...
Bayesian Inference Applied to the Electromagnetic Inverse Problem
Schmidt, David M.; Wood, C. C.; George, John S.
1998-01-01
We present a new approach to the electromagnetic inverse problem that explicitly addresses the ambiguity associated with its ill-posed character. Rather than calculating a single ``best'' solution according to some criterion, our approach produces a large number of likely solutions that both fit the data and any prior information that is used. While the range of the different likely results is representative of the ambiguity in the inverse problem even with prior information present, features that are common across a large number of the different solutions can be identified and are associated with a high degree of probability. This approach is implemented and quantified within the formalism of Bayesian inference which combines prior information with that from measurement in a common framework using a single measure. To demonstrate this approach, a general neural activation model is constructed that includes a variable number of extended regions of activation and can incorporate a great deal of prior informati...
Evidence for single top quark production using Bayesian neural networks
Energy Technology Data Exchange (ETDEWEB)
Kau, Daekwang [Florida State Univ., Tallahassee, FL (United States)
2007-01-01
We present results of a search for single top quark production in p$\bar{p}$ collisions using a dataset of approximately 1 fb$^{-1}$ collected with the D0 detector. This analysis considers the muon+jets and electron+jets final states and makes use of Bayesian neural networks to separate the expected signals from backgrounds. The observed excess is associated with a p-value of 0.081%, assuming the background-only hypothesis, which corresponds to an excess over background of 3.2 standard deviations for a Gaussian density. The p-value computed using the SM signal cross section of 2.9 pb is 1.6%, corresponding to an expected significance of 2.2 standard deviations. Assuming the observed excess is due to single top production, we measure a single top quark production cross section of σ(p$\bar{p}$ → tb + X, tqb + X) = 4.4 ± 1.5 pb.
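The p-value-to-standard-deviations conversion quoted in this record is the one-sided Gaussian quantile, which the Python standard library can compute directly:

```python
from statistics import NormalDist

def p_to_sigma(p):
    """One-sided p-value -> Gaussian significance in standard deviations."""
    return NormalDist().inv_cdf(1 - p)

# The observed excess: p = 0.081% corresponds to about 3.2 sigma
print(round(p_to_sigma(0.00081), 1))  # 3.2
```

This is the standard convention in particle physics for reporting an excess: the significance is the z-score whose upper-tail probability equals the p-value.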
Applying neural networks in autonomous systems
Thornbrugh, Allison L.; Layne, J. D.; Wilson, James M., III
1992-03-01
Autonomous and teleautonomous operations have been defined in a variety of ways by different groups involved with remote robotic operations. For example, Conway describes architectures for producing intelligent actions in teleautonomous systems. Applying neural nets in such systems is similar to applying them in general. However, for autonomy, learning or learned behavior may become a significant system driver. Thus, artificial neural networks are being evaluated as components in fully autonomous and teleautonomous systems. Feed-forward networks may be trained to perform adaptive signal processing, pattern recognition, data fusion, and function approximation -- as in control subsystems. Certain components of particular autonomous systems become more amenable to implementation using a neural net due to a match between the net's attributes and desired attributes of the system component. Criteria have been developed for distinguishing such applications and then implementing them. The success of hardware implementation is a crucial part of this application evaluation process. Three basic applications of neural nets -- autoassociation, classification, and function approximation -- are used to exemplify this process and to highlight procedures that are followed during the requirements, design, and implementation phases. This paper assumes some familiarity with basic neural network terminology and concentrates upon the use of different neural network types while citing references that cover the underlying mathematics and related research.
Search for predictive generic model of aqueous solubility using Bayesian neural nets.
Bruneau, P
2001-01-01
Several predictive models of aqueous solubility have been published. They perform well on the data sets used to train them, but these data sets usually do not contain many structures similar to those of interest in drug research, so their applicability in drug hunting is questionable. A very diverse data set has been gathered, with compounds drawn from literature reports and from proprietary sources. These compounds were grouped into a so-called literature data set I, a proprietary data set II, and a mixed data set III formed from I and II. About 100 descriptors emphasizing surface properties were calculated for every compound. Bayesian learning of neural nets, which retains the advantages of neural nets without their weaknesses, was used to select the most parsimonious models and train them from I, II, and III. The models were established either by selecting the most efficient descriptors one by one using a modified Gram-Schmidt procedure (GS) or by simplifying a most complete model using automatic relevance determination (ARD). The predictive ability of the models was assessed using validation data sets as unrelated to the training sets as possible, using two new parameters: NDD(x,ref), the normalized smallest descriptor distance of a compound x to a reference data set, and CD(x,mod), the combination of NDD(x,ref) with the dispersion of the Bayesian neural net calculations. The results show that it is possible to obtain a generic predictive model from database I, that the diversity of database II is too restricted to give a model with good generalization ability, and that the ARD method applied to the mixed database III gives the best predictive model. PMID:11749587
Applying Artificial Neural Networks for Face Recognition
Directory of Open Access Journals (Sweden)
Thai Hoang Le
2011-01-01
This paper introduces some novel models for all steps of a face recognition system. In the face detection step, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN are aligned by an Active Shape Model and a Multi-Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on a Multi-Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving efficiency by the association of two methods: a geometric-feature-based method and the Independent Component Analysis method. In the face matching step, we apply a model combining many neural networks for matching geometric features of the human face. The model links many neural networks together, so we call it a Multi Artificial Neural Network. The MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the CalTech database show the feasibility of our proposed model.
Nested sampling applied in Bayesian room-acoustics decay analysis.
Jasa, Tomislav; Xiang, Ning
2012-11-01
Room-acoustic energy decays often exhibit single-rate or multiple-rate characteristics in a wide variety of rooms/halls. Both the energy decay order and decay parameter estimation are of practical significance in architectural acoustics applications, representing two different levels of Bayesian probabilistic inference. This paper discusses a model-based sound energy decay analysis within a Bayesian framework utilizing the nested sampling algorithm. The nested sampling algorithm is specifically developed to evaluate the Bayesian evidence required for determining the energy decay order with decay parameter estimates as a secondary result. Taking the energy decay analysis in architectural acoustics as an example, this paper demonstrates that two different levels of inference, decay model-selection and decay parameter estimation, can be cohesively accomplished by the nested sampling algorithm. PMID:23145609
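A bare-bones sketch of the nested sampling algorithm this record relies on, applied to a toy evidence integral (standard-normal likelihood under a uniform prior on [-5, 5], so the analytic evidence is about 1/10); the live-point count, iteration budget, and rejection-sampling replacement step are simplifying assumptions — real implementations use smarter constrained moves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Evidence Z = integral of L(x) * prior(x) dx; here L is the standard
# normal density and the prior is uniform on [-5, 5], so Z ~ 0.1.
def loglike(x):
    return -0.5 * x * x - 0.5 * np.log(2 * np.pi)

n_live = 200
live = rng.uniform(-5, 5, n_live)
live_ll = loglike(live)
logZ, logX = -np.inf, 0.0           # running evidence, prior volume left

for _ in range(1200):
    i = np.argmin(live_ll)                  # current likelihood bound
    logX_new = logX - 1.0 / n_live          # expected volume shrinkage
    logw = np.log(np.exp(logX) - np.exp(logX_new))
    logZ = np.logaddexp(logZ, logw + live_ll[i])
    # Replace the worst point with a prior draw above the bound
    while True:
        x = rng.uniform(-5, 5)
        if loglike(x) > live_ll[i]:
            break
    live[i], live_ll[i] = x, loglike(x)
    logX = logX_new

# Credit the remaining prior volume to the surviving live points
logZ = np.logaddexp(logZ, logX + np.log(np.mean(np.exp(live_ll))))
print(round(float(np.exp(logZ)), 3))  # close to 0.1
```

The discarded points, weighted by their volume shells, also give posterior samples for free — which is how decay-parameter estimates fall out as a "secondary result" of the evidence computation, as the abstract notes.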
Current trends in Bayesian methodology with applications
Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia
2015-01-01
Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on...
Bayesian estimation inherent in a Mexican-hat-type neural network
Takiyama, Ken
2016-05-01
Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.
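The Bayesian computation this record says the network implements — integrating a prior with a noisy sensory observation — has a simple closed form for Gaussians: a precision-weighted average. A minimal sketch of that update:

```python
def integrate(prior_mu, prior_var, obs, obs_var):
    """Bayesian fusion of a Gaussian prior and a Gaussian observation:
    the posterior mean is a precision-weighted average, and the
    posterior variance is smaller than either input variance."""
    w = prior_var / (prior_var + obs_var)
    mu = prior_mu + w * (obs - prior_mu)
    var = prior_var * obs_var / (prior_var + obs_var)
    return mu, var

# Equal reliability: the estimate lands halfway, with halved variance
mu, var = integrate(0.0, 1.0, 2.0, 1.0)
print(mu, var)  # 1.0 0.5
```

The paper's result is that the order-parameter dynamics of the Mexican-hat network performs exactly this kind of update (in its linear Gaussian state-space form), with the recurrent-connectivity strength playing the role of the prior precision.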
Applying Bayesian networks in practical customer satisfaction studies
Jaronski, W.; Bloemer, J.M.M.; Vanhoof, K.; Wets, G.
2004-01-01
This chapter presents an application of Bayesian network technology in an empirical customer satisfaction study. The findings of the study should provide insight into the importance of product/service dimensions in terms of the strength of their influence on overall (dis)satisfaction. To this end we a...
Applying Bayesian belief networks in rapid response situations
Energy Technology Data Exchange (ETDEWEB)
Gibson, William L. [Los Alamos National Laboratory]; Leishman, Deborah A. [Los Alamos National Laboratory]; Van Eeckhout, Edward [Los Alamos National Laboratory]
2008-01-01
The authors have developed an enhanced Bayesian analysis tool called the Integrated Knowledge Engine (IKE) for monitoring and surveillance. The enhancements are suited for Rapid Response Situations where decisions must be made based on uncertain and incomplete evidence from many diverse and heterogeneous sources. The enhancements extend the probabilistic results of the traditional Bayesian analysis by (1) better quantifying uncertainty arising from model parameter uncertainty and uncertain evidence, (2) optimizing the collection of evidence to reach conclusions more quickly, and (3) allowing the analyst to determine the influence of the remaining evidence that cannot be obtained in the time allowed. These extended features give the analyst and decision maker a better comprehension of the adequacy of the acquired evidence and hence the quality of the hurried decisions. They also describe two example systems where the above features are highlighted.
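IKE's internals are not spelled out in this record, but its emphasis on uncertain evidence can be illustrated with a minimal two-node sketch: posterior updating where the analyst is only partly confident the evidence itself is correct (soft evidence). All probabilities here are made-up illustration values:

```python
# Hypothetical example: P(threat) prior and P(alarm | threat) likelihoods
p_threat = 0.01
p_alarm_given = {True: 0.9, False: 0.05}

def posterior(p_alarm_observed=1.0):
    """P(threat | alarm report), where p_alarm_observed is the analyst's
    confidence that the alarm really fired (1.0 = hard evidence)."""
    def lik(h):
        # Mix the alarm/no-alarm likelihoods by the report's reliability
        pa = p_alarm_given[h]
        return p_alarm_observed * pa + (1 - p_alarm_observed) * (1 - pa)
    num = lik(True) * p_threat
    den = num + lik(False) * (1 - p_threat)
    return num / den

print(round(posterior(1.0), 3))  # 0.154 -- certain alarm
print(round(posterior(0.7), 3))  # 0.02  -- shaky report, much weaker update
```

This captures the enhancement the abstract describes: quantifying how much a conclusion should move when the evidence feeding it is itself unreliable, which in turn tells the decision maker whether hurried conclusions rest on adequate evidence.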
Bayesian Nonparametric Shrinkage Applied to Cepheid Star Oscillations.
Berger, James; Jefferys, William; Müller, Peter
2012-01-01
Bayesian nonparametric regression with dependent wavelets has dual shrinkage properties: there is shrinkage through a dependent prior put on functional differences, and shrinkage through the setting of most of the wavelet coefficients to zero through Bayesian variable selection methods. The methodology can deal with unequally spaced data and is efficient because of the existence of fast moves in model space for the MCMC computation. The methodology is illustrated on the problem of modeling the oscillations of Cepheid variable stars; these are a class of pulsating variable stars with the useful property that their periods of variability are strongly correlated with their absolute luminosity. Once this relationship has been calibrated, knowledge of the period gives knowledge of the luminosity. This makes these stars useful as "standard candles" for estimating distances in the universe. PMID:24368873
A Bayesian framework for simultaneously modeling neural and behavioral data
B.M. Turner; B.U. Forstmann; E.-J. Wagenmakers; S.D. Brown; P.B. Sederberg; M. Steyvers
2013-01-01
Scientists who study cognition infer underlying processes either by observing behavior (e.g., response times, percentage correct) or by observing neural activity (e.g., the BOLD response). These two types of observations have traditionally supported two separate lines of study. The first is led by c...
Mani-Varnosfaderani, Ahmad; Kanginejad, Atefeh; Gilany, Kambiz; Valadkhani, Abolfazl
2016-10-12
The present work deals with the development of a new baseline correction method based on the comparative learning capabilities of artificial neural networks. The developed method uses the Bayes probability theorem to prevent over-fitting and to find a generalized baseline. The method has been applied to simulated and real metabolomic gas chromatography (GC) and Raman data sets. The results revealed that the proposed method can handle different types of baselines with concave, convex, curvilinear, triangular, and sinusoidal patterns. For further evaluation, it has been compared with benchmark baseline correction methods such as corner-cutting (CC), morphological weighted penalized least squares (MPLS), adaptive iteratively-reweighted penalized least squares (airPLS), and iterative polynomial fitting (iPF). To compare the methods, the projected difference resolution (PDR) criterion was calculated for the data before and after the baseline correction procedure. The calculated values of PDR after baseline correction using the iBRANN, airPLS, MPLS, iPF, and CC algorithms for the GC metabolomic data were 4.18, 3.64, 3.88, 1.88, and 3.08, respectively. The results demonstrate that the developed iterative Bayesian regularized neural network (iBRANN) method thoroughly detects baselines and is superior to the CC, MPLS, airPLS, and iPF techniques. A graphical user interface has been developed for the suggested algorithm and can be used for easy application of iBRANN to the correction of different chromatography, NMR, and Raman data sets. PMID:27662759
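Of the benchmarks named in this record, iterative polynomial fitting (iPF) is the simplest to sketch: refit a low-order polynomial while clipping the signal to the current fit, so peaks stop pulling the baseline upward. This is a sketch of the benchmark idea, not the paper's iBRANN method; the degree, iteration count, and synthetic signal are assumptions:

```python
import numpy as np

def ipf_baseline(y, degree=3, n_iter=50):
    """Iterative polynomial fitting baseline estimate: repeatedly fit a
    polynomial, then clip the working signal to the fit so that peaks
    above the baseline lose their influence on the next fit."""
    x = np.arange(y.size)
    work = y.astype(float).copy()
    for _ in range(n_iter):
        coeffs = np.polyfit(x, work, degree)
        base = np.polyval(coeffs, x)
        work = np.minimum(work, base)   # suppress points above the fit
    return base

x = np.arange(200)
true_base = 0.001 * (x - 100) ** 2                     # curved baseline
peaks = 5.0 * np.exp(-0.5 * ((x - 60) / 3) ** 2)       # one sharp peak
y = true_base + peaks
est = ipf_baseline(y)
print(round(float(np.abs(est - true_base).mean()), 3))  # small residual error
```

The paper's point is that a Bayesian-regularized network generalizes better across baseline shapes than fixed-form methods like this one, as reflected in the PDR scores quoted above.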
Identification of information tonality based on Bayesian approach and neural networks
Lande, D. V.
2008-01-01
A model of the identification of information tonality, based on a Bayesian approach and neural networks, is described. In the context of this paper, tonality means the positive or negative tone of a whole text or of its parts related to particular concepts. The method, whose application is presented in the paper, is based on statistical regularities connected with the presence of particular lexemes in texts. A distinctive feature of the method is its simplicity and versatility. ...
Applying Bayesian Inference to Galileon Solutions of the Muon Problem
Lamm, Henry
2016-01-01
We derive corrections to atomic energy levels from disformal couplings in Galileon theories. Through Bayesian inference, we constrain the cut-off radii and Galileon scale via these corrections. To connect different atomic systems, we assume the various cut-off radii are related by a one-parameter family of solutions. This introduces a new parameter $\alpha$ which is also constrained. In this model, we predict shifts to muonic helium of $\delta E_{He^3}=1.97^{+9.28}_{-1.87}$ meV and $\delta E_{He^4}=1.69^{+9.25}_{-1.61}$ meV.
Bayesian Regularization in a Neural Network Model to Estimate Lines of Code Using Function Points
Directory of Open Access Journals (Sweden)
K. K. Aggarwal
2005-01-01
Full Text Available It is well known that at the beginning of any project the software industry needs to know how much it will cost to develop and how much time will be required. This paper examines the potential of using a neural network model for estimating the lines of code once the functional requirements are known. Using the International Software Benchmarking Standards Group (ISBSG) Repository Data (Release 9) for the experiment, this paper examines the performance of a back-propagation feed-forward neural network in estimating the source lines of code. Multiple training algorithms are used in the experiments. Results demonstrate that the neural network models trained using Bayesian regularization provide the best results and are suitable for this purpose.
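The Bayesian-regularization objective used above can be sketched for the simplest possible "network": a one-input linear unit y = w*x + b. The sketch below is illustrative only (the data and the alpha/beta values are assumptions, not the paper's ISBSG setup): it minimizes F = beta*E_D + alpha*E_W, the sum-of-squared-errors term plus the weight-decay term that Bayesian regularization trades off, in closed form.

```python
# Sketch of Bayesian regularization on a one-input linear unit y = w*x + b.
# Training minimizes  F = beta * E_D + alpha * E_W,  i.e. the data misfit
# (sum of squared errors) plus a weight-decay term.  The data and alpha/beta
# values below are illustrative, not the paper's ISBSG setup.

def fit_bayes_reg(xs, ys, alpha=1.0, beta=100.0):
    """Closed-form minimizer of F = beta*sum((y - w*x - b)^2) + alpha*(w^2 + b^2)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Setting dF/dw = dF/db = 0 gives a 2x2 linear system (normal equations).
    a11, a12 = beta * sxx + alpha, beta * sx
    a21, a22 = beta * sx, beta * n + alpha
    b1, b2 = beta * sxy, beta * sy
    det = a11 * a22 - a12 * a21
    w = (b1 * a22 - a12 * b2) / det
    b = (a11 * b2 - a21 * b1) / det
    return w, b

if __name__ == "__main__":
    # Hypothetical (function points, lines of code) pairs, roughly LOC = 50*FP.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [52.0, 98.0, 151.0, 199.0]
    w, b = fit_bayes_reg(xs, ys)
    print(round(w, 1))  # → 49.3 (slightly shrunk relative to the least-squares slope)
```

The alpha/beta ratio plays the role of the regularization strength; in full Bayesian regularization (MacKay-style) these hyperparameters are themselves re-estimated during training rather than fixed as here.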
Applying Bayesian ideas to the development of medical guidelines.
Landrum, M B; Normand, S L
1999-01-30
Measurements of the quality of health care, in particular the underuse and overuse of medical therapies and diagnostic tests, often involve employment of medical practice guidelines to assess the appropriateness of treatments. This paper presents a case study of a Bayesian analysis for the development of medical guidelines based on expert opinion, using ordinal categorical rater data. We develop guidelines for the use of coronary angiography following an acute myocardial infarction (AMI) for 890 clinical indications using statistical models fit to appropriateness ratings obtained from a nine-member expert panel. The main foci of our analyses were on the estimation of an appropriateness score for each of the clinical indications, an associated measure of precision, and functions of the underlying score. We considered two classes of models that assume the ratings are either in the form of grouped normal data or are ungrouped variables arising from a normal distribution, while permitting rater effects and indication heterogeneity in both. We estimated models using Markov chain Monte Carlo methods and constructed indices quantifying appropriateness based on posterior probabilities of selected model parameters. We compared our model-based approach to the standard approach currently employed in medical guideline development and found that the standard approach correctly identified 99 per cent of the appropriate indications while overestimating appropriateness 18 per cent of the time compared to our model-based approach. PMID:10028134
Directory of Open Access Journals (Sweden)
Abdelkrim Moussaoui
2006-01-01
Full Text Available The authors discuss the combination of an Artificial Neural Network (ANN) with analytical models to improve the performance of the prediction model for finishing rolling force in the hot strip rolling mill process. The suggested model was implemented using a Bayesian Evidence based training algorithm. It was found that the Bayesian Evidence based approach provided a superior and smoother fit to the real rolling mill data. A completely independent set of real rolling data was used to evaluate the capacity of the fitted ANN model to predict unseen regions of data. As a result, test rolls obtained with the suggested hybrid model showed high prediction quality compared to the usual empirical prediction models.
Bayesian networks applied to process diagnostics. Applications in energy industry
Energy Technology Data Exchange (ETDEWEB)
Widarsson, Bjoern (ed.); Karlsson, Christer; Dahlquist, Erik [Maelardalen Univ., Vaesteraas (Sweden); Nielsen, Thomas D.; Jensen, Finn V. [Aalborg Univ. (Denmark)
2004-10-01
Uncertainty in process operation occurs frequently in the heat and power industry. This makes it hard to detect the occurrence of an abnormal process state from a number of process signals (measurements) or to find the correct cause of an abnormality. Among several other methods, Bayesian Networks (BN) are a method for building a model which can handle uncertainty in both the process signals and the process itself. The purpose of this project is to investigate the possibilities of using BN for fault detection and diagnostics in combined heat and power industries through the execution of two different applications. Participants from Aalborg University contribute the knowledge of BN, and participants from Maelardalen University have the experience of modelling heat and power applications. The co-operation also includes two energy companies, Elsam A/S (Nordjyllandsverket) and Maelarenergi AB (Vaesteraas CHP-plant), where the two applications are made with support from the plant personnel. The project resulted in two quite different applications. At Nordjyllandsverket, an application based (due to the lack of process knowledge) on pure operation data is built, with the capability to detect an abnormal process state in a coal mill. Detection is made through a conflict analysis when entering process signals into a model built by analysing the operation database. The application at Maelarenergi is built with a combination of process knowledge and operation data and can detect various faults caused by the fuel. The process knowledge is used to build a causal network structure, and the structure is then trained with data from the operation database. Both applications are made as off-line applications, but they are ready to be run on-line. The performance of fault detection and diagnostics is good, but a lack of abnormal process states with known causes reduces the evaluation possibilities. Advantages of combining expert knowledge of the process with operation data include the possibility to represent
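The kind of query such a model supports can be illustrated with a deliberately tiny discrete Bayesian network, evaluated by exact enumeration. The two-signal structure and all probabilities below are invented for illustration; they are not the Nordjyllandsverket or Maelarenergi models.

```python
# Minimal sketch of a diagnostic Bayesian network: one hidden "fault" node
# (e.g. an abnormal coal-mill state) with two observable process signals.
# All probabilities are hypothetical illustration values.

P_FAULT = 0.05                                   # prior on the abnormal state
P_SIGNAL = {                                     # P(signal is high | fault)
    "temp":      {True: 0.90, False: 0.10},
    "vibration": {True: 0.80, False: 0.15},
}

def posterior_fault(evidence):
    """P(fault | observed signals), by exact enumeration over the fault node."""
    def joint(fault):
        p = P_FAULT if fault else 1.0 - P_FAULT
        for sig, is_high in evidence.items():
            p_high = P_SIGNAL[sig][fault]
            p *= p_high if is_high else 1.0 - p_high
        return p
    num = joint(True)
    return num / (num + joint(False))

if __name__ == "__main__":
    # Both signals high: the abnormal state becomes far more probable than its prior.
    print(round(posterior_fault({"temp": True, "vibration": True}), 3))  # → 0.716
```

Real process models have many more nodes and use structured inference (e.g. junction trees) rather than brute-force enumeration, but the conditioning step is the same.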
Identification of information tonality based on Bayesian approach and neural networks
Lande, D V
2008-01-01
A model of the identification of information tonality, based on a Bayesian approach and neural networks, is described. In the context of this paper, tonality means the positive or negative tone of both the whole information and its parts which are related to particular concepts. The method, whose application is presented in the paper, is based on statistical regularities connected with the presence of definite lexemes in the texts. A distinctive feature of the method is its simplicity and versatility. At present, ideologically similar approaches are widely used to control spam.
Artificial Neural Network applied to lightning flashes
Gin, R. B.; Guedes, D.; Bianchi, R.
2013-05-01
The development of video cameras enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), using the C language and OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: brightness and shape algorithms. These algorithms detect both the shape and the brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were disregarded, and the event's number of discharges was correctly computed. The neural network used in this project achieved a
Bayesian adaptive combination of short-term wind speed forecasts from neural network models
Energy Technology Data Exchange (ETDEWEB)
Li, Gong; Shi, Jing; Zhou, Junyi [Department of Industrial and Manufacturing Engineering, North Dakota State University, Dept. 2485, PO Box 6050, Fargo, ND 58108 (United States)
2011-01-15
Short-term wind speed forecasting is of great importance for wind farm operations and the integration of wind energy into the power grid system. Adaptive and reliable methods and techniques for wind speed forecasting are urgently needed in view of the stochastic nature of the wind resource, which varies from time to time and from site to site. This paper presents a robust two-step methodology for accurate wind speed forecasting based on a Bayesian combination algorithm and three neural network models, namely, the adaptive linear element network (ADALINE), the backpropagation (BP) network, and the radial basis function (RBF) network. The hourly average wind speed data from two North Dakota sites are used to demonstrate the effectiveness of the proposed approach. The results indicate that, while the performances of the neural networks are not consistent in forecasting 1-h-ahead wind speed for the two sites or under different evaluation metrics, the Bayesian combination method can always provide adaptive, reliable and comparatively accurate forecast results. The proposed methodology provides a unified approach to tackling the challenging model selection issue in wind speed forecasting. (author)
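The combination step can be sketched as Bayesian model averaging: each network's weight is its posterior probability given recent forecast errors, here assumed Gaussian with a fixed variance and a uniform prior over models. All numbers and the exact update rule below are assumptions for illustration, not the paper's algorithm.

```python
import math

# Sketch of Bayesian forecast combination: weight each model by the Gaussian
# likelihood of its recent forecast errors (uniform prior over models), then
# average the next-step forecasts.  Error variance and data are hypothetical.

def bayesian_combine(forecasts, recent_errors, sigma=1.0):
    """forecasts: {model: next forecast}; recent_errors: {model: [past errors]}."""
    logls = {m: sum(-e * e / (2 * sigma ** 2) for e in errs)
             for m, errs in recent_errors.items()}
    mx = max(logls.values())                      # subtract max for stability
    w = {m: math.exp(l - mx) for m, l in logls.items()}
    z = sum(w.values())
    weights = {m: v / z for m, v in w.items()}    # posterior model probabilities
    combined = sum(weights[m] * f for m, f in forecasts.items())
    return combined, weights

if __name__ == "__main__":
    forecasts = {"adaline": 7.2, "bp": 6.8, "rbf": 7.6}               # m/s, hypothetical
    errors = {"adaline": [0.4, -0.3], "bp": [1.2, 0.9], "rbf": [0.2, 0.1]}
    combined, weights = bayesian_combine(forecasts, errors)
    print(round(combined, 2))  # → 7.32: the low-error RBF model dominates
```

Because the weights are recomputed as new errors arrive, the combination adapts over time, which is the "adaptive" property the abstract emphasizes.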
Zhang, Xuesong
2011-11-01
Estimating the uncertainty of hydrologic forecasting is valuable to water resources management and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework (BNN-PIS) to incorporate the uncertainties associated with parameters, inputs, and structures into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structures. Critical evaluation of the posterior distributions of neural network weights, the number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of, and interactions among, different uncertainty sources is expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting. © 2011 Elsevier B.V.
Delayed switching applied to memristor neural networks
Energy Technology Data Exchange (ETDEWEB)
Wang, Frank Z.; Yang Xiao; Lim Guan [Future Computing Group, School of Computing, University of Kent, Canterbury (United Kingdom); Helian Na [School of Computer Science, University of Hertfordshire, Hatfield (United Kingdom); Wu Sining [Xyratex, Havant (United Kingdom); Guo Yike [Department of Computing, Imperial College, London (United Kingdom); Rashid, Md Mamunur [CERN, Geneva (Switzerland)
2012-04-01
Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the "delayed switching effect." In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.
Applying neural networks to ultrasonographic texture recognition
Gallant, Jean-Francois; Meunier, Jean; Stampfler, Robert; Cloutier, Jocelyn
1993-09-01
A neural network was trained to classify ultrasound image samples of normal, adenomatous (benign tumor) and carcinomatous (malignant tumor) thyroid gland tissue. The samples themselves, as well as their Fourier spectrum, miscellaneous cooccurrence matrices and 'generalized' cooccurrence matrices, were successively submitted to the network, to determine if it could be trained to identify discriminating features of the texture of the image, and if not, which feature extractor would give the best results. Results indicate that the network could indeed extract some distinctive features from the textures, since it could accomplish a partial classification when trained with the samples themselves. But a significant improvement both in learning speed and performance was observed when it was trained with the generalized cooccurrence matrices of the samples.
Robust Bayesian decision theory applied to optimal dosage.
Abraham, Christophe; Daurès, Jean-Pierre
2004-04-15
We give a model for constructing a utility function u(theta, d) in a dose prescription problem, where theta and d denote respectively the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables. These probabilities are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment for lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than through the decisions themselves. PMID:15057878
Elsheikh, Ahmed H.
2014-02-01
A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box, and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
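A minimal nested-sampling loop on a toy 1-D problem (uniform prior on [-5, 5], standard normal likelihood, true log-evidence ≈ log 0.1 ≈ -2.30) illustrates how the evidence estimate is accumulated. For the constrained step this sketch uses naive rejection sampling from the prior, which is exactly where the paper substitutes HMC; everything here is a toy, not the subsurface-flow setup.

```python
import math
import random

def logaddexp(a, b):
    """log(exp(a) + exp(b)) without overflow; handles a or b = -inf."""
    m = max(a, b)
    if m == -math.inf:
        return -math.inf
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def loglike(theta):
    # Standard normal likelihood; the prior is Uniform(-5, 5), density 0.1,
    # so the true evidence is very close to 0.1.
    return -0.5 * theta * theta - 0.5 * math.log(2 * math.pi)

def nested_sampling(n_live=100, n_iter=600, seed=1):
    rng = random.Random(seed)
    live = [rng.uniform(-5.0, 5.0) for _ in range(n_live)]
    logz = -math.inf
    log_shell = math.log(1.0 - math.exp(-1.0 / n_live))  # X_i - X_{i+1}
    for i in range(n_iter):
        worst = min(live, key=loglike)
        # Shell weight times the likelihood of the discarded point.
        logz = logaddexp(logz, log_shell - i / n_live + loglike(worst))
        # Constrained step: draw a prior sample with L > L_worst.  This naive
        # rejection loop is what the paper replaces with Hybrid Monte Carlo.
        while True:
            cand = rng.uniform(-5.0, 5.0)
            if loglike(cand) > loglike(worst):
                live[live.index(worst)] = cand
                break
    # Remaining prior mass carried by the final live points.
    log_lmean = math.log(sum(math.exp(loglike(t)) for t in live) / n_live)
    return logaddexp(logz, -n_iter / n_live + log_lmean)

if __name__ == "__main__":
    print(nested_sampling())  # close to log(0.1) ≈ -2.30
```

The rejection step's acceptance rate collapses as the likelihood constraint tightens, which is why gradient-guided samplers such as HMC are needed for realistic problems.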
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered the best justification of Bayesian analysis and of the maximum entropy principle applied in the natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and on Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Energy Technology Data Exchange (ETDEWEB)
Saini, Lalit Mohan [Department of Electrical Engineering, National Institute of Technology, Kurukshetra, Haryana 136119 (India)
2008-07-15
Electrical peak load forecasting up to 7 days ahead has been performed using a feed-forward neural network based on the steepest descent, Bayesian regularization, resilient and adaptive backpropagation learning methods, incorporating the effect of eleven weather parameters and past peak load information. To avoid trapping the network in a state of local minima, the user-defined parameters, viz. the learning rate and error goal, have been optimized. The sliding window concept has been incorporated for selection of the training data set, which was then reduced according to the day type and season for which the forecast is made. To reduce the dimensionality of the input matrix, the Principal Component Analysis method of factor extraction or the correlation analysis technique has been used, and their performance has been compared. The resulting data set was used for training a three-layered neural network. To increase the learning speed, the weights and biases were initialized according to the Nguyen-Widrow method. To avoid overfitting, training was stopped early at the minimum validation error. (author)
Bayesian survival analysis modeling applied to sensory shelf life of foods
Calle, M. Luz; Hough, Guillermo; Curia, Ana; Gómez, Guadalupe
2006-01-01
Data from sensory shelf-life studies can be analyzed using survival statistical methods. The objective of this research was to introduce Bayesian methodology to sensory shelf-life studies and discuss its advantages in relation to classical (frequentist) methods. A specific algorithm which incorporates the interval-censored data from shelf-life studies is presented. Calculations were applied to whole-fat and fat-free yogurt, each tasted by 80 consumers who answered "yes" or "no" t...
A Bayesian Model for Plan Recognition in RTS Games applied to StarCraft
Synnaeve, Gabriel
2011-01-01
The task of keyhole (unobtrusive) plan recognition is central to adaptive game AI. "Tech trees" or "build trees" are the core of real-time strategy (RTS) game strategic (long term) planning. This paper presents a generic and simple Bayesian model for RTS build tree prediction from noisy observations, whose parameters are learned from replays (game logs). This unsupervised machine learning approach involves minimal work for the game developers as it leverages players' data (common in RTS). We applied it to StarCraft and showed that it yields high-quality and robust predictions that can feed an adaptive AI.
Li Honglian; Fang Hong; Tang Ju; Zhang Jun; Zhang Jing
2013-01-01
It is difficult to accurately reckon vehicle position for a vehicle navigation system (VNS) during GPS outages, so a novel prediction algorithm for dead reckoning (DR) position error is put forward, based on a Bayesian regularization back-propagation (BRBP) neural network. DR and GPS position data are first de-noised and compared at different stationary wavelet transformation (SWT) decomposition levels, and DR position error data are acquired after the SWT coefficient differences are reconstructed. A...
International Nuclear Information System (INIS)
We have developed a Bayesian approach to the analysis of neural electromagnetic (MEG/EEG) data that can incorporate or fuse information from other imaging modalities and addresses the ill-posed inverse problem by sampling the many different solutions which could have produced the given data. From these samples one can draw probabilistic inferences about regions of activation. Our source model assumes a variable number of variable-size cortical regions of stimulus-correlated activity. An active region consists of locations on the cortical surface within a sphere centered on some location in cortex. The number and radii of active regions can vary up to defined maximum values. The goal of the analysis is to determine the posterior probability distribution for the set of parameters that govern the number, location, and extent of active regions. Markov Chain Monte Carlo is used to generate a large sample of sets of parameters distributed according to the posterior distribution. This sample is representative of the many different source distributions that could account for the given data, and allows identification of probable (i.e. consistent) features across solutions. Examples of the use of this analysis technique with both simulated and empirical MEG data are presented.
Decentralized Neural Backstepping Control Applied to a Robot Manipulator
Directory of Open Access Journals (Sweden)
Ramon Garcia-Hernandez
2013-01-01
Full Text Available This paper presents a discrete-time decentralized control scheme for trajectory tracking of a two degrees of freedom (DOF) robot manipulator. A high order neural network (HONN) is used to approximate a decentralized control law designed by the backstepping technique as applied to a block strict feedback form (BSFF). The weights for each neural network are adapted online by an extended Kalman filter training algorithm. The motion for each joint is controlled independently using only local angular position and velocity measurements. The stability analysis for the closed-loop system via the Lyapunov approach is included. Finally, the real-time results show the feasibility of the proposed control scheme using a robot manipulator.
Bayesian flux balance analysis applied to a skeletal muscle metabolic model.
Heino, Jenni; Tunyan, Knarik; Calvetti, Daniela; Somersalo, Erkki
2007-09-01
In this article, the steady state condition for the multi-compartment models for cellular metabolism is considered. The problem is to estimate the reaction and transport fluxes, as well as the concentrations in venous blood when the stoichiometry and bound constraints for the fluxes and the concentrations are given. The problem has been addressed previously by a number of authors, and optimization-based approaches as well as extreme pathway analysis have been proposed. These approaches are briefly discussed here. The main emphasis of this work is a Bayesian statistical approach to the flux balance analysis (FBA). We show how the bound constraints and optimality conditions such as maximizing the oxidative phosphorylation flux can be incorporated into the model in the Bayesian framework by proper construction of the prior densities. We propose an effective Markov chain Monte Carlo (MCMC) scheme to explore the posterior densities, and compare the results with those obtained via the previously studied linear programming (LP) approach. The proposed methodology, which is applied here to a two-compartment model for skeletal muscle metabolism, can be extended to more complex models.
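A stripped-down version of the MCMC ingredient: Metropolis sampling of a single flux under a Gaussian likelihood, with the bound constraint 0 ≤ v ≤ 1 encoded as a flat prior that vanishes outside the bounds, mimicking how the paper folds constraints into the prior density. The one-flux model and all numbers are illustrative, not the skeletal muscle model.

```python
import math
import random

# Toy sketch of the MCMC step (not the paper's multi-compartment model):
# Metropolis sampling of one flux v in [0, 1] under a Gaussian likelihood
# around a hypothetical observation.  Bounds enter via the prior.

def log_post(v, obs=0.6, sigma=0.1):
    if not 0.0 <= v <= 1.0:
        return -math.inf              # flat prior encoding the bound constraint
    return -0.5 * ((v - obs) / sigma) ** 2

def metropolis(n=20000, step=0.1, seed=2):
    rng = random.Random(seed)
    v, samples = 0.5, []
    for _ in range(n):
        cand = v + rng.gauss(0.0, step)
        # Accept with probability min(1, post(cand) / post(v)).
        delta = log_post(cand) - log_post(v)
        if rng.random() < math.exp(min(0.0, delta)):
            v = cand
        samples.append(v)
    return samples

if __name__ == "__main__":
    chain = metropolis()[5000:]       # discard burn-in
    print(round(sum(chain) / len(chain), 2))
```

The posterior mean of the chain recovers the observation (about 0.6 here); in the real model the state is a high-dimensional flux vector and the stoichiometric equalities further restrict the support.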
Chaotic neural network applied to two-dimensional motion control.
Yoshida, Hiroyuki; Kurata, Shuhei; Li, Yongtao; Nara, Shigetoshi
2010-03-01
Chaotic dynamics generated in a chaotic neural network model are applied to 2-dimensional (2-D) motion control. The change of position of a moving object in each control time step is determined by a motion function which is calculated from the firing activity of the chaotic neural network. Prototype attractors which correspond to simple motions of the object toward four directions in 2-D space are embedded in the neural network model by designing synaptic connection strengths. Chaotic dynamics introduced by changing system parameters sample intermediate points in the high-dimensional state space between the embedded attractors, resulting in motion in various directions. By means of adaptive switching of the system parameters between a chaotic regime and an attractor regime, the object is able to reach a target in a 2-D maze. In computer experiments, the success rate of this method over many trials not only shows better performance than that of stochastic random pattern generators but also shows that chaotic dynamics can be useful for realizing robust, adaptive and complex control function with simple rules.
Institute of Scientific and Technical Information of China (English)
CHI Wen-xue; WANG Jin-feng; LI Xin-hu; ZHENG Xiao-ying; LIAO Yi-lan
2007-01-01
Objective: To estimate the prevalence rates of neural tube defects (NTDs) in Heshun County, Shanxi Province, China by a Bayesian smoothing technique. Methods: A total of 80 infants in the study area who were diagnosed with NTDs were analyzed. Two mapping techniques were then used. Firstly, the GIS software ArcGIS was used to map the crude prevalence rates. Secondly, the data were smoothed by the method of empirical Bayes estimation. Results: The classical statistical approach produced an extremely heterogeneous map, while the Bayesian map was much smoother and more interpretable. The maps produced by the Bayesian technique indicate the tendency of villages in the southeastern region to produce higher prevalence or risk values. Conclusions: The Bayesian smoothing technique addresses the issue of heterogeneity in the population at risk and is therefore recommended for use in explorative mapping of birth defects. This approach provides procedures to identify spatial health risk levels and assists in generating hypotheses that will be investigated in further detail.
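The empirical Bayes smoothing step can be sketched with a method-of-moments (Marshall-type) estimator: each area's crude rate is shrunk toward the global rate, most strongly where the population at risk is smallest. The village counts below are hypothetical, not the Heshun County data, and the real analysis may use a different estimator.

```python
# Sketch of empirical Bayes rate smoothing (method-of-moments, Marshall-type).
# Crude rates from small populations are noisy, so they are pulled toward the
# global rate; large populations keep rates close to their crude values.
# The case/birth counts used in the demo are hypothetical.

def eb_smooth(cases, births):
    crude = [c / b for c, b in zip(cases, births)]
    total_b = sum(births)
    m = sum(cases) / total_b                              # global rate
    # Method-of-moments estimate of the between-area variance.
    s2 = sum(b * (r - m) ** 2 for b, r in zip(births, crude)) / total_b
    a = max(s2 - m / (total_b / len(births)), 0.0)
    smoothed = []
    for b, r in zip(births, crude):
        w = a / (a + m / b)                               # shrinkage weight in [0, 1)
        smoothed.append(w * r + (1.0 - w) * m)
    return smoothed

if __name__ == "__main__":
    # Small village with a high crude rate vs. two large villages.
    print([round(r, 4) for r in eb_smooth(cases=[3, 20, 8],
                                          births=[50, 2000, 800])])
```

In the demo, the 50-birth village's extreme crude rate (0.06) is pulled strongly toward the global rate, while the 2000-birth village barely moves, which is exactly the heterogeneity adjustment the abstract describes.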
Bayesian Estimation Applied to Multiple Species: Towards cosmology with a million supernovae
Kunz, M; Hlozek, R; Kunz, Martin; Bassett, Bruce A.; Hlozek, Renee
2006-01-01
Observed data are often contaminated by undiscovered interlopers, leading to biased parameter estimation. Here we present BEAMS (Bayesian Estimation Applied to Multiple Species), which significantly improves on the standard maximum likelihood approach in the case where the probability of each data point being 'pure' is known. We discuss the application of BEAMS to future Type Ia supernovae (SNIa) surveys, such as LSST, which are projected to deliver over a million supernova lightcurves without spectra. The multi-band lightcurves for each candidate will provide a probability of being Ia (pure), but the full sample will be significantly contaminated with other types of supernovae and transients. Given a sample of N supernovae with mean probability P of being Ia, BEAMS delivers parameter constraints equal to those from NP spectroscopically-confirmed SNIa. In addition, BEAMS can simultaneously be used to tease apart different families of data and to recover properties of the underlying distributions of those families (e.g. ...
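The core of BEAMS is a per-point mixture likelihood: each datum contributes p_i times the "pure" likelihood plus (1 - p_i) times the contaminant likelihood, so probable interlopers are automatically down-weighted. The toy below estimates a single mean this way on a grid; the two Gaussian populations and all numbers are invented for illustration and are not the supernova model.

```python
import math

# Toy sketch of the BEAMS mixture likelihood: estimate a mean mu from data
# where each point has a known probability p of being drawn from the "pure"
# population N(mu, s_pure) and otherwise from a broad contaminant population
# N(mu_bad, s_bad).  All numbers are hypothetical.

def log_like_beams(mu, data, probs, s_pure=0.1, mu_bad=1.0, s_bad=0.5):
    def g(x, m, s):
        # Gaussian density.
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(math.log(p * g(x, mu, s_pure) + (1.0 - p) * g(x, mu_bad, s_bad))
               for x, p in zip(data, probs))

def map_estimate(data, probs):
    # Brute-force grid search over mu (a real analysis would use MCMC).
    grid = [i / 1000.0 for i in range(-500, 1500)]
    return max(grid, key=lambda mu: log_like_beams(mu, data, probs))

if __name__ == "__main__":
    data = [0.18, 0.22, 0.19, 0.95, 1.05]   # three "pure" points, two interlopers
    probs = [0.9, 0.9, 0.9, 0.1, 0.1]       # known probability of being pure
    print(map_estimate(data, probs))        # near the pure-point mean, not 0.52
```

A naive average of all five points would land near 0.52; the mixture likelihood instead recovers a value near the pure-point mean (~0.2), which is the bias correction BEAMS provides.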
Study of Single Top Quark Production Using Bayesian Neural Networks With D0 Detector at the Tevatron
Energy Technology Data Exchange (ETDEWEB)
Joshi, Jyoti [Panjab Univ., Chandigarh (India)
2012-01-01
Top quark, the heaviest and most intriguing among the six known quarks, can be created via two independent production mechanisms in $p\bar{p}$ collisions. The primary mode, strong $t\bar{t}$ pair production from a $gtt$ vertex, was used by the D0 and CDF collaborations to establish the existence of the top quark in March 1995. The second mode is the electroweak production of a single top quark or antiquark, which was observed in March 2009. Since single top quarks are produced at hadron colliders through a $Wtb$ vertex, they provide a direct probe of the nature of the $Wtb$ coupling and of the Cabibbo-Kobayashi-Maskawa matrix element $V_{tb}$. This mechanism is therefore sensitive to several standard model and beyond-standard-model parameters, such as anomalous $Wtb$ couplings. In this thesis, we measure the cross section of electroweak single top quark production in three different production modes, the $s+t$, $s$ and $t$-channels, using a technique based on Bayesian neural networks. This technique is applied to the analysis of 5.4 $fb^{-1}$ of data collected by the D0 detector. From a comparison of the Bayesian neural network discriminants between data and the signal-background model using Bayesian statistics, the cross sections of top quarks produced through the electroweak mechanism have been measured as: \[\sigma(p\bar{p}\to tb+X,tqb+X) = 3.11^{+0.77}_{-0.71}\;\rm pb\] \[\sigma(p\bar{p}\to tb+X) = 0.72^{+0.44}_{-0.43}\;\rm pb\] \[\sigma(p\bar{p}\to tqb+X) = 2.92^{+0.87}_{-0.73}\;\rm pb\] The $s+t$-channel has a Gaussian significance of $4.7\sigma$, the $s$-channel $0.9\sigma$ and the $t$-channel $4.7\sigma$. The results are consistent with the standard model predictions within one standard deviation. By combining these results with the results of two other analyses (using different MVA techniques), improved results \[\sigma(p\bar{p}\to tb+X,tqb+X) = 3.43^{+0.73}_{-0.74}\;\rm pb\] \[\sigma
Bai, Ying; Lan, JieQin; Gao, WeiWei
2016-01-01
A toy detector array has been designed to simulate the detection of cosmic rays in Extended Air Shower (EAS) experiments for ground-based TeV astrophysics. The primary energies of protons from the Monte Carlo simulation have been reconstructed by the algorithm of Bayesian neural networks (BNNs) and by a standard method like that of the LHAASO experiment \cite{lhaaso-ma}, respectively. The result of the energy reconstruction using BNNs has been compared with the one using the standard method. Compared to the standard method, the energy resolutions are significantly improved using BNNs, and the improvement is more obvious for high energy protons than for low energy ones.
Bai, Y.; Xu, Y.; Pan, J.; Lan, J. Q.; Gao, W. W.
2016-07-01
A toy detector array is designed to detect a shower generated by the interaction between a TeV cosmic ray and the atmosphere. In the present paper, the primary energies of showers detected by the detector array are reconstructed with the algorithm of Bayesian neural networks (BNNs) and with a standard method like that of the LHAASO experiment [1], respectively. Compared to the standard method, the energy resolutions are significantly improved using the BNNs, and the improvement is more obvious for high energy showers than for low energy ones.
Directory of Open Access Journals (Sweden)
Le Riche R.
2010-06-01
Full Text Available A major challenge in the identification of material properties is handling different sources of uncertainty in the experiment and in the modelling of the experiment for estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence. Typically the highest uncertainty is associated with properties to which the experiment is least sensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has since been applied to different problems, notably the identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration tests), which facilitated the application of the Bayesian approach. For identifying elastic constants, full-field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements also represents a major computational challenge in applying the Bayesian approach to full-field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their
Chiel, Hillel J.; Thomas, Peter J.
2011-12-01
, the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949). The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006). The FitzHugh-Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools
Directory of Open Access Journals (Sweden)
Li Honglian
2013-07-01
Full Text Available Because it is difficult to accurately reckon vehicle position for a vehicle navigation system (VNS) during GPS outages, a novel prediction algorithm for dead-reckoning (DR) position error is put forward, based on a Bayesian regularization back-propagation (BRBP) neural network. DR and GPS position data are first de-noised and compared at different stationary wavelet transformation (SWT) decomposition levels, and DR position error data are acquired after the differences of the SWT coefficients are reconstructed. A neural network mimicking the position error behavior is trained with the back-propagation algorithm, and the algorithm's generalization is improved by Bayesian regularization theory. During GPS outages, the established algorithm predicts DR position errors and provides precise positions for the VNS by updating the DR position data with the predicted errors. The simulation results show that the positioning precision of the BRBP algorithm is the best among the compared prediction algorithms, such as simple DR and the adaptive linear network, and that no precise mathematical model of the navigation sensors needs to be established.
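Bayesian regularization backpropagation, as used in the BRBP algorithm above, amounts to minimizing a weighted objective F = beta*E_D + alpha*E_W: the data misfit plus a penalty on the squared weights. A minimal sketch on synthetic DR-drift data, with alpha and beta held fixed (the full method re-estimates them from the evidence at each epoch; the data, network size, and learning rate here are illustrative):

```python
import numpy as np

# One-hidden-layer network trained on the regularized objective
# F = beta*E_D + alpha*E_W minimized by Bayesian-regularization
# backpropagation. Synthetic "DR drift" data; alpha/beta fixed here.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)[:, None]                   # elapsed outage time
err = 0.5 * t**2 + 0.05 * rng.normal(size=t.shape)   # synthetic DR error

alpha, beta, lr = 1e-3, 1.0, 0.2
W1, b1 = rng.normal(size=(1, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def objective():
    _, y = forward(t)
    e_d = np.mean((y - err) ** 2)                    # data misfit E_D
    e_w = sum((w ** 2).sum() for w in (W1, W2))      # weight penalty E_W
    return beta * e_d + alpha * e_w

f0 = objective()
for _ in range(800):
    h, y = forward(t)
    g = 2 * beta * (y - err) / len(t)                # dF/dy
    gW2 = h.T @ g + 2 * alpha * W2
    gb2 = g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)                   # backprop through tanh
    gW1 = t.T @ gh + 2 * alpha * W1
    gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

assert objective() < f0                               # regularized cost fell
```

The weight penalty is what improves generalization on small, noisy error series: it keeps the network from fitting the de-noising residue.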
Neural Networks Applied to Thermal Damage Classification in Grinding Process
Spadotto, Marcelo M.; Aguiar, Paulo Roberto de; Sousa, Carlos C. P.; Bianchi, Eduardo C.
2008-01-01
The utilization of a multi-layer perceptron neural network trained with the back-propagation algorithm yielded very good results. Tests carried out to optimize the learning capacity of the neural networks were of utmost importance in the training phase, where the optimum number of hidden-layer neurons, learning rate and momentum for each structure were determined. Once the architecture of the neural network was established with those optimum values, the mean squar...
Galbraith, Craig S.; Merrill, Gregory B.; Kline, Doug M.
2012-01-01
In this study we investigate the underlying relational structure between student evaluations of teaching effectiveness (SETEs) and achievement of student learning outcomes in 116 business related courses. Utilizing traditional statistical techniques, a neural network analysis and a Bayesian data reduction and classification algorithm, we find…
Topographic factor analysis: a Bayesian model for inferring brain networks from neural data.
Directory of Open Access Journals (Sweden)
Jeremy R Manning
Full Text Available The neural patterns recorded during a neuroscientific experiment reflect complex interactions between many brain regions, each comprising millions of neurons. However, the measurements themselves are typically abstracted from that underlying structure. For example, functional magnetic resonance imaging (fMRI) datasets comprise a time series of three-dimensional images, where each voxel in an image roughly reflects the activity of the brain structure(s) located at the corresponding point in space at the time the image was collected. fMRI data often exhibit strong spatial correlations, whereby nearby voxels behave similarly over time as the underlying brain structure modulates its activity. Here we develop topographic factor analysis (TFA), a technique that exploits spatial correlations in fMRI data to recover the underlying structure that the images reflect. Specifically, TFA casts each brain image as a weighted sum of spatial functions. The parameters of those spatial functions, which may be learned by applying TFA to an fMRI dataset, reveal the locations and sizes of the brain structures activated while the data were collected, as well as the interactions between those structures.
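The generative idea behind TFA, each image as a weighted sum of spatial functions, can be sketched with Gaussian "blobs" on a small grid. The centers, widths, and weights below are illustrative, not fitted:

```python
import numpy as np

# Each brain image is modeled as a weighted sum of K spatial basis
# functions (isotropic Gaussian blobs). TFA learns the blob parameters
# and per-image weights; here they are simply chosen by hand.
grid = np.stack(np.meshgrid(np.arange(16), np.arange(16)), -1).reshape(-1, 2)

def rbf(center, width):
    d2 = ((grid - center) ** 2).sum(1)
    return np.exp(-d2 / (2 * width ** 2))

centers = np.array([[4, 4], [11, 10]])        # hypothetical source locations
widths = np.array([2.0, 3.0])
F = np.stack([rbf(c, w) for c, w in zip(centers, widths)])  # (K, voxels)

weights = np.array([[1.0, 0.5], [0.2, 1.3]])  # per-image activations (T, K)
images = weights @ F                           # each row is one brain image

assert images.shape == (2, 256)
# nearby voxels are correlated because they share the same blobs; the
# voxel at a blob center is brighter than a far-away corner voxel
assert images[0][4 * 16 + 4] > images[0][0]
```

Inverting this model (recovering centers, widths, and weights from the images) is what exploits the spatial correlation that voxel-wise analyses ignore.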
Mota-Hernandez, Cinthya; Alvarado-Corona, Rafael
2014-01-01
Tectonic earthquakes of high magnitude can cause considerable losses in terms of human lives, economy and infrastructure, among others. According to an evaluation published by the U.S. Geological Survey, 30 earthquakes have greatly impacted Mexico from the end of the 19th century to the present. Based upon data from the National Seismological Service, in the period between January 1, 2006 and May 1, 2013 there occurred 5,826 earthquakes whose magnitude was greater than 4.0 on the Richter scale (25.54% of the total number of earthquakes registered in the national territory), with the Pacific Plate and the Cocos Plate being the most important sources. This document describes the development of an Artificial Neural Network (ANN) based on a radial topology which seeks to generate a prediction with an error margin lower than 20% and to inform about the probability of a future earthquake. One of the main questions is: can artificial neural networks be applied in seismic forecast...
ECO INVESTMENT PROJECT MANAGEMENT THROUGH TIME APPLYING ARTIFICIAL NEURAL NETWORKS
Directory of Open Access Journals (Sweden)
Tamara Gvozdenović
2007-06-01
Full Text Available The concept of project management expresses an indispensable approach to investment projects. Time is often the most important factor in these projects. The artificial neural network is a data-processing paradigm inspired by the biological brain, and it is used in numerous different fields, among which is project management. This research is oriented to the application of artificial neural networks in managing the time of an investment project. The artificial neural networks are used to define the optimistic, the most probable and the pessimistic times in the PERT method. The program package Matlab: Neural Network Toolbox is used in data simulation. The feed-forward back-propagation network is chosen.
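Once the three PERT estimates are produced (by a network, as above, or by an expert), they combine through the classical PERT formulas. A minimal sketch with hypothetical activity durations:

```python
# Classical PERT combination of the three time estimates:
# a = optimistic, m = most probable, b = pessimistic.
def pert(a, m, b):
    expected = (a + 4 * m + b) / 6       # beta-distribution mean approximation
    variance = ((b - a) / 6) ** 2        # classical PERT variance
    return expected, variance

# Hypothetical estimates for one activity, in weeks.
e, v = pert(a=4.0, m=6.0, b=10.0)
assert e == 38 / 6                        # expected duration ~ 6.33 weeks
assert v == 1.0                           # standard deviation of 1 week
```

Summing the expected values and variances along the critical path then gives the project-duration estimate the PERT method reports.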
Applying Neural Networks to Prices Prediction of Crude Oil Futures
Directory of Open Access Journals (Sweden)
John Wei-Shan Hu
2012-01-01
Full Text Available The global economy has experienced turbulent uneasiness for the past five years owing to large increases in oil prices and terrorist attacks. While accurate prediction of oil prices is important, it is extremely difficult; this study therefore attempts to accurately forecast prices of crude oil futures by adopting three popular neural network methods: the back-propagation multilayer perceptron (MLP), the Elman recurrent neural network (ERNN), and the recurrent fuzzy neural network (RFNN). Experimental results indicate that the use of neural networks to forecast crude oil futures prices is appropriate and that consistent learning is achieved by employing different training times. Our results further demonstrate that, in most situations, learning performance can be improved by increasing the training time. Moreover, the RFNN has the best predictive power and the MLP the worst among the three underlying neural networks. Under the ERNN and the RFNN, the predictive power improves when increasing the training time; the exceptional case is the MLP, whose predictive power improves when reducing the training time. To sum up, we conclude that the RFNN outperformed the other two neural networks in forecasting crude oil futures prices.
Directory of Open Access Journals (Sweden)
Hacene MELLAH
2016-07-01
Full Text Available The objective of this paper is to develop an Artificial Neural Network (ANN) model to estimate simultaneously the parameters and state of a brushed DC machine. The proposed ANN estimator is novel in the sense that it simultaneously estimates temperature, speed and rotor resistance based only on the measurement of the voltage and current inputs. Many types of ANN estimators have been designed by researchers during the last two decades, each for a specific application. The thermal behavior of the motor is very slow, which leads to large data sets. Standard ANNs often use a Multi-Layer Perceptron (MLP) trained with Levenberg-Marquardt backpropagation (LMBP); however, LMBP scales poorly to large amounts of data, so an MLP based on LMBP is no longer suitable in our case. As a solution, we propose a Cascade-Forward Neural Network (CFNN) trained with Bayesian regularization backpropagation (BRBP). To test the robustness of our estimator, random white Gaussian noise was added to the data sets. The proposed estimator is, in our view, accurate and robust.
Bayesian estimation for a parametric Markov Renewal model applied to seismic data
Epifani, I.; Ladelli, L.; Pievatolo, A.
2014-01-01
This paper presents a complete methodology for Bayesian inference on a semi-Markov process, from the elicitation of the prior distribution to the computation of posterior summaries, including guidance for its implementation. The inter-occurrence times (conditional on the transition between two given states) are assumed to be Weibull-distributed. We examine the elicitation of the joint prior density of the shape and scale parameters of the Weibull distributions, deriving a specific class of...
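The generative model described, an embedded Markov chain whose holding times are Weibull-distributed conditional on the transition, can be simulated in a few lines. The transition matrix and Weibull parameters below are illustrative, not estimated from seismic data:

```python
import numpy as np

# Markov renewal process: state jumps follow the embedded chain P, and
# the inter-occurrence time for a transition i -> j is drawn from
# Weibull(shape[i, j], scale[i, j]). All parameters are made up.
rng = np.random.default_rng(1)
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])                  # embedded Markov chain
shape = np.array([[1.5, 0.8], [1.0, 2.0]])
scale = np.array([[2.0, 1.0], [3.0, 0.5]])

state, t, path = 0, 0.0, [0]
for _ in range(1000):
    nxt = rng.choice(2, p=P[state])
    # numpy's weibull() has unit scale; multiply by the scale parameter
    t += scale[state, nxt] * rng.weibull(shape[state, nxt])
    state = nxt
    path.append(state)

assert t > 0 and len(path) == 1001
```

Bayesian inference then runs this logic in reverse: given observed transitions and waiting times, it updates the joint prior on the shape and scale parameters.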
LVQ and backpropagation neural networks applied to NASA SSME data
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors, which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
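The LVQ compression step can be sketched with the classic LVQ1 rule: move the winning prototype toward same-class samples and away from different-class ones. The two-dimensional synthetic data below stand in for SSME sensor vectors:

```python
import numpy as np

# LVQ1: a small codebook of labeled prototypes summarizes the training
# set. Synthetic two-class data; real SSME vectors are much larger.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

proto = np.array([[0.5, 0.5], [1.5, 1.5]])   # one prototype per class
labels = np.array([0, 1])
lr = 0.1
for xi, yi in zip(X, y):
    w = np.argmin(((proto - xi) ** 2).sum(1))        # winning prototype
    sign = 1.0 if labels[w] == yi else -1.0          # attract or repel
    proto[w] += sign * lr * (xi - proto[w])

# prototypes end near their class means, (0, 0) and (2, 2)
assert np.linalg.norm(proto[0] - 0) < 0.5
assert np.linalg.norm(proto[1] - 2) < 0.5
```

Training the backpropagation network on the codebook instead of the raw firings is what yields the compression ratios the paper compares.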
Bayesian Statistical Analysis Applied to NAA Data for Neutron Flux Spectrum Determination
Chiesa, D.; Previtali, E.; Sisti, M.
2014-04-01
In this paper, we present a statistical method, based on Bayesian statistics, to evaluate the neutron flux spectrum from the activation data of different isotopes. The experimental data were acquired during a neutron activation analysis (NAA) experiment [A. Borio di Tigliole et al., Absolute flux measurement by NAA at the Pavia University TRIGA Mark II reactor facilities, ENC 2012 - Transactions Research Reactors, ISBN 978-92-95064-14-0, 22 (2012)] performed at the TRIGA Mark II reactor of Pavia University (Italy). In order to evaluate the neutron flux spectrum, subdivided into energy groups, we must solve a system of linear equations containing the grouped cross sections and the activation rate data. We solve this problem with Bayesian statistical analysis, including the uncertainties of the coefficients and the a priori information about the neutron flux. A program for the analysis of Bayesian hierarchical models, based on Markov Chain Monte Carlo (MCMC) simulations, is used to define the statistical model of the problem and solve it. The energy group fluxes and their uncertainties are then determined with great accuracy and the correlations between the groups are analyzed. Finally, the dependence of the results on the prior distribution choice and on the group cross section data is investigated to confirm the reliability of the analysis.
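The core inference, activation rates linearly related to group fluxes through grouped cross sections, can be sketched with a toy random-walk Metropolis sampler under a Gaussian noise model and a flat positivity prior. All numbers are illustrative, not reactor data, and the paper uses a dedicated hierarchical-model MCMC package rather than this hand-rolled chain:

```python
import numpy as np

# Activation rates y relate to group fluxes phi via y = A @ phi, with
# A the grouped cross sections. Sample the posterior of phi by
# random-walk Metropolis; everything here is synthetic.
rng = np.random.default_rng(2)
A = np.array([[1.0, 0.2], [0.3, 1.5], [0.8, 0.8]])   # toy cross sections
phi_true = np.array([2.0, 1.0])
sigma = 0.05
y = A @ phi_true + rng.normal(0, sigma, 3)

def log_post(phi):
    if np.any(phi <= 0):                  # flat prior on phi > 0
        return -np.inf
    r = y - A @ phi
    return -0.5 * (r ** 2).sum() / sigma ** 2

phi, lp, samples = np.array([1.0, 1.0]), None, []
lp = log_post(phi)
for i in range(6000):
    prop = phi + rng.normal(0, 0.05, 2)
    lp_new = log_post(prop)
    if np.log(rng.uniform()) < lp_new - lp:
        phi, lp = prop, lp_new
    if i >= 1000:                          # discard burn-in
        samples.append(phi)

est = np.mean(samples, 0)
assert np.allclose(est, phi_true, atol=0.3)
```

The sample covariance of the chain also gives the between-group correlations that the paper analyzes.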
Humphrey, Greer B.; Gibbs, Matthew S.; Dandy, Graeme C.; Maier, Holger R.
2016-09-01
Monthly streamflow forecasts are needed to support water resources decision making in the South East of South Australia, where baseflow represents a significant proportion of the total streamflow and soil moisture and groundwater are important predictors of runoff. To address this requirement, the utility of a hybrid monthly streamflow forecasting approach is explored, whereby simulated soil moisture from the GR4J conceptual rainfall-runoff model is used to represent initial catchment conditions in a Bayesian artificial neural network (ANN) statistical forecasting model. To assess the performance of this hybrid forecasting method, a comparison is undertaken of the relative performances of the Bayesian ANN, the GR4J conceptual model and the hybrid streamflow forecasting approach for producing 1-month ahead streamflow forecasts at three key locations in the South East of South Australia. Particular attention is paid to the quantification of uncertainty in each of the forecast models and the potential for reducing forecast uncertainty by using the hybrid approach is considered. Case study results suggest that the hybrid models developed in this study are able to take advantage of the complementary strengths of both the ANN models and the GR4J conceptual models. This was particularly the case when forecasting high flows, where the hybrid models were shown to outperform the two individual modelling approaches in terms of the accuracy of the median forecasts, as well as reliability and resolution of the forecast distributions. In addition, the forecast distributions generated by the hybrid models were up to 8 times more precise than those based on climatology; thus, providing a significant improvement on the information currently available to decision makers.
Convolutional Neural Networks Applied to House Numbers Digit Classification
Sermanet, Pierre; LeCun, Yann
2012-01-01
We classify digits of real-world house numbers using convolutional neural networks (ConvNets). ConvNets are hierarchical feature learning neural networks whose structure is biologically inspired. Unlike many popular vision approaches that are hand-designed, ConvNets can automatically learn a unique set of features optimized for a given task. We augmented the traditional ConvNet architecture by learning multi-stage features and by using Lp pooling, and established a new state of the art of 94.85% accuracy on the SVHN dataset (45.2% error improvement). Furthermore, we analyze the benefits of different pooling methods and multi-stage features in ConvNets. The source code and a tutorial are available at eblearn.sf.net.
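The Lp pooling operator mentioned above computes, for each pooling region R, the value (sum over x in R of |x|^p)^(1/p): p = 1 behaves like sum/average pooling, and large p approaches max pooling. A minimal sketch on a toy feature map:

```python
import numpy as np

# Lp pooling over non-overlapping size x size regions of a 2-D map.
def lp_pool(x, size, p):
    h, w = x.shape[0] // size, x.shape[1] // size
    blocks = x[:h * size, :w * size].reshape(h, size, w, size)
    return (np.abs(blocks) ** p).sum(axis=(1, 3)) ** (1.0 / p)

x = np.array([[1.0, 2.0, 0.0, 1.0],
              [3.0, 4.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 2.0],
              [0.0, 1.0, 2.0, 2.0]])

out = lp_pool(x, size=2, p=2)
assert out.shape == (2, 2)
assert np.isclose(out[0, 0], np.sqrt(1 + 4 + 9 + 16))   # sqrt(30)

# large p approaches max pooling over each block
assert np.allclose(lp_pool(x, 2, 100),
                   x.reshape(2, 2, 2, 2).max(axis=(1, 3)), atol=0.1)
```

Intermediate p values interpolate between the two regimes, which is the design space the paper explores.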
Radial basis function neural networks applied to NASA SSME data
Wheeler, Kevin R.; Dhawan, Atam P.
1993-01-01
This paper presents a brief report on the application of Radial Basis Function Neural Networks (RBFNN) to the prediction of sensor values for fault detection and diagnosis of the Space Shuttle's Main Engines (SSME). The location of the Radial Basis Function (RBF) node centers was determined with a K-means clustering algorithm. A neighborhood operation about these center points was used to determine the variances of the individual processing nodes.
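The two-step RBF setup described, centers from K-means and per-node variances from a neighborhood of each center, can be sketched as follows. The data are synthetic stand-ins for SSME sensor values, and the "neighborhood" here is simply the set of points assigned to each center:

```python
import numpy as np

# Step 1: place RBF centers with Lloyd's k-means iterations.
# Step 2: set each node's variance from the points assigned to it.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.4, (60, 2)), rng.normal(2, 0.4, (60, 2))])

centers = X[[0, 60]].copy()                 # one seed from each cluster
for _ in range(10):
    d = ((X[:, None] - centers[None]) ** 2).sum(-1)
    assign = d.argmin(1)                    # nearest-center assignment
    for k in range(2):
        centers[k] = X[assign == k].mean(0)

variances = np.array([X[assign == k].var() for k in range(2)])

# centers land near the two cluster means, one per cluster
assert sorted(np.round(c.mean()) for c in centers) == [-2.0, 2.0]
assert np.all(variances > 0)
```

Each (center, variance) pair then defines one Gaussian processing node; a linear output layer on top of the node activations completes the RBFNN.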
Applying Neural Network in Evaporative Cooler Performance Prediction
Institute of Scientific and Technical Information of China (English)
QIANG Tian-wei; SHEN Heng-gen; HUANG Xiang; XUAN Yong-mei
2007-01-01
The back-propagation (BP) neural network is created to predict the performance of a direct evaporative cooling (DEC) air conditioner with GLASdek pads. Experimental data on the performance of the DEC air conditioner are obtained. Some of the data are used to train the network until it approximates the underlying function; the network is then simulated with the remaining data. The predicted results show a satisfying agreement.
Sparse Bayesian framework applied to 3D super-resolution reconstruction in fetal brain MRI
Becerra, Laura C.; Velasco Toledo, Nelson; Romero Castro, Eduardo
2015-01-01
Fetal Magnetic Resonance (FMR) is an imaging technique that is becoming increasingly important, as it allows assessing brain development and thus making an early diagnosis of congenital abnormalities. Spatial resolution is limited by the short acquisition time and the unpredictable fetal movements; in consequence, the resulting images are characterized by non-parallel projection planes composed of anisotropic voxels. Sparse Bayesian representation is a flexible strategy able to model complex relationships. Super-resolution is approached as a regression problem, whose main advantage is the capability to learn data relations from observations. Quantitative performance evaluation was carried out using synthetic images, and the proposed method demonstrates better reconstruction quality than the standard interpolation approach. The presented method is a promising approach to improving the quality of information about the 3-D fetal brain structure, which in turn supports the early diagnosis of congenital abnormalities.
Bayesian Source Separation Applied to Identifying Complex Organic Molecules in Space
Knuth, Kevin H; Choinsky, Joshua; Maunu, Haley A; Carbon, Duane F
2014-01-01
Emission from a class of benzene-based molecules known as Polycyclic Aromatic Hydrocarbons (PAHs) dominates the infrared spectrum of star-forming regions. The observed emission appears to arise from the combined emission of numerous PAH species, each with its unique spectrum. Linear superposition of the PAH spectra identifies this problem as a source separation problem. It is, however, a formidable instance of source separation, given that the distinct PAH sources potentially number in the hundreds or even thousands, and there is only one measured spectral signal for a given astrophysical site. Fortunately, the source spectra of the PAHs are known, but the signal is also contaminated by other spectral sources. We describe our ongoing work in developing Bayesian source separation techniques relying on nested sampling in conjunction with an ON/OFF mechanism enabling simultaneous estimation of the probability that a particular PAH species is present and its contribution to the spectrum.
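The linear superposition at the heart of the problem is y = a @ S, where the rows of S are the known PAH source spectra and a holds their abundances. The paper estimates a (and per-species presence probabilities) with Bayesian nested sampling; in the sketch below, ordinary least squares stands in simply to show the forward model, and all spectra are synthetic:

```python
import numpy as np

# Observed spectrum = weighted sum of known source spectra + noise.
rng = np.random.default_rng(4)
wav = np.linspace(3, 20, 200)                  # wavelength grid (microns)

def band(c, w):                                 # a fake emission feature
    return np.exp(-0.5 * ((wav - c) / w) ** 2)

S = np.vstack([band(6.2, 0.3) + band(11.3, 0.4),   # "PAH species" 1
               band(7.7, 0.5) + band(8.6, 0.3)])   # "PAH species" 2
a_true = np.array([1.5, 0.7])
y = a_true @ S + rng.normal(0, 0.01, wav.size)

# Least-squares stand-in for the Bayesian abundance estimate.
a_hat, *_ = np.linalg.lstsq(S.T, y, rcond=None)
assert np.allclose(a_hat, a_true, atol=0.05)
```

With hundreds of candidate species, many of them spectrally similar, the least-squares problem becomes badly conditioned, which is why the paper turns to nested sampling with explicit ON/OFF presence indicators.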
Artificial metaplasticity neural network applied to credit scoring.
Marcano-Cedeño, Alexis; Marin-de-la-Barcena, A; Jimenez-Trillo, J; Piñuela, J A; Andina, D
2011-08-01
The assessment of the risk of default on credit is important for financial institutions. Different Artificial Neural Networks (ANN) have been suggested to tackle the credit scoring problem; however, the obtained error rates are often high. In the search for the best ANN algorithm for credit scoring, this paper contributes the application of an ANN training algorithm inspired by the neurons' biological property of metaplasticity. This algorithm is especially efficient when few patterns of a class are available, or when information inherent to low-probability events is crucial for a successful application, as weight updating is emphasized more for the less frequent activations than for the more frequent ones. Two well-known and readily available data sets, the Australian and German credit data sets, were used to test the algorithm. The results obtained by the AMMLP were shown to be superior to those of state-of-the-art classification algorithms in credit scoring.
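The metaplasticity idea as described, weight updates overemphasized for infrequent activations, can be sketched by rescaling a plain perceptron-style update with a running estimate of each input's activation probability. The rescaling function and data are illustrative, not the paper's exact AMMLP weighting:

```python
import numpy as np

# A rarely-active input carries the label; metaplasticity-style scaling
# (step size ~ 1 / activation probability) lets its weight grow fast.
rng = np.random.default_rng(5)
w = np.zeros(3)
p_act = np.full(3, 0.5)                      # running activation estimate
for _ in range(200):
    x = (rng.uniform(size=3) < [0.9, 0.5, 0.05]).astype(float)
    p_act = 0.99 * p_act + 0.01 * x          # update frequency estimate
    target = x[2]                             # rare input determines label
    y = float(w @ x > 0.5)
    scale = 1.0 / np.maximum(p_act, 0.05)     # rare activations: big steps
    w += 0.1 * scale * (target - y) * x

# the rarely-active input ends up with the largest weight
assert np.argmax(w) == 2
```

This is the intuition behind the paper's claim that the method shines when few patterns of a class are available: rare but informative activations are not drowned out by frequent ones.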
The Theory of Neural Cognition Applied to Robotics
Directory of Open Access Journals (Sweden)
Claude F. Touzet
2015-06-01
Full Text Available The Theory of neural Cognition (TnC) states that the brain does not process information, it only represents information (i.e., it is 'only' a memory). The TnC explains how a memory can become an actor pursuing various goals, and proposes explanations concerning the implementation of a large variety of cognitive abilities, such as attention, memory, language, planning, intelligence, emotions, motivation, pleasure, consciousness and personality. The explanatory power of this new framework extends further still, tackling special psychological states such as hypnosis, the placebo effect and sleep, and brain diseases such as autism, Alzheimer's disease and schizophrenia. The most interesting findings concern robotics: because the TnC considers the cortical column (instead of the neuron) to be the key cognitive unit, it reduces the requirements for a brain implementation to only 160,000 units (instead of 86 billion neurons). A robot exhibiting human-like cognitive abilities is therefore within our reach.
Neural network applied to elemental archaeological Marajoara ceramic compositions
Energy Technology Data Exchange (ETDEWEB)
Toyota, Rosimeiri G.; Munita, Casimiro S., E-mail: rosimeiritoy@yahoo.com.b, E-mail: camunita@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Boscarioli, Clodis, E-mail: boscarioli@gmail.co [Universidade Estadual do Oeste do Parana, Cascavel, PR (Brazil). Centro de Ciencias Exatas e Tecnologicas. Colegiado de Informatica; Hernandez, Emilio D.M., E-mail: boscarioli@gmail.co [Universidade de Sao Paulo (USP), SP (Brazil). Escola Politecnica; Neves, Eduardo G.; Demartini, Celia C., E-mail: eduardo@pq.cnpq.b [Museu de Arqueologia e Etnologia (MAE/USP), Sao Paulo, SP (Brazil)
2009-07-01
In the last decades several analytical techniques have been used in archaeological ceramics studies. However, instrumental neutron activation analysis (INAA) employing gamma-ray spectrometry seems to be the most suitable technique because it is a simple analytical method in its purely instrumental form. The purpose of this work was to determine the concentrations of Ce, Co, Cr, Cs, Eu, Fe, Hf, K, La, Lu, Na, Nd, Rb, Sb, Sc, Sm, Ta, Tb, Th, U, Yb, and Zn in 160 original Marajoara ceramic fragments by INAA. The Marajoara ceramic culture was sophisticated and well developed; it reached its peak between the 5th and 14th centuries on Marajo Island, located in the Amazon River delta area of Brazil. The purpose of the quantitative data was to identify compositionally homogeneous groups within the database. With this in mind, the data set was first converted to base-10 logarithms to compensate for the differences in magnitude between major elements and trace elements, and also to yield a closer-to-normal distribution for several trace elements. After that, the data were analyzed using the Mahalanobis distance, with Wilks' lambda as the critical value, to identify the outliers. The similarities among the samples were studied by means of cluster analysis, principal components analysis and discriminant analysis. Additional confirmation of these groups was made by using elemental concentration bivariate plots. The results showed that there were two very well defined groups in the data set. In addition, the database was studied using an artificial neural network with the unsupervised learning strategy known as self-organizing maps to classify the Marajoara ceramics. The experiments carried out showed that the self-organizing map artificial neural network is capable of discriminating ceramic fragments like multivariate statistical methods, and, again, the results showed that the database was formed by two groups. (author)
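The screening step described, log10-transforming the concentrations and flagging fragments with an extreme Mahalanobis distance from the group centroid, can be sketched as follows. The concentrations are synthetic, and a chi-square cutoff stands in for the Wilks' lambda criterion the paper uses:

```python
import numpy as np

# Synthetic "element concentrations" for 80 fragments x 3 elements,
# lognormally distributed, with one deliberately planted outlier.
rng = np.random.default_rng(6)
conc = rng.lognormal(mean=[2.0, -1.0, 0.5], sigma=0.2, size=(80, 3))
conc[0] *= 10.0                                   # plant an outlier

logx = np.log10(conc)                             # base-10 log transform
mu = logx.mean(0)
cov = np.cov(logx, rowvar=False)
diff = logx - mu
d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)

cutoff = 16.3                                     # ~ chi2(3 df, 0.999)
assert d2[0] > cutoff                             # planted outlier flagged
assert (d2[1:] > cutoff).mean() < 0.05            # few false alarms
```

After removing flagged fragments, cluster analysis, PCA, or a self-organizing map runs on the cleaned log-transformed matrix.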
Applying deep neural networks to HEP job classification
Wang, L.; Shi, J.; Yan, X.
2015-12-01
The cluster of the IHEP computing center is a middle-sized computing system which provides 10 thousand CPU cores, 5 PB of disk storage, and 40 GB/s of IO throughput. Its 1000+ users come from a variety of HEP experiments. In such a system, job classification is an indispensable task. Although an experienced administrator can classify a HEP job by its IO pattern, it is impractical to classify millions of jobs manually. We present how to solve this problem with deep neural networks in a supervised learning way. Firstly, we built a training data set of 320K samples with an IO pattern collection agent and a semi-automatic process of sample labelling. Then we implemented and trained DNN models with Torch. During model training, several meta-parameters were tuned with cross-validation. Test results show that a 5-hidden-layer DNN model achieves 96% precision on the classification task. By comparison, it outperforms a linear model by 8% in precision.
A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS
Directory of Open Access Journals (Sweden)
A.J.G. da Cruz
1997-12-01
Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Besides, the results showed that this technique can be successfully applied to process control algorithms owing to its low processing time and its flexibility in the incorporation of new data.
Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy
Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.
1998-01-01
We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
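The location principle above, scoring candidate source positions by the Gaussian misfit between predicted and observed slowness vectors, can be sketched on a flat toy geometry. The apparent velocity, noise level, and grid are illustrative, not Stromboli's layered model:

```python
import numpy as np

# For each candidate source, predict the slowness vector at the array
# (direction set by geometry, magnitude by an assumed apparent
# velocity) and score it against the observation.
array_xy = np.array([0.0, 0.0])
v_app = 2.0                                    # assumed velocity, km/s

def predicted_slowness(src):
    d = array_xy - src                          # propagation direction
    return d / np.linalg.norm(d) / v_app

true_src = np.array([0.3, 0.5])
s_obs = predicted_slowness(true_src) + np.array([0.01, -0.01])  # noisy
sigma = 0.02                                    # slowness error, s/km

xs = np.linspace(-1, 1, 81)
grid = np.array([[x, y] for x in xs for y in xs if np.hypot(x, y) > 1e-6])
misfit = np.array([((predicted_slowness(g) - s_obs) ** 2).sum() / sigma ** 2
                   for g in grid])
post = np.exp(-0.5 * (misfit - misfit.min()))   # unnormalized posterior
best = grid[post.argmax()]

# the maximum-likelihood back-azimuth matches the true one, but range
# along the ray is unconstrained by a single slowness vector
az_best = np.arctan2(best[1], best[0])
az_true = np.arctan2(true_src[1], true_src[0])
assert abs(az_best - az_true) < 0.2
```

The flat posterior along the ray mirrors the paper's finding that the radial direction is unconstrained while transverse (and, with a layered model, depth) resolution is excellent.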
Introduction to Bayesian statistics
Bolstad, William M
2016-01-01
There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts present only frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for a Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...
Bayesian artificial intelligence
Korb, Kevin B
2003-01-01
As the power of Bayesian techniques has become more fully realized, the field of artificial intelligence has embraced Bayesian methodology and integrated it to the point where an introduction to Bayesian techniques is now a core course in many computer science programs. Unlike other books on the subject, Bayesian Artificial Intelligence keeps mathematical detail to a minimum and covers a broad range of topics. The authors integrate Bayesian network technology and the learning of Bayesian networks, and apply both to knowledge engineering. They emphasize understanding and intuition but also provide the algorithms and technical background needed for applications. Software, exercises, and solutions are available on the authors' website.
Bayesian artificial intelligence
Korb, Kevin B
2010-01-01
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: a new chapter on Bayesian network classifiers; a new section on object-oriente...
Energy Technology Data Exchange (ETDEWEB)
Cai, C. [CEA, LIST, 91191 Gif-sur-Yvette, France and CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Rodet, T.; Mohammad-Djafari, A. [CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Legoupil, S. [CEA, LIST, 91191 Gif-sur-Yvette (France)
2013-11-15
Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. This transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
Simon; Nazmul Karim M
2001-01-01
Probabilistic neural networks (PNNs) were used in conjunction with the Gompertz model for bacterial growth to classify the lag, logarithmic, and stationary phases in a batch process. Using the fermentation time and the optical density of diluted cell suspensions, sampled from a culture of Bacillus subtilis, PNNs enabled a reliable determination of the growth phases. Based on a Bayesian decision strategy, the Gompertz-based PNN used a newly proposed definition of the lag and logarithmic phases to estimate the latent, logarithmic and stationary phases. This network topology has the potential for use with an on-line turbidimeter for the automation and control of cultivation processes.
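A minimal sketch of the idea behind the record above: a Gompertz growth curve with phases assigned from its instantaneous growth rate. The Zwietering re-parameterisation and all parameter values (A, mu, lam) are generic textbook assumptions, not the paper's fitted values or its exact phase definitions.

```python
import numpy as np

A, mu, lam = 2.0, 0.5, 3.0  # asymptote, max specific growth rate, lag time (h)

def gompertz(t):
    """Zwietering re-parameterised Gompertz curve for optical density."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

def phase(t, eps=0.02):
    """Classify by growth rate: near-zero rate before growth starts is
    'lag', near-zero rate after the asymptote is 'stationary'."""
    dt = 1e-4
    rate = (gompertz(t + dt) - gompertz(t - dt)) / (2 * dt)
    if rate < eps:
        return "lag" if t < lam else "stationary"
    return "log"

phases = (phase(0.5), phase(5.0), phase(30.0))
```

The paper's PNN classifies measured (time, optical density) pairs rather than thresholding a known curve; this sketch only illustrates the phase structure the classifier targets.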
Energy Technology Data Exchange (ETDEWEB)
Jammes, B.; Marpinard, J.C.
1995-12-31
Neural networks are scarcely applied to power electronics. This work addresses two different topics: optimal control and computerized simulation. The learning has been performed through output error feedback. For implementation, a buck converter has been used as a voltage pulse generator. (D.L.) 7 refs.
The Bayesian statistical decision theory applied to the optimization of generating set maintenance
International Nuclear Information System (INIS)
The difficulty in RCM methodology is the allocation of a new periodicity of preventive maintenance on a piece of equipment when a critical failure has been identified: until now this new allocation has been based on the engineer's judgment, and one must wait for a full cycle of feedback experience before validating it. Statistical decision theory could be a more rational alternative for the optimization of preventive maintenance periodicity. This methodology has been applied to inspection and maintenance optimization of cylinders of diesel generator engines of 900 MW nuclear plants, and has shown that the previous preventive maintenance periodicity can be extended. (authors). 8 refs., 5 figs
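The decision-theoretic idea can be illustrated by choosing an inspection period that minimises an expected cost rate, trading the cost of preventive maintenance against the risk of a critical failure before the next inspection. The Weibull failure model and all cost figures below are invented for illustration; the paper's actual diesel-engine data and loss structure are not reproduced.

```python
import numpy as np

def expected_cost_rate(T, c_pm=1.0, c_fail=50.0, shape=2.0, scale=20.0):
    """Expected cost per unit time for inspection period T, with failure
    times assumed Weibull(shape, scale)."""
    p_fail = 1.0 - np.exp(-((T / scale) ** shape))  # P(failure before T)
    return (c_pm + c_fail * p_fail) / T

# Scan candidate periods and keep the minimiser.
Ts = np.linspace(1.0, 60.0, 600)
costs = np.array([expected_cost_rate(T) for T in Ts])
T_opt = Ts[np.argmin(costs)]
```

With these toy numbers the optimum is an interior point: inspecting too often pays the preventive cost needlessly, inspecting too rarely incurs the failure cost.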
Artificial neural networks applied to quantitative elemental analysis of organic material using PIXE
Energy Technology Data Exchange (ETDEWEB)
Correa, R. [Universidad Tecnologica Metropolitana, Departamento de Fisica, Av. Jose Pedro Alessandri 1242, Nunoa, Santiago (Chile)]. E-mail: rcorrea@utem.cl; Chesta, M.A. [Universidad Nacional de Cordoba, Facultad de Matematica, Astronomia y Fisica, Medina Allende s/n Ciudad Universitaria, 5000 Cordoba (Argentina)]. E-mail: chesta@famaf.unc.edu.ar; Morales, J.R. [Universidad de Chile, Facultad de Ciencias, Departamento de Fisica, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: rmorales@uchile.cl; Dinator, M.I. [Universidad de Chile, Facultad de Ciencias, Departamento de Fisica, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: mdinator@uchile.cl; Requena, I. [Universidad de Granada, Departamento de Ciencias de la Computacion e Inteligencia Artificial, Daniel Saucedo Aranda s/n, 18071 Granada (Spain)]. E-mail: requena@decsai.ugr.es; Vila, I. [Universidad de Chile, Facultad de Ciencias, Departamento de Ecologia, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: limnolog@uchile.cl
2006-08-15
An artificial neural network (ANN) has been trained with real-sample PIXE (particle-induced X-ray emission) spectra of organic substances. Following the training stage, the ANN was applied to a subset of similar samples, thus obtaining the elemental concentrations in muscle, liver and gills of Cyprinus carpio. Concentrations obtained with the ANN method are in full agreement with results from a standard analytical procedure, showing the high potential of ANN in PIXE quantitative analyses.
Bar-On, Lynn; Desloovere, Kaat; Molenaers, Guy; Harlaar, J.; Kindt, T; Aertbeliën, Erwin
2014-01-01
Clinical assessment of spasticity is compromised by the difficulty to distinguish neural from non-neural components of increased joint torque. Quantifying the contributions of each of these components is crucial to optimize the selection of anti-spasticity treatments such as botulinum toxin (BTX). The aim of this study was to compare different biomechanical parameters that quantify the neural contribution to ankle joint torque measured during manually-applied passive stretches to the gastrocs...
Ant colony optimization and neural networks applied to nuclear power plant monitoring
International Nuclear Information System (INIS)
A recurring challenge in production processes is the development of monitoring and diagnosis systems. Those systems help in detecting unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected and have no improvements in their topology. This work uses an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm uses Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN is applied to monitoring the IEA-R1 research reactor at IPEN. (author)
Ant colony optimization and neural networks applied to nuclear power plant monitoring
Energy Technology Data Exchange (ETDEWEB)
Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez, E-mail: gean@usp.br, E-mail: delvonei@ipen.br, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
A recurring challenge in production processes is the development of monitoring and diagnosis systems. Those systems help in detecting unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected and have no improvements in their topology. This work uses an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm uses Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN is applied to monitoring the IEA-R1 research reactor at IPEN. (author)
International Nuclear Information System (INIS)
We present, in this paper, a new unsupervised method for joint image super-resolution and separation between smooth and point sources. For this purpose, we propose a Bayesian approach with a Markovian model for the smooth part and Student’s t-distribution for point sources. All model and noise parameters are considered unknown and should be estimated jointly with the images. However, joint estimators (joint MAP or posterior mean) are intractable and an approximation is needed. Therefore, a new gradient-like variational Bayesian method is applied to approximate the true posterior by a free-form separable distribution. A parametric form is obtained by approximating the marginals, but with form parameters that are mutually dependent. Their optimal values are achieved by iterating them until convergence. The method was tested on model-generated data and a real dataset from the Herschel space observatory. (paper)
APPLYING ARTIFICIAL NEURAL NETWORK OPTIMIZED BY FIREWORKS ALGORITHM FOR STOCK PRICE ESTIMATION
Directory of Open Access Journals (Sweden)
Khuat Thanh Tung
2016-04-01
Full Text Available Stock prediction is the task of determining the future value of a company stock traded on an exchange. It plays a crucial role in raising the profit gained by firms and investors. Over the past few years, many methods have been developed, with plenty of effort focused on machine learning frameworks achieving promising results. In this paper, an approach based on an Artificial Neural Network (ANN) optimized by the Fireworks algorithm, with data preprocessing by the Haar Wavelet, is applied to estimate stock prices. The system was trained and tested with real data of various companies collected from Yahoo Finance. The obtained results are encouraging.
Directory of Open Access Journals (Sweden)
Mike Lonergan
2011-01-01
Full Text Available For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. We present an approximate Bayesian method for fitting pup trajectories, estimating adult population size and investigating alternative biological models. The method is equivalent to fitting a density-dependent Leslie matrix model, within a Bayesian framework, but with the forms of the density-dependent effects as outputs rather than assumptions. It requires fewer assumptions than the state space models currently used and produces similar estimates. We discuss the potential and limitations of the method and suggest that this approach provides a useful tool for at least the preliminary analysis of similar datasets.
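The kind of model the abstract says its method is equivalent to, a density-dependent Leslie matrix, can be sketched as follows. The three-stage structure, survival rates, and the form of density dependence (fecundity declining with pup numbers) are invented for illustration and are not the grey-seal values or the form estimated in the paper.

```python
import numpy as np

def project(n, years=50, f_max=0.9, k=500.0):
    """Project a stage vector n = [pups, juveniles, adults] forward,
    with fecundity reduced as pup numbers grow (density dependence).
    Returns the final stage vector and the pup trajectory."""
    pup_traj = []
    for _ in range(years):
        pups, juv, adults = n
        f = f_max / (1.0 + pups / k)        # density-dependent fecundity
        L = np.array([[0.0, 0.0, f],        # adults produce pups
                      [0.5, 0.0, 0.0],      # pup -> juvenile survival
                      [0.0, 0.8, 0.95]])    # juvenile -> adult, adult survival
        n = L @ n
        pup_traj.append(n[0])
    return n, pup_traj

n_final, pup_traj = project(np.array([100.0, 50.0, 200.0]))
```

In the paper the direction runs the other way: pup counts are the data, and the shape of the density-dependent effect is an output of the approximate Bayesian fit rather than an assumed function like the one above.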
Mike Lonergan; Dave Thompson; Len Thomas; Callan Duck
2011-01-01
1. For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially, but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. 2. We present an approximate Bayesian method for fitting pup trajectories, estimat...
Frühwirth-Schnatter, Sylvia
1990-01-01
In the paper at hand we apply fuzzy set theory to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)
Directory of Open Access Journals (Sweden)
Hanene Rouabeh
2016-02-01
Full Text Available This paper presents a new hybrid technique for digit recognition applied to the speed limit sign recognition task. The complete recognition system consists of the detection and recognition of the speed signs in RGB images. A pretreatment is applied to extract the pictogram from a detected circular road sign, and then the task discussed in this work is employed to recognize digit candidates. To realize a compromise between performance, reduced execution time and optimized memory resources, the developed method is based on the conjoint use of a Neural Network and a Decision Tree. A simple Network is employed first to classify the extracted candidates into three classes, and then a small Decision Tree is used to determine the exact information. This combination is used to reduce the size of the Network as well as memory resource utilization. The evaluation of the technique and the comparison with existing methods show its effectiveness.
A. Korattikara; V. Rathod; K. Murphy; M. Welling
2015-01-01
We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities p(y|x, D), e.g., for applications involving bandits or active learning. One simple ap
Energy Technology Data Exchange (ETDEWEB)
Boulanger, Jean-Philippe [LODYC, UMR CNRS/IRD/UPMC, Tour 45-55/Etage 4/Case 100, UPMC, Paris Cedex 05 (France); University of Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina); Martinez, Fernando; Segura, Enrique C. [University of Buenos Aires, Departamento de Computacion, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina)
2007-02-15
Evaluating the response of climate to greenhouse gas forcing is a major objective of the climate community, and the use of large ensembles of simulations is considered a significant step toward that goal. The present paper thus discusses a new methodology based on neural networks to mix ensembles of climate model simulations. Our analysis consists of one simulation of seven Atmosphere-Ocean Global Climate Models, which participated in the IPCC Project and provided at least one simulation for the twentieth century (20c3m) and one simulation for each of three SRES scenarios: A2, A1B and B1. Our statistical method based on neural networks and Bayesian statistics computes a transfer function between models and observations. Such a transfer function was then used to project future conditions and to derive what we would call the optimal ensemble combination for twenty-first century climate change projections. Our approach is therefore based on one statement and one hypothesis. The statement is that an optimal ensemble projection should be built by giving larger weights to models which have more skill in representing present climate conditions. The hypothesis is that our method based on neural networks actually weights the models that way. While the statement is actually an open question, whose answer may vary according to the region or climate signal under study, our results demonstrate that the neural network approach indeed allows weighting models according to their skill. As such, our method is an improvement on existing Bayesian methods developed to mix ensembles of simulations. However, the general low skill of climate models in simulating precipitation mean climatology implies that the final projection maps (whatever the method used to compute them) may significantly change in the future as models improve. Therefore, the projection results for late twenty-first century conditions are presented as possible projections based on the ...
Wilson, M T; Fung, P K; Robinson, P A; Shemmell, J; Reynolds, J N J
2016-08-01
The calcium-dependent plasticity (CaDP) approach to the modeling of synaptic weight change is applied, via a neural field model, to realistic repetitive transcranial magnetic stimulation (rTMS) protocols. A spatially-symmetric nonlinear neural field model consisting of populations of excitatory and inhibitory neurons is used. The plasticity between excitatory cell populations is then evaluated using a CaDP approach that incorporates metaplasticity. The direction and size of the plasticity (potentiation or depression) depend on both the amplitude of stimulation and the duration of the protocol. The breaks in the inhibitory theta-burst stimulation protocol are crucial to ensuring that the stimulation bursts are potentiating in nature. Tuning the parameters of a spike-timing dependent plasticity (STDP) window with a Monte Carlo approach to maximize agreement between STDP predictions and the CaDP results reproduces a realistically-shaped window with two regions of depression, in agreement with the existing literature. Developing understanding of how TMS interacts with cells at a network level may be important for future investigation. PMID:27259518
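For context, the conventional STDP window that such a Monte Carlo fit is parameterised against is a double exponential in the pre-post spike interval. The amplitudes and time constants below are generic textbook assumptions, not the paper's fitted values, and the paper's fitted window notably differs by showing two regions of depression.

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """Weight change for a pre-to-post spike interval dt (ms).
    dt > 0: pre fires before post -> potentiation;
    dt < 0: post fires before pre -> depression."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.linspace(-100.0, 100.0, 201)
w = stdp_window(dts)  # classic asymmetric double-exponential shape
```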
International Nuclear Information System (INIS)
In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first was dedicated to preprocessing information, using the GMDH algorithm; and the second to processing information using ANNs. The preprocessing was divided in two parts. In the first part, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second part the GMDH was used to study the best set of variables to be used to train the ANNs, resulting in a best estimate of the monitoring variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database using values of +5%, +10%, +15% and +20%. The good results obtained through the present methodology show the viability of using the GMDH algorithm in the study of the best input variables to the ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)
Energy Technology Data Exchange (ETDEWEB)
Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil); Pereira, Iraci Martinez; Silva, Antonio Teixeira e, E-mail: martinez@ipen.b, E-mail: teixeira@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-07-01
In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first was dedicated to preprocessing information, using the GMDH algorithm; and the second to processing information using ANNs. The preprocessing was divided in two parts. In the first part, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second part the GMDH was used to study the best set of variables to be used to train the ANNs, resulting in a best estimate of the monitoring variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database using values of +5%, +10%, +15% and +20%. The good results obtained through the present methodology show the viability of using the GMDH algorithm in the study of the best input variables to the ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)
Directory of Open Access Journals (Sweden)
Lawrence N Kazembe
Full Text Available Despite remarkable gains in life expectancy and declining mortality in the 21st century, in many places, mostly in developing countries, adult mortality has increased, in part due to HIV/AIDS or continued abject poverty levels. Moreover, many factors including behavioural, socio-economic and demographic variables work simultaneously to impact the risk of mortality. Understanding risk factors of adult mortality is crucial for designing appropriate public health interventions. In this paper we propose a structured additive two-part random effects regression model for adult mortality data. Our proposal assumes two processes: (i) whether death occurred in the household (prevalence part), and (ii) the number of reported deaths, if death did occur (severity part). The proposed model specification therefore consists of two generalized linear mixed models (GLMMs) with correlated random effects that permit structured and unstructured spatial components at the regional level. Specifically, the first part assumes a GLMM with a logistic link and the second part explores a count model following either a Poisson or negative binomial distribution. The model was used to analyse adult mortality data of 25,793 individuals from the 2006/2007 Namibian DHS data. Inference is based on the Bayesian framework with appropriate priors discussed.
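The two-part structure described above can be sketched by simulating the two processes separately: a logistic prevalence part for whether any death occurred, and a count severity part for how many, given at least one. The single covariate, all coefficients, and the use of a shifted Poisson as a simple stand-in for a zero-truncated count are illustrative assumptions; the paper's spatial random effects are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

x = rng.normal(size=n)                         # one household covariate
# Part 1 (prevalence): logistic model for P(any death in household).
p_any = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x)))
any_death = rng.random(n) < p_any
# Part 2 (severity): count of deaths given at least one, log link.
lam = np.exp(0.2 + 0.5 * x)
counts = np.where(any_death,
                  1 + rng.poisson(lam),        # shifted Poisson: always >= 1
                  0)                           # households with no death
```

Fitting would maximise the sum of the two parts' log-likelihoods; the key design point is that the zero counts are explained entirely by the prevalence part, so the severity part only models positive counts.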
Applying long short-term memory recurrent neural networks to intrusion detection
Directory of Open Access Journals (Sweden)
Ralf C. Staudemeyer
2015-07-01
Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA / KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets respectively. We evaluated different feature sets for the detection of all attacks within one network and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and 22.13 cost, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is due to the fact that LSTM learns to look back in time and correlate consecutive connection records. For the first time ever, we have demonstrated the usefulness of LSTM networks for intrusion detection.
Directory of Open Access Journals (Sweden)
Satish Kumar
2012-09-01
Full Text Available In this study, a method based on artificial neural networks is applied to the solution of the inverse kinematics of a 2-link serial chain manipulator. The method applied is a multilayer perceptron (MLP) neural network. This unsupervised method learns the functional relationship between input (Cartesian space) and output (joint space) based on a localized adaptation of the mapping, by using the manipulator itself under joint control and adapting the solution based on a comparison between the resulting location of the manipulator's end effector in Cartesian space and the desired location. Even when a manipulator is not available, the approach is still valid if the forward kinematic equations are used as a model of the manipulator. The forward kinematic equations always have a unique solution, and the resulting neural net can be used as a starting point for further refinement when the manipulator does become available. Artificial neural networks, especially MLPs, are used to learn the forward and inverse kinematic equations of a two-degree-of-freedom robot arm. A set of data points was first generated from the kinematic equations, with the input parameters being X and Y coordinates in inches. These data sets were the basis for training and testing the MLP model. Most of the data points were used for training and the remainder for testing. The backpropagation algorithm was used to train the network and update the weights. In this work an epoch-based training method was applied.
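The data-generation step described above rests on the planar 2-link forward kinematics, which always have a unique solution for given joint angles. A sketch, with arbitrary example link lengths (the abstract does not give the arm's dimensions):

```python
import numpy as np

L1, L2 = 10.0, 7.0  # link lengths (inches); example values, not the paper's

def forward(theta1, theta2):
    """End-effector (x, y) for joint angles (radians) of a planar
    2-link arm: each link contributes its length along its own heading."""
    x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
    y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
    return x, y

# Sample joint space and map it through the forward equations, giving
# (input = Cartesian, target = joint) pairs for MLP training.
thetas = np.random.default_rng(1).uniform(0.0, np.pi / 2, size=(500, 2))
xy = np.array([forward(t1, t2) for t1, t2 in thetas])
dataset = np.hstack([xy, thetas])  # columns: x, y, theta1, theta2
```

The inverse problem the network learns is the hard direction: a given (x, y) generally has two joint-angle solutions (elbow up/down), which is precisely why the unique forward map is used to generate training data.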
Institute of Scientific and Technical Information of China (English)
XU Min; ZENG Guang-ming; XU Xin-yi; HUANG Guo-he; SUN Wei; JIANG Xiao-yun
2005-01-01
Bayesian regularized BP neural network (BRBPNN) technique was applied to the chlorophyll-a prediction of the Nanzui water area in Dongting Lake. Through a BP network interpolation method, the input and output samples of the network were obtained. After the selection of input variables using the stepwise/multiple linear regression method in SPSS 11.0 software, the BRBPNN model was established between chlorophyll-a and environmental and biological parameters. The achieved optimal network structure was 3-11-1, with correlation coefficients and mean square errors for the training set and the test set of 0.999 and 0.00078426, and 0.981 and 0.0216, respectively. The sum of squared weights between each input neuron and the hidden layer of optimal BRBPNN models of different structures indicated that the effect of individual input parameters on chlorophyll-a declined in the order: alga amount > Secchi disc depth (SD) > electrical conductivity (EC). Additionally, it also demonstrated that the contributions of these three factors to the change of chlorophyll-a concentration were the largest, while those of total phosphorus (TP) and total nitrogen (TN) were the smallest. All the results showed that the BRBPNN model is capable of automated regularization parameter selection and thus may ensure excellent generalization ability and robustness. This study thus laid the foundation for the application of the BRBPNN model in the analysis of aquatic ecological data (chlorophyll-a prediction) and the explanation of effective eutrophication treatment measures for the Nanzui water area in Dongting Lake.
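Bayesian regularization of the kind behind BRBPNN in essence minimises a combined objective, beta times the sum of squared errors plus alpha times the sum of squared weights, with alpha and beta re-estimated from the data rather than hand-tuned. The effect of the weight penalty can be shown on a toy linear model where the penalised minimiser has a closed form; the data, coefficients, and fixed alpha values below are illustrative assumptions, not the 3-11-1 network or lake data above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=50)

def regularized_fit(alpha, beta=1.0):
    """Closed-form minimiser of beta*||y - Xw||^2 + alpha*||w||^2
    (the linear analogue of the Bayesian-regularized objective)."""
    return np.linalg.solve(beta * X.T @ X + alpha * np.eye(3),
                           beta * X.T @ y)

w_small = regularized_fit(alpha=0.01)   # weak penalty: near least squares
w_large = regularized_fit(alpha=100.0)  # strong penalty: weights shrink
```

In the full Bayesian scheme, alpha/beta plays the role of a noise-to-prior variance ratio and is updated iteratively, which is what "automated regularization parameter selection" refers to.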
Boltzmann learning of parameters in cellular neural networks
DEFF Research Database (Denmark)
Hansen, Lars Kai
1992-01-01
The use of Bayesian methods to design cellular neural networks for signal processing tasks and the Boltzmann machine learning rule for parameter estimation is discussed. The learning rule can be used for models with hidden units, or for completely unsupervised learning. The latter is exemplified by unsupervised adaptation of an image segmentation cellular network. The learning rule is applied to adaptive segmentation of satellite imagery.
Directory of Open Access Journals (Sweden)
Alpo Värri
2007-01-01
Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ‘‘time-frequency moments singular value decomposition (TFM-SVD).’’ In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Directory of Open Access Journals (Sweden)
Koivistoinen Teemu
2007-01-01
Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
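The TFM-SVD idea above can be sketched as follows; the particular moment set and the 2-by-3 matrix layout are illustrative assumptions, not the authors' exact specification:

```python
import numpy as np

def tfm_svd_features(signal):
    """Sketch of TFM-SVD: build a fixed-size matrix from time- and
    frequency-domain moments of a 1-D signal, then return its singular
    values as the feature vector (giving more than one SV)."""
    x = np.asarray(signal, dtype=float)
    spec = np.abs(np.fft.rfft(x))          # frequency series (spectral magnitudes)

    def moments(v):
        m, s = v.mean(), v.std()
        skew = ((v - m) ** 3).mean() / (s ** 3 + 1e-12)
        return [m, s, skew]

    # Fixed 2x3 structure: one row of time moments, one row of frequency moments
    M = np.array([moments(x), moments(spec)])
    return np.linalg.svd(M, compute_uv=False)

feats = tfm_svd_features(np.sin(np.linspace(0, 20, 256)))
print(len(feats))  # → 2
```

The resulting two singular values summarize both domains at once, unlike applying SVD directly to the raw 1-by-n sample array.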
Bayesian Exploratory Factor Analysis
DEFF Research Database (Denmark)
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...
Development of an intelligent system for tool wear monitoring applying neural networks
Directory of Open Access Journals (Sweden)
A. Antić
2005-12-01
Full Text Available Purpose: The objective of the research presented in the paper is to investigate, in laboratory conditions, the application possibilities of the proposed system for tool wear monitoring in hard turning, using modern tools and artificial intelligence (AI) methods. Design/methodology/approach: Basic theoretical principles, computing methods of simulation and neural network training, and the conducted experiments were directed at investigating the adequacy of the proposed setting. Findings: The paper presents tool wear monitoring for hard turning for certain types of neural network configurations, where there are preconditions for extension with dynamic neural networks. Research limitations/implications: Future research should include the integration of the proposed system into the CNC machine, instead of the current separate system, which would provide synchronisation between the system and the machine, i.e. an appropriate reaction by the machine after excessive tool wear is detected. Practical implications: Practical application of the conducted research is possible with certain restrictions, supplemented by an adequate number of experiments directed at the particular combinations of machining materials and tools for which the neural networks are trained. Originality/value: The contribution of the conducted research lies in one possible view of the tool monitoring system model, its design on a modular principle, and the principles of building the neural network.
Directory of Open Access Journals (Sweden)
Shokoufe Tayyebi
2013-01-01
Full Text Available Biosurfactants are surface active compounds produced by various microorganisms. Production of biosurfactants via fermentation of immiscible wastes has the dual benefit of creating economic opportunities for manufacturers while improving environmental health. A predictor system, recommended in such processes, must be scaled up. Hence, four neural networks were developed for dynamic modeling of the biosurfactant production kinetics in the presence of soybean oil or refinery wastes, including acid oil, deodorizer distillate and soap stock. Each proposed feed-forward neural network consists of three layers which are not fully connected. The input and output data for the training and validation of the neural network models were gathered from batch fermentation experiments. The proposed neural network models were evaluated by three statistical criteria (R², RMSE and SE). The typical regression analysis showed high correlation coefficients, greater than 0.971, demonstrating that the neural network is an excellent estimator for prediction of biosurfactant production kinetic data in a two-phase liquid-liquid batch fermentation system. In addition, sensitivity analysis indicates that residual oil has a significant effect (i.e. 49%) on the biosurfactant in the process.
A modular neural network scheme applied to fault diagnosis in electric power systems.
Flores, Agustín; Quiles, Eduardo; García, Emilio; Morant, Francisco; Correcher, Antonio
2014-01-01
This work proposes a new method for fault diagnosis in electric power systems based on neural modules. With this method the diagnosis is performed by assigning a neural module for each type of component comprising the electric power system, whether it is a transmission line, bus or transformer. The neural modules for buses and transformers comprise two diagnostic levels which take into consideration the logic states of switches and relays, both internal and back-up, with the exception of the neural module for transmission lines which also has a third diagnostic level which takes into account the oscillograms of fault voltages and currents as well as the frequency spectrums of these oscillograms, in order to verify if the transmission line had in fact been subjected to a fault. One important advantage of the diagnostic system proposed is that its implementation does not require the use of a network configurator for the system; it does not depend on the size of the power network nor does it require retraining of the neural modules if the power network increases in size, making its application possible to only one component, a specific area, or the whole context of the power system.
A Modular Neural Network Scheme Applied to Fault Diagnosis in Electric Power Systems
Directory of Open Access Journals (Sweden)
Agustín Flores
2014-01-01
Full Text Available This work proposes a new method for fault diagnosis in electric power systems based on neural modules. With this method the diagnosis is performed by assigning a neural module for each type of component comprising the electric power system, whether it is a transmission line, bus or transformer. The neural modules for buses and transformers comprise two diagnostic levels which take into consideration the logic states of switches and relays, both internal and back-up, with the exception of the neural module for transmission lines which also has a third diagnostic level which takes into account the oscillograms of fault voltages and currents as well as the frequency spectrums of these oscillograms, in order to verify if the transmission line had in fact been subjected to a fault. One important advantage of the diagnostic system proposed is that its implementation does not require the use of a network configurator for the system; it does not depend on the size of the power network nor does it require retraining of the neural modules if the power network increases in size, making its application possible to only one component, a specific area, or the whole context of the power system.
Sensitivity analysis by neural networks applied to power systems transient stability
Energy Technology Data Exchange (ETDEWEB)
Lotufo, Anna Diva P.; Lopes, Mara Lucia M.; Minussi, Carlos R. [Departamento de Engenharia Eletrica, UNESP, Campus de Ilha Solteira, Av. Brasil, 56, 15385-000 Ilha Solteira, SP (Brazil)
2007-05-15
This work presents a procedure for transient stability analysis and preventive control of electric power systems, which is formulated by a multilayer feedforward neural network. The neural network training is realized by using the back-propagation algorithm with fuzzy controller and adaptation of the inclination and translation parameters of the nonlinear function. These procedures provide a faster convergence and more precise results, if compared to the traditional back-propagation algorithm. The adaptation of the training rate is effectuated by using the information of the global error and global error variation. After finishing the training, the neural network is capable of estimating the security margin and the sensitivity analysis. Considering this information, it is possible to develop a method for the realization of the security correction (preventive control) for levels considered appropriate to the system, based on generation reallocation and load shedding. An application for a multimachine power system is presented to illustrate the proposed methodology. (author)
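The training-rate adaptation described, driven by the global error and its variation, can be illustrated on a toy one-dimensional problem. The adjustment factors and the quadratic objective below are illustrative assumptions, not the paper's fuzzy controller:

```python
def adapt_rate(rate, err, prev_err, up=1.05, down=0.7):
    # Speed up while the global error is falling, back off when it rises.
    return rate * (up if err < prev_err else down)

# Toy "training": minimize f(w) = (w - 3)^2 by gradient descent with
# the adaptive rate standing in for the fuzzy-controlled training rate.
w, rate, prev_err = 0.0, 0.1, float("inf")
for _ in range(50):
    err = (w - 3) ** 2
    rate = adapt_rate(rate, err, prev_err)
    prev_err = err
    w -= rate * 2 * (w - 3)               # gradient step on f
print(abs(w - 3) < 0.1)
```

The multiplicative increase/decrease gives the faster, more stable convergence the abstract attributes to adapting the rate from the error trend.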
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver. PMID:26340789
Lesaffre, Emmanuel
2012-01-01
The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd
RBF-Type Artificial Neural Network Model Applied in Alloy Design of Steels
Institute of Scientific and Technical Information of China (English)
YOU Wei; LIU Ya-xiu; BAI Bing-zhe; FANG Hong-sheng
2008-01-01
RBF model, a new type of artificial neural network model was developed to design the content of carbon in low-alloy engineering steels. The errors of the ANN model are. MSE 0. 052 1, MSRE 17. 85%, and VOF 1. 932 9. The results obtained are satisfactory. The method is a powerful aid for designing new steels.
Directory of Open Access Journals (Sweden)
Jonas Kaplan
2015-03-01
Full Text Available Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
Bayesian least squares deconvolution
Ramos, A Asensio
2015-01-01
Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Bayesian least squares deconvolution
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
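A minimal numerical sketch of the approach: a linear observation model with a Gaussian-process prior on the profile, whose posterior mean is available in closed form. The kernel, sizes, and the random stand-in for the line-pattern operator are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nl = 30, 200                          # velocity bins, spectral points
v = np.linspace(-1, 1, nv)
# Squared-exponential GP prior covariance over the LSD profile
K = np.exp(-0.5 * (v[:, None] - v[None, :]) ** 2 / 0.2 ** 2)
W = rng.normal(size=(nl, nv)) * 0.1       # stand-in for the line-pattern operator
z_true = np.exp(-0.5 * (v / 0.3) ** 2)    # a smooth "magnetic signal" profile
sigma = 0.05
y = W @ z_true + rng.normal(0.0, sigma, nl)

# Posterior mean of z | y:  (W^T W / s^2 + K^{-1})^{-1} W^T y / s^2
A = W.T @ W / sigma**2 + np.linalg.inv(K + 1e-8 * np.eye(nv))
z_post = np.linalg.solve(A, W.T @ y / sigma**2)
print(np.max(np.abs(z_post - z_true)) < 0.3)
```

The GP prior plays the role described in the abstract: it regularizes the deconvolution while letting the recovered profile adapt to the presence of signal.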
A NOVEL METHODOLOGY FOR CONSTRUCTING RULE-BASED NAÏVE BAYESIAN CLASSIFIERS
Directory of Open Access Journals (Sweden)
Abdallah Alashqur
2015-02-01
Full Text Available Classification is an important data mining technique that is used by many applications. Several types of classifiers have been described in the research literature. Example classifiers are decision tree classifiers, rule-based classifiers, and neural network classifiers. Another popular classification technique is naïve Bayesian classification. Naïve Bayesian classification is a probabilistic classification approach that uses Bayes' theorem to predict the classes of unclassified records. A drawback of naïve Bayesian classification is that every time a new data record is to be classified, the entire dataset needs to be scanned in order to apply a set of equations that perform the classification. Scanning the dataset is normally a very costly step, especially if the dataset is very large. To alleviate this problem, a new approach for using naïve Bayesian classification is introduced in this study. In this approach, a set of classification rules is constructed on top of the naïve Bayesian classifier. Hence we call this approach Rule-based Naïve Bayesian Classifier (RNBC). In RNBC, the dataset is scanned only once, off-line, at the time of building the classification rule set. Subsequent scanning of the dataset is avoided. Furthermore, this study introduces a simple three-step methodology for constructing the classification rule set.
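The single-scan idea behind RNBC can be sketched as follows. This toy version applies the naive-Bayes products directly from precomputed counts; the paper goes one step further and compiles those counts into explicit rules, but the dataset is still scanned only once, at training time:

```python
from collections import defaultdict

def train(records, labels):
    """The single off-line scan: collect class and attribute-value counts."""
    class_n = defaultdict(int)
    attr_n = defaultdict(int)          # (class, attr_index, value) -> count
    for rec, y in zip(records, labels):
        class_n[y] += 1
        for i, v in enumerate(rec):
            attr_n[(y, i, v)] += 1
    return class_n, attr_n

def classify(rec, class_n, attr_n):
    """Classify from counts alone; no rescan of the dataset is needed."""
    total = sum(class_n.values())
    best, best_p = None, -1.0
    for y, n in class_n.items():
        p = n / total
        for i, v in enumerate(rec):
            p *= (attr_n[(y, i, v)] + 1) / (n + 2)   # Laplace smoothing
        if p > best_p:
            best, best_p = y, p
    return best

data = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
lab = ["no", "no", "yes", "yes"]
model = train(data, lab)
print(classify(("rain", "mild"), *model))  # → yes
```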
Draper, D.
2001-01-01
© 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary Definition of the Subject and Introduction The Bayesian Statistical Paradigm Three Examples Comparison with the Frequentist Statistical Paradigm Future Directions Bibliography
Bayesian methods for proteomic biomarker development
Directory of Open Access Journals (Sweden)
Belinda Hernández
2015-12-01
In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.
Fuzzy Optimization of an Elevator Mechanism Applying the Genetic Algorithm and Neural Networks
Institute of Scientific and Technical Information of China (English)
(no author listed)
2005-01-01
Considering the indefinite character of the values of the design parameters, and subject to constraints on load-bearing capacity and stiffness, a fuzzy optimization mathematical model is set up to minimize the volume of the tooth corona of a worm gear in an elevator mechanism. The method of second-class comprehensive evaluation was used based on the optimal level cut set, so that the optimal level value of every fuzzy constraint can be attained and the fuzzy optimization is transformed into an ordinary optimization. The fast back-propagation algorithm is adopted to train feed-forward neural networks so as to fit a relative coefficient. Then the fitness function with penalty terms is built by a penalty strategy, the neural network program is called, and solver functions of the Genetic Algorithm Toolbox of Matlab software are adopted to solve the optimization model.
Willingness to purchase Genetically Modified food: an analysis applying artificial Neural Networks
Salazar-Ordóñez, M.; Rodríguez-Entrena, M.; Becerra-Alonso, D.
2014-01-01
Findings about the consumer decision-making process regarding GM food purchase remain mixed and inconclusive. This paper offers a model which classifies willingness to purchase GM food, using data from 399 surveys in Southern Spain. Willingness to purchase has been measured using three dichotomous questions, and classification, based on attitudinal, cognitive and socio-demographic factors, has been made by an artificial neural network model. The results show 74% accuracy to forecast the willin...
Upon the opportunity to apply ART2 Neural Network for clusterization of biodiesel fuels
Directory of Open Access Journals (Sweden)
Petkov T.
2016-03-01
Full Text Available A chemometric approach using an artificial neural network for clusterization of biodiesels was developed. It is based on the artificial ART2 neural network. Gas chromatography (GC) and gas chromatography - mass spectrometry (GC-MS) were used for quantitative and qualitative analysis of biodiesels, produced from different feedstocks, and FAME (fatty acid methyl ester) profiles were determined. In total, 96 analytical results for 7 different classes of biofuel plants (sunflower, rapeseed, corn, soybean, palm, peanut, “unknown”) were used as objects. The analysis of biodiesels showed the content of five major FAMEs (C16:0, C18:0, C18:1, C18:2, C18:3) and those components were used as inputs to the model. After training with 6 samples, for which the origin was known, the ANN was verified and tested with ninety “unknown” samples. The present research demonstrated the successful application of a neural network for recognition of biodiesels according to their feedstock, which gives information on their properties and handling.
Upon the opportunity to apply ART2 Neural Network for clusterization of biodiesel fuels
Petkov, T.; Mustafa, Z.; Sotirov, S.; Milina, R.; Moskovkina, M.
2016-03-01
A chemometric approach using an artificial neural network for clusterization of biodiesels was developed. It is based on the artificial ART2 neural network. Gas chromatography (GC) and gas chromatography - mass spectrometry (GC-MS) were used for quantitative and qualitative analysis of biodiesels, produced from different feedstocks, and FAME (fatty acid methyl ester) profiles were determined. In total, 96 analytical results for 7 different classes of biofuel plants (sunflower, rapeseed, corn, soybean, palm, peanut, "unknown") were used as objects. The analysis of biodiesels showed the content of five major FAMEs (C16:0, C18:0, C18:1, C18:2, C18:3) and those components were used as inputs to the model. After training with 6 samples, for which the origin was known, the ANN was verified and tested with ninety "unknown" samples. The present research demonstrated the successful application of a neural network for recognition of biodiesels according to their feedstock, which gives information on their properties and handling.
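The clustering step can be illustrated with a deliberately simplified ART-style procedure: a sample joins the first cluster whose prototype passes a vigilance test, otherwise it founds a new cluster. This is a sketch, not the full ART2 dynamics, and the FAME profiles are synthetic:

```python
def art_cluster(samples, vigilance=0.1):
    """ART-style incremental clustering of FAME profile vectors."""
    protos = []            # one prototype vector per cluster
    assign = []
    for s in samples:
        d = [max(abs(a - b) for a, b in zip(s, p)) for p in protos]
        if d and min(d) <= vigilance:          # vigilance test passed
            k = d.index(min(d))
            protos[k] = [(a + b) / 2 for a, b in zip(protos[k], s)]  # update prototype
        else:                                  # mismatch: create a new cluster
            protos.append(list(s))
            k = len(protos) - 1
        assign.append(k)
    return assign

# Two synthetic feedstock profiles (C16:0, C18:0, C18:1, C18:2, C18:3 fractions)
sunflower = [0.06, 0.04, 0.25, 0.62, 0.01]
rapeseed = [0.05, 0.02, 0.60, 0.21, 0.10]
print(art_cluster([sunflower, rapeseed, [0.07, 0.04, 0.26, 0.60, 0.01]]))  # → [0, 1, 0]
```

The third, sunflower-like sample lands in the sunflower cluster, mirroring how the trained network recognizes feedstock from the five FAME fractions.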
DEFF Research Database (Denmark)
Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;
2015-01-01
A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimental...
Loredo, T J
2004-01-01
I describe a framework for adaptive scientific exploration based on iterating an Observation--Inference--Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data--measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object--show the approach can significantly improve observational eff...
International Nuclear Information System (INIS)
In computational physics, proton transfer phenomena can be viewed as pattern classification problems based on a set of input features allowing classification of the proton motion into two categories: transfer ‘occurred’ and transfer ‘not occurred’. The goal of this paper is to evaluate the use of artificial neural networks in the classification of proton transfer events, based on the feed-forward back propagation neural network, used as a classifier to distinguish between the two transfer cases. In this paper, we use a newly developed data mining and pattern recognition tool for automating, controlling, and drawing charts of the output data of an existing Empirical Valence Bond code. The study analyzes the need for pattern recognition in aqueous proton transfer processes and how the learning approach in error back propagation (multilayer perceptron algorithms) could be satisfactorily employed in the present case. We present a tool for pattern recognition and validate the code on a real physical case study. The results of applying the artificial neural networks methodology to crowd patterns based upon selected physical properties (e.g., temperature, density) show the abilities of the network to learn proton transfer patterns corresponding to properties of the aqueous environments, which is in turn proved to be fully compatible with previous proton transfer studies. (condensed matter: structural, mechanical, and thermal properties)
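The two-category transfer classification can be illustrated with a single logistic neuron trained by gradient descent. The two features, labels, and network size here are synthetic stand-ins for the multilayer perceptron on EVB output described above:

```python
import math, random

random.seed(1)
# Synthetic features standing in for e.g. temperature and density,
# with "transfer occurred" defined by a simple linear boundary.
data = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x + y > 1.0 else 0 for x, y in data]

w1 = w2 = b = 0.0
for _ in range(500):                       # gradient-descent epochs
    for (x, y), t in zip(data, labels):
        p = 1 / (1 + math.exp(-(w1 * x + w2 * y + b)))
        g = p - t                          # dLoss/dz for cross-entropy loss
        w1 -= 0.5 * g * x; w2 -= 0.5 * g * y; b -= 0.5 * g

acc = sum((1 / (1 + math.exp(-(w1 * x + w2 * y + b))) > 0.5) == bool(t)
          for (x, y), t in zip(data, labels)) / len(data)
print(acc > 0.9)
```

A real multilayer perceptron adds hidden layers so nonlinear occurred/not-occurred boundaries can also be learned.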
Bayesian modeling using WinBUGS
Ntzoufras, Ioannis
2009-01-01
A hands-on introduction to the principles of Bayesian modeling using WinBUGS Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov Chain Monte Carlo algorithms in Bayesian inference Generalized linear models Bayesian hierarchical models Predictive distribution and model checking Bayesian model and variable evaluation Computational notes and screen captures illustrate the use of both WinBUGS as well as R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...
Directory of Open Access Journals (Sweden)
Garland, Eric L.
2016-07-01
Full Text Available Eric L Garland,1,2 Matthew O Howard,3 Sarah E Priddy,1 Patrick A McConnell,4 Michael R Riquino,1 Brett Froeliger4 1College of Social Work, 2Huntsman Cancer Institute, University of Utah, Salt Lake City, UT, USA; 3School of Social Work, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; 4Department of Neuroscience, Medical University of South Carolina, Charleston, SC, USA Abstract: Dual-process models from neuroscience suggest that addiction is driven by dysregulated interactions between bottom-up neural processes underpinning reward learning and top-down neural functions subserving executive function. Over time, drug use causes atrophy in prefrontally mediated cognitive control networks and hijacks striatal circuits devoted to processing natural rewards in service of compulsive seeking of drug-related reward. In essence, mindfulness-based interventions (MBIs) can be conceptualized as mental training programs for exercising, strengthening, and remediating these functional brain networks. This review describes how MBIs may remediate addiction by regulating frontostriatal circuits, thereby restoring an adaptive balance between these top-down and bottom-up processes. Empirical evidence is presented suggesting that MBIs facilitate cognitive control over drug-related automaticity, attentional bias, and drug cue reactivity, while enhancing responsiveness to natural rewards. Findings from the literature are incorporated into an integrative account of the neural mechanisms of mindfulness-based therapies for effecting positive behavior change in the context of addiction recovery. Implications of our theoretical framework are presented with respect to how these insights can inform the addiction therapy process. Keywords: mindfulness, frontostriatal, savoring, cue reactivity, hedonic dysregulation, reward, addiction
Institute of Scientific and Technical Information of China (English)
WANG Xin-yu; WU Rui-min; FENG Chun-hua
2004-01-01
According to typical engineering samples, a neural network model with a genetic algorithm to optimize the weight values is put forward to forecast the productivities and efficiencies of mining faces. With this model we can obtain the possible achievements of available equipment combinations under given geological situations of fully-mechanized coal mining faces. Then the theory of fuzzy selection is applied to evaluate the performance of each equipment combination. As detailed empirical analysis shows, this model integrates the functions of forecasting mining faces' achievements and selecting the optimal equipment combination, and is helpful in deciding the equipment combination for fully-mechanized coal mining.
Imaging regenerating bone tissue based on neural networks applied to micro-diffraction measurements
Energy Technology Data Exchange (ETDEWEB)
Campi, G.; Pezzotti, G. [Institute of Crystallography, CNR, via Salaria Km 29.300, I-00015, Monterotondo Roma (Italy); Fratini, M. [Centro Fermi -Museo Storico della Fisica e Centro Studi e Ricerche ' Enrico Fermi' , Roma (Italy); Ricci, A. [Deutsches Elektronen-Synchrotron DESY, Notkestraße 85, D-22607 Hamburg (Germany); Burghammer, M. [European Synchrotron Radiation Facility, B. P. 220, F-38043 Grenoble Cedex (France); Cancedda, R.; Mastrogiacomo, M. [Istituto Nazionale per la Ricerca sul Cancro, and Dipartimento di Medicina Sperimentale dell' Università di Genova and AUO San Martino Istituto Nazionale per la Ricerca sul Cancro, Largo R. Benzi 10, 16132, Genova (Italy); Bukreeva, I.; Cedola, A. [Institute for Chemical and Physical Process, CNR, c/o Physics Dep. at Sapienza University, P-le A. Moro 5, 00185, Roma (Italy)
2013-12-16
We monitored bone regeneration in a tissue engineering approach. To visualize and understand the structural evolution, the samples have been measured by X-ray micro-diffraction. We find that bone tissue regeneration proceeds through a multi-step mechanism, each step providing a specific diffraction signal. The large amount of data have been classified according to their structure and associated to the process they came from combining Neural Networks algorithms with least square pattern analysis. In this way, we obtain spatial maps of the different components of the tissues visualizing the complex kinetic at the base of the bone regeneration.
Applying Hopfield neural network to QoS routing in communication network
Institute of Scientific and Technical Information of China (English)
WANG Li; SHEN Jin-yuan; CHANG Sheng-jiang; ZHANG Yan-xin
2005-01-01
The main goal of routing solutions is to satisfy the requirements of the Quality of Service (QoS) for every admitted connection as well as to achieve a global efficiency in resource utilization. This paper proposes a solution based on the Hopfield neural network (HNN) to deal with one of the representative problems in unicast routing, i.e. the multi-constrained (MC) routing problem. Computer simulation shows that we can obtain the optimal path very rapidly with our new Lyapunov energy functions.
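The property that makes Hopfield networks usable for such optimization is that, with symmetric weights, zero self-coupling, and asynchronous updates, the network's Lyapunov energy never increases, so minima of an energy function encoding the routing objective are stable states. A minimal sketch with a toy random weight matrix (not a QoS routing encoding):

```python
import random

random.seed(0)
n = 8
W = [[0.0] * n for _ in range(n)]          # symmetric weights, zero diagonal
for i in range(n):
    for j in range(i + 1, n):
        W[i][j] = W[j][i] = random.uniform(-1, 1)

def energy(s):
    # Lyapunov energy E = -1/2 * sum_ij w_ij s_i s_j
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

s = [random.choice([-1, 1]) for _ in range(n)]
prev = energy(s)
ok = True
for _ in range(100):                       # asynchronous neuron updates
    i = random.randrange(n)
    s[i] = 1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
    e = energy(s)
    ok = ok and e <= prev + 1e-12          # energy is non-increasing
    prev = e
print(ok)
```

In the routing application, the weights are chosen so that low-energy states correspond to feasible multi-constrained paths.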
Directory of Open Access Journals (Sweden)
Sachin P. Yadav
2013-01-01
Full Text Available Learning problems increasingly have to deal with distributed input data, because of the gradual expansion of distributed computing environments. It is important to address the privacy concern of each data holder by extending the privacy preservation concept to original learning algorithms, to enhance cooperation in learning. In this project, the focus is on protecting privacy in a significant learning model, i.e. the multilayer back-propagation neural network trained with gradient descent methods. For protecting the privacy of the data items (concentrating on vertically partitioned data and horizontally partitioned data), the semi-honest model and the underlying security of the ElGamal scheme are used [7].
Meier, U.
2008-01-01
We present a neural network approach to invert surface wave data for discontinuities and velocity structure in the upper mantle. We show how such a neural network can be trained on a set of random samples to give a continuous approximation to the inverse relation in a compact and computationally efficient form.
Saro, Lee; Woo, Jeon Seong; Kwan-Young, Oh; Moung-Jin, Lee
2016-02-01
The aim of this study is to predict landslide susceptibility through spatial analysis, applying a GIS-based statistical methodology. Logistic regression and artificial neural network models were applied and validated for landslide susceptibility analysis in Inje, Korea. Landslide occurrence areas were identified from interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database of forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map, which was validated by comparison with the landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to build the susceptibility map with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation revealed that the artificial neural network model (accuracy 80.10%) was better at predicting landslides than the logistic regression model (accuracy 77.05%). Of the weights used in the artificial neural network model, `slope' yielded the highest value (1.330), and `aspect' yielded the lowest (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.
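The evaluation protocol above (random 50/50 split, fit on one half, accuracy on the other) can be sketched with a plain logistic regression on synthetic data; the factor names and coefficients are illustrative, not the Inje inventory:

```python
import numpy as np

# Synthetic susceptibility data: three factors -> binary landslide label.
rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 3))                      # e.g. slope, aspect, curvature
true_w = np.array([1.3, 0.2, -0.8])
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.random(n)).astype(float)

idx = rng.permutation(n)
train, test = idx[: n // 2], idx[n // 2:]        # random 50% / 50% split

w = np.zeros(3)
for _ in range(2000):                            # plain gradient ascent on the log-likelihood
    p = 1 / (1 + np.exp(-(X[train] @ w)))
    w += 0.1 * X[train].T @ (y[train] - p) / len(train)

acc = np.mean((1 / (1 + np.exp(-(X[test] @ w))) > 0.5) == y[test])
print(f"held-out accuracy: {acc:.2f}")
```

The ANN branch of the study replaces the linear score with a trained multilayer network, but the split-and-validate procedure is the same.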
Energy Technology Data Exchange (ETDEWEB)
Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)
2006-07-01
An artificial neural network has been designed, trained and tested to unfold neutron spectra and simultaneously calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in designing, training and testing the network. The network was designed with the robust-design methodology for artificial neural networks, which ensures that network quality is taken into account from the design stage. Unlike previous works, here for the first time a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time calculate 13 equivalent doses, starting from the count rates of the Bonner sphere system, by using a systematic and experimental strategy. (Author)
Atzori, Manfredo; Cognolato, Matteo; Müller, Henning
2016-01-01
Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses can offer natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to run several tests evaluating the effects of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods, and that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks achieve higher accuracy on computer vision and object recognition tasks, which suggests it may be worth evaluating whether larger networks can increase sEMG classification accuracy too.
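The forward pass of a very simple convolutional classifier of the kind described can be sketched in a few lines of numpy; the layer sizes below are illustrative placeholders, not the paper's architecture:

```python
import numpy as np

# Minimal 1-D CNN forward pass on one sEMG window (channels x time):
# convolution -> ReLU -> global average pooling -> softmax over movements.
rng = np.random.default_rng(0)
C, T, F, K, M = 8, 100, 16, 5, 50        # channels, samples, filters, kernel, movements
x = rng.normal(size=(C, T))              # one sEMG window
Wc = rng.normal(size=(F, C, K)) * 0.1    # conv filters (untrained, for shape demo)
Wo = rng.normal(size=(M, F)) * 0.1       # output weights

def forward(x):
    conv = np.stack([                     # one feature map per filter
        np.sum([np.convolve(x[c], Wc[f, c], mode="valid") for c in range(C)], axis=0)
        for f in range(F)
    ])                                    # shape (F, T - K + 1)
    h = np.maximum(conv, 0.0)             # ReLU
    pooled = h.mean(axis=1)               # global average pooling -> (F,)
    logits = Wo @ pooled
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # class probabilities

p = forward(x)
assert p.shape == (M,) and abs(p.sum() - 1.0) < 1e-9
```

Training (backpropagation, data augmentation, optimizer choice) is exactly where the paper reports the interesting sensitivity, but the inference structure is this simple.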
Directory of Open Access Journals (Sweden)
Braz Calderano Filho
2014-12-01
Full Text Available Soil information is needed for managing the agricultural environment. The aim of this study was to apply artificial neural networks (ANNs) to the prediction of soil classes, using orbital remote sensing products, terrain attributes derived from a digital elevation model, and local geology information as data sources. This approach to digital soil mapping was evaluated in an area with a high degree of lithologic diversity in the Serra do Mar. The neural network simulator used in this study was JavaNNS, with the backpropagation learning algorithm. For soil class prediction, different combinations of the selected discriminant variables were tested: elevation, declivity, aspect, curvature, plan curvature, profile curvature, topographic index, solar radiation, LS topographic factor, local geology information, and clay mineral indices, iron oxides and the normalized difference vegetation index (NDVI) derived from an image of the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) sensor. Among the tested sets, the best results were obtained when all discriminant variables were associated with geological information (overall accuracy 93.2-95.6 %, Kappa index 0.924-0.951, for set 13). Excluding the variable profile curvature (set 12), overall accuracy ranged from 93.9 to 95.4 % and the Kappa index from 0.932 to 0.948. The maps based on the neural network classifier were consistent and similar to conventional soil maps drawn for the study area, although with more spatial detail. The results show the potential of ANNs for soil class prediction in mountainous areas with lithological diversity.
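The Kappa index reported above measures agreement between predicted and reference maps beyond what chance would produce. A minimal implementation from a confusion matrix (rows: reference class, columns: predicted class):

```python
import numpy as np

def cohens_kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Illustrative 2-class example: 90% observed agreement, 50% chance agreement.
cm = [[45, 5],
      [5, 45]]
k = cohens_kappa(cm)
print(k)  # po = 0.9, pe = 0.5 -> kappa = 0.8
```

Values near 0.95, as in the study's best variable sets, indicate near-perfect map agreement after correcting for chance.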
Connectivity strategies for higher-order neural networks applied to pattern recognition
Spirkovska, Lilly; Reid, Max B.
1990-01-01
Different strategies for non-fully connected HONNs (higher-order neural networks) are discussed, showing that by using such strategies an input field of 128 x 128 pixels can be attained while still achieving in-plane rotation and translation-invariant recognition. These techniques allow HONNs to be used with the larger input scenes required for practical pattern-recognition applications. The number of interconnections that must be stored has been reduced by a factor of approximately 200,000 in a T/C case and about 2000 in a Space Shuttle/F-18 case by using regional connectivity. Third-order networks have been simulated using several connection strategies. The method found to work best is regional connectivity. The main advantages of this strategy are the following: (1) it considers features of various scales within the image and thus gets a better sample of what the image looks like; (2) it is invariant to shape-preserving geometric transformations, such as translation and rotation; (3) the connections are predetermined so that no extra computations are necessary during run time; and (4) it does not require any extra storage for recording which connections were formed.
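The core third-order idea can be sketched compactly: every triple of "on" pixels forms a triangle whose interior angles are unchanged by translation and rotation, so a histogram over quantized angle triples is an invariant feature vector. This is a hedged toy version of the principle, not the paper's regional-connectivity scheme:

```python
import numpy as np
from itertools import combinations

def triple_features(pixels, bins=6):
    """Histogram of quantized interior-angle triples over all pixel triples."""
    hist = {}
    for a, b, c in combinations(pixels, 3):
        pts = np.array([a, b, c], float)
        angles = []
        for i in range(3):
            u = pts[(i + 1) % 3] - pts[i]
            v = pts[(i + 2) % 3] - pts[i]
            cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
            angles.append(np.arccos(np.clip(cosang, -1, 1)))
        key = tuple(sorted(int(t / np.pi * bins) for t in angles))
        hist[key] = hist.get(key, 0) + 1
    return hist

shape = [(0, 0), (0, 4), (3, 0), (2, 2)]
shifted = [(x + 7, y + 5) for x, y in shape]        # translated copy
assert triple_features(shape) == triple_features(shifted)
```

The combinatorial growth of pixel triples is exactly why the paper's connection-reduction strategies (by factors of 2,000 to 200,000) matter for 128 x 128 inputs.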
Institute of Scientific and Technical Information of China (English)
李丽华; 丁香乾; 贺英; 王伟
2012-01-01
The appraisal of tobacco aroma types usually depends on olfaction, so the accuracy of the result is sometimes hard to guarantee. In view of this, sensory evaluation models have been constructed at home and abroad using BP neural networks and other methods, but they are inefficient in recognition. Based on the relationship between chemical composition and tobacco aroma types, a recognition model of tobacco aroma types has been constructed using a Tabu-search-based Bayesian network. Experimental results show that this method attains a better Bayesian network structure, and has higher training efficiency and better classification accuracy than BP neural networks and other methods.
Loredo, Thomas J.
2004-04-01
I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on the fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data (measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object) show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
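The design step can be illustrated with a one-parameter caricature (illustrative numbers, not the paper's planet-orbit example): for a Bayesian linear model with Gaussian noise, maximum entropy sampling picks the candidate input where the predictive variance is largest, i.e. where a new datum would be most informative.

```python
import numpy as np

# Bayesian linear model y = w0 + w1*x + noise, unit Gaussian prior on (w0, w1).
sigma2 = 1.0
X = np.array([[1.0, 1.0], [1.0, 1.5]])      # features of the data observed so far
A = np.eye(2) + X.T @ X / sigma2            # posterior precision
Sigma = np.linalg.inv(A)                    # posterior covariance

# Predictive variance phi^T Sigma phi + sigma2 at each candidate design point.
candidates = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
Phi = np.column_stack([np.ones_like(candidates), candidates])
pred_var = np.einsum("ij,jk,ik->i", Phi, Sigma, Phi) + sigma2

best = candidates[np.argmax(pred_var)]
print(best)  # the point farthest from the observed x ~ 1..1.5, i.e. -2.0
```

Observing at the chosen point, updating the posterior, and re-ranking the candidates is one pass through the Observation-Inference-Design cycle.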
Applying Neural-Symbolic Cognitive Agents in Intelligent Transport Systems to reduce CO2 emissions
De Penning, Leo; D'Avila Garcez, Artur S.; Lamb, Luis C.; Stuiver, Arjan; Meyer, John Jules Ch
2014-01-01
Providing personalized feedback in Intelligent Transport Systems is a powerful tool for instigating a change in driving behaviour and the reduction of CO2 emissions. This requires a system that is capable of detecting driver characteristics from real-time vehicle data. In this paper, we apply the ar
Plug & Play object oriented Bayesian networks
DEFF Research Database (Denmark)
Bangsø, Olav; Flores, J.; Jensen, Finn Verner
2003-01-01
Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network, and inference is performed by constructing a junction tree from this network. In this paper we propose a method for translating directly from object oriented Bayesian networks to junction trees, avoiding the intermediate translation. We pursue two main purposes: firstly, to maintain the original structure organized in an instance tree…
Bayesian Generalized Rating Curves
Helgi Sigurðarson
2014-01-01
A rating curve is a curve or a model that describes the relationship between water elevation, or stage, and discharge in an observation site in a river. The rating curve is fit from paired observations of stage and discharge. The rating curve then predicts discharge given observations of stage and this methodology is applied as stage is substantially easier to directly observe than discharge. In this thesis a statistical rating curve model is proposed working within the framework of Bayesian...
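A common parametric rating curve is Q = a(h - c)^b, where h is stage and c a datum. With c assumed known (a simplification; the thesis handles the full model in a Bayesian framework), taking logs gives a straight line that ordinary least squares can fit:

```python
import numpy as np

# Synthetic stage-discharge pairs from Q = a*(h - c)^b with small log-noise.
rng = np.random.default_rng(3)
a, b, c = 2.0, 1.5, 0.3                      # illustrative parameters
h = np.linspace(0.5, 3.0, 40)                # observed stage
Q = a * (h - c) ** b * np.exp(rng.normal(0, 0.02, h.size))

# log Q = log a + b * log(h - c): fit a line in log-log space.
slope, intercept = np.polyfit(np.log(h - c), np.log(Q), 1)
a_hat, b_hat = np.exp(intercept), slope
print(a_hat, b_hat)
```

The Bayesian version replaces the point estimates with posterior distributions over a, b and c, which is what allows predictive uncertainty in discharge to be quantified.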
Visual masking with frontally applied pre-stimulus TMS and its subject-specific neural correlates.
Rutiku, Renate; Tulver, Kadi; Aru, Jaan; Bachmann, Talis
2016-07-01
The visibility of a visual target stimulus depends on the local state of the early visual cortex shortly before the stimulus itself is presented. This view is supported by the observation that occipitally applied pre-stimulus TMS can disrupt subsequent information processing leading to visual masking effects. According to another line of accumulating evidence, however, global pre-stimulus connectivity patterns could be as crucial as local cortical states. In line with the latter view we show that pre-stimulus masking occurs even if TMS is directed to the frontal cortex. Importantly, the individual extent of this effect is strongly correlated with the subject-specific peak latency of a late positive TMS-evoked potential. Our results thus suggest a third type of masking occurring neither through direct interaction with visual areas nor by a modal visual masking input. Our results also shed light on the inter-individual differences in TMS research in general.
Directory of Open Access Journals (Sweden)
F. Anctil
2004-01-01
Full Text Available Since the 1990s, neural networks have been applied to many studies in hydrology and water resources. Extensive reviews on neural network modelling have identified the major issues affecting modelling performance; one of the most important is generalisation, which refers to building models that can infer the behaviour of the system under study for conditions represented not only in the data employed for training and testing but also for those conditions not present in the data sets but inherent to the system. This work compares five generalisation approaches: stop training, Bayesian regularisation, stacking, bagging and boosting. All have been tested with neural networks in various scientific domains; stop training and stacking having been applied regularly in hydrology and water resources for some years, while Bayesian regularisation, bagging and boosting have been less common. The comparison is applied to streamflow modelling with multi-layer perceptron neural networks and the Levenberg-Marquardt algorithm as training procedure. Six catchments, with diverse hydrological behaviours, are employed as test cases to draw general conclusions and guidelines on the use of the generalisation techniques for practitioners in hydrology and water resources. All generalisation approaches provide improved performance compared with standard neural networks without generalisation. Stacking, bagging and boosting, which affect the construction of training sets, provide the best improvement from standard models, compared with stop-training and Bayesian regularisation, which regulate the training algorithm. Stacking performs better than the others although the benefit in performance is slight compared with bagging and boosting; furthermore, it is not consistent from one catchment to another. For a good combination of improvement and stability in modelling performance, the joint use of stop training or Bayesian regularisation with either bagging or boosting is
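Bagging, one of the compared generalisation techniques, can be sketched in a few lines: fit the same flexible model on bootstrap resamples and average the predictions. For squared error the ensemble mean can never be worse than the average member (Jensen's inequality); the model and data below are illustrative stand-ins, not the study's catchments.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 60)
truth = np.sin(2 * np.pi * x)
y = truth + rng.normal(0, 0.3, x.size)       # noisy "streamflow" signal

preds = []
for _ in range(25):                          # 25 bootstrap ensemble members
    idx = rng.integers(0, x.size, x.size)    # resample with replacement
    coef = np.polyfit(x[idx], y[idx], 7)     # a deliberately flexible model
    preds.append(np.polyval(coef, x))
preds = np.array(preds)

member_mse = np.mean((preds - truth) ** 2, axis=1)
ensemble_mse = np.mean((preds.mean(axis=0) - truth) ** 2)
print(ensemble_mse, member_mse.mean())
assert ensemble_mse <= member_mse.mean() + 1e-12
```

Replacing the polynomial with a multilayer perceptron trained by Levenberg-Marquardt gives the bagged-ANN variant evaluated in the paper; boosting and stacking differ in how the training sets and combination weights are chosen.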
Held, Ulrike; Brunner, Florian; Steurer, Johann; Wertli, Maria M
2015-11-01
There is conflicting evidence about the accuracy of bone scintigraphy (BS) for the diagnosis of complex regional pain syndrome 1 (CRPS 1). In a meta-analysis of diagnostic studies, the evaluation of test accuracy is impeded by the use of different imperfect reference tests. The aim of our study is to summarize sensitivity and specificity of BS for CRPS 1 and to identify factors to explain heterogeneity. We use a hierarchical Bayesian approach to model test accuracy and threshold, and we present different models accounting for the imperfect nature of the reference tests, and assuming conditional dependence between BS and the reference test results. Further, we include disease duration as explanatory variable in the model. The models are compared using summary ROC curves and the deviance information criterion (DIC). Our results show that those models which account for different imperfect reference tests with conditional dependence and inclusion of the covariate are the ones with the smallest DIC. The sensitivity of BS was 0.87 (95% credible interval 0.73-0.97) and the overall specificity was 0.87 (0.73-0.95) in the model with the smallest DIC, in which missing values of the covariate are imputed within the Bayesian framework. The estimated effect of duration of symptoms on the threshold parameter was 0.17 (-0.25 to 0.57). We demonstrate that the Bayesian models presented in this paper are useful to address typical problems occurring in meta-analysis of diagnostic studies, including conditional dependence between index test and reference test, as well as missing values in the study-specific covariates. PMID:26479506
Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel
2013-01-01
Probability as an Alternative to Boolean Logic. While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data. Emphasizing probability as an alternative to Boolean…
Energy Technology Data Exchange (ETDEWEB)
Lobato, Justo; Canizares, Pablo; Rodrigo, Manuel A.; Linares, Jose J. [Chemical Engineering Department, University of Castilla-La Mancha, Campus Universitario s/n, 13004 Ciudad Real (Spain); Piuleac, Ciprian-George; Curteanu, Silvia [Faculty of Chemical Engineering and Environmental Protection, Department of Chemical Engineering, ' ' Gh. Asachi' ' Technical University Iasi Bd. D. Mangeron, No. 71A, 700050 IASI (Romania)
2010-08-15
This article shows the application of a very useful mathematical tool, artificial neural networks, to predict fuel cell results (the value of the tortuosity and the cell voltage at a given current density, and therefore the power) on the basis of several properties that define a gas diffusion layer: Teflon content, air permeability, porosity, mean pore size and hydrophobicity level. Four neural network types (multilayer perceptron, generalized feedforward network, modular neural network, and Jordan-Elman neural network) were applied, with a good fit between the predicted and the experimental values of the polarization curves. A simple feedforward neural network with one hidden layer proved to be an accurate model with good generalization capability (error of about 1% in the validation phase). A procedure based on inverse neural network modelling was able to determine, with small errors, the initial conditions leading to imposed values of the fuel cell characteristics. In addition, this tool has proved very attractive for predicting cell performance and, more interestingly, the influence of the gas diffusion layer properties on that performance, allowing possible enhancements of this material by changing some of its properties. (author)
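The inverse-modelling step can be sketched as a search over the input space of a fitted forward model. Here the forward model is a hypothetical stand-in function (not the authors' trained network) mapping one GDL property, Teflon content, to cell voltage; the inversion finds the content that yields a target voltage:

```python
import numpy as np

def forward(teflon_pct):
    # Illustrative surrogate for a trained forward network: voltage falls
    # with Teflon content, with mild curvature. Numbers are assumptions.
    return 0.9 - 0.004 * teflon_pct - 0.0001 * (teflon_pct - 20) ** 2

target_v = 0.78
grid = np.linspace(5, 60, 5501)                 # candidate Teflon contents (%)
best = grid[np.argmin(np.abs(forward(grid) - target_v))]
print(best, forward(best))
```

With several input properties, the grid search is replaced by gradient-based or evolutionary optimization over the network inputs, which is what "inverse neural network modelling" refers to.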
12th Brazilian Meeting on Bayesian Statistics
Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo
2015-01-01
Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...
The Bayesian Revolution Approaches Psychological Development
Shultz, Thomas R.
2007-01-01
This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…
Bayesian theory and applications
Dellaportas, Petros; Polson, Nicholas G; Stephens, David A
2013-01-01
The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...
Bernhard, Jonah E.; Moreland, J. Scott; Bass, Steffen A.; Liu, Jia; Heinz, Ulrich
2016-08-01
We quantitatively estimate properties of the quark-gluon plasma created in ultrarelativistic heavy-ion collisions utilizing Bayesian statistics and a multiparameter model-to-data comparison. The study is performed using a recently developed parametric initial condition model, TRENTo, which interpolates among a general class of particle production schemes, and a modern hybrid model which couples viscous hydrodynamics to a hadronic cascade. We calibrate the model to multiplicity, transverse momentum, and flow data and report constraints on the parametrized initial conditions and the temperature-dependent transport coefficients of the quark-gluon plasma. We show that initial entropy deposition is consistent with a saturation-based picture, extract a relation between the minimum value and slope of the temperature-dependent specific shear viscosity, and find a clear signal for a nonzero bulk viscosity.
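A one-parameter caricature of this calibration (the real analysis uses many parameters, emulators and MCMC): a model observable depends on a transport-coefficient-like parameter theta, and a Gaussian likelihood evaluated on a parameter grid yields the posterior. All numbers below are illustrative.

```python
import numpy as np

def observable(theta):                 # stand-in model, not TRENTo/hydrodynamics
    return 1.0 + 2.0 * theta

theta_true, noise = 0.15, 0.05
rng = np.random.default_rng(11)
data = observable(theta_true) + rng.normal(0, noise, size=20)

grid = np.linspace(0.0, 0.5, 1001)
loglike = np.array([-0.5 * np.sum((data - observable(t)) ** 2) / noise**2
                    for t in grid])
post = np.exp(loglike - loglike.max())         # flat prior on the grid
post /= post.sum()

theta_mean = np.sum(grid * post)               # posterior mean estimate
print(theta_mean)
```

The paper's constraints on the temperature-dependent shear and bulk viscosities are marginals of exactly this kind of posterior, just in a much higher-dimensional parameter space.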
Bernhard, Jonah E; Bass, Steffen A; Liu, Jia; Heinz, Ulrich
2016-01-01
We quantitatively estimate properties of the quark-gluon plasma created in ultra-relativistic heavy-ion collisions utilizing Bayesian statistics and a multi-parameter model-to-data comparison. The study is performed using a recently developed parametric initial condition model, TRENTO, which interpolates among a general class of particle production schemes, and a modern hybrid model which couples viscous hydrodynamics to a hadronic cascade. We calibrate the model to multiplicity, transverse momentum, and flow data and report constraints on the parametrized initial conditions and the temperature-dependent transport coefficients of the quark-gluon plasma. We show that initial entropy deposition is consistent with a saturation-based picture, extract a relation between the minimum value and slope of the temperature-dependent specific shear viscosity, and find a clear signal for a nonzero bulk viscosity.
Naive Bayes Applied in Automatic Test Case Generation
Institute of Scientific and Technical Information of China (English)
李欣; 张聪; 罗宪
2012-01-01
A method that uses naive Bayes as its core algorithm to generate automated test cases is proposed. Aiming at automated testing, the method introduces the idea of classifying randomly generated test cases with a naive Bayes classifier. Experimental results show that this is a feasible method for generating test cases.
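The classification step described above can be sketched with a minimal naive Bayes classifier. The binary features below (branch coverage, boundary values, exception paths) are hypothetical stand-ins for illustration, not the paper's actual feature set:

```python
import math

class BernoulliNaiveBayes:
    """Minimal naive Bayes over binary feature vectors (Laplace-smoothed)."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = {}
        self.cond = {}
        n_features = len(X[0])
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.priors[c] = len(rows) / len(X)
            # P(feature_j = 1 | class c) with add-one smoothing
            self.cond[c] = [
                (sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                for j in range(n_features)
            ]
        return self

    def predict(self, x):
        def log_post(c):
            lp = math.log(self.priors[c])
            for j, v in enumerate(x):
                p = self.cond[c][j]
                lp += math.log(p if v else 1 - p)
            return lp
        return max(self.classes, key=log_post)

# Hypothetical binary features of a random test case, e.g.
# [covers_new_branch, uses_boundary_value, triggers_exception_path]
X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
y = ["keep", "keep", "keep", "drop", "drop", "drop"]

clf = BernoulliNaiveBayes().fit(X, y)
print(clf.predict([1, 1, 0]))  # a case resembling the "keep" group
```

In a pipeline of this kind, cases labeled "keep" would be retained for the automated test suite and the rest discarded.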
Varying prior information in Bayesian inversion
Walker, Matthew; Curtis, Andrew
2014-06-01
Bayes' rule is used to combine likelihood and prior probability distributions. The former represents knowledge derived from new data, the latter represents pre-existing knowledge; the Bayesian combination is the so-called posterior distribution, representing the resultant new state of knowledge. While varying the likelihood due to differing data observations is common, there are also situations where the prior distribution must be changed or replaced repeatedly. For example, in mixture density neural network (MDN) inversion, using current methods the neural network employed for inversion needs to be retrained every time prior information changes. We develop a method of prior replacement to vary the prior without re-training the network. Thus the efficiency of MDN inversions can be increased, typically by orders of magnitude when applied to geophysical problems. We demonstrate this for the inversion of seismic attributes in a synthetic subsurface geological reservoir model. We also present results which suggest that prior replacement can be used to control the statistical properties (such as variance) of the final estimate of the posterior in more general (e.g., Monte Carlo based) inverse problem solutions.
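The prior-replacement idea can be illustrated on a discretized toy inverse problem: the likelihood is evaluated once and cached, and Bayes' rule is re-applied for each new prior without redoing the expensive step. The grid, observation, and priors below are illustrative assumptions, not the paper's reservoir model:

```python
import math

# Discretized model parameter (e.g. a rock property) on a grid.
grid = [i / 100 for i in range(101)]

# Likelihood of the observed data, computed once and cached
# (in MDN inversion this is the expensive, network-dependent part).
def likelihood(theta, obs=0.6, sigma=0.1):
    return math.exp(-0.5 * ((theta - obs) / sigma) ** 2)

lik = [likelihood(t) for t in grid]

def posterior(prior):
    """Combine the cached likelihood with any prior via Bayes' rule."""
    unnorm = [l * p for l, p in zip(lik, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two different states of prior knowledge; lik is never recomputed.
flat = [1.0] * len(grid)
informative = [math.exp(-0.5 * ((t - 0.3) / 0.05) ** 2) for t in grid]

post_flat = posterior(flat)
post_inf = posterior(informative)

def mean(p):
    return sum(t * w for t, w in zip(grid, p))

print(round(mean(post_flat), 3), round(mean(post_inf), 3))
```

Swapping in the informative prior shifts the posterior mean toward the prior's center, with no repeated likelihood (or network training) cost.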
Institute of Scientific and Technical Information of China (English)
罗广恩; 崔维成
2012-01-01
Artificial neural networks are an important method for predicting the fatigue crack growth rate. In this paper, a Bayesian regularized BP (back propagation) neural network is established to predict the fatigue crack growth rate of metals. The experimental data for each material at different stress ratios R are divided into two parts: one part is used for training the neural network, the other for testing it. Experimental data for four different metallic materials taken from the literature were used in the analyses. The results show that the neural network has strong fitting and generalization capability, and that its generalization capability is improved by reducing the training data near the threshold. The neural network can therefore be used to predict the crack growth rate at different stress ratios R based on existing data, reducing the number of required tests and making full use of available data, and it provides a reliable and useful predictor of the fatigue crack growth rate of different metals.
A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2013-01-01
representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...
A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research
Van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G
2014-01-01
Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, t
Tahat, Amani; Martí Rabassa, Jordi; Khwaldeh, Ali; Tahat, Kaher
2014-01-01
In computational physics, proton transfer phenomena can be viewed as pattern classification problems based on a set of input features that allow the proton motion to be classified into two categories: transfer 'occurred' and transfer 'not occurred'. The goal of this paper is to evaluate the use of artificial neural networks in the classification of proton transfer events, based on the feed-forward back propagation neural network, used as a classifier to distinguish between the two transfer cases. In t...
Bayesian Analysis of Experimental Data
Directory of Open Access Journals (Sweden)
Lalmohan Bhar
2013-10-01
Full Text Available Analysis of experimental data from a Bayesian point of view is considered. An appropriate methodology is developed for application to designed experiments, with a Normal-Gamma distribution used as the prior. The developed methodology is applied to real experimental data taken from long-term fertilizer experiments.
Directory of Open Access Journals (Sweden)
Yu-Tzu Chang
2012-01-01
Full Text Available This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with a first low-trauma hip fracture and 215 patients without hip fracture, all of whom were interviewed with 78 questions. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expiratory flow rate) for building artificial neural networks to predict the probabilities of hip fractures. Three-layer (one hidden layer) ANN models with back-propagation training algorithms were adopted, and the area under the ROC curve (AUC) was used to assess their performance. The genetic algorithm obtained an AUC of 0.858 ± 0.00493 on modeling data and 0.802 ± 0.03318 on testing data, slightly better than the results of our previous study (0.868 ± 0.00387 and 0.796 ± 0.02559, respectively). Thus, this preliminary study shows that even a simple GA is effective for improving the accuracy of artificial neural networks.
Tactile length contraction as Bayesian inference.
Tong, Jonathan; Ngo, Vy; Goldreich, Daniel
2016-08-01
To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process.
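A toy Gaussian stand-in for such a Bayesian observer (not the paper's exact model) reproduces both qualitative predictions: shorter inter-tap times and noisier localization both increase contraction. All numbers below are in arbitrary illustrative units:

```python
def perceived_length(l_true, t, sigma_s, v_prior=10.0):
    """Posterior-mean length under a toy Gaussian observer.

    Likelihood: measured length ~ N(l_true, sigma_s^2).
    Low-velocity prior on length: N(0, (v_prior * t)^2), i.e. short
    inter-tap intervals make large lengths a priori implausible.
    """
    tau2 = (v_prior * t) ** 2
    shrink = tau2 / (tau2 + sigma_s ** 2)  # standard Gaussian product rule
    return shrink * l_true

# Shorter inter-tap interval -> stronger contraction
print(perceived_length(10.0, t=1.0, sigma_s=2.0))
print(perceived_length(10.0, t=0.2, sigma_s=2.0))
# Weaker taps (noisier localization) -> stronger contraction
print(perceived_length(10.0, t=0.2, sigma_s=4.0))
```

The shrinkage factor falls as t shrinks or as spatial noise grows, mirroring the two experimental effects reported above.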
Łukasz Lechowicz; Wioletta Adamus-Białek; Wiesław Kaca
2013-01-01
Fimbriae are an important pathogenic factor of Escherichia coli during the development of urinary tract infections. Here, we describe a new method for distinguishing Escherichia coli papG+ from papG- strains using attenuated total reflectance Fourier transform infrared spectroscopy (ATR FT-IR). We applied artificial neural networks to the analysis of the ATR FT-IR results. These methods allowed E. coli papG+ strains to be discriminated from papG- strains with an accuracy of 99%.
A Bayesian Nonparametric IRT Model
Karabatsos, George
2015-01-01
This paper introduces a flexible Bayesian nonparametric Item Response Theory (IRT) model, which applies to dichotomous or polytomous item responses, and which can apply to either unidimensional or multidimensional scaling. This is an infinite-mixture IRT model, with person ability and item difficulty parameters, and with a random intercept parameter that is assigned a mixing distribution, with mixing weights a probit function of other person and item parameters. As a result of its flexibility...
Kernel density compression for real-time Bayesian encoding/decoding of unsorted hippocampal spikes
Sodkomkham, Danaipat; Ciliberti, Davide; Wilson, Matthew A.; Fukui, Ken-ichi; Moriyama, Koichi; Numao, Masayuki; Kloosterman, Fabian
2015-01-01
To gain a better understanding of how neural ensembles communicate and process information, neural decoding algorithms are used to extract information encoded in their spiking activity. Bayesian decoding is one of the most used neural population decoding approaches to extract information from the ensemble spiking activity of rat hippocampal neurons. Recently it has been shown how Bayesian decoding can be implemented without the intermediate step of sorting spike waveforms into groups of singl...
Modeling operational risks of the nuclear industry with Bayesian networks
International Nuclear Information System (INIS)
Basically, planning a new industrial plant requires information on industrial management, regulations, site selection, definition of initial and planned capacity, and estimation of the potential demand. However, this is far from enough to assure the success of an industrial enterprise: unexpected and extremely damaging events may occur that deviate from the original plan. The so-called operational risks lie not only in system, equipment, process or human (technical or managerial) failures, but also in intentional events such as fraud and sabotage, in extreme events like terrorist attacks or radiological accidents, and even in public reaction to perceived environmental or future-generation impacts. For the nuclear industry, it is a challenge to identify and assess the operational risks and their various sources. Early identification of operational risks can help in preparing contingency plans and in deciding whether to delay investment in, or approval of, a project that could, in the extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of internal data, which are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example, fuzzy logic, neural networks, system dynamics or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility to include expert opinions and variables of interest, to structure the model via causal dependencies among these variables, and to specify subjective prior and conditional probability distributions at each network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian network approach to modeling those risks. (author)
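As a minimal sketch of how such a network supports diagnostic queries, the hypothetical two-cause model below computes P(sabotage | incident) by exact enumeration; all probabilities are invented for illustration, not taken from the paper:

```python
from itertools import product

# Hypothetical operational-risk network:
#   TechnicalFailure (T) and Sabotage (S) are root causes of Incident (I).
p_t = 0.05          # P(T = 1), e.g. estimated from maintenance records
p_s = 0.01          # P(S = 1), e.g. elicited from experts
p_i = {             # P(I = 1 | T, S), expert-specified conditional table
    (0, 0): 0.001, (1, 0): 0.40, (0, 1): 0.70, (1, 1): 0.90,
}

def joint(t, s, i):
    """Joint probability P(T=t, S=s, I=i) under the network factorization."""
    pt = p_t if t else 1 - p_t
    ps = p_s if s else 1 - p_s
    pi = p_i[(t, s)] if i else 1 - p_i[(t, s)]
    return pt * ps * pi

# Diagnostic query by enumeration: P(S = 1 | I = 1)
num = sum(joint(t, 1, 1) for t in (0, 1))
den = sum(joint(t, s, 1) for t, s in product((0, 1), repeat=2))
print(round(num / den, 3))
```

Observing an incident raises the posterior probability of sabotage well above its prior, which is exactly the kind of causal, expert-driven reasoning the approach above advocates.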
Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B
2013-01-01
FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear...
Yuan, Ying; MacKinnon, David P.
2009-01-01
This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...
Using consensus bayesian network to model the reactive oxygen species regulatory pathway.
Directory of Open Access Journals (Sweden)
Liangdong Hu
Full Text Available The Bayesian network is one of the most successful graph models for representing the reactive oxygen species (ROS) regulatory pathway. With the increasing number of microarray measurements, it is possible to construct a Bayesian network from microarray data directly. Although a large number of Bayesian network learning algorithms have been developed, when they are applied to learn Bayesian networks from microarray data, the accuracy is low because the databases used contain too few microarray measurements. In this paper, we propose a consensus Bayesian network constructed by combining Bayesian networks from the relevant literature with Bayesian networks learned from microarray data; it achieves higher accuracy than a Bayesian network learned from a single database. In the experiments, we validated the Bayesian network combination algorithm on several classic machine learning databases and used the consensus Bayesian network to model the Escherichia coli ROS pathway.
Bayesian Games with Intentions
Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael
2016-01-01
We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.
Fuzzy ARTMAP neural network for seafloor classification from multibeam sonar data
Institute of Scientific and Technical Information of China (English)
Zhou Xinghua; Chen Yongqi; Nick Emerson; Du Dewen
2006-01-01
This paper presents a seafloor classification method for multibeam sonar data based on the use of Adaptive Resonance Theory (ART) neural networks. A general ART-based neural network, Fuzzy ARTMAP, is proposed for seafloor classification of multibeam sonar data. An evolutionary strategy was used to generate new training samples near the cluster boundaries of the neural network, so that the weights can be revised and refined by supervised learning. The proposed method resolves the training problem for Fuzzy ARTMAP neural networks applied to seafloor classification of multibeam sonar data when adequate ground-truth samples are unavailable. The results were comprehensively analyzed in comparison with the standard Fuzzy ARTMAP network and a conventional Bayesian classifier. The conclusion is that Fuzzy ARTMAP neural networks combined with genetic algorithms can be a powerful alternative tool for seafloor classification of multibeam sonar data.
DEFF Research Database (Denmark)
Jensen, Finn Verner; Nielsen, Thomas Dyhre
2016-01-01
Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes...... and edges. The nodes represent variables, which may be either discrete or continuous. An edge between two nodes A and B indicates a direct influence between the state of A and the state of B, which in some domains can also be interpreted as a causal relation. The wide-spread use of Bayesian networks...... is largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...
Wu, Qian; Yang, Yu-hong; Xu, Zhao-li; Jin, Yan; Guo, Yan; Lao, Cai-lian
2014-08-01
To establish the quantitative relationship between soil spectra and the concentrations of available nitrogen, phosphorus and potassium in soil, the critical procedures of a new analysis method were examined, involving spectral preprocessing, waveband selection and the choice of regression method. As a result, a soil spectral analysis model was built using VIS/NIRS bands, with multiplicative scatter correction and first derivatives for spectral preprocessing, and a local nonlinear regression method (a local regression method based on a BP neural network). The coefficients of correlation between the chemically determined and the modeled available nitrogen, phosphorus and potassium for the prediction samples were 0.90, 0.82 and 0.94, respectively. The local regression method based on a BP neural network proved more accurate and stable than global regression methods. In addition, the estimation accuracy for soil available nitrogen, phosphorus and potassium was increased by 40.63%, 28.64% and 28.64%, respectively. Thus, the quantitative analysis model established by the local regression method can be used to estimate the concentrations of available nitrogen, phosphorus and potassium rapidly. The use of a local nonlinear method to improve the stability and reliability of the soil spectrum model for nutrient diagnosis is novel, and provides technical support for dynamic monitoring and process control of soil nutrients at different growth stages of field-grown crops.
Musenge, Eustasius; Chirwa, Tobias Freeman; Kahn, Kathleen; Vounatsou, Penelope
2013-06-01
Longitudinal mortality data with few deaths usually have problems of zero-inflation. This paper presents and applies two Bayesian models which cater for zero-inflation, spatial and temporal random effects. To reduce the computational burden experienced when a large number of geo-locations are treated as a Gaussian field (GF) we transformed the field to a Gaussian Markov Random Fields (GMRF) by triangulation. We then modelled the spatial random effects using the Stochastic Partial Differential Equations (SPDEs). Inference was done using a computationally efficient alternative to Markov chain Monte Carlo (MCMC) called Integrated Nested Laplace Approximation (INLA) suited for GMRF. The models were applied to data from 71,057 children aged 0 to under 10 years from rural north-east South Africa living in 15,703 households over the years 1992-2010. We found protective effects on HIV/TB mortality due to greater birth weight, older age and more antenatal clinic visits during pregnancy (adjusted RR (95% CI)): 0.73(0.53;0.99), 0.18(0.14;0.22) and 0.96(0.94;0.97) respectively. Therefore childhood HIV/TB mortality could be reduced if mothers are better catered for during pregnancy as this can reduce mother-to-child transmissions and contribute to improved birth weights. The INLA and SPDE approaches are computationally good alternatives in modelling large multilevel spatiotemporal GMRF data structures.
Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods
Zhu, Weixuan
2016-01-01
The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of two-parameter Poisson-Dirichlet processes. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...
Bayesian Analysis of Multivariate Probit Models
Siddhartha Chib; Edward Greenberg
1996-01-01
This paper provides a unified simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods, and maximum likelihood estimates are obtained by a Markov chain Monte Carlo version of the E-M algorithm. Computation of Bayes factors from the simulation output is also considered. The methods are applied to a bivariate data set, to a 534-subject, four-year longitudinal dat...
Bayesian Classification in Medicine: The Transferability Question *
Zagoria, Ronald J.; Reggia, James A.; Price, Thomas R.; Banko, Maryann
1981-01-01
Using probabilities derived from a geographically distant patient population, we applied Bayesian classification to categorize stroke patients by etiology. Performance was assessed both by error rate and with a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by two neurologists and to classification by the Bayesian method using “low cost” local and subjective probabilities. We conclude that for some...
Bayesian target tracking based on particle filter
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
Because they can deal with nonlinear and non-Gaussian problems, particle filters have been studied by many researchers. Building on the particle filter, the extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking. Novel techniques such as the Markov chain Monte Carlo (MCMC) method and the resampling step are also introduced into Bayesian target tracking, and the simulation results confirm that the improved particle filter with these techniques outperforms the basic one.
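A minimal bootstrap particle filter illustrates the basic scheme; unlike the cited work it uses the transition prior rather than an EKF-based proposal, and the one-dimensional tracking scenario below is invented for illustration:

```python
import math
import random

random.seed(0)

def bootstrap_filter(observations, n=500, q=1.0, r=1.0):
    """Basic bootstrap particle filter for a 1-D random-walk target.

    State:       x_k = x_{k-1} + N(0, q^2)
    Observation: z_k = x_k     + N(0, r^2)
    """
    particles = [random.gauss(0.0, 5.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # propagate particles through the motion model
        particles = [x + random.gauss(0.0, q) for x in particles]
        # weight each particle by the observation likelihood
        weights = [math.exp(-0.5 * ((z - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

true_path = [0.5 * k for k in range(20)]               # target moving 0.5/step
obs = [x + random.gauss(0.0, 1.0) for x in true_path]  # noisy measurements
est = bootstrap_filter(obs)
print(round(est[-1], 2), true_path[-1])
```

Swapping the transition-prior proposal for an EKF proposal, as in the work above, concentrates particles where the observation likelihood is high and typically reduces the number of particles needed.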
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2014-03-01
Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form by processing UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
Bayesian inference on proportional elections.
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional voting systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian setting using the Monte Carlo simulation technique, and the developed methodology was applied to data from the 2010 Brazilian elections for Members of the Legislative Assembly and the Federal Chamber of Deputies. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
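A sketch of this kind of Monte Carlo inference, under simplifying assumptions: a Dirichlet posterior over vote shares and the D'Hondt highest-averages rule as a stand-in for the actual Brazilian allocation procedure (the poll counts are invented):

```python
import random

random.seed(1)

def seat_probability(counts, party, seats=10, draws=5000):
    """P(party wins >= 1 seat) under a Dirichlet(1 + counts) posterior,
    allocating seats by the D'Hondt highest-averages rule."""
    parties = list(counts)
    hits = 0
    for _ in range(draws):
        # Dirichlet sample via normalized Gamma draws
        g = {p: random.gammavariate(1 + counts[p], 1.0) for p in parties}
        z = sum(g.values())
        shares = {p: g[p] / z for p in parties}
        # D'Hondt: repeatedly award a seat to the largest share/(won + 1)
        won = {p: 0 for p in parties}
        for _ in range(seats):
            best = max(parties, key=lambda p: shares[p] / (won[p] + 1))
            won[best] += 1
        if won[party] >= 1:
            hits += 1
    return hits / draws

poll = {"A": 420, "B": 350, "C": 180, "D": 50}  # hypothetical poll counts
p_d = seat_probability(poll, "D")
print(round(p_d, 3))
```

With these counts the small party "D" almost never clears the implicit D'Hondt threshold, so its posterior probability of representation is near zero, while the leading party's is near one.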
Elements of Bayesian experimental design
Energy Technology Data Exchange (ETDEWEB)
Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)
1997-09-01
We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel
Small sample Bayesian analyses in assessment of weapon performance
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Abundant test data are required in the assessment of weapon performance. When weapon test data are insufficient, Bayesian analysis in the small-sample setting should be considered, with additional test data provided by simulations. Several Bayesian approaches are discussed and some of their limitations are identified. After analyzing the limitations of the available Bayesian approaches, an improvement is put forward, and the improved approach is applied to the assessment of the performance of a new weapon.
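The idea of supplementing scarce test data with simulation can be sketched with a conjugate Beta-Binomial update, where simulated trials enter as prior pseudo-counts. All numbers are illustrative assumptions; the paper's actual approach may differ:

```python
def beta_posterior_mean(successes, trials, a=1.0, b=1.0):
    """Posterior mean of a hit probability under a Beta(a, b) prior.

    With only a handful of live tests, the prior (a, b) can encode
    pseudo-trials obtained from simulation.
    """
    return (a + successes) / (a + b + trials)

# 4 hits in 5 scarce live firings; naive frequentist estimate: 0.8
freq = 4 / 5
# Suppose simulation supplied the equivalent of 18 hits in 20 pseudo-trials
bayes = beta_posterior_mean(4, 5, a=18.0, b=2.0)
print(freq, round(bayes, 3))
```

The posterior estimate pools the five live trials with the twenty simulated pseudo-trials, stabilizing the assessment exactly where the frequentist estimate would rest on too few samples.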
The Bayesian Modelling Of Inflation Rate In Romania
Mihaela Simionescu
2014-01-01
Bayesian econometrics has seen a considerable increase in popularity in recent years, attracting researchers in the economic sciences as well as specialists in econometrics, commerce, industry, marketing, finance, microeconomics, macroeconomics and other domains. The purpose of this research is to provide an introduction to the Bayesian approach as applied in economics, starting with Bayes' theorem. For Bayesian linear regression models, the methodology of estim...
A Fault Diagnosis Methodology for Gear Pump Based on EEMD and Bayesian Network.
Directory of Open Access Journals (Sweden)
Zengkai Liu
Full Text Available This paper proposes a fault diagnosis methodology for a gear pump based on the ensemble empirical mode decomposition (EEMD) method and the Bayesian network. Essentially, the presented scheme is a multi-source information fusion-based methodology. Compared with conventional fault diagnosis using only EEMD, the proposed method is able to take advantage of all useful information besides sensor signals. The presented diagnostic Bayesian network consists of a fault layer, a fault feature layer and a multi-source information layer. Vibration signals from sensor measurements are decomposed by the EEMD method and the energies of the intrinsic mode functions (IMFs) are calculated as fault features. These features are added to the fault feature layer in the Bayesian network. The other sources of useful information are added to the information layer. The generalized three-layer Bayesian network can be developed by fully incorporating faults and fault symptoms as well as other useful information such as naked-eye inspection and maintenance records. Therefore, diagnostic accuracy and capacity can be improved. The proposed methodology is applied to the fault diagnosis of a gear pump, and the structure and parameters of the Bayesian network are established. Compared with artificial neural network and support vector machine classification algorithms, the proposed model has the best diagnostic performance when only sensor data are used. A case study has demonstrated that information from human observation or system repair records is very helpful to the fault diagnosis. The method is effective and efficient in diagnosing faults based on uncertain, incomplete information.
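The IMF-energy feature extraction step can be sketched as follows, taking the EEMD decomposition itself as given; the toy "IMFs" below are synthetic sinusoids standing in for real decomposed vibration signals:

```python
import math

def imf_energies(imfs, normalize=True):
    """Energy of each intrinsic mode function, used as fault features.

    `imfs` is assumed to be the output of an EEMD routine (one list of
    samples per IMF); the EEMD decomposition itself is omitted here.
    """
    e = [sum(x * x for x in imf) for imf in imfs]
    if normalize:
        total = sum(e) or 1.0
        e = [v / total for v in e]
    return e

# Toy stand-ins: a high-frequency and a low-frequency component of a
# healthy vs. a faulty vibration measurement.
n = 256
healthy = [
    [0.2 * math.sin(2 * math.pi * 8 * k / n) for k in range(n)],
    [1.0 * math.sin(2 * math.pi * 1 * k / n) for k in range(n)],
]
faulty = [
    [1.5 * math.sin(2 * math.pi * 8 * k / n) for k in range(n)],  # impact band grows
    [1.0 * math.sin(2 * math.pi * 1 * k / n) for k in range(n)],
]
e_h = imf_energies(healthy)
e_f = imf_energies(faulty)
print([round(v, 3) for v in e_h])
print([round(v, 3) for v in e_f])
```

The resulting normalized energy vectors are the discrete fault features fed into the feature layer of the diagnostic Bayesian network described above.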
Bayesian calibration of car-following models
Van Hinsbergen, C.P.IJ.; Van Lint, H.W.C.; Hoogendoorn, S.P.; Van Zuylen, H.J.
2010-01-01
Recent research has revealed that there exist large inter-driver differences in car-following behavior such that different car-following models may apply to different drivers. This study applies Bayesian techniques to the calibration of car-following models, where prior distributions on each model p
Yuan, Ying; MacKinnon, David P.
2009-01-01
In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Bayesian Design Space applied to Pharmaceutical Development
Lebrun, Pierre
2012-01-01
Given guidelines such as the Q8 document published by the International Conference on Harmonization (ICH), which describe the “Quality by Design” paradigm for Pharmaceutical Development, the aim of this work is to provide a complete methodology addressing this problem. As a result, various Design Spaces were obtained for different analytical methods and a manufacturing process. In Q8, the Design Space has been defined as “the multidimensional combination and interaction of input...
An introduction to Gaussian Bayesian networks.
Grzegorczyk, Marco
2010-01-01
The extraction of regulatory networks and pathways from postgenomic data is important for drug discovery and development, as the extracted pathways reveal how genes or proteins regulate each other. Following up on the seminal paper of Friedman et al. (J Comput Biol 7:601-620, 2000), Bayesian networks have been widely applied as a popular tool to this end in systems biology research. Their popularity stems from the tractability of the marginal likelihood of the network structure, which is a consistent scoring scheme in the Bayesian context. This score is based on an integration over the entire parameter space, for which highly expensive computational procedures have to be applied when using more complex models based on differential equations; for example, see (Bioinformatics 24:833-839, 2008). This chapter gives an introduction to reverse engineering regulatory networks and pathways with Gaussian Bayesian networks, that is Bayesian networks with the probabilistic BGe scoring metric [see (Geiger and Heckerman 235-243, 1995)]. In the BGe model, the data are assumed to stem from a Gaussian distribution and a normal-Wishart prior is assigned to the unknown parameters. Gaussian Bayesian network methodology for analysing static observational, static interventional as well as dynamic (observational) time series data will be described in detail in this chapter. Finally, we apply these Bayesian network inference methods (1) to observational and interventional flow cytometry (protein) data from the well-known RAF pathway to evaluate the global network reconstruction accuracy of Bayesian network inference and (2) to dynamic gene expression time series data of nine circadian genes in Arabidopsis thaliana to reverse engineer the unknown regulatory network topology for this domain. PMID:20824469
Bayesian missing data problems EM, data augmentation and noniterative computation
Tan, Ming T; Ng, Kai Wang
2009-01-01
Bayesian Missing Data Problems: EM, Data Augmentation and Noniterative Computation presents solutions to missing data problems through explicit or noniterative sampling calculation of Bayesian posteriors. The methods are based on the inverse Bayes formulae discovered by one of the authors in 1995. Applying the Bayesian approach to important real-world problems, the authors focus on exact numerical solutions, a conditional sampling approach via data augmentation, and a noniterative sampling approach via EM-type algorithms. After introducing the missing data problems, Bayesian approach, and poste
Temporal Difference Learning for the Game Tic-Tac-Toe 3D: Applying Structure to Neural Networks
van de Steeg, Michiel; Drugan, Madalina; Wiering, Marco
2015-01-01
When reinforcement learning is applied to large state spaces, such as those occurring in playing board games, the use of a good function approximator to learn to approximate the value function is very important. In previous research, multilayer perceptrons have often been quite successfully used as
Bayesian item selection in constrained adaptive testing using shadow tests
Veldkamp, Bernard P.
2010-01-01
Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specificati
Bayesian Compressed Sensing with Unknown Measurement Noise Level
DEFF Research Database (Denmark)
Hansen, Thomas Lundgaard; Jørgensen, Peter Bjørn; Pedersen, Niels Lovmand;
2013-01-01
In sparse Bayesian learning (SBL) approximate Bayesian inference is applied to find sparse estimates from observations corrupted by additive noise. Current literature only vaguely considers the case where the noise level is unknown a priori. We show that for most state-of-the-art reconstruction a...
Bayesian Item Selection in Constrained Adaptive Testing Using Shadow Tests
Veldkamp, Bernard P.
2010-01-01
Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…
Bayesian Learning and the Psychology of Rule Induction
Endress, Ansgar D.
2013-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to…
Directory of Open Access Journals (Sweden)
Raju Dara
2014-10-01
Full Text Available Many diverse applications in science and technology rely on recognition as a core component of their solutions. A recognition scenario involves a set of decisions, and in most applications the action taken for each decision depends entirely on the quality of the extracted information. Decision quality therefore hinges on processing speed and precision, both of which are determined by the recognition methodology. In this article, a new rule based on the degree of correlation is formulated to characterize the generalized recognition constraint, and its application to image-based information extraction is explored. A machine learning approach, the feed-forward architecture of the Artificial Neural Network, has been applied to attain the expected quality of solution. The proposed method offers notable advantages, including low memory requirements, a high level of security for stored data, high speed and a straightforward implementation.
Bayesian Image Reconstruction Based on Voronoi Diagrams
Cabrera, G F; Hitschfeld, N
2007-01-01
We present a Bayesian Voronoi image reconstruction technique (VIR) for interferometric data. Bayesian analysis applied to the inverse problem allows us to derive the a-posteriori probability of a novel parameterization of interferometric images. We use a variable Voronoi diagram as our model in place of the usual fixed pixel grid. A quantization of the intensity field allows us to calculate the likelihood function and a-priori probabilities. The Voronoi image is optimized including the number of polygons as free parameters. We apply our algorithm to deconvolve simulated interferometric data. Residuals, restored images and chi^2 values are used to compare our reconstructions with fixed grid models. VIR has the advantage of modeling the image with few parameters, obtaining a better image from a Bayesian point of view.
Bayesian Modelling of fMRI Time Series
DEFF Research Database (Denmark)
Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward
2000-01-01
We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte...
SOMBI: Bayesian identification of parameter relations in unstructured cosmological data
Frank, Philipp; Enßlin, Torsten A
2016-01-01
This work describes the implementation and application of a correlation determination method based on Self Organizing Maps and Bayesian Inference (SOMBI). SOMBI aims to automatically identify relations between different observed parameters in unstructured cosmological or astrophysical surveys by identifying data clusters in high-dimensional datasets via the Self Organizing Map neural network algorithm. Parameter relations are then revealed by means of Bayesian inference within the respective identified data clusters. Specifically, such relations are assumed to be parametrized as a polynomial of unknown order. The Bayesian approach results in a posterior probability distribution function for the respective polynomial coefficients. To decide which polynomial order suffices to describe the correlation structures in the data, we incorporate a model selection method, the Bayesian Information Criterion, into the analysis. The performance of the SOMBI algorithm is tested with mock data. As illustration we also provide ...
Neural Networks and Photometric Redshifts
Tagliaferri, Roberto; Longo, Giuseppe; Andreon, Stefano; Capozziello, Salvatore; Donalek, Ciro; Giordano, Gerardo
2002-01-01
We present a neural network based approach to the determination of photometric redshifts. The method was tested on the Sloan Digital Sky Survey Early Data Release (SDSS-EDR), reaching an accuracy comparable to and, in some cases, better than SED template fitting techniques. Different neural network architectures have been tested, and the combination of a Multi Layer Perceptron with 1 hidden layer (22 neurons) operated in a Bayesian framework, with a Self Organizing Map used to estimate the accuracy...
Granade, Christopher; Cory, D G
2015-01-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
Noncausal Bayesian Vector Autoregression
DEFF Research Database (Denmark)
Lanne, Markku; Luoto, Jani
We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution...
Bayesian Lensing Shear Measurement
Bernstein, Gary M
2013-01-01
We derive an estimator of weak gravitational lensing shear from background galaxy images that avoids noise-induced biases through a rigorous Bayesian treatment of the measurement. The Bayesian formalism requires a prior describing the (noiseless) distribution of the target galaxy population over some parameter space; this prior can be constructed from low-noise images of a subsample of the target population, attainable from long integrations of a fraction of the survey field. We find two ways to combine this exact treatment of noise with rigorous treatment of the effects of the instrumental point-spread function and sampling. The Bayesian model fitting (BMF) method assigns a likelihood of the pixel data to galaxy models (e.g. Sersic ellipses), and requires the unlensed distribution of galaxies over the model parameters as a prior. The Bayesian Fourier domain (BFD) method compresses galaxies to a small set of weighted moments calculated after PSF correction in Fourier space. It requires the unlensed distributi...
Manger, R
1998-01-01
Holographic neural networks are a new and promising type of artificial neural network. This article gives an overview of holographic neural technology and its possibilities. The theoretical principles of holographic networks are first reviewed. Then, several papers in which holographic networks have been applied or experimentally evaluated are presented. A case study dealing with currency exchange rate prediction is described in more detail.
Malicious Bayesian Congestion Games
Gairing, Martin
2008-01-01
In this paper, we introduce malicious Bayesian congestion games as an extension of congestion games in which players might act maliciously. In such a game each player has two types: either the player is a rational player seeking to minimize her own delay, or, with a certain probability, the player is malicious, in which case her only goal is to disturb the other players as much as possible. We show that such games do not, in general, possess a Bayesian Nash equilibrium in pure strategies (i.e. a pure Bayesian Nash equilibrium). Moreover, given a game, we show that it is NP-complete to decide whether it admits a pure Bayesian Nash equilibrium. This result holds even when resource latency functions are linear, each player is malicious with the same probability, and all strategy sets consist of singleton sets. For a slightly more restricted class of malicious Bayesian congestion games, we provide easily checkable properties that are necessary and sufficient for the existence of a pure Bayesian Nash equilibrium....
A Bayesian Approach to Interactive Retrieval
Tague, Jean M.
1973-01-01
A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…
Perfect Bayesian equilibrium. Part II: epistemic foundations
Bonanno, Giacomo
2011-01-01
In a companion paper we introduced a general notion of perfect Bayesian equilibrium which can be applied to arbitrary extensive-form games. The essential ingredient of the proposed definition is the qualitative notion of AGM-consistency. In this paper we provide an epistemic foundation for AGM-consistency based on the AGM theory of belief revision.
Phase Transitions of Neural Networks
Kinzel, Wolfgang
1997-01-01
The cooperative behaviour of interacting neurons and synapses is studied using models and methods from statistical physics. The competition between training error and entropy may lead to discontinuous properties of the neural network. This is demonstrated for a few examples: Perceptron, associative memory, learning from examples, generalization, multilayer networks, structure recognition, Bayesian estimate, on-line training, noise estimation and time series generation.
Bayesian Optimization in High Dimensions via Random Embeddings
Z. Wang; M. Zoghi; F. Hutter; D. Matheson; N. de Freitas
2013-01-01
Bayesian optimization techniques have been successfully applied to robotics, planning, sensor placement, recommendation, advertising, intelligent user interfaces and automatic algorithm configuration. Despite these successes, the approach is restricted to problems of moderate dimension, and several
Bayesian Methods for Radiation Detection and Dosimetry
Groer, Peter G
2002-01-01
We performed work in three areas: radiation detection, external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied stochastic processes for a method to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose-estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed comp...
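The net-activity idea in this abstract, describing the remaining uncertainty about a rate by a probability density, can be illustrated with a minimal grid-based posterior for a Poisson counting measurement. This is a sketch under simplifying assumptions (known background rate, flat prior on the net rate), not the authors' actual method:

```python
import numpy as np

def net_rate_posterior(n_g, t_g, b, s_grid):
    """Posterior density of the net source rate s (counts/s), given n_g gross
    counts observed over time t_g with a known background rate b; flat prior
    on s >= 0, evaluated on a discrete grid."""
    lam = (s_grid + b) * t_g                # expected gross counts
    log_like = n_g * np.log(lam) - lam      # Poisson log-likelihood (up to a constant)
    post = np.exp(log_like - log_like.max())
    ds = s_grid[1] - s_grid[0]
    return post / (post.sum() * ds)         # normalize as a density on the grid

# Hypothetical measurement: 120 gross counts in 10 s, background 5 counts/s.
s = np.linspace(0.0, 20.0, 2001)
post = net_rate_posterior(n_g=120, t_g=10.0, b=5.0, s_grid=s)
ds = s[1] - s[0]
mean_s = (s * post).sum() * ds              # posterior mean net rate, roughly 7 counts/s
```

A plot of `post` against `s` would show the uncertainty "in pictorial form", as the abstract describes for its Figure 1.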
Dynamic Bayesian Combination of Multiple Imperfect Classifiers
Simpson, Edwin; Psorakis, Ioannis; Smith, Arfon
2012-01-01
Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision making strategies, and show that natural groupings are formed. Finally we present ...
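A much simpler relative of the approach described above is the independent opinion pool, in which each base classifier is summarized by an estimated confusion matrix and their discrete outputs are combined naively. The variational treatment in the paper goes well beyond this, so the sketch below, with made-up confusion matrices, is only illustrative:

```python
import numpy as np

def combine(confusions, decisions, prior):
    """Naive-Bayes combination of discrete classifier outputs.
    confusions[k][true, predicted] is classifier k's estimated confusion matrix."""
    log_post = np.log(prior)
    for C, d in zip(confusions, decisions):
        log_post = log_post + np.log(C[:, d])   # likelihood of each true class
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

prior = np.array([0.5, 0.5])
reliable   = np.array([[0.90, 0.10], [0.10, 0.90]])   # mostly correct
unreliable = np.array([[0.55, 0.45], [0.45, 0.55]])   # barely better than chance
# The reliable classifier votes class 1, the unreliable one votes class 0:
post = combine([reliable, unreliable], [1, 0], prior)
```

Because the combination weights each vote by the classifier's inferred reliability, the reliable classifier's vote dominates the posterior.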
Dimensionality reduction in Bayesian estimation algorithms
Directory of Open Access Journals (Sweden)
G. W. Petty
2013-03-01
Full Text Available An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals – whether Bayesian or not – lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.
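The pseudochannel idea can be sketched on toy data: project N correlated channels onto a leading principal component and check that a single pseudochannel retains the signal-related information. The two-stage PCA in the paper also whitens by the noise covariance first; the sketch below assumes white noise and entirely synthetic data, not the actual microwave database:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dependent database": N = 3 correlated channels driven by one geophysical signal.
signal = rng.gamma(2.0, 1.0, size=5000)
loadings = (1.0, 0.8, 0.6)
X = np.column_stack([a * signal for a in loadings]) + rng.normal(0.0, 0.3, (5000, 3))

# Single-stage PCA (stage one of the paper's two-stage procedure would first
# whiten by the noise covariance; white noise makes that step trivial here).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pseudo = Xc @ Vt[0]          # M = 1 pseudochannel

# The one pseudochannel retains nearly all of the signal-related variance:
r = np.corrcoef(pseudo, signal)[0, 1]
```

A Bayesian retrieval run on `pseudo` alone would then face a one-dimensional matching problem instead of an N-dimensional one.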
Bayesian Estimation of Thermonuclear Reaction Rates
Iliadis, Christian; Coc, Alain; Timmes, Frank; Starrfield, Sumner
2016-01-01
The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied in the past to this problem, all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extra-solar planets, gravitational waves, and type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present the first astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the d(p,$\\gamma$)$^3$He, $^3$He($^3$He,2p)$^4$He, and $^3$He($\\alpha$,$\\gamma$)$^7$Be reactions,...
Ojha, Maheswar; Maiti, Saumen
2016-03-01
A novel approach based on the concept of the Bayesian neural network (BNN) has been implemented for classifying sediment boundaries using downhole log data obtained during Integrated Ocean Drilling Program (IODP) Expedition 323 in the Bering Sea slope region. The Bayesian framework, in conjunction with a Markov Chain Monte Carlo (MCMC)/hybrid Monte Carlo (HMC) learning paradigm, has been applied to constrain the lithology boundaries using density, density porosity, gamma ray, sonic P-wave velocity and electrical resistivity at Hole U1344A. We have demonstrated the effectiveness of our supervised classification methodology by comparing our findings with those of a conventional neural network and a Bayesian neural network optimized by the scaled conjugate gradient method (SCG), and tested the robustness of the algorithm in the presence of red noise in the data. The Bayesian results based on the HMC algorithm (BNN.HMC) resolve detailed finer structures at certain depths in addition to the main lithology, such as silty clay, diatom clayey silt and sandy silt. Our method also recovers lithology information over a no-core-recovery zone at depths between 615 and 655 m wireline-log matched depth below sea floor. Our analyses demonstrate that the BNN based approach provides a robust means for the classification of complex lithology successions at Hole U1344A, which could be very useful for other studies and for understanding oceanic crustal inhomogeneity and structural discontinuities.
Computational statistics using the Bayesian Inference Engine
Weinberg, Martin D
2012-01-01
This paper introduces the Bayesian Inference Engine (BIE), a general parallel-optimised software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organise and reuse expensive derived data. I describe key concepts that illustrate the power of Bayesian inference to address these needs and outline the computational challenge. The techniques presented are based on experience gained in modelling star-counts and stellar populations, analysing the morphology of galaxy images, and performing Bayesian investigations of semi-analytic models of galaxy formation. These inference problems require advanced Markov chain Monte Carlo (MCMC) algorithms that expedite sampling, mixing, and the analysis of the Bayesian posterior distribution. The BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. By providing a variety of statistical algorithms for all phases of the inference problem, a u...
Hybrid Batch Bayesian Optimization
Azimi, Javad; Fern, Xiaoli
2012-01-01
Bayesian Optimization aims at optimizing an unknown non-convex/concave function that is costly to evaluate. We are interested in application scenarios where concurrent function evaluations are possible. Under such a setting, BO could choose to either sequentially evaluate the function, one input at a time and wait for the output of the function before making the next selection, or evaluate the function at a batch of multiple inputs at once. These two different settings are commonly referred to as the sequential and batch settings of Bayesian Optimization. In general, the sequential setting leads to better optimization performance as each function evaluation is selected with more information, whereas the batch setting has an advantage in terms of the total experimental time (the number of iterations). In this work, our goal is to combine the strength of both settings. Specifically, we systematically analyze Bayesian optimization using Gaussian process as the posterior estimator and provide a hybrid algorithm t...
Energy Technology Data Exchange (ETDEWEB)
Ritter, G.X.; Sussner, P. [Univ. of Florida, Gainesville, FL (United States)
1996-12-31
The theory of artificial neural networks has been successfully applied to a wide variety of pattern recognition problems. In this theory, the first step in computing the next state of a neuron, or in performing the next-layer neural network computation, involves the linear operation of multiplying neural values by their synaptic strengths and adding the results. Thresholding usually follows the linear operation in order to provide for nonlinearity of the network. In this paper we introduce a novel class of neural networks, called morphological neural networks, in which the operations of multiplication and addition are replaced by addition and maximum (or minimum), respectively. By taking the maximum (or minimum) of sums instead of the sum of products, morphological network computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we consider some of these differences and provide some particular examples of morphological neural networks.
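The core computation described above, taking the maximum of sums instead of the sum of products, can be written in a few lines. The layer below, shown before any thresholding, is a minimal sketch with hypothetical weights:

```python
import numpy as np

def morph_layer(x, W):
    """One morphological layer: y_j = max_i (x_i + W[j, i]).
    Addition replaces multiplication, maximum replaces summation (max-plus algebra)."""
    return np.max(x[None, :] + W, axis=1)

x = np.array([2.0, -1.0, 0.5])          # input neural values
W = np.array([[0.0, 3.0, -1.0],         # synaptic strengths of neuron 0
              [1.0, 0.0,  0.0]])        # synaptic strengths of neuron 1
y = morph_layer(x, W)                   # -> [2.0, 3.0]
```

Note the nonlinearity is already present in the max operation itself, before any threshold is applied, which is what makes these networks behave so differently from sum-of-products models.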
Bayesian multiple target tracking
Streit, Roy L
2013-01-01
This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements
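A basic bootstrap particle filter of the kind the book builds on can be sketched for a single target with a constant-velocity motion model and noisy position measurements. All model parameters below are illustrative, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(zs, n=2000, dt=1.0, q=0.1, r=0.5):
    """Bootstrap particle filter: state [position, velocity], position-only
    measurements with noise std r, process noise std q."""
    parts = rng.normal(0.0, 1.0, (n, 2))                  # initial particle cloud
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # constant-velocity dynamics
    est = []
    for z in zs:
        parts = parts @ F.T + rng.normal(0.0, q, (n, 2))  # predict
        w = np.exp(-0.5 * ((z - parts[:, 0]) / r) ** 2)   # weight by likelihood
        w /= w.sum()
        est.append(w @ parts[:, 0])                       # posterior mean position
        parts = parts[rng.choice(n, n, p=w)]              # resample
    return np.array(est)

# Hypothetical target moving at 1 unit/step, observed with noise:
true_pos = np.arange(20, dtype=float)
zs = true_pos + rng.normal(0.0, 0.5, 20)
est = particle_filter(zs)
```

The nonlinear, non-Gaussian trackers discussed in the book replace the Gaussian likelihood and linear dynamics here with arbitrary models; the predict/weight/resample loop is unchanged.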
Bayesian and frequentist inequality tests
David M. Kaplan; Zhuo, Longhao
2016-01-01
Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (and normal). We compare Bayesian and frequentist hypothesis tests of inequality restrictions in such cases. For finite-dimensional parameters, if the null hypothesis is that the parameter vector lies in a certain half-space, then the Bayesian test has (frequentist) size $\\alpha$; if the null hypothesis is any other convex subspace, then the Bayesian test...
BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.
Khakabimamaghani, Sahand; Ester, Martin
2016-01-01
The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods are provided in the literature for patient stratification, which is the central task of SM, however, there are still significant open issues. First, it is still unclear if integrating different datatypes will help in detecting disease subtypes more accurately, and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data. PMID:26776199
Bayesian logistic regression analysis
Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.
2012-01-01
In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an
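For a single coefficient with a flat prior, the posterior in a Bayesian logistic regression can be explored with a plain random-walk Metropolis sampler. This sketch is far simpler than the Jacobian-transformation analysis the paper develops, and it uses synthetic data with an assumed true coefficient of 1.5:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: one covariate, true beta = 1.5.
x = rng.normal(0.0, 1.0, 200)
p = 1.0 / (1.0 + np.exp(-1.5 * x))
y = (rng.random(200) < p).astype(float)

def log_post(beta):
    """Bernoulli log-likelihood (flat prior, so posterior up to a constant)."""
    eta = beta * x
    return np.sum(y * eta - np.log1p(np.exp(eta)))

beta, lp = 0.0, log_post(0.0)
samples = []
for _ in range(5000):
    prop = beta + rng.normal(0.0, 0.3)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        beta, lp = prop, lp_prop
    samples.append(beta)

post_mean = np.mean(samples[1000:])           # discard burn-in
```

Pushing each sampled `beta` through the logistic function at a covariate value of interest yields draws from the posterior of the event probability, the quantity the paper's derivation targets.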
Bayesian Independent Component Analysis
DEFF Research Database (Denmark)
Winther, Ole; Petersen, Kaare Brandt
2007-01-01
In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...
DEFF Research Database (Denmark)
Hartelius, Karsten; Carstensen, Jens Michael
2003-01-01
A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...
Neural Induction, Neural Fate Stabilization, and Neural Stem Cells
Directory of Open Access Journals (Sweden)
Sally A. Moody
2002-01-01
Full Text Available Stem cell therapy is expected to greatly benefit the treatment of neurodegenerative diseases. An underlying biological reason for the progressive functional losses associated with these diseases is the extremely low natural rate of self-repair in the nervous system. Although the mature CNS harbors a limited number of self-renewing stem cells, these make a significant contribution to only a few areas of the brain. Therefore, it is particularly important to understand how to manipulate embryonic stem cells and adult neural stem cells so their descendants can repopulate and functionally repair damaged brain regions. A large knowledge base has been gathered about the normal processes of neural development. The time has come for this information to be applied to the problems of obtaining sufficient, neurally committed stem cells for clinical use. In this article we review the process of neural induction, by which the embryonic ectodermal cells are directed to form the neural plate, and the process of neural fate stabilization, by which neural plate cells expand in number and consolidate their neural fate. We will present the current knowledge of the transcription factors and signaling molecules that are known to be involved in these processes. We will discuss how these factors may be relevant to manipulating embryonic stem cells to express a neural fate and to produce large numbers of neurally committed, yet undifferentiated, stem cells for transplantation therapies.
Nonparametric Bayesian inference in biostatistics
Müller, Peter
2015-01-01
Nonparametric Bayesian (BNP) approaches play an ever expanding role in biostatistical inference, from proteomics to clinical trials. As chapters in this book demonstrate, BNP has important uses in the clinical sciences and in inference for problems such as unknown partitions in genomics. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...
Bayesian networks in educational assessment
Almond, Russell G; Steinberg, Linda S; Yan, Duanli; Williamson, David M
2015-01-01
Bayesian inference networks, a synthesis of statistics and expert systems, have advanced reasoning under uncertainty in medicine, business, and social sciences. This innovative volume is the first comprehensive treatment exploring how they can be applied to design and analyze innovative educational assessments. Part I develops Bayes nets’ foundations in assessment, statistics, and graph theory, and works through the real-time updating algorithm. Part II addresses parametric forms for use with assessment, model-checking techniques, and estimation with the EM algorithm and Markov chain Monte Carlo (MCMC). A unique feature is the volume’s grounding in Evidence-Centered Design (ECD) framework for assessment design. This “design forward” approach enables designers to take full advantage of Bayes nets’ modularity and ability to model complex evidentiary relationships that arise from performance in interactive, technology-rich assessments such as simulations. Part III describes ECD, situates Bayes nets as ...
A Bayesian nonlinear mixed-effects disease progression model
Kim, Seongho; Jang, Hyejeong; Wu, Dongfeng; Abrams, Judith
2015-01-01
A nonlinear mixed-effects approach is developed for disease progression models that incorporate variation in age in a Bayesian framework. We further generalize the probability model for sensitivity to depend on age at diagnosis, time spent in the preclinical state and sojourn time. The developed models are then applied to the Johns Hopkins Lung Project data and the Health Insurance Plan for Greater New York data using Bayesian Markov chain Monte Carlo and are compared with the estimation meth...
Bayesian item selection in constrained adaptive testing using shadow tests
Bernard P. Veldkamp
2010-01-01
Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item selection process. The Shadow Test Approach is a general purpose algorithm for administering constrained CAT. In this paper it is shown how the approac...
Proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology
Hortúa, Héctor J
2014-01-01
These are the proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology, held in Bogotá D.C., Colombia, June 9-13, 2014. The school was the first event in Colombia at which statisticians and cosmologists from several universities in Bogotá met to discuss statistical methods applied to cosmology, especially the use of Bayesian statistics in the study of the Cosmic Microwave Background (CMB), Baryonic Acoustic Oscillations (BAO), Large Scale Structure (LSS) and weak lensing.
Kernel Approximate Bayesian Computation for Population Genetic Inferences
Nakagome, Shigeki; Fukumizu, Kenji; Mano, Shuhei
2012-01-01
Approximate Bayesian computation (ABC) is a likelihood-free approach for Bayesian inferences based on a rejection algorithm method that applies a tolerance of dissimilarity between summary statistics from observed and simulated data. Although several improvements to the algorithm have been proposed, none of these improvements avoid the following two sources of approximation: 1) lack of sufficient statistics: sampling is not from the true posterior density given data but from an approximate po...
Bayesian calibration for forensic age estimation.
Ferrante, Luigi; Skrami, Edlira; Gesuita, Rosaria; Cameriere, Roberto
2015-05-10
Forensic medicine is increasingly called upon to assess the age of individuals. Forensic age estimation is mostly required in relation to illegal immigration and identification of bodies or skeletal remains. A variety of age estimation methods are based on dental samples and use of regression models, where the age of an individual is predicted by morphological tooth changes that take place over time. From the medico-legal point of view, regression models, with age as the dependent random variable, entail that age tends to be overestimated in the young and underestimated in the old. To overcome this bias, we describe a new full Bayesian calibration method (asymmetric Laplace Bayesian calibration) for forensic age estimation that uses the asymmetric Laplace distribution as the probability model. The method was compared with three existing approaches (two Bayesian and a classical method) using simulated data. Although its accuracy was comparable with that of the other methods, the asymmetric Laplace Bayesian calibration appears to be significantly more reliable and robust in the case of misspecification of the probability model. The proposed method was also applied to a real dataset of values of the pulp chamber of the right lower premolar measured on x-ray scans of individuals of known age. PMID:25645903
Directory of Open Access Journals (Sweden)
Gang Li
2015-10-01
Full Text Available As a novel recurrent neural network (RNN, an echo state network (ESN that utilizes a reservoir with many randomly connected internal units and only trains the readout, avoids increased complexity of training procedures faced by traditional RNN. The ESN can cope with complex nonlinear systems because of its dynamical properties and has been applied in hydrological forecasting and load forecasting. Due to the linear regression algorithm usually adopted by generic ESN to train the output weights, an ill-conditioned solution might occur, degrading the generalization ability of the ESN. In this study, the ESN with Bayesian regularization (BESN is proposed for short-term power production forecasting of small hydropower (SHP plants. According to the Bayesian theory, the weights distribution in space is considered and the optimal output weights are obtained by maximizing the posterior probabilistic distribution. The evidence procedure is employed to gain optimal hyperparameters for the BESN model. The recorded data obtained from the SHP plants in two different counties, located in Yunnan Province, China, are utilized to validate the proposed model. For comparison, the feed-forward neural networks with Levenberg-Marquardt algorithm (LM-FNN and the generic ESN are also employed. The results indicate that BESN outperforms both LM-FNN and ESN.
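The readout-only training that distinguishes the ESN can be sketched compactly. Below, a fixed random reservoir drives a linear readout fitted by ridge regression, which is the MAP solution under a Gaussian prior on the weights; this is a simplification of the full BESN evidence procedure described above, and the reservoir size, spectral radius, regularizer, and toy prediction task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: one-step-ahead prediction of a sine series (illustrative)
T = 500
u = np.sin(0.2 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

# Random reservoir (fixed); only the readout is trained
n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

states = np.zeros((T, n_res))
xs = np.zeros(n_res)
for t in range(T):
    xs = np.tanh(W_in * inputs[t] + W @ xs)
    states[t] = xs

# Ridge readout = MAP under a Gaussian weight prior; lam stands in for
# the hyperparameter that BESN would tune via the evidence procedure.
lam = 1e-6
X, Y = states[100:], targets[100:]        # discard washout period
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)

pred = X @ W_out
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

The Bayesian view replaces the single lam with hyperparameters optimized by maximizing the evidence, which guards against the ill-conditioned solutions mentioned in the abstract.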
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel N
2012-01-01
Many networks of scientific interest naturally decompose into clusters or communities with comparatively fewer external than internal links; however, current Bayesian models of network communities do not impose this intuitive notion of communities. We formulate a nonparametric Bayesian model...... for community detection consistent with an intuitive definition of communities and present a Markov chain Monte Carlo procedure for inferring the community structure. A Matlab toolbox with the proposed inference procedure is available for download. On synthetic and real networks, our model detects communities...... consistent with ground truth, and on real networks, it outperforms existing approaches in predicting missing links. This suggests that community structure is an important structural property of networks that should be explicitly modeled....
Ildikó Ungvári; Gábor Hullám; Péter Antal; Petra Sz Kiszel; András Gézsi; Éva Hadadi; Viktor Virág; Gergely Hajós; András Millinghoffer; Adrienne Nagy; András Kiss; Semsei, Ágnes F.; Gergely Temesi; Béla Melegh; Péter Kisfali
2012-01-01
Genetic studies indicate a high number of potential factors related to asthma. Based on earlier linkage analyses we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). Th...
Improving Environmental Scanning Systems Using Bayesian Networks
Simon Welter; Jörg H. Mayer; Reiner Quick
2013-01-01
As companies’ environment is becoming increasingly volatile, scanning systems gain in importance. We propose a hybrid process model for such systems' information gathering and interpretation tasks that combines quantitative information derived from regression analyses and qualitative knowledge from expert interviews. For the latter, we apply Bayesian networks. We derive the need for such a hybrid process model from a literature review. We lay out our model to find a suitable set of business e...
Market Segmentation Using Bayesian Model Based Clustering
Van Hattum, P.
2009-01-01
This dissertation deals with two basic problems in marketing, that are market segmentation, which is the grouping of persons who share common aspects, and market targeting, which is focusing your marketing efforts on one or more attractive market segments. For the grouping of persons who share common aspects a Bayesian model based clustering approach is proposed such that it can be applied to data sets that are specifically used for market segmentation. The cluster algorithm can handle very l...
Brody, Samuel; Lapata, Mirella
2009-01-01
Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...
Efficient Bayesian Phase Estimation
Wiebe, Nathan; Granade, Chris
2016-07-01
We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.
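The essence of rejection filtering can be sketched in a few lines: maintain a Gaussian approximation of the phase posterior and, after each measurement, resample it by accepting prior draws with probability equal to the likelihood of the observed outcome. This sketch fixes the experiment multiplier at M = 1 (so it does not exhibit the paper's Heisenberg-limited adaptive scheduling), and the true phase, sample sizes, and prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
true_phase = 1.234                     # eigenphase to be inferred (assumed)

def p_zero(phi, theta):
    """Likelihood of outcome 0 for one phase-estimation experiment (M = 1)."""
    return (1 + np.cos(phi - theta)) / 2

mu, sigma = np.pi, 2.0                 # Gaussian posterior approximation
for _ in range(300):
    theta = rng.uniform(0, 2 * np.pi)  # experiment setting
    outcome = int(rng.uniform() > p_zero(true_phase, theta))
    # Rejection filtering: draw from the current prior, accept each draw
    # with probability equal to the likelihood of the observed outcome.
    draws = rng.normal(mu, sigma, size=1000)
    lik = p_zero(draws, theta) if outcome == 0 else 1 - p_zero(draws, theta)
    kept = draws[rng.uniform(size=1000) < lik]
    if kept.size > 20:                 # refit the Gaussian approximation
        mu, sigma = kept.mean(), max(kept.std(), 1e-3)
```

Because only accepted samples and their first two moments are stored, the filter has the small memory footprint that makes an FPGA implementation plausible.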
Energy Technology Data Exchange (ETDEWEB)
Sha, W. [Metals Research Group, School of Planning, Architecture and Civil Engineering, The Queen's University of Belfast, Belfast BT7 1NN (United Kingdom)
2007-02-15
A paper has been published in Applied Thermal Engineering, using a feed-forward artificial neural network (ANN) in the modeling of heated catalytic converter performance. The present paper attempts to discuss and comment on that paper. The amount of data used in the paper is not enough to determine the number of fitting parameters in the network. Therefore, the model is not mathematically sound or justified. The conclusion is that ANN modeling should be used with care and enough data. (author)
Bayesian decoding using unsorted spikes in the rat hippocampus
Kloosterman, Fabian; Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A
2013-01-01
A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametr...
Wiegerinck, Wim; Schoenaker, Christiaan; Duane, Gregory
2016-04-01
Recently, methods for model fusion by dynamically combining model components in an interactive ensemble have been proposed. In these proposals, fusion parameters have to be learned from data. One can view these systems as parametrized dynamical systems. We address the question of learnability of dynamical systems with respect to both short term (vector field) and long term (attractor) behavior. In particular we are interested in learning in the imperfect model class setting, in which the ground truth has a higher complexity than the models, e.g. due to unresolved scales. We take a Bayesian point of view and we define a joint log-likelihood that consists of two terms, one is the vector field error and the other is the attractor error, for which we take the L1 distance between the stationary distributions of the model and the assumed ground truth. In the context of linear models (like so-called weighted supermodels), and assuming a Gaussian error model in the vector fields, vector field learning leads to a tractable Gaussian solution. This solution can then be used as a prior for the next step, Bayesian attractor learning, in which the attractor error is used as a log-likelihood term. Bayesian attractor learning is implemented by elliptical slice sampling, a sampling method for systems with a Gaussian prior and a non Gaussian likelihood. Simulations with a partially observed driven Lorenz 63 system illustrate the approach.
Bayesian data analysis in population ecology: motivations, methods, and benefits
Dorazio, Robert
2016-01-01
During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.
Energy Technology Data Exchange (ETDEWEB)
Pulgati, Fernando H. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Lab. de Modelagem de Bacias (LAB2M); Zouain, Ricardo N.A. [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Inst. de Geociencias. Centro de Estudos de Geologia Costeira e Oceanica; Fachel, Jandyra M.G. [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Inst. de Matematica; Landau, Luiz [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Lab. de Metodos Computacionais em Engenharia (LAMCE)
2004-07-01
Environmental control and monitoring studies have accompanied the development of offshore exploratory drilling aimed at finding oil and gas reserves, as environmental demands and restrictions have increased. Three stages of the drilling process were isolated and the effects of different fluids were measured using Bayesian spatial models. The probable impact of the use of non-aqueous fluid (NAF) was measured through changes observed in sea sediments on three different occasions: before the activity, one (1) month after the end of the activity, and one (1) year after the end of the activity. A BACI (Before-After Control Impact) design, which allows the control of temporal and spatial variation components, was chosen. (author)
Mizera, Mikołaj; Talaczyńska, Alicja; Zalewski, Przemysław; Skibiński, Robert; Cielecka-Piontek, Judyta
2015-05-01
A sensitive and fast HPLC method using an ultraviolet diode-array detector (DAD)/electrospray ionization tandem mass spectrometry (Q-TOF-MS/MS) was developed for the determination of tebipenem pivoxyl in the presence of degradation products formed during thermolysis. The chromatographic separations were performed on stationary phases produced in core-shell technology with a particle diameter of 5.0 µm. The mobile phases consisted of formic acid (0.1%) and acetonitrile at different ratios. The flow rate was 0.8 mL/min while the wavelength was set at 331 nm. The stability characteristics of tebipenem pivoxyl were studied by performing stress tests in the solid state in dry air (RH=0%) and at an increased relative air humidity (RH=90%). The validation parameters such as selectivity, accuracy, precision and sensitivity were found to be satisfactory. Satisfactory selectivity and precision of determination were obtained for the separation of tebipenem pivoxyl from its degradation products using a stationary phase with 5.0 µm particles. The evaluation of the chemical structure of the 9 degradation products of tebipenem pivoxyl was conducted following separation based on the stationary phase with a 5.0 µm particle size by applying a Q-TOF-MS/MS detector. The main degradation products of tebipenem pivoxyl were identified: a product resulting from the condensation of the substituents of 1-(4,5-dihydro-1,3-thiazol-2-yl)-3-azetidinyl]sulfanyl and acid and ester forms of tebipenem with an open β-lactam ring in dry air at an increased temperature (RH=0%, T=393 K) as well as acid and ester forms of tebipenem with an open β-lactam ring at an increased relative air humidity and an elevated temperature (RH=90%, T=333 K). Retention times of tebipenem pivoxyl and its degradation products were used as a training data set for a predictive model of the quantitative structure-retention relationship. An artificial neural network with adaptation protocol and extensive feature selection process
Computational statistics using the Bayesian Inference Engine
Weinberg, Martin D.
2013-09-01
This paper introduces the Bayesian Inference Engine (BIE), a general parallel, optimized software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organize and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasizes hybrid tempered Markov chain Monte Carlo schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialization system that stores the full byte-level image of the running inference and previously characterized posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible object-oriented and easily extended framework that implements every aspect of the Bayesian inference. By providing a variety of statistical algorithms for all phases of the inference problem, a scientist may explore a variety of approaches with a single model and data implementation. Additional technical details and download details are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU General Public License.
Bayesian optimization for materials design
Frazier, Peter I.; Wang, Jialei
2015-01-01
We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets. Bayesian optimization guides the choice of experiments during materials design and discovery to find good material designs in as few experiments as possible. We focus on the case when materials designs are parameterized by a low-dimensional vector. Bayesian optimization is built on a statistical technique called Gaussian pro...
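As a concrete illustration of the loop described above, here is a from-scratch sketch of Bayesian optimization over a single design variable in [0, 1]: a Gaussian-process surrogate is refitted after each "experiment" and the expected-improvement acquisition chooses the next one. The kernel, lengthscale, initial points, and toy objective are all assumptions for illustration.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.1):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs):
    """Zero-mean GP posterior mean and std at query points Xs."""
    K = rbf(X, X) + 1e-6 * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    z = (mu - best) / sd
    cdf = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + sd * pdf

# Hypothetical expensive objective: one design knob in [0, 1] (assumption)
f = lambda v: -(v - 0.7) ** 2

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.1, 0.5, 0.9])             # initial experiments
y = f(X)
for _ in range(10):
    mu, sd = gp_posterior(X, y, grid)
    ei = expected_improvement(mu, sd, y.max())
    x_next = grid[int(np.argmax(ei))]     # next experiment to run
    X, y = np.append(X, x_next), np.append(y, f(x_next))

best_x = X[int(np.argmax(y))]
```

The acquisition trades off exploring regions of high posterior uncertainty against exploiting regions of high posterior mean, which is why few evaluations are needed.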
Adaptive learning via selectionism and Bayesianism, Part II: the sequential case.
Zhang, Jun
2009-04-01
Animals increase or decrease their future tendency of emitting an action based on whether performing such action has, in the past, resulted in positive or negative reinforcement. An analysis in the companion paper [Zhang, J. (2009). Adaptive learning via selectionism and Bayesianism. Part I: Connection between the two. Neural Networks, 22(3), 220-228] of such selectionist style of learning reveals a resemblance between its ensemble-level dynamics governing the change of action probability and Bayesian learning where evidence (in this case, reward) is distributively applied to all action alternatives. Here, this equivalence is further explored in solving the temporal credit-assignment problem during the learning of an action sequence ("operant chain"). Naturally emerging are the notion of secondary (conditioned) reinforcement predicting the average reward associated with a stimulus, and the notion of actor-critic architecture involving concurrent learning of both action probability and reward prediction. While both are consistent with solutions provided by contemporary reinforcement learning theory (Sutton & Barto, 1998) for optimizing sequential decision-making under stationary Markov environments, we investigate the effect of action learning on reward prediction when both are carried out concurrently in any on-line scheme. PMID:19395235
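The actor-critic scheme discussed above can be sketched on a toy two-step "operant chain", where reward arrives only at the end and the critic's TD error serves as the secondary reinforcement driving the actor; the task, learning rates, and episode count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-step chain: correct action (index 1) advances; reward only at the end
n_states, n_actions = 2, 2
prefs = np.zeros((n_states, n_actions))   # actor: action preferences
V = np.zeros(n_states)                    # critic: reward predictions
alpha, beta = 0.1, 0.1

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

for episode in range(2000):
    s = 0
    while s < n_states:
        a = rng.choice(n_actions, p=softmax(prefs[s]))
        if a == 1:
            s_next, r = s + 1, (1.0 if s + 1 == n_states else 0.0)
        else:
            s_next, r = n_states, 0.0     # wrong action ends unrewarded
        v_next = 0.0 if s_next == n_states else V[s_next]
        delta = r + v_next - V[s]         # TD error = secondary reinforcement
        V[s] += beta * delta              # critic: update reward prediction
        prefs[s, a] += alpha * delta      # actor: update action tendency
        s = s_next
```

The critic's prediction for the intermediate state comes to act as a conditioned reinforcer for the first action, which is exactly the temporal credit-assignment mechanism the abstract describes.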
Bayesian Posteriors Without Bayes' Theorem
Hill, Theodore P
2012-01-01
The classical Bayesian posterior arises naturally as the unique solution of several different optimization problems, without the necessity of interpreting data as conditional probabilities and then using Bayes' Theorem. For example, the classical Bayesian posterior is the unique posterior that minimizes the loss of Shannon information in combining the prior and the likelihood distributions. These results, direct corollaries of recent results about conflations of probability distributions, reinforce the use of Bayesian posteriors, and may help partially reconcile some of the differences between classical and Bayesian statistics.
Editorial: Bayesian benefits for child psychology and psychiatry researchers.
Oldehinkel, Albertine J
2016-09-01
For many scientists, performing statistical tests has become an almost automated routine. However, p-values are frequently used and interpreted incorrectly; and even when used appropriately, p-values tend to provide answers that do not match researchers' questions and hypotheses well. Bayesian statistics present an elegant and often more suitable alternative. The Bayesian approach has rarely been applied in child psychology and psychiatry research so far, but the development of user-friendly software packages and tutorials has placed it well within reach now. Because Bayesian analyses require a more refined definition of hypothesized probabilities of possible outcomes than the classical approach, going Bayesian may offer the additional benefit of sparking the development and refinement of theoretical models in our field. PMID:27535649
Multivariate discrimination technique based on the Bayesian theory
Institute of Scientific and Technical Information of China (English)
JIN Ping; PAN Chang-zhou; XIAO Wei-guo
2007-01-01
A multivariate discrimination technique was established based on the Bayesian theory. Using this technique, P/S ratios of different types (e.g., Pn/Sn, Pn/Lg, Pg/Sn or Pg/Lg) measured within different frequency bands and from different stations were combined together to discriminate seismic events in Central Asia. Major advantages of the Bayesian approach are that the probability to be an explosion for any unknown event can be directly calculated given the measurements of a group of discriminants, and at the same time correlations among these discriminants can be fully taken into account. It was proved theoretically that the Bayesian technique would be optimal and its discriminating performance would be better than that of any individual discriminant as well as better than that yielded by the linear combination approach ignoring correlations among discriminants. This conclusion was also validated in this paper by applying the Bayesian approach to the above-mentioned observed data.
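A hedged sketch of the core computation: fit class-conditional multivariate Gaussians to two correlated discriminants and apply Bayes' rule to obtain the posterior probability that an event is an explosion. The means, covariances, and synthetic data below are illustrative assumptions, not the seismic measurements of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training data: two log P/S-ratio discriminants per event,
# correlated with each other (means and covariance are invented).
mean_eq, mean_ex = np.array([-0.4, -0.3]), np.array([0.3, 0.4])
cov = np.array([[0.10, 0.06], [0.06, 0.10]])
eq = rng.multivariate_normal(mean_eq, cov, size=300)   # earthquakes
ex = rng.multivariate_normal(mean_ex, cov, size=300)   # explosions

def gauss_logpdf(x, m, S):
    d = x - m
    return -0.5 * (d @ np.linalg.solve(S, d)
                   + np.log(np.linalg.det(S)) + len(m) * np.log(2 * np.pi))

# Full covariances keep the correlations among discriminants in the model
m_eq, S_eq = eq.mean(0), np.cov(eq.T)
m_ex, S_ex = ex.mean(0), np.cov(ex.T)

def p_explosion(x, prior_ex=0.5):
    """Posterior probability that an unknown event is an explosion."""
    le = np.exp(gauss_logpdf(x, m_ex, S_ex)) * prior_ex
    lq = np.exp(gauss_logpdf(x, m_eq, S_eq)) * (1 - prior_ex)
    return le / (le + lq)
```

Because the likelihoods are joint over all discriminants, their correlations are accounted for automatically, which is the advantage over combining discriminants as if they were independent.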
Bayesian multimodel inference for dose-response studies
Link, W.A.; Albers, P.H.
2007-01-01
Statistical inference in dose-response studies is model-based: The analyst posits a mathematical model of the relation between exposure and response, estimates parameters of the model, and reports conclusions conditional on the model. Such analyses rarely include any accounting for the uncertainties associated with model selection. The Bayesian inferential system provides a convenient framework for model selection and multimodel inference. In this paper we briefly describe the Bayesian paradigm and Bayesian multimodel inference. We then present a family of models for multinomial dose-response data and apply Bayesian multimodel inferential methods to the analysis of data on the reproductive success of American kestrels (Falco sparverius) exposed to various sublethal dietary concentrations of methylmercury.
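A minimal conjugate sketch of Bayesian multimodel inference: two beta-binomial models for hypothetical dose-response counts are compared through their exact marginal likelihoods and combined via posterior model probabilities. The counts and priors are invented for illustration; the paper's multinomial dose-response models are richer.

```python
import numpy as np
from math import lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(successes, trials, a=1.0, b=1.0):
    """Beta-binomial marginal likelihood for one pooled Bernoulli rate."""
    s, n = sum(successes), sum(trials)
    return log_beta(a + s, b + n - s) - log_beta(a, b)

# Hypothetical dose-response counts: deaths / animals at three doses
deaths, animals = [2, 5, 9], [10, 10, 10]

# M1: one common rate across doses.  M2: an independent rate per dose.
lm1 = log_marginal(deaths, animals)
lm2 = sum(log_marginal([d], [n]) for d, n in zip(deaths, animals))

# Posterior model probabilities under equal prior odds
w = np.exp([lm1, lm2])
w /= w.sum()

# Model-averaged posterior-mean response rate at the middle dose
est_m1 = (1 + sum(deaths)) / (2 + sum(animals))      # pooled
est_m2 = (1 + deaths[1]) / (2 + animals[1])          # dose-specific
averaged = w[0] * est_m1 + w[1] * est_m2
```

The model-averaged estimate carries the model-selection uncertainty that a single chosen model would ignore.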
A Bayesian Optimisation Algorithm for the Nurse Scheduling Problem
Jingpeng, Li
2008-01-01
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs (genetic algorithms) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated, i.e. in our case, a new rule string is obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again usin...
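The sample-select-reestimate loop of explicit learning can be sketched as an estimation-of-distribution algorithm over rule strings. For brevity this sketch uses independent per-variable marginals (a UMDA-style simplification; the paper's algorithm learns a full Bayesian network over the joint distribution of solutions), and the fitness function is a toy stand-in rather than a real scheduling objective.

```python
import numpy as np

rng = np.random.default_rng(6)

n_vars, n_rules = 10, 4       # 10 assignments, 4 candidate scheduling rules
target = rng.integers(n_rules, size=n_vars)   # stand-in "ideal" rule string

def fitness(pop):
    """Toy objective: how many positions match the ideal rule string."""
    return (pop == target).sum(axis=1)

# Estimation-of-distribution loop with independent per-variable marginals
probs = np.full((n_vars, n_rules), 1.0 / n_rules)
for gen in range(40):
    pop = np.array([[rng.choice(n_rules, p=probs[v]) for v in range(n_vars)]
                    for _ in range(50)])           # sample new rule strings
    elite = pop[np.argsort(fitness(pop))[-10:]]    # promising solutions
    for v in range(n_vars):
        counts = np.bincount(elite[:, v], minlength=n_rules)
        # Re-estimate the sampling distribution, with uniform smoothing
        probs[v] = 0.9 * counts / counts.sum() + 0.1 / n_rules

best = np.array([int(np.argmax(probs[v])) for v in range(n_vars)])
```

Replacing the independent marginals with conditional probability tables learned from the elite set would recover the Bayesian-network version described in the abstract.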
When the world becomes 'too real': A Bayesian explanation of autistic perception
Pellicano, L.; Burr, D.
2012-01-01
Perceptual experience is influenced both by incoming sensory information and prior knowledge about the world, a concept recently formalised within Bayesian decision theory. We propose that Bayesian models can be applied to autism – a neurodevelopmental condition with atypicalities in sensation and perception – to pinpoint fundamental differences in perceptual mechanisms. We suggest specifically that attenuated Bayesian priors – ‘hypo-priors’ – may be responsible for the unique perceptual expe...
Institute of Scientific and Technical Information of China (English)
Xiu-yun PENG; Zai-zai YAN
2013-01-01
In this study, we consider the Bayesian estimation of the unknown parameters and the reliability function of the generalized exponential distribution based on progressive type-I interval censoring. The Bayesian estimates of the parameters and the reliability function cannot be obtained in explicit form under the squared error and Linex loss functions, respectively; thus, we present Lindley's approximation to compute these estimates. The Bayesian estimates are then compared with the maximum likelihood estimates using Monte Carlo simulations.
Machine learning a Bayesian and optimization perspective
Theodoridis, Sergios
2015-01-01
This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches, which rely on optimization techniques, as well as Bayesian inference, which is based on a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as shor...
Bayesian image reconstruction: Application to emission tomography
Energy Technology Data Exchange (ETDEWEB)
Nunez, J.; Llacer, J.
1989-02-01
In this paper we propose a Maximum a Posteriori (MAP) method of image reconstruction in the Bayesian framework for the Poisson noise case. We use entropy to define the prior probability and likelihood to define the conditional probability. The method uses sharpness parameters which can be theoretically computed or adjusted, allowing us to obtain MAP reconstructions without the problem of the "grey" reconstructions associated with pre-Bayesian reconstructions. We have developed several ways to solve the reconstruction problem and propose a new iterative algorithm which is stable, maintains positivity and converges to feasible images faster than the Maximum Likelihood Estimate method. We have successfully applied the new method to the case of Emission Tomography, both with simulated and real data. 41 refs., 4 figs., 1 tab.
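For orientation, the baseline ML-EM iteration that the proposed MAP method improves upon can be sketched as follows; the entropy prior and the sharpness parameters of the paper are omitted, and the system matrix and data are toy values:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Plain ML-EM for emission data y ~ Poisson(A @ x): multiplicative
    updates keep the image x positive throughout, as noted above."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # sensitivity: sum_i a_ij
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        x *= (A.T @ (y / proj)) / sens   # back-project the data ratio
    return x

A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])   # toy system matrix
x_true = np.array([2.0, 3.0])
y = A @ x_true                           # noiseless data for illustration
x_hat = mlem(A, y)
```

A MAP variant would multiply the update by a prior-dependent factor; the positivity-preserving multiplicative structure is the same.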
Learning Bayesian networks using genetic algorithm
Institute of Scientific and Technical Information of China (English)
Chen Fei; Wang Xiufeng; Rao Yimei
2007-01-01
A new method to evaluate the fitness of Bayesian networks according to observed data is provided. The main advantage of this criterion is that it is suitable for both the complete and incomplete cases, while the others are not; moreover, it greatly facilitates computation. In order to reduce the search space, the notion of equivalence class proposed by David Chickering is adopted. Instead of using the method directly, the novel criterion, variable ordering, and equivalence classes are combined; moreover, the proposed method avoids some problems caused by the previous one. A genetic algorithm, which allows the global convergence that most methods searching for Bayesian networks lack, is then applied to search for a good model in this space. To speed up convergence, the genetic algorithm is combined with a greedy algorithm. Finally, simulation shows the validity of the proposed approach.
Computationally efficient Bayesian tracking
Aughenbaugh, Jason; La Cour, Brian
2012-06-01
In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.
Bayesian nonparametric data analysis
Müller, Peter; Jara, Alejandro; Hanson, Tim
2015-01-01
This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.
Bayesian Geostatistical Design
DEFF Research Database (Denmark)
Diggle, Peter; Lophaven, Søren Nymand
2006-01-01
This paper describes the use of model-based geostatistics for choosing the set of sampling locations, collectively called the design, to be used in a geostatistical analysis. Two types of design situation are considered. These are retrospective design, which concerns the addition of sampling locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees). ... decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
Full Text Available The Bayesian approach is becoming increasingly popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes' probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration with a couple of simplified examples.
Bayesian network learning for natural hazard assessments
Vogel, Kristin
2016-04-01
Even though quite different in occurrence and consequences, from a modelling perspective many natural hazards share similar properties and challenges. Their complex nature as well as a lack of knowledge about their driving forces and potential effects make their analysis demanding. On top of the uncertainty about the modelling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require careful handling. Thus, for reliable natural hazard assessments it is crucial not only to capture and quantify involved uncertainties, but also to express and communicate uncertainties in an intuitive way. Decision-makers, who often find it difficult to deal with uncertainties, might otherwise fall back on familiar (mostly deterministic) procedures. In the scope of the DFG research training group "NatRiskChange" we apply the probabilistic framework of Bayesian networks for diverse natural hazard and vulnerability studies. The great potential of Bayesian networks was already shown in previous natural hazard assessments. Treating each model component as a random variable, Bayesian networks aim at capturing the joint distribution of all considered variables. Hence, each conditional distribution of interest (e.g. the effect of precautionary measures on damage reduction) can be inferred. The (in-)dependencies between the considered variables can be learned purely data-driven or be given by experts. Even a combination of both is possible. By translating the (in-)dependences into a graph structure, Bayesian networks provide direct insights into the workings of the system and allow learning about the underlying processes. Despite numerous studies on the topic, learning Bayesian networks from real-world data remains challenging. In previous studies, e.g. on earthquake induced ground motion and flood damage assessments, we tackled the problems arising with continuous variables
Implementing Bayesian Vector Autoregressions
Directory of Open Access Journals (Sweden)
Richard M. Todd
1988-03-01
Full Text Available This paper discusses how the Bayesian approach can be used to construct a type of multivariate forecasting model known as a Bayesian vector autoregression (BVAR). In doing so, we mainly explain Doan, Litterman, and Sims' (1984) propositions on how to estimate a BVAR based on a certain family of prior probability distributions, indexed by a fairly small set of hyperparameters. There is also a discussion of how to specify a BVAR and set up a BVAR database. A 4-variable model is used to illustrate the BVAR approach.
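A bare-bones version of the estimation idea: under an independent normal prior on the coefficients, the posterior mean is a ridge-type formula that shrinks the least-squares estimate toward the prior mean. A single lag coefficient shrunk toward the random-walk value 1 stands in here for the Doan-Litterman-Sims prior family, which uses lag-dependent prior variances; the data and settings are illustrative:

```python
import numpy as np

def bvar_posterior_mean(Y, X, prior_mean, lam):
    """Posterior mean of regression/VAR coefficients under a normal prior
    with mean prior_mean and precision lam (crude stand-in for the
    hyperparameter-indexed priors described above)."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k),
                           X.T @ Y + lam * prior_mean)

# toy AR(1) data: y_t = 0.8 y_{t-1} + noise
rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.normal()
X, Y = y[:-1].reshape(-1, 1), y[1:].reshape(-1, 1)

ols = bvar_posterior_mean(Y, X, np.array([[1.0]]), 0.0)     # prior off: OLS
shrunk = bvar_posterior_mean(Y, X, np.array([[1.0]]), 1e6)  # prior dominates
```

Tightening the prior precision pulls the estimate from the data-driven value toward the random-walk prior, which is exactly the role of the hyperparameters in a BVAR.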
Bayesian Methods for Radiation Detection and Dosimetry
International Nuclear Information System (INIS)
We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities. Graphs of the densities show the uncertainty in pictorial form. Figure 1 below demonstrates this point. We applied stochastic processes in a method to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual from radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities for compartmental activities for a two compartment catenary model at different times. We also calculated the average activities and their standard deviation for a simple two compartment model
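The "probability density for the net activity" idea can be sketched on a grid, assuming a known background rate and a flat prior on the net rate; the paper's actual priors and its low-activity treatment are more involved, and all numbers below are illustrative:

```python
import numpy as np

def net_rate_posterior(n_gross, t, b_rate, lam_max=20.0, n_grid=4001):
    """Grid posterior for a net count rate lam >= 0, given gross counts
    n_gross ~ Poisson((lam + b_rate) * t) and a flat prior on lam."""
    lam = np.linspace(0.0, lam_max, n_grid)
    log_post = n_gross * np.log((lam + b_rate) * t) - (lam + b_rate) * t
    log_post -= log_post.max()            # stabilize before exponentiating
    post = np.exp(log_post)
    dlam = lam[1] - lam[0]
    post /= post.sum() * dlam             # normalize as a density
    return lam, post

# 700 gross counts in 100 s with a known background of 2 counts/s
lam, post = net_rate_posterior(n_gross=700, t=100.0, b_rate=2.0)
post_mean = float((lam * post).sum() * (lam[1] - lam[0]))
```

Plotting `post` against `lam` gives exactly the kind of pictorial uncertainty summary described in the abstract; here the posterior concentrates near a net rate of 5 counts/s.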
Gupta, Pawan; Joiner, Joanna; Vasilkov, Alexander; Bhartia, Pawan K.
2016-07-01
Estimates of top-of-the-atmosphere (TOA) radiative flux are essential for the understanding of Earth's energy budget and climate system. Clouds, aerosols, water vapor, and ozone (O3) are among the most important atmospheric agents impacting the Earth's shortwave (SW) radiation budget. There are several sensors in orbit that provide independent information related to these parameters. Having coincident information from these sensors is important for understanding their potential contributions. The A-train constellation of satellites provides a unique opportunity to analyze data from several of these sensors. In this paper, retrievals of cloud/aerosol parameters and total column ozone (TCO) from the Aura Ozone Monitoring Instrument (OMI) have been collocated with the Aqua Clouds and Earth's Radiant Energy System (CERES) estimates of total reflected TOA outgoing SW flux (SWF). We use these data to develop a variety of neural networks that estimate TOA SWF globally over ocean and land using only OMI data and other ancillary information as inputs and CERES TOA SWF as the output for training purposes. OMI-estimated TOA SWF from the trained neural networks reproduces independent CERES data with high fidelity. The global mean daily TOA SWF calculated from OMI is consistently within ±1 % of CERES throughout the year 2007. Application of our neural network method to other sensors that provide similar retrieved parameters, both past and future, can produce similar estimates of TOA SWF. For example, the well-calibrated Total Ozone Mapping Spectrometer (TOMS) series could provide estimates of TOA SWF dating back to late 1978.
Dynamic Bayesian diffusion estimation
Dedecius, K
2012-01-01
The rapidly increasing complexity of (mainly wireless) ad-hoc networks stresses the need of reliable distributed estimation of several variables of interest. The widely used centralized approach, in which the network nodes communicate their data with a single specialized point, suffers from high communication overheads and represents a potentially dangerous concept with a single point of failure needing special treatment. This paper's aim is to contribute to another quite recent method called diffusion estimation. By decentralizing the operating environment, the network nodes communicate just within a close neighbourhood. We adopt the Bayesian framework to modelling and estimation, which, unlike the traditional approaches, abstracts from a particular model case. This leads to a very scalable and universal method, applicable to a wide class of different models. A particularly interesting case - the Gaussian regressive model - is derived as an example.
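For the Gaussian case mentioned at the end, one diffusion step reduces to precision-weighted fusion of each node's posterior with those of its immediate neighbours. This is a deliberately minimal sketch of the neighbourhood-only communication pattern; the paper treats general model classes and a regressive model in particular:

```python
def diffuse(means, precisions, neighbors):
    """One diffusion step: every node fuses its Gaussian posterior
    (mean, precision) with those of its immediate neighbours only."""
    new_means, new_precs = [], []
    for i, nbrs in enumerate(neighbors):
        group = [i] + list(nbrs)
        prec = sum(precisions[j] for j in group)
        mean = sum(precisions[j] * means[j] for j in group) / prec
        new_means.append(mean)
        new_precs.append(prec)
    return new_means, new_precs

# two nodes estimating the same quantity, each the other's neighbour
m, p = diffuse([0.0, 2.0], [1.0, 1.0], [[1], [0]])
```

After one step both nodes hold the precision-weighted consensus estimate, without any central fusion point.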
Book review: Bayesian analysis for population ecology
Link, William A.
2011-01-01
Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
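The sample-based inference the spiking network approximates is essentially a bootstrap particle filter for a hidden Markov model; a conventional (non-spiking) version, with an arbitrary two-state HMM assumed for illustration:

```python
import random

def particle_filter(obs, trans, emit, n_states, n_particles=10000, seed=1):
    """Bootstrap particle filter: propagate samples through the transition
    model, weight by the observation likelihood, resample. The particle
    population plays the role of the spiking population described above."""
    rng = random.Random(seed)
    particles = [rng.randrange(n_states) for _ in range(n_particles)]
    for y in obs:
        particles = [rng.choices(range(n_states), weights=trans[s])[0]
                     for s in particles]                     # propagate
        weights = [emit[s][y] for s in particles]            # likelihoods
        particles = rng.choices(particles, weights=weights,
                                k=n_particles)               # resample
    return [particles.count(s) / n_states / (n_particles / n_states)
            for s in range(n_states)]

trans = [[0.9, 0.1], [0.2, 0.8]]     # illustrative transition matrix
emit = [[0.8, 0.2], [0.3, 0.7]]      # illustrative emission matrix
posterior = particle_filter([0, 0, 1], trans, emit, 2)
```

The histogram of particle states approximates the exact forward-filtered posterior over hidden states, just as the spike counts do in the paper's network.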
Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm
Directory of Open Access Journals (Sweden)
Raj Kumar
2012-12-01
Full Text Available In this paper, we have illustrated the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood-based inferential procedures: classical as well as Bayesian. The quasi Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov Chain Monte Carlo (MCMC) simulation method in OpenBUGS (established software for Bayesian analysis using Markov Chain Monte Carlo methods). R functions are developed to study the statistical properties, model validation and comparison tools of the model, and the output analysis of MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated, and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.
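A minimal random-walk Metropolis sampler for the two Gumbel parameters illustrates the MCMC step (OpenBUGS uses its own samplers; the flat priors, proposal scales, and synthetic data below are all assumptions made for the sketch):

```python
import math
import numpy as np

def log_lik(x, mu, beta):
    """Gumbel log-likelihood: log f = -log(beta) - z - exp(-z)."""
    z = (x - mu) / beta
    return float(np.sum(-math.log(beta) - z - np.exp(-z)))

rng = np.random.default_rng(42)
data = rng.gumbel(10.0, 2.0, size=500)      # synthetic "reliability" data

mu, beta = float(np.median(data)), float(data.std() * 0.78)  # rough init
cur = log_lik(data, mu, beta)
samples = []
for i in range(8000):
    mu_p = mu + rng.normal(0.0, 0.2)        # symmetric random-walk proposals
    beta_p = beta + rng.normal(0.0, 0.1)
    if beta_p > 0:                          # flat prior restricted to beta > 0
        new = log_lik(data, mu_p, beta_p)
        if math.log(rng.uniform()) < new - cur:
            mu, beta, cur = mu_p, beta_p, new
    if i >= 2000:                           # discard burn-in
        samples.append((mu, beta))

mu_hat = sum(m for m, _ in samples) / len(samples)
beta_hat = sum(b for _, b in samples) / len(samples)
```

The retained draws approximate the joint posterior; their means recover the parameters used to simulate the data.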
Bayesian inference for OPC modeling
Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.
2016-03-01
The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDI) to reveal champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not, and outline continued experiments to vet the method.
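The stretch move at the heart of the AIES (Goodman and Weare's affine-invariant sampler, popularized by the emcee package) is compact enough to sketch; the 2-D Gaussian target below is a stand-in for a lithographic model's likelihood surface, and all settings are illustrative:

```python
import math
import random

def log_prob(x):
    # stand-in target: standard 2-D Gaussian log-density (up to a constant)
    return -0.5 * (x[0] ** 2 + x[1] ** 2)

def aies(log_prob, n_dim=2, n_walkers=20, n_steps=1500, a=2.0, seed=3):
    """Affine-invariant ensemble sampler using the stretch move."""
    rng = random.Random(seed)
    walkers = [[rng.gauss(0.0, 1.0) for _ in range(n_dim)]
               for _ in range(n_walkers)]
    chain = []
    for _ in range(n_steps):
        for k in range(n_walkers):
            j = rng.randrange(n_walkers - 1)
            if j >= k:
                j += 1                        # complementary walker, j != k
            # z ~ g(z) proportional to 1/sqrt(z) on [1/a, a] (inverse CDF)
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
            prop = [walkers[j][d] + z * (walkers[k][d] - walkers[j][d])
                    for d in range(n_dim)]
            log_r = ((n_dim - 1) * math.log(z)
                     + log_prob(prop) - log_prob(walkers[k]))
            if math.log(rng.random() + 1e-300) < log_r:
                walkers[k] = prop
        chain.extend([w[:] for w in walkers])
    return chain

chain = aies(log_prob)
```

The z^(d-1) factor in the acceptance ratio is what makes the move affine invariant, so the sampler needs no proposal tuning as the parameter space is rescaled.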
BAYESIAN APPROACH OF DECISION PROBLEMS
Directory of Open Access Journals (Sweden)
DRAGOŞ STUPARU
2010-01-01
Full Text Available Management is nowadays a basic vector of economic development, a concept frequently used in our country as well as all over the world. Regardless of the hierarchical level at which the managerial process takes place, decision represents its essential moment, the supreme act of managerial activity. Decisions are met in all fields of activity, with a practically unlimited degree of coverage, and in all the functions of management. It is common knowledge that the activity of any manager, no matter the hierarchical level he occupies, represents a chain of interdependent decisions, their aim being the elimination or limitation of the influence of disturbing factors that may endanger the achievement of predetermined objectives; the quality of managerial decisions conditions the progress and viability of any enterprise. Therefore, one of the principal characteristics of a successful manager is the ability to adopt high-quality decisions. The quality of managerial decisions is conditioned by the manager's general level of education and specialization, the preoccupation to assimilate the latest information and innovations in the theory and practice of management, and the application of modern managerial methods and techniques. We present below the analysis of decision problems under hazardous conditions in terms of Bayesian theory, a theory that uses the probabilistic calculus.
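The Bayesian treatment of a decision problem under uncertainty reduces to minimizing expected loss against the (prior or posterior) probabilities of the states of nature; a toy example with made-up numbers:

```python
def bayes_action(probs, loss):
    """Return the action minimizing expected loss.
    probs[s]   - probability of state of nature s
    loss[s][a] - loss of taking action a when the true state is s"""
    n_actions = len(loss[0])
    exp_loss = [sum(probs[s] * loss[s][a] for s in range(len(probs)))
                for a in range(n_actions)]
    best = min(range(n_actions), key=lambda a: exp_loss[a])
    return best, exp_loss

# two states (e.g. low/high demand), two actions (e.g. small/large order)
probs = [0.6, 0.4]
loss = [[0.0, 5.0],
        [8.0, 1.0]]
best, exp_loss = bayes_action(probs, loss)
```

With these numbers the expected losses are 3.2 and 3.4, so the Bayes action is the first one; updating `probs` with new evidence (via Bayes' rule) and re-running the same computation is the whole decision procedure.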
PAC-Bayesian Analysis of Martingales and Multiarmed Bandits
Seldin, Yevgeny; Shawe-Taylor, John; Peters, Jan; Auer, Peter
2011-01-01
We present two alternative ways to apply PAC-Bayesian analysis to sequences of dependent random variables. The first is based on a new lemma that enables bounding expectations of convex functions of certain dependent random variables by expectations of the same functions of independent Bernoulli random variables. This lemma provides an alternative tool to the Hoeffding-Azuma inequality for bounding the concentration of martingale values. Our second approach is based on integration of the Hoeffding-Azuma inequality with PAC-Bayesian analysis. We also introduce a way to apply PAC-Bayesian analysis in situations of limited feedback. We combine the new tools to derive PAC-Bayesian generalization and regret bounds for the multiarmed bandit problem. Although our regret bound is not yet as tight as state-of-the-art regret bounds based on other well-established techniques, our results significantly expand the range of potential applications of PAC-Bayesian analysis and introduce a new analysis tool to reinforcement learning and many ...
A Neural Network Approach for GMA Butt Joint Welding
DEFF Research Database (Denmark)
Christensen, Kim Hardam; Sørensen, Torben
2003-01-01
This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least squares has been used with the back-propagation algorithm for training the network, while a Bayesian regularization technique has been successfully applied for minimizing the risk of inexpedient over-training. Finally, a predictive closed-loop control strategy based on a so-called single-neuron self...
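The Bayesian regularization referred to here is commonly the MacKay / Foresee-Hagan scheme used alongside Levenberg-Marquardt training (an assumption; the abstract does not restate its exact variant). It minimizes a weighted sum of the data error and a weight penalty,

```latex
F(\mathbf{w}) \;=\; \beta E_D + \alpha E_W,
\qquad
E_D = \sum_{n=1}^{N}\bigl(t_n - y(\mathbf{x}_n;\mathbf{w})\bigr)^2,
\qquad
E_W = \tfrac{1}{2}\sum_{i} w_i^2,
```

with the hyperparameters re-estimated from the evidence as \(\alpha = \gamma/(2E_W)\) and \(\beta = (N-\gamma)/(2E_D)\), where \(\gamma\) is the effective number of well-determined parameters. A large ratio \(\alpha/\beta\) keeps the weights small, which is what suppresses the over-training mentioned above.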
International Nuclear Information System (INIS)
Highlights: • A new method based on Artificial Neural Networks (ANN) is presented to deal with accident identification in PWR nuclear power plants. • The results obtained show the efficiency of the technique. • Results obtained with this method are as good as, or better than, similar optimization tools available in the literature. - Abstract: The task of monitoring a nuclear power plant consists of determining, continuously and in real time, the state of the plant's systems in such a way as to give indications of abnormalities to the operators and enable them to recognize anomalies in system behavior. The monitoring is based on readings from a large number of meters and alarm indicators located in the main control room of the facility. On the occurrence of a transient or an accident in the nuclear power plant, even the most experienced operators can be confronted with conflicting indications due to the interactions between the various components of the plant systems; since a disturbance of one system can cause disturbances in another plant system, the operator may not be able to distinguish what is cause and what is effect. This cognitive overload, to which operators are submitted, makes it difficult to clearly understand the indication of an abnormality in its initial phase of development and to take the appropriate and immediate corrective actions to face the system failure. With this in mind, computerized monitoring systems based on artificial intelligence that could help the operators to detect and diagnose these failures have been devised and have been the subject of research. Among the techniques that can be used in such development, radial basis function (RBF) neural networks play an important role due to the fact that they are able to provide good approximations to functions of a finite number of real variables. This paper aims to present an application of a neural network of Gaussian radial basis
Taylor, M.; Kazadzis, S.; Tsekeri, A.; Gkikas, A.; Amiridis, V.
2014-09-01
In order to exploit the full-earth viewing potential of satellite instruments to globally characterise aerosols, new algorithms are required to deduce key microphysical parameters like the particle size distribution and optical parameters associated with scattering and absorption from space remote sensing data. Here, a methodology based on neural networks is developed to retrieve such parameters from satellite inputs and to validate them with ground-based remote sensing data. For key combinations of input variables available from the MODerate resolution Imaging Spectro-radiometer (MODIS) and the Ozone Measuring Instrument (OMI) Level 3 data sets, a grid of 100 feed-forward neural network architectures is produced, each having a different number of neurons and training proportion. The networks are trained with principal components accounting for 98% of the variance of the inputs together with principal components formed from 38 AErosol RObotic NETwork (AERONET) Level 2.0 (Version 2) retrieved parameters as outputs. Daily averaged, co-located and synchronous data drawn from a cluster of AERONET sites centred on the peak of dust extinction in Northern Africa is used for network training and validation, and the optimal network architecture for each input parameter combination is identified with reference to the lowest mean squared error. The trained networks are then fed with unseen data at the coastal dust site Dakar to test their simulation performance. A neural network (NN), trained with co-located and synchronous satellite inputs comprising three aerosol optical depth measurements at 470, 550 and 660 nm, plus the columnar water vapour (from MODIS) and the modelled absorption aerosol optical depth at 500 nm (from OMI), was able to simultaneously retrieve the daily averaged size distribution, the coarse mode volume, the imaginary part of the complex refractive index, and the spectral single scattering albedo - with moderate precision: correlation coefficients in the
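The preprocessing step described above, projecting the inputs onto principal components that retain 98% of the variance before network training, can be sketched as follows; the synthetic data stand in for the MODIS/OMI input channels:

```python
import numpy as np

def pca_reduce(X, var_target=0.98):
    """Project X onto the leading principal components that together
    explain var_target of the total variance."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)       # cumulative variance
    k = int(np.searchsorted(frac, var_target)) + 1  # components needed
    return Xc @ Vt[:k].T, Vt[:k]

# toy data: 4 channels driven by 2 latent factors plus small noise
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))
X = Z @ np.array([[3.0, 0.0, 1.0, 0.0],
                  [0.0, 2.0, 0.0, 1.0]])
X += 0.01 * rng.normal(size=(200, 4))
scores, comps = pca_reduce(X)
```

Here two components suffice for the 98% target; in the paper's pipeline the `scores` (not the raw channels) are what feed the feed-forward networks.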
Irregular-Time Bayesian Networks
Ramati, Michael
2012-01-01
In many fields observations are performed irregularly along time, due to either measurement limitations or lack of a constant immanent rate. While discrete-time Markov models (as Dynamic Bayesian Networks) introduce either inefficient computation or an information loss to reasoning about such processes, continuous-time Markov models assume either a discrete state space (as Continuous-Time Bayesian Networks), or a flat continuous state space (as stochastic differential equations). To address these problems, we present a new modeling class called Irregular-Time Bayesian Networks (ITBNs), generalizing Dynamic Bayesian Networks, allowing substantially more compact representations, and increasing the expressivity of the temporal dynamics. In addition, a globally optimal solution is guaranteed when learning temporal systems, provided that they are fully observed at the same irregularly spaced time-points, and a semiparametric subclass of ITBNs is introduced to allow further adaptation to the irregular nature of t...
Bayesian logistic betting strategy against probability forecasting
Kumon, Masayuki; Takemura, Akimichi; Takeuchi, Kei
2012-01-01
We propose a betting strategy based on Bayesian logistic regression modeling for the probability forecasting game in the framework of game-theoretic probability by Shafer and Vovk (2001). We prove some results concerning the strong law of large numbers in the probability forecasting game with side information based on our strategy. We also apply our strategy for assessing the quality of probability forecasting by the Japan Meteorological Agency. We find that our strategy beats the agency by exploiting its tendency of avoiding clear-cut forecasts.
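In game-theoretic probability the strategy is judged by its capital process: each round, capital is scaled by the ratio of the bettor's probability to the forecaster's for the realized outcome. A minimal version of that bookkeeping (the paper's strategy sets the bettor's probabilities by Bayesian logistic regression on side information, which is omitted here):

```python
def capital_process(outcomes, forecaster_p, bettor_p, initial=1.0):
    """Capital after betting against a probability forecaster.
    outcomes      - realized binary outcomes (1 or 0)
    forecaster_p  - forecaster's probability of 1 in each round
    bettor_p      - bettor's probability of 1 in each round"""
    capital = initial
    for y, pf, pb in zip(outcomes, forecaster_p, bettor_p):
        # likelihood ratio of the two forecasts for the observed outcome
        capital *= (pb / pf) if y else ((1 - pb) / (1 - pf))
    return capital

# a bettor better calibrated than a 50/50 forecaster multiplies its capital
k = capital_process([1, 1, 1], [0.5, 0.5, 0.5], [0.8, 0.8, 0.8])
```

Sustained capital growth is the game-theoretic evidence that the forecaster is beatable, which is how the abstract's comparison with the Japan Meteorological Agency is scored.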
Neural networks and statistical learning
Du, Ke-Lin
2014-01-01
Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...
Neuroanatomy, neurology and Bayesian networks
Bielza Lozoya, Maria Concepcion
2014-01-01
Bayesian networks are data mining models with clear semantics and a sound theoretical foundation. In this keynote talk we will pinpoint a number of neuroscience problems that can be addressed using Bayesian networks. In neuroanatomy, we will show computer simulation models of dendritic trees and classification of neuron types, both based on morphological features. In neurology, we will present the search for genetic biomarkers in Alzheimer's disease and the prediction of health-related qualit...
Designing neural networks that process mean values of random variables
Energy Technology Data Exchange (ETDEWEB)
Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)
2014-06-13
We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.
Rate-optimal Bayesian intensity smoothing for inhomogeneous Poisson processes
E. Belitser; P. Serra; H. van Zanten
2015-01-01
We apply nonparametric Bayesian methods to study the problem of estimating the intensity function of an inhomogeneous Poisson process. To motivate our results we start by analyzing count data coming from a call center which we model as a Poisson process. This analysis is carried out using a certain
Exploiting sensitivity analysis in Bayesian networks for consumer satisfaction study
Jaronski, W.; Bloemer, J.M.M.; Vanhoof, K.; Wets, G.
2004-01-01
The paper presents an application of Bayesian network technology in an empirical customer satisfaction study. The findings of the study should provide insight into the importance of product/service dimensions in terms of the strength of their influence on overall satisfaction. To this end we apply a
Bayesian multi-QTL mapping for growth curve parameters
DEFF Research Database (Denmark)
Heuven, Henri C M; Janss, Luc L G
2010-01-01
segregating QTL using a Bayesian algorithm. Results For each individual a logistic growth curve was fitted and three latent variables: asymptote (ASYM), inflection point (XMID) and scaling factor (SCAL) were estimated per individual. Applying an 'animal' model showed heritabilities of approximately 48...
Dale Poirier
2008-01-01
This paper provides Bayesian rationalizations for White’s heteroskedastic consistent (HC) covariance estimator and various modifications of it. An informed Bayesian bootstrap provides the statistical framework.
Dynamic Batch Bayesian Optimization
Azimi, Javad; Fern, Xiaoli
2011-01-01
Bayesian optimization (BO) algorithms try to optimize an unknown function that is expensive to evaluate using a minimum number of evaluations/experiments. Most proposed BO algorithms are sequential, selecting only one experiment at each iteration. This can be time inefficient when each experiment takes a long time and more than one experiment can be run concurrently. On the other hand, requesting a fixed-size batch of experiments at each iteration makes BO less efficient than sequential policies. In this paper, we present an algorithm that requests a batch of experiments at each time step t, where the batch size p_t is determined dynamically at each step. Our algorithm is based on the observation that the experiments selected by the sequential policy can sometimes be almost independent of each other. Our algorithm identifies such scenarios and requests those experiments at the same time without degrading the performance. We evaluate our proposed method us...
Nonparametric Bayesian Classification
Coram, M A
2002-01-01
A Bayesian approach to the classification problem is proposed in which random partitions play a central role. It is argued that the partitioning approach has the capacity to take advantage of a variety of large-scale spatial structures, if they are present in the unknown regression function $f_0$. An idealized one-dimensional problem is considered in detail. The proposed nonparametric prior uses random split points to partition the unit interval into a random number of pieces. This prior is found to provide a consistent estimate of the regression function in the $L^p$ topology, for any $1 \leq p < \infty$, and for arbitrary measurable $f_0:[0,1] \rightarrow [0,1]$. A Markov chain Monte Carlo (MCMC) implementation is outlined and analyzed. Simulation experiments are conducted to show that the proposed estimate compares favorably with a variety of conventional estimators. A striking resemblance between the posterior mean estimate and the bagged CART estimate is noted and discussed. For higher dimensions, a ...
Bayesian Inference in the Modern Design of Experiments
DeLoach, Richard
2008-01-01
This paper provides an elementary tutorial overview of Bayesian inference and its potential for application in aerospace experimentation in general and wind tunnel testing in particular. Bayes Theorem is reviewed and examples are provided to illustrate how it can be applied to objectively revise prior knowledge by incorporating insights subsequently obtained from additional observations, resulting in new (posterior) knowledge that combines information from both sources. A logical merger of Bayesian methods and certain aspects of Response Surface Modeling is explored. Specific applications to wind tunnel testing, computational code validation, and instrumentation calibration are discussed.
Ocean wave forecasting using recurrent neural networks
Digital Repository Service at National Institute of Oceanography (India)
Mandal, S.; Prabaharan, N.
, merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the rprop update algorithm, applied to wave forecasting. Measured ocean waves off...
Cisneros, Felipe; Veintimilla, Jaime
2013-04-01
The main aim of this research is to create an Artificial Neural Network (ANN) model for predicting the flow in the Tomebamba River, both in real time and for a given day of the year. As inputs we use rainfall and flow information from the stations along the river. This information is organized into scenarios, each prepared for a specific area. The information is acquired from the hydrological stations placed in the watershed through a real-time electronic system that supports any kind or brand of such sensors. The prediction works well up to three days in advance. This research includes two ANN models: back propagation and a hybrid of back propagation and OWO-HWO; both were tested in a preliminary study. To validate the results we use error indicators such as MSE, RMSE, EF, CD and BIAS. The predictions reached high levels of reliability with minimal error, and are useful for flood and water quality control and management in the city of Cuenca, Ecuador.
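The error indicators named in this abstract are standard goodness-of-fit measures. A minimal sketch of how they are computed (function name and toy data are ours; CD is omitted because its definition varies between authors):

```python
import numpy as np

def error_indicators(obs, pred):
    """Standard fit indicators for flow predictions (EF = Nash-Sutcliffe efficiency)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    mse = np.mean((obs - pred) ** 2)                   # mean squared error
    rmse = np.sqrt(mse)                                # root mean squared error
    ef = 1.0 - mse / np.mean((obs - obs.mean()) ** 2)  # 1 = perfect, <= 0 = worse than mean
    bias = np.mean(pred - obs)                         # mean signed error
    return {"MSE": mse, "RMSE": rmse, "EF": ef, "BIAS": bias}

# Hypothetical observed vs. predicted daily flows
scores = error_indicators([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])
```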
The Bayesian Modelling Of Inflation Rate In Romania
Directory of Open Access Journals (Sweden)
Mihaela Simionescu (Bratu)
2014-06-01
Full Text Available Bayesian econometrics has seen a considerable increase in popularity in recent years, attracting the interest of various groups of researchers in the economic sciences and beyond: specialists in econometrics, commerce, industry, marketing, finance, micro-economics, macro-economics and other domains. The purpose of this research is to provide an introduction to the Bayesian approach applied in economics, starting with Bayes' theorem. The estimation methodology for Bayesian linear regression models is presented, along with two empirical studies on data from the Romanian economy: an autoregressive model of order 2 and a multiple regression model were built for the index of consumer prices. The Gibbs sampling algorithm was used for estimation in the R software, computing the posterior means and standard deviations. The parameters proved more stable than those estimated with the methods of classical econometrics.
A Bayesian variable selection procedure for ranking overlapping gene sets
DEFF Research Database (Denmark)
Skarman, Axel; Mahdi Shariati, Mohammad; Janss, Luc;
2012-01-01
Background: Genome-wide expression profiling using microarrays or sequence-based technologies allows us to identify genes and genetic pathways whose expression patterns influence complex traits. Different methods to prioritize gene sets, such as the genes in a given molecular pathway, have been described. In many cases, these methods test one gene set at a time, and therefore do not consider overlaps among the pathways. Here, we present a Bayesian variable selection method to prioritize gene sets that overcomes this limitation by considering all gene sets simultaneously. We applied Bayesian variable selection to differential expression to prioritize the molecular and genetic pathways involved in the responses to Escherichia coli infection in Danish Holstein cows. Results: We used a Bayesian variable selection method to prioritize Kyoto Encyclopedia of Genes and Genomes pathways. We used our...
Fast Bayesian inference of optical trap stiffness and particle diffusion
Bera, Sudipta; Singh, Rajesh; Ghosh, Dipanjan; Kundu, Avijit; Banerjee, Ayan; Adhikari, R
2016-01-01
Bayesian inference provides a principled way of estimating the parameters of a stochastic process that is observed discretely in time. The overdamped Brownian motion of a particle confined in an optical trap is generally modelled by the Ornstein-Uhlenbeck process and can be observed directly in experiment. Here we present Bayesian methods for inferring the parameters of this process, the trap stiffness and the particle diffusion coefficient, that use exact likelihoods and sufficient statistics to arrive at simple expressions for the maximum a posteriori estimates. This obviates the need for Monte Carlo sampling and yields methods that are both fast and accurate. We apply these to experimental data and demonstrate their advantage over commonly used non-Bayesian fitting methods.
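The closed-form estimation the abstract describes can be illustrated on a discretely observed Ornstein-Uhlenbeck process, which is exactly an AR(1) series. A minimal sketch under flat priors, where the estimates reduce to functions of two sufficient statistics (parameter values and names are our illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, D, dt, n = 2.0, 0.5, 0.01, 200_000   # relaxation rate, diffusion, sampling step

# Exact discrete-time form of the OU process: an AR(1) with coefficient b
b = np.exp(-theta * dt)
s2 = D / theta * (1.0 - b**2)               # innovation variance
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(D / theta))  # start in the stationary state
for i in range(1, n):
    x[i] = b * x[i - 1] + np.sqrt(s2) * rng.normal()

# Estimates under flat priors: simple functions of two sufficient statistics
b_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
theta_hat = -np.log(b_hat) / dt             # trap stiffness is proportional to this rate
resid = x[1:] - b_hat * x[:-1]
D_hat = theta_hat * resid.var() / (1.0 - b_hat**2)
```

No Monte Carlo sampling is involved: both estimates come directly from dot products over the time series, which is why such methods are fast.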
The subjectivity of scientists and the Bayesian approach
Press, James S
2016-01-01
"Press and Tanur argue that subjectivity has not only played a significant role in the advancement of science but that science will advance more rapidly if the modern methods of Bayesian statistical analysis replace some of the more classical twentieth-century methods." — SciTech Book News. "An insightful work." ― Choice. "Compilation of interesting popular problems … this book is fascinating." — Short Book Reviews, International Statistical Institute. Subjectivity ― including intuition, hunches, and personal beliefs ― has played a key role in scientific discovery. This intriguing book illustrates subjective influences on scientific progress with historical accounts and biographical sketches of more than a dozen luminaries, including Aristotle, Galileo, Newton, Darwin, Pasteur, Freud, Einstein, Margaret Mead, and others. The treatment also offers a detailed examination of the modern Bayesian approach to data analysis, with references to the Bayesian theoretical and applied literature. Suitable for...
Prediction of road accidents: A Bayesian hierarchical approach
DEFF Research Database (Denmark)
Deublein, Markus; Schubert, Matthias; Adey, Bryan T.;
2013-01-01
-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data, e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions...
Bayesian networks as a tool for epidemiological systems analysis
Lewis, F. I.
2012-11-01
Bayesian network analysis is a form of probabilistic modeling which derives from empirical data a directed acyclic graph (DAG) describing the dependency structure between random variables. Bayesian networks are increasingly finding application in areas such as computational and systems biology, and more recently in epidemiological analyses. The key distinction between standard empirical modeling approaches, such as generalised linear modeling, and Bayesian network analyses is that the latter attempts not only to identify statistically associated variables, but to additionally, and empirically, separate these into those directly and indirectly dependent with one or more outcome variables. Such discrimination is vastly more ambitious but has the potential to reveal far more about key features of complex disease systems. Applying Bayesian network modeling to biological and medical data has considerable computational demands, combined with the need to ensure robust model selection given the vast model space of possible DAGs. These challenges require the use of approximation techniques, such as the Laplace approximation, Markov chain Monte Carlo simulation and parametric bootstrapping, along with computational parallelization. A case study in structure discovery - identification of an optimal DAG for given data - is presented which uses additive Bayesian networks to explore veterinary disease data of industrial and medical relevance.
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert provided information. To address this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
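The explicit Gaussian posterior that makes such linearized inversion fast is the standard linear-Gaussian update. A generic sketch of that update (the forward operator below is a random stand-in, not an actual AVO kernel, and all names and values are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 50                                # e.g. three elastic parameters, fifty samples
G = rng.normal(size=(n, m))                 # stand-in linearized forward operator
x_true = np.array([1.0, -0.5, 2.0])
sigma_e = 0.1
d = G @ x_true + sigma_e * rng.normal(size=n)   # noisy synthetic data

mu0, S0 = np.zeros(m), np.eye(m)            # Gaussian prior on the parameters
Se = sigma_e**2 * np.eye(n)                 # noise covariance

# Explicit posterior mean and covariance: no sampling or iteration required,
# and S_post yields exact prediction intervals under the specified model.
K = S0 @ G.T @ np.linalg.inv(G @ S0 @ G.T + Se)
mu_post = mu0 + K @ (d - G @ mu0)
S_post = S0 - K @ G @ S0
```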
Historical Developments in Bayesian Econometrics after Cowles Foundation Monographs 10, 14
Basturk, Nalan; Cakmakli, Cem; Ceyhan, S. Pinar; van Dijk, Herman K.
2013-01-01
After a brief description of the first Bayesian steps into econometrics in the 1960s and early 70s, publication and citation patterns are analyzed in ten major econometric journals until 2012. The results indicate that journals which contain both theoretical and applied papers, such as Journal of Econometrics, Journal of Business and Economic Statistics and Journal of Applied Econometrics, publish the large majority of high quality Bayesian econometric papers in contrast to theoretical journa...
Directory of Open Access Journals (Sweden)
Kleber Rogério Moreira Prado
2013-06-01
Full Text Available This paper presents a biodegradation model of a dye used in the textile industry, based on a neural network supported by bootstrap resampling. The bootstrapped neural network is set up to generate estimates close to the results obtained in the underlying chemical experiment. Pseudomonas oleovorans was used in the biodegradation of Reactive Black 5. The results include a brief comparison between the estimates of the proposed approach and the experimental data, with a correlation coefficient between observed and predicted biodegradation rates above 0.99. Dye concentration and the solution's pH did not affect the biodegradation rate. Dye biodegradation above 90% was achieved between 1.000 and 1.841 mL 10 mL-1 of microorganism concentration and between 1.000 and 2.000 g 100 mL-1 of glucose concentration within the experimental conditions under analysis.
Attention in a bayesian framework
DEFF Research Database (Denmark)
Whiteley, Louise Emma; Sahani, Maneesh
2012-01-01
The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of perception, and use this observation to frame a new computational account of the need for, and action of, attention - unifying diverse attentional phenomena in a way that goes beyond previous inferential, probabilistic and Bayesian models. Attentional effects are most evident in cluttered environments, and include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental...
Probability biases as Bayesian inference
Directory of Open Access Journals (Sweden)
Andre; C. R. Martins
2006-11-01
Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors under the experimental conditions, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.
Bayesian Methods and Universal Darwinism
Campbell, John
2010-01-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...
Bayesian inference tools for inverse problems
Mohammad-Djafari, Ali
2013-08-01
In this paper, the basics of Bayesian inference with a parametric model of the data are first presented. Then, the extensions needed when dealing with inverse problems are given, in particular for linear models such as deconvolution or image reconstruction in Computed Tomography (CT). The main point of discussion is then the prior modeling of signals and images. A classification of these priors is presented, first into separable and Markovian models, and then into simple or hierarchical models with hidden variables. For practical applications, we also need to consider the estimation of the hyperparameters. Finally, we see that we have to infer simultaneously the unknowns, the hidden variables and the hyperparameters. Very often, the expression of this joint posterior law is too complex to be handled directly, and we can rarely obtain analytical solutions for point estimators such as the Maximum A Posteriori (MAP) or Posterior Mean (PM). Three main tools can then be used: Laplace approximation (LAP), Markov Chain Monte Carlo (MCMC) and Bayesian Variational Approximations (BVA). To illustrate all these aspects, we consider a deconvolution problem where we know that the input signal is sparse and propose a Student-t prior for it. To handle the Bayesian computations with this model, we use the property that a Student-t distribution can be modeled as an infinite mixture of Gaussians, introducing hidden variables which are the variances. Then, the expression of the joint posterior of the input signal samples, the hidden variables (here the inverse variances of those samples) and the hyperparameters of the problem (for example the variance of the noise) is given. From this point, we present the joint maximization by alternate optimization and the three possible approximation methods. Finally, the proposed methodology is applied in different applications such as mass spectrometry, spectrum estimation of quasi periodic biological signals and
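The scale-mixture property this abstract exploits - a Student-t is a Gaussian whose variance is itself random with an inverse-Gamma distribution - can be checked numerically. A minimal sketch (parameter values are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
nu, n = 5.0, 500_000

# Hidden precisions tau ~ Gamma(nu/2, rate nu/2), i.e. the variances are inverse-Gamma.
# These are exactly the hidden variables introduced in hierarchical treatments.
tau = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
x = rng.normal(0.0, 1.0 / np.sqrt(tau))     # x | tau ~ N(0, 1/tau)

# Marginally x is Student-t with nu degrees of freedom: Var[x] = nu / (nu - 2)
var_emp = x.var()
```

Conditioning on the hidden variances makes every conditional posterior Gaussian or Gamma, which is what makes the alternate optimization and MCMC schemes tractable.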
Bayesian test and Kuhn's paradigm
Institute of Scientific and Technical Information of China (English)
Chen Xiaoping
2006-01-01
Kuhn's theory of paradigm reveals a pattern of scientific progress, in which normal science alternates with scientific revolution. But Kuhn greatly underrated the function of scientific testing in his pattern, because he focused all his attention on the hypothetico-deductive schema instead of the Bayesian schema. This paper employs the Bayesian schema to re-examine Kuhn's theory of paradigm, to uncover its logical and rational components, and to illustrate the tensional structure of logic and belief, rationality and irrationality, in the process of scientific revolution.
Perception, illusions and Bayesian inference.
Nour, Matthew M; Nour, Joseph M
2015-01-01
Descriptive psychopathology makes a distinction between veridical perception and illusory perception. In both cases a perception is tied to a sensory stimulus, but in illusions the perception is of a false object. This article re-examines this distinction in light of new work in theoretical and computational neurobiology, which views all perception as a form of Bayesian statistical inference that combines sensory signals with prior expectations. Bayesian perceptual inference can solve the 'inverse optics' problem of veridical perception and provides a biologically plausible account of a number of illusory phenomena, suggesting that veridical and illusory perceptions are generated by precisely the same inferential mechanisms.
3D Bayesian contextual classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
2000-01-01
We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
Energy Technology Data Exchange (ETDEWEB)
Ortiz R, J. M.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico); Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)
2015-10-15
The process of unfolding the neutron energy spectrum has been the subject of research for many years. Monte Carlo, iterative methods, Bayesian theory and the principle of maximum entropy are some of the methods used. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Back Propagation Neural Networks (BPNN) have been applied with success in the neutron spectrometry and dosimetry domains; however, the structure and the learning parameters are factors that contribute significantly to network performance. In the artificial neural network domain, the Generalized Regression Neural Network (GRNN) is one of the simplest neural networks in terms of network architecture and learning algorithm. The learning is instantaneous, meaning it requires no training time. Unlike a BPNN, a GRNN is formed instantly with just a single pass over the development data. In the network development phase, the only hurdle is to tune the hyperparameter, known as sigma, governing the smoothness of the network. The aim of this work was to compare the performance of BPNN and GRNN in the solution of the neutron spectrometry problem. The results show that, despite very similar outcomes, GRNN performs better than BPNN. (Author)
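A GRNN of the kind compared here is essentially a Gaussian-kernel regressor with a single smoothing parameter sigma, which is why it needs no iterative training. A minimal sketch (the toy data and names are ours, not the study's spectrometry data):

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of the training targets."""
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma**2))      # one pass over the data, no training loop
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, size=(200, 1))    # toy 1-D stand-in for detector readings
y = np.sin(2.0 * np.pi * x[:, 0]) + 0.05 * rng.normal(size=200)
pred = grnn_predict(x, y, np.array([[0.25], [0.75]]), sigma=0.05)
```

Tuning reduces to a 1-D search over sigma, which is the "only hurdle" the abstract mentions.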
Institute of Scientific and Technical Information of China (English)
苏宏升
2008-01-01
To give the conventional Bayesian optimal classifier the ability to process fuzzy information and to automate its reasoning process, a new Bayesian optimal classifier is proposed with fuzzy information embedded. It can not only process fuzzy information effectively, but also retains the learning properties of the Bayesian optimal classifier. In addition, following the evolution of fuzzy set theory, vague sets are also embedded into it to generate a vague Bayesian optimal classifier, which can simultaneously simulate the twofold characteristics of fuzzy information from the positive and reverse directions. Further, a set pair Bayesian optimal classifier is also proposed, considering the threefold characteristics of fuzzy information from the positive, reverse, and indeterminate sides. Finally, a knowledge-based artificial neural network (KBANN) is presented to realize automatic reasoning of the Bayesian optimal classifier. It not only reduces the computational cost of the Bayesian optimal classifier but also improves its classification learning quality.
Bayesian variable order Markov models: Towards Bayesian predictive state representations
C. Dimitrakakis
2009-01-01
We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more st
Bayesian Inference for Functional Dynamics Exploring in fMRI Data
Directory of Open Access Journals (Sweden)
Xuan Guo
2016-01-01
Full Text Available This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.
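The change point models reviewed here share a common skeleton: score every candidate temporal boundary by the marginal likelihood of the segments it induces. A minimal single-change-point sketch for a Gaussian signal (priors, data and names are our illustrative choices, not the BMCPM specification):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 40)])  # change at t = 60
n, sig2, v0 = len(x), 1.0, 10.0

def seg_loglik(seg):
    """Log marginal likelihood of one segment: N(mu, sig2) data, mu ~ N(0, v0)."""
    k = len(seg)
    vn = 1.0 / (1.0 / v0 + k / sig2)        # posterior variance of the segment mean
    mn = vn * seg.sum() / sig2              # posterior mean of the segment mean
    return (0.5 * (np.log(vn / v0) + mn**2 / vn)
            - 0.5 * np.sum(seg**2) / sig2 - 0.5 * k * np.log(2.0 * np.pi * sig2))

# Score every candidate change point; with a uniform prior on t the posterior
# mode is simply the argmax of the summed segment evidences.
logp = np.array([seg_loglik(x[:t]) + seg_loglik(x[t:]) for t in range(5, n - 5)])
t_map = 5 + int(np.argmax(logp))
```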
Bayesian Inference for Functional Dynamics Exploring in fMRI Data.
Guo, Xuan; Liu, Bing; Chen, Le; Chen, Guantao; Pan, Yi; Zhang, Jing
2016-01-01
Bayesian networks and food security - An introduction
Stein, A.
2004-01-01
This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclical graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision sup
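A directed acyclical graph of this kind can be made concrete in a few lines. The sketch below is a hypothetical illustration, not one of the paper's examples: the food-security variables and all probabilities are invented, and inference is done by brute-force enumeration over the joint distribution.

```python
# Minimal Bayesian network on a DAG chain: Drought -> CropFailure -> FoodShortage.
# All numbers are made up for illustration.
p_drought = 0.2
p_crop_fail = {True: 0.7, False: 0.1}   # P(CropFailure | Drought)
p_shortage = {True: 0.8, False: 0.05}   # P(FoodShortage | CropFailure)

def joint(d, c, s):
    """The joint probability factorises along the DAG edges."""
    pd = p_drought if d else 1 - p_drought
    pc = p_crop_fail[d] if c else 1 - p_crop_fail[d]
    ps = p_shortage[c] if s else 1 - p_shortage[c]
    return pd * pc * ps

# Inference by enumeration: P(Drought | FoodShortage = True)
num = sum(joint(True, c, True) for c in (True, False))
den = sum(joint(d, c, True) for d in (True, False) for c in (True, False))
posterior = num / den
print(round(posterior, 3))
```

Observing a food shortage raises the belief in drought above its 0.2 prior, which is the basic decision-support use of such a network.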
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A Bayesian Nonparametric Approach to Test Equating
Karabatsos, George; Walker, Stephen G.
2009-01-01
A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…
A Bayesian approach to earthquake source studies
Minson, Sarah
Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also
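The transitional tempering idea described above can be sketched in miniature. This is a serial toy with a 1-D Gaussian likelihood and a flat prior, assumed for illustration only; CATMIP itself is parallel, adaptive, and built for hundreds of dimensions.

```python
import math, random

random.seed(0)

# Move a population of samples through intermediate targets
#   pi_j(x) proportional to prior(x) * likelihood(x)**beta_j,  0 = beta_0 < ... < beta_J = 1,
# via importance resampling followed by a Metropolis move at each stage.

def log_like(x):                       # Gaussian likelihood centred at 2
    return -0.5 * (x - 2.0) ** 2

n = 2000
samples = [random.uniform(-10, 10) for _ in range(n)]   # draws from a flat prior
betas = [0.0, 0.1, 0.3, 0.6, 1.0]                       # tempering schedule

for b_prev, b in zip(betas, betas[1:]):
    # importance weights for the tempering increment
    w = [math.exp((b - b_prev) * log_like(x)) for x in samples]
    samples = random.choices(samples, weights=w, k=n)   # multinomial resampling
    moved = []
    for x in samples:                                   # one Metropolis move per sample
        prop = x + random.gauss(0, 1)
        if math.log(random.random() + 1e-300) < b * (log_like(prop) - log_like(x)):
            x = prop
        moved.append(x)
    samples = moved

mean = sum(samples) / n
print(round(mean, 2))   # should land near the likelihood centre, 2
```

Because each stage's target differs only slightly from the previous one, the weights stay well behaved even when the final posterior is far from the prior; that is what makes the approach practical where direct sampling fails.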
Refining gene signatures: a Bayesian approach
Directory of Open Access Journals (Sweden)
Labbe Aurélie
2009-12-01
Full Text Available Abstract Background In high density arrays, the identification of relevant genes for disease classification is complicated by not only the curse of dimensionality but also the highly correlated nature of the array data. In this paper, we are interested in the question of how many and which genes should be selected for a disease class prediction. Our work consists of a Bayesian supervised statistical learning approach to refine gene signatures with a regularization which penalizes for the correlation between the variables selected. Results Our simulation results show that we can most often recover the correct subset of genes that predict the class as compared to other methods, even when accuracy and subset size remain the same. On real microarray datasets, we show that our approach can refine gene signatures to obtain either the same or better predictive performance than other existing methods with a smaller number of genes. Conclusions Our novel Bayesian approach includes a prior which penalizes highly correlated features in model selection and is able to extract key genes in the highly correlated context of microarray data. The methodology in the paper is described in the context of microarray data, but can be applied to any array data (such as micro RNA, for example) as a first step towards predictive modeling of cancer pathways. A user-friendly software implementation of the method is available.
Sequential estimation of neural models by Bayesian filtering
Closas Gómez, Pau
2014-01-01
One of the most difficult challenges in neuroscience is understanding the connectivity of the brain. This problem can be approached from several perspectives; here we focus on the local phenomena that occur within a single neuron. The ultimate goal is thus to understand the dynamics of neurons and how the interconnection with other neurons affects their state. Observations of membrane potential traces constitute the main source of information for deriving mathematical models of a neur...
Bayesian Classification of Image Structures
DEFF Research Database (Denmark)
Goswami, Dibyendu; Kalkan, Sinan; Krüger, Norbert
2009-01-01
In this paper, we describe work on Bayesian classifiers for distinguishing between homogeneous structures, textures, edges and junctions. We build semi-local classifiers from hand-labeled images to distinguish between these four different kinds of structures based on the concept of intrinsic dimensi...
Bayesian Agglomerative Clustering with Coalescents
Teh, Yee Whye; Daumé III, Hal; Roy, Daniel
2009-01-01
We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over others, and demonstrate our approach in document clustering and phylolinguistics.
Bayesian NL interpretation and learning
H. Zeevat
2011-01-01
Everyday natural language communication is normally successful, even though contemporary computational linguistics has shown that NL is characterised by a very high degree of ambiguity and that the results of stochastic methods are not good enough to explain the high success rate. Bayesian natural language
Differentiated Bayesian Conjoint Choice Designs
Z. Sándor (Zsolt); M. Wedel (Michel)
2003-01-01
Previous conjoint choice design construction procedures have produced a single design that is administered to all subjects. This paper proposes to construct a limited set of different designs. The designs are constructed in a Bayesian fashion, taking into account prior uncertainty about
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...
Bayesian stable isotope mixing models
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
2013-01-01
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...
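For the exponential-kernel special case, the quantities any such inference scheme must repeatedly evaluate, the conditional intensity and the log-likelihood, can be sketched as follows. The event times and parameter values are illustrative, not taken from the paper.

```python
import math

# Exponential-kernel Hawkes conditional intensity:
#   lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
# and its log-likelihood on [0, T] via the standard compensator formula.

def intensity(t, events, mu, alpha, beta):
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events if ti < t)

def log_likelihood(events, T, mu, alpha, beta):
    ll = sum(math.log(intensity(t, events, mu, alpha, beta)) for t in events)
    compensator = mu * T + (alpha / beta) * sum(
        1 - math.exp(-beta * (T - ti)) for ti in events)
    return ll - compensator

events = [0.5, 0.6, 2.0, 2.1, 2.15, 5.0]      # clustered toy event times
ll = log_likelihood(events, T=6.0, mu=0.5, alpha=0.8, beta=2.0)
print(round(ll, 3))
```

Self-excitation is visible directly: the intensity just after the cluster around t = 2 is several times the baseline rate mu, while before any event it equals mu exactly.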
3-D contextual Bayesian classifiers
DEFF Research Database (Denmark)
Larsen, Rasmus
In this paper we will consider extensions of a series of Bayesian 2-D contextual classification procedures proposed by Owen (1984), Hjort & Mohn (1984), Welch & Salter (1971) and Haslett (1985) to 3 spatial dimensions. It is evident that compared to classical pixelwise classification further...
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed...
Bayesian image restoration, using configurations
DEFF Research Database (Denmark)
Thorarinsdottir, Thordis Linda
2006-01-01
configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed...
A Fuzzy Quantum Neural Network and Its Application in Pattern Recognition
Institute of Scientific and Technical Information of China (English)
MIAO Fuyou; XIONG Yan; CHEN Huanhuan; WANG Xingfu
2005-01-01
This paper proposes a fuzzy quantum neural network model that combines a quantum neural network with fuzzy logic, applying fuzzy logic to design the collapse rules of the quantum neural network, and solves the character recognition problem. Theoretical analysis and experimental results show that the fuzzy quantum neural network achieves higher recognition accuracy than both the traditional neural network and the quantum neural network.
MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES
Directory of Open Access Journals (Sweden)
H. Sadeq
2016-06-01
Full Text Available In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult to obtain many measurements or it would be very costly, thus the problem of the lack of data can be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult to obtain many measurements or it would be very costly, thus the problem of the lack of data can be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
Bayesian analysis of rare events
Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang
2016-06-01
In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
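The rejection-sampling reinterpretation that BUS builds on can be shown in its plainest form: draw theta from the prior and accept it with probability L(theta)/c for some constant c >= max L, so the accepted samples follow the posterior and the acceptance event is exactly the kind of (possibly rare) event that FORM, IS, or SuS can estimate. The conjugate toy problem below is an assumption for checkability, not an example from the paper, and plain Monte Carlo stands in for the reliability methods.

```python
import math, random

random.seed(1)

def likelihood(theta, data, sigma=1.0):
    """Gaussian likelihood of iid observations with known sigma."""
    return math.exp(-0.5 * sum((d - theta) ** 2 for d in data) / sigma**2)

data = [1.8, 2.2, 2.4]
c = likelihood(sum(data) / len(data), data)   # likelihood maximum is at the sample mean

accepted = []
for _ in range(200_000):
    theta = random.gauss(0.0, 3.0)            # N(0, 3^2) prior
    if random.random() <= likelihood(theta, data) / c:
        accepted.append(theta)               # accepted draws follow the posterior

post_mean = sum(accepted) / len(accepted)
print(round(post_mean, 2))
```

For this conjugate setup the exact posterior mean is about 2.06, so the accepted-sample average can be checked directly; in genuinely rare-event settings the acceptance probability is tiny and that is precisely where the reliability machinery earns its keep.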
Neural Networks and Photometric Redshifts
Tagliaferri, R; Andreon, S; Capozziello, S; Donalek, C; Giordano, G; Tagliaferri, Roberto; Longo, Giuseppe; Andreon, Stefano; Capozziello, Salvatore; Donalek, Ciro; Giordano, Gerardo
2002-01-01
We present a neural network based approach to the determination of photometric redshifts. The method was tested on the Sloan Digital Sky Survey Early Data Release (SDSS-EDR), reaching an accuracy comparable to and, in some cases, better than SED template fitting techniques. Different neural network architectures have been tested, and the combination of a Multi Layer Perceptron with 1 hidden layer (22 neurons) operated in a Bayesian framework, with a Self Organizing Map used to estimate the accuracy of the results, turned out to be the most effective. In the best experiment, the implemented network reached an accuracy of 0.020 (interquartile error) in the range 0
Multigradient for Neural Networks for Equalizers
Directory of Open Access Journals (Sweden)
Chulhee Lee
2003-06-01
Full Text Available Recently, a new training algorithm, multigradient, has been published for neural networks, and it is reported that the multigradient outperforms backpropagation when neural networks are used as a classifier. When neural networks are used as an equalizer in communications, they can be viewed as a classifier. In this paper, we apply the multigradient algorithm to train the neural networks that are used as equalizers. Experiments show that the neural networks trained using the multigradient noticeably outperform the neural networks trained by backpropagation.
Bayesian methods for measures of agreement
Broemeling, Lyle D
2009-01-01
Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
ESTIMATE OF THE HYPSOMETRIC RELATIONSHIP WITH NONLINEAR MODELS FITTED BY EMPIRICAL BAYESIAN METHODS
Directory of Open Access Journals (Sweden)
Monica Fabiana Bento Moreira
2015-09-01
Full Text Available In this paper we propose a Bayesian approach to solve the inference problem with restrictions on parameters, regarding nonlinear models used to represent the hypsometric relationship in clones of Eucalyptus sp. The Bayesian estimates are calculated using the Markov chain Monte Carlo (MCMC) method. The proposed method was applied to different groups of actual data, from which two were selected to show the results. These results were compared to the results achieved by the least-squares method, highlighting the superiority of the Bayesian approach, since this approach always generates biologically consistent results for the hypsometric relationship.
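This kind of restricted-parameter MCMC fit can be sketched with random-walk Metropolis. The Michaelis-Menten-type height-diameter curve, the flat prior truncated to positive parameters, and the synthetic tree data below are all assumptions for illustration; the paper's actual model for Eucalyptus clones is not reproduced here.

```python
import math, random

random.seed(2)

def model(d, a, b):
    """Generic height-diameter curve h = a*d/(b + d) (illustrative choice)."""
    return a * d / (b + d)

# synthetic data generated from a=30, b=8 with small fixed "noise"
diam = [5, 8, 12, 15, 20, 25, 30]
noise = [0.3, -0.2, 0.1, -0.4, 0.2, -0.1, 0.3]
height = [model(d, 30.0, 8.0) + e for d, e in zip(diam, noise)]

def log_post(a, b, sigma=0.5):
    if a <= 0 or b <= 0:                      # prior restriction: parameters positive
        return -math.inf
    sse = sum((h - model(d, a, b)) ** 2 for d, h in zip(diam, height))
    return -0.5 * sse / sigma**2              # flat prior on the positive orthant

a, b = 20.0, 5.0                              # deliberately poor starting point
chain = []
for _ in range(20_000):
    pa, pb = a + random.gauss(0, 0.5), b + random.gauss(0, 0.5)
    if math.log(random.random() + 1e-300) < log_post(pa, pb) - log_post(a, b):
        a, b = pa, pb                         # Metropolis accept
    chain.append((a, b))

burned = chain[5000:]                         # discard burn-in
a_hat = sum(x for x, _ in burned) / len(burned)
b_hat = sum(y for _, y in burned) / len(burned)
print(round(a_hat, 1), round(b_hat, 1))
```

The positivity check inside `log_post` is the simplest way to impose the parameter restriction: proposals outside the allowed region get posterior density zero and are always rejected.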
Bayesian Analysis Made Simple An Excel GUI for WinBUGS
Woodward, Philip
2011-01-01
From simple NLMs to complex GLMMs, this book describes how to use the GUI for WinBUGS - BugsXLA - an Excel add-in written by the author that allows a range of Bayesian models to be easily specified. With case studies throughout, the text shows how to routinely apply even the more complex aspects of model specification, such as GLMMs, outlier robust models, random effects Emax models, auto-regressive errors, and Bayesian variable selection. It provides brief, up-to-date discussions of current issues in the practical application of Bayesian methods. The author also explains how to obtain free so
Applying dynamic Bayesian networks in transliteration detection and generation
Nabende, Peter
2011-01-01
Peter Nabende's doctoral research concerns methods that can improve machine translation programs. He investigated two systems for generating and comparing transcriptions: a DBN model (Dynamic Bayesian Networks) in which Pair Hidden Markov models are implemented, and a DBN model
Applied Bayesian statistical studies in biology and medicine
D’Amore, G; Scalfari, F
2004-01-01
It was written on another occasion that "It is apparent that the scientific culture, if one means production of scientific papers, is growing exponentially, and chaotically, in almost every field of investigation". The biomedical sciences sensu lato and mathematical statistics are no exceptions. One might say then, and with good reason, that another collection of biostatistical papers would only add to the overflow and cause even more confusion. Nevertheless, this book may be greeted with some interest if we state that most of the papers in it are the result of a collaboration between biologists and statisticians, and partly the product of the Summer School "Statistical Inference in Human Biology", which reaches its 10th edition in 2003 (information about the School can be obtained at the Web site http://www2.stat.unibo.it/eventi/Sito%20scuola/index.htm). This is rather important. Indeed, it is common experience - and not only in Italy - that encounters between statisticians and researchers are spora...
What are artificial neural networks?
DEFF Research Database (Denmark)
Krogh, Anders
2008-01-01
Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for? Publication date: 2008-Feb.
Bayesian versus 'plain-vanilla Bayesian' multitarget statistics
Mahler, Ronald P. S.
2004-08-01
Finite-set statistics (FISST) is a direct generalization of single-sensor, single-target Bayes statistics to the multisensor-multitarget realm, based on random set theory. Various aspects of FISST are being investigated by several research teams around the world. In recent years, however, a few partisans have claimed that a "plain-vanilla Bayesian approach" suffices as down-to-earth, "straightforward," and general "first principles" for multitarget problems. Therefore, FISST is mere mathematical "obfuscation." In this and a companion paper I demonstrate the speciousness of these claims. In this paper I summarize general Bayes statistics, what is required to use it in multisensor-multitarget problems, and why FISST is necessary to make it practical. Then I demonstrate that the "plain-vanilla Bayesian approach" is so heedlessly formulated that it is erroneous and not even Bayesian; that it denigrates FISST concepts while unwittingly assuming them; and that it has resulted in a succession of algorithms afflicted by inherent -- but less than candidly acknowledged -- computational "logjams."
A Bayesian variable selection procedure to rank overlapping gene sets
Directory of Open Access Journals (Sweden)
Skarman Axel
2012-05-01
Full Text Available Abstract Background Genome-wide expression profiling using microarrays or sequence-based technologies allows us to identify genes and genetic pathways whose expression patterns influence complex traits. Different methods to prioritize gene sets, such as the genes in a given molecular pathway, have been described. In many cases, these methods test one gene set at a time, and therefore do not consider overlaps among the pathways. Here, we present a Bayesian variable selection method to prioritize gene sets that overcomes this limitation by considering all gene sets simultaneously. We applied Bayesian variable selection to differential expression to prioritize the molecular and genetic pathways involved in the responses to Escherichia coli infection in Danish Holstein cows. Results We used a Bayesian variable selection method to prioritize Kyoto Encyclopedia of Genes and Genomes pathways. We used our data to study how the variable selection method was affected by overlaps among the pathways. In addition, we compared our approach to another that ignores the overlaps, and studied the differences in the prioritization. The variable selection method was robust to a change in prior probability and stable given a limited number of observations. Conclusions Bayesian variable selection is a useful way to prioritize gene sets while considering their overlaps. Ignoring the overlaps gives different and possibly misleading results. Additional procedures may be needed in cases of highly overlapping pathways that are hard to prioritize.
Bayesian approach to rough set
Marwala, Tshilidzi
2007-01-01
This paper proposes an approach to training rough set models using a Bayesian framework trained with the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov Chain Monte Carlo sampling is conducted by sampling in the rough set granule space, with the Metropolis algorithm used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.
Bayesian priors for transiting planets
Kipping, David M
2016-01-01
As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...
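The first of the two biases, the geometric alignment condition, can be folded into a prior with one re-weighting step: for a circular orbit the probability that a planet with scaled semi-major axis a/R* transits is roughly (a/R*)^-1, so a sample known to transit should have its prior on a/R* multiplied by that factor. The grid range and the log-uniform "intrinsic" shape below are illustrative assumptions, not the paper's choices.

```python
# Grid over a/R* from 2 to 60 in steps of 0.1.
a_grid = [2.0 + 0.1 * i for i in range(581)]

# Intrinsic prior shape: log-uniform, i.e. density proportional to 1/a.
intrinsic = [1.0 / a for a in a_grid]

# Condition on "the planet transits": multiply by the geometric transit
# probability, which scales as 1/(a/R*) for a circular orbit.
weighted = [p / a for p, a in zip(intrinsic, a_grid)]

def normalise(ws):
    s = sum(ws)
    return [w / s for w in ws]

intrinsic, weighted = normalise(intrinsic), normalise(weighted)
mean_intrinsic = sum(a * w for a, w in zip(a_grid, intrinsic))
mean_weighted = sum(a * w for a, w in zip(a_grid, weighted))
print(round(mean_intrinsic, 1), round(mean_weighted, 1))
```

The conditioned prior mean of a/R* drops sharply (here from roughly 17 to roughly 7), which is exactly the point of the abstract: a transiting sample is biased toward close-in orbits, and an "uninformative" prior that ignores this misrepresents what is actually known.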
Bayesian Source Separation and Localization
Knuth, K H
1998-01-01
The problem of mixed signals occurs in many different contexts; one of the most familiar being acoustics. The forward problem in acoustics consists of finding the sound pressure levels at various detectors resulting from sound signals emanating from the active acoustic sources. The inverse problem consists of using the sound recorded by the detectors to separate the signals and recover the original source waveforms. In general, the inverse problem is unsolvable without additional information. This general problem is called source separation, and several techniques have been developed that utilize maximum entropy, minimum mutual information, and maximum likelihood. In previous work, it has been demonstrated that these techniques can be recast in a Bayesian framework. This paper demonstrates the power of the Bayesian approach, which provides a natural means for incorporating prior information into a source model. An algorithm is developed that utilizes information regarding both the statistics of the amplitudes...
Bayesian Inference for Radio Observations
Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin
2015-01-01
(Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...
A Bayesian framework for knowledge attribution: Evidence from semantic integration
Powell, D; Horne, Z; Pinillos, NÁ; Holyoak, KJ
2015-01-01
© 2015 Elsevier B.V. We propose a Bayesian framework for the attribution of knowledge, and apply this framework to generate novel predictions about knowledge attribution for different types of "Gettier cases", in which an agent is led to a justified true belief yet has made erroneous assumptions. We tested these predictions using a paradigm based on semantic integration. We coded the frequencies with which participants falsely recalled the word "thought" as "knew" (or a near synonym), yieldin...
Bayesian kinematic earthquake source models
Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.
2009-12-01
Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high
Bayesian Stable Isotope Mixing Models
Parnell, Andrew C.; Phillips, Donald L.; Bearhop, Stuart; Semmens, Brice X.; Ward, Eric J.; Moore, Jonathan W.; Jackson, Andrew L.; Inger, Richard
2012-01-01
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixture. The most widely used application is quantifying the diet of organisms based on the food sources they have been observed to consume. At the centre of the multivariate statistical model we propose is a compositional m...
Bayesian Network--Response Regression
WANG, LU; Durante, Daniele; Dunson, David B.
2016-01-01
There is an increasing interest in learning how human brain networks vary with continuous traits (e.g., personality, cognitive abilities, neurological disorders), but flexible procedures to accomplish this goal are limited. We develop a Bayesian semiparametric model, which combines low-rank factorizations and Gaussian process priors to allow flexible shifts of the conditional expectation for a network-valued random variable across the feature space, while including subject-specific random eff...
Bayesian segmentation of hyperspectral images
Mohammadpour, Adel; Mohammad-Djafari, Ali
2007-01-01
In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.
Bayesian segmentation of hyperspectral images
Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali
2004-11-01
In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.
Bayesian analysis of contingency tables
Gómez Villegas, Miguel A.; González Pérez, Beatriz
2005-01-01
The display of the data by means of contingency tables is used in different approaches to statistical inference, for example, to broach the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses versus bilateral alternatives in contingency tables. Given independent samples of two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first...
Bayesian estimation of turbulent motion
Héas, P.; Herzet, C.; Mémin, E.; Heitz, D.; Mininni, P. D.
2013-01-01
Based on physical laws describing the multi-scale structure of turbulent flows, this article proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing some scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate motion, regularization hyper-parameters, and to select the ...
Bayesian Kernel Mixtures for Counts
Canale, Antonio; Dunson, David B.
2011-01-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviatio...
Space Shuttle RTOS Bayesian Network
Morris, A. Terry; Beling, Peter A.
2001-01-01
With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network are extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores
International Nuclear Information System (INIS)
In this paper, a hierarchical Bayesian learning scheme for autoregressive neural network models is shown which overcomes the problem of identifying the separate linear and nonlinear parts modelled by the network. We show how the identification can be carried out by defining suitable priors on the parameter space which help the learning algorithms to avoid undesired parameter configurations. Some applications to synthetic and real world experimental data are shown to validate the proposed methodology
Energy Technology Data Exchange (ETDEWEB)
Acernese, F [Dipartimento di Scienze Fisiche, Universita di Napoli ' Federico II' , Naples (Italy); INFN, sez. Napoli, Naples (Italy); Barone, F [Dipartimento di Scienze Farmaceutiche, Universita di Salerno, Fisciano, SA (Italy); De Rosa, R [Dipartimento di Scienze Fisiche, Universita di Napoli ' Federico II' , Naples (Italy); INFN, sez. Napoli, Naples (Italy); Eleuteri, A [Dipartimento di Scienze Fisiche, Universita di Napoli ' Federico II' , Naples (Italy); INFN, sez. Napoli, Naples (Italy); Milano, L [Dipartimento di Scienze Fisiche, Universita di Napoli ' Federico II' , Naples (Italy); INFN, sez. Napoli, Naples (Italy); Tagliaferri, R [Dipartimento di Matematica ed Informatica, Universita di Salerno, Baronissi, SA (Italy)
2005-09-21
In this paper, a hierarchical Bayesian learning scheme for autoregressive neural network models is shown which overcomes the problem of identifying the separate linear and nonlinear parts modelled by the network. We show how the identification can be carried out by defining suitable priors on the parameter space which help the learning algorithms to avoid undesired parameter configurations. Some applications to synthetic and real world experimental data are shown to validate the proposed methodology.
Bayesian second law of thermodynamics.
Bartolotta, Anthony; Carroll, Sean M; Leichenauer, Stefan; Pollack, Jason
2016-08-01
We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρ_{m},ρ)+〈Q〉_{F|m}≥0, where ΔH(ρ_{m},ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_{m} and 〈Q〉_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples. PMID:27627241
Bayesian second law of thermodynamics
Bartolotta, Anthony; Carroll, Sean M.; Leichenauer, Stefan; Pollack, Jason
2016-08-01
We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρ_{m},ρ)+〈Q〉_{F|m}≥0, where ΔH(ρ_{m},ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_{m}, and 〈Q〉_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.
Bayesian detection of causal rare variants under posterior consistency.
Directory of Open Access Journals (Sweden)
Faming Liang
Full Text Available Identification of causal rare variants that are associated with complex traits poses a central challenge on genome-wide association studies. However, most current research focuses only on testing the global association whether the rare variants in a given genomic region are collectively associated with the trait. Although some recent work, e.g., the Bayesian risk index method, have tried to address this problem, it is unclear whether the causal rare variants can be consistently identified by them in the small-n-large-P situation. We develop a new Bayesian method, the so-called Bayesian Rare Variant Detector (BRVD), to tackle this problem. The new method simultaneously addresses two issues: (i) (Global association test) Are there any of the variants associated with the disease, and (ii) (Causal variant detection) Which variants, if any, are driving the association. The BRVD ensures the causal rare variants to be consistently identified in the small-n-large-P situation by imposing some appropriate prior distributions on the model and model specific parameters. The numerical results indicate that the BRVD is more powerful for testing the global association than the existing methods, such as the combined multivariate and collapsing test, weighted sum statistic test, RARECOVER, sequence kernel association test, and Bayesian risk index, and also more powerful for identification of causal rare variants than the Bayesian risk index method. The BRVD has also been successfully applied to the Early-Onset Myocardial Infarction (EOMI) Exome Sequence Data. It identified a few causal rare variants that have been verified in the literature.
Bayesian detection of causal rare variants under posterior consistency.
Liang, Faming
2013-07-26
Identification of causal rare variants that are associated with complex traits poses a central challenge on genome-wide association studies. However, most current research focuses only on testing the global association whether the rare variants in a given genomic region are collectively associated with the trait. Although some recent work, e.g., the Bayesian risk index method, have tried to address this problem, it is unclear whether the causal rare variants can be consistently identified by them in the small-n-large-P situation. We develop a new Bayesian method, the so-called Bayesian Rare Variant Detector (BRVD), to tackle this problem. The new method simultaneously addresses two issues: (i) (Global association test) Are there any of the variants associated with the disease, and (ii) (Causal variant detection) Which variants, if any, are driving the association. The BRVD ensures the causal rare variants to be consistently identified in the small-n-large-P situation by imposing some appropriate prior distributions on the model and model specific parameters. The numerical results indicate that the BRVD is more powerful for testing the global association than the existing methods, such as the combined multivariate and collapsing test, weighted sum statistic test, RARECOVER, sequence kernel association test, and Bayesian risk index, and also more powerful for identification of causal rare variants than the Bayesian risk index method. The BRVD has also been successfully applied to the Early-Onset Myocardial Infarction (EOMI) Exome Sequence Data. It identified a few causal rare variants that have been verified in the literature.
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \primula\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose {\primula}--generated propositional instances have thousands of variables, and whose jointrees have clusters...
Bayesian Posterior Distributions Without Markov Chains
Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.
2012-01-01
Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
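The transparent rejection-sampling idea summarized above can be illustrated with a binomial likelihood under a flat prior. Only the 36-cases figure comes from the abstract; the exposure count below is hypothetical:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

def rejection_sample_posterior(k, n, n_draws=100_000):
    """Rejection sampling of p | data under a flat prior: draw p from
    the prior, accept with probability L(p) / max_p L(p), where L is
    the binomial likelihood."""
    p_hat = k / n                      # the likelihood peaks at k/n
    L_max = comb(n, k) * p_hat**k * (1 - p_hat) ** (n - k)
    p = rng.uniform(0, 1, n_draws)     # candidate draws from the prior
    L = comb(n, k) * p**k * (1 - p) ** (n - k)
    return p[rng.uniform(0, 1, n_draws) * L_max < L]

# Hypothetical exposure data: 12 exposed among the 36 cases.
samples = rejection_sample_posterior(12, 36)
print(samples.mean())  # posterior mean, near (12+1)/(36+2) ≈ 0.342
```

The accepted draws form the posterior sample directly, with no Markov chain and no convergence diagnostics, which is exactly the pedagogical appeal the authors describe.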
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
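For contrast with the variational approach, the Gibbs-sampling alternative for two variance components can be sketched on a one-way random-effects model. This model and its vague inverse-gamma priors are a simplified stand-in for the genetic/residual animal model in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate grouped data with known variance components.
q, n = 50, 20
sigma_u2_true, sigma_e2_true = 2.0, 1.0
u_true = rng.normal(0, np.sqrt(sigma_u2_true), q)
y = u_true[:, None] + rng.normal(0, np.sqrt(sigma_e2_true), (q, n))

def gibbs_variance_components(y, n_iter=2000, burn=500):
    """Gibbs sampler for y_ij = u_i + e_ij with vague inverse-gamma
    priors on both variance components; each update draws from the
    conjugate full conditional."""
    q, n = y.shape
    s_u2, s_e2 = 1.0, 1.0
    draws = []
    for it in range(n_iter):
        # u_i | rest: normal, shrunk toward zero.
        prec = n / s_e2 + 1.0 / s_u2
        u = rng.normal((y.sum(axis=1) / s_e2) / prec, np.sqrt(1.0 / prec))
        # Variance components | rest: inverse-gamma (drawn as 1/Gamma).
        s_u2 = 1.0 / rng.gamma(0.001 + q / 2, 1.0 / (0.001 + (u**2).sum() / 2))
        resid = y - u[:, None]
        s_e2 = 1.0 / rng.gamma(0.001 + y.size / 2, 1.0 / (0.001 + (resid**2).sum() / 2))
        if it >= burn:
            draws.append((s_u2, s_e2))
    return np.array(draws)

draws = gibbs_variance_components(y)
print(draws.mean(axis=0))  # posterior means, near the true (2.0, 1.0)
```

Unlike a variational approximation, the retained draws expose the full posterior tails, which is the difference the abstract highlights.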
SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE
Institute of Scientific and Technical Information of China (English)
Ming HAN; Yuanyao DING
2004-01-01
This paper develops a new method of parametric estimation, named the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate and some other parameters of the exponential and Weibull distributions of populations. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
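The zero-failure setting can be sketched numerically: under an assumed Beta(1, b) prior, the Bayes estimate of the failure probability after n failure-free tests has a closed form, and averaging it over a uniform hyperprior on b gives an expected-Bayes-style estimate. The hyperprior and the numbers are illustrative, not the paper's exact formulation:

```python
import math

def bayes_estimate(n, b):
    """Posterior mean of the failure probability p after observing zero
    failures in n tests, under a Beta(1, b) prior: Beta(1, b + n)."""
    return 1.0 / (n + b + 1)

def expected_bayes_estimate(n, c=5.0):
    """Average the Bayes estimate over a uniform hyperprior b ~ U(1, c);
    the integral of 1/(n + b + 1) has a closed logarithmic form."""
    return math.log((n + c + 1) / (n + 2)) / (c - 1)

n = 20  # units tested with no recorded failures
print(bayes_estimate(n, b=1.0))    # 1/22 ≈ 0.0455
print(expected_bayes_estimate(n))  # ≈ 0.0418
```

Averaging over the hyperprior pulls the estimate below the single-prior value, reflecting the extra weight on more pessimistic priors.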
Bayesian modeling of ChIP-chip data using latent variables.
Wu, Mingqi
2009-10-26
BACKGROUND: The ChIP-chip technology has been used in a wide range of biomedical studies, such as identification of human transcription factor binding sites, investigation of DNA methylation, and investigation of histone modifications in animals and plants. Various methods have been proposed in the literature for analyzing the ChIP-chip data, such as the sliding window methods, the hidden Markov model-based methods, and Bayesian methods. Although Bayesian methods can potentially work better than the other two classes of methods, owing to their integrated treatment of uncertainty in the models and model parameters, the existing Bayesian methods do not perform satisfactorily. They usually require multiple replicates or some extra experimental information to parametrize the model, and long CPU times due to the MCMC simulations involved. RESULTS: In this paper, we propose a Bayesian latent model for the ChIP-chip data. The new model mainly differs from the existing Bayesian models, such as the joint deconvolution model, the hierarchical gamma mixture model, and the Bayesian hierarchical model, in two respects. Firstly, it works on the difference between the averaged treatment and control samples. This enables the use of a simple model for the data, which avoids the probe-specific effect and the sample (control/treatment) effect. As a consequence, this enables an efficient MCMC simulation of the posterior distribution of the model, and also makes the model more robust to the outliers. Secondly, it models the neighboring dependence of probes by introducing a latent indicator vector. A truncated Poisson prior distribution is assumed for the latent indicator variable, with the rationale being justified at length. CONCLUSION: The Bayesian latent method is successfully applied to real and ten simulated datasets, with comparisons with some of the existing Bayesian methods, hidden Markov model methods, and sliding window methods. The numerical results indicate that the
Bayesian modeling of ChIP-chip data using latent variables
Directory of Open Access Journals (Sweden)
Tian Yanan
2009-10-01
Full Text Available Abstract Background The ChIP-chip technology has been used in a wide range of biomedical studies, such as identification of human transcription factor binding sites, investigation of DNA methylation, and investigation of histone modifications in animals and plants. Various methods have been proposed in the literature for analyzing the ChIP-chip data, such as the sliding window methods, the hidden Markov model-based methods, and Bayesian methods. Although Bayesian methods can potentially work better than the other two classes of methods, owing to their integrated treatment of uncertainty in the models and model parameters, the existing Bayesian methods do not perform satisfactorily. They usually require multiple replicates or some extra experimental information to parametrize the model, and long CPU times due to the MCMC simulations involved. Results In this paper, we propose a Bayesian latent model for the ChIP-chip data. The new model mainly differs from the existing Bayesian models, such as the joint deconvolution model, the hierarchical gamma mixture model, and the Bayesian hierarchical model, in two respects. Firstly, it works on the difference between the averaged treatment and control samples. This enables the use of a simple model for the data, which avoids the probe-specific effect and the sample (control/treatment) effect. As a consequence, this enables an efficient MCMC simulation of the posterior distribution of the model, and also makes the model more robust to the outliers. Secondly, it models the neighboring dependence of probes by introducing a latent indicator vector. A truncated Poisson prior distribution is assumed for the latent indicator variable, with the rationale being justified at length. Conclusion The Bayesian latent method is successfully applied to real and ten simulated datasets, with comparisons with some of the existing Bayesian methods, hidden Markov model methods, and sliding window methods. The numerical results
Energy Technology Data Exchange (ETDEWEB)
Toda-Caraballo, I.; Garcia-Mateo, C.; Capdevila, C.
2010-07-01
At the beginning of the 1990s, industrial interest in TRIP steels led to a significant increase in research and application in this field. In this work, the flexibility of neural networks for modelling complex properties is used to tackle the problem of determining the retained austenite content in TRIP steels. Applying a combination of two learning algorithms (backpropagation and creeping random search) for the neural network, a model has been created that enables the prediction of retained austenite in low-Si / low-Al multiphase steels as a function of processing parameters. (Author). 34 refs.
Updating reliability data using feedback analysis: feasibility of a Bayesian subjective method
International Nuclear Information System (INIS)
For years, EDF has used Probabilistic Safety Assessment (PSA) to evaluate a global indicator of the safety of its nuclear power plants and to optimize performance while ensuring a certain safety level. Therefore, the robustness and relevancy of PSA are very important. That is the reason why EDF wants to improve the relevancy of the reliability parameters used in these models. This article proposes a Bayesian approach to build PSA parameters when feedback data are not large enough to use the frequentist method. Our method is called subjective because its purpose is to give engineers pragmatic criteria to apply Bayesian methods in a controlled and consistent way. Using Bayesian methods is quite common, for example, in the United States, because the nuclear power plants are less standardized. Bayesian methods are often used with generic data as priors. So we have to adapt the general methodology to the EDF context. (authors)
Inherently irrational? A computational model of escalation of commitment as Bayesian Updating.
Gilroy, Shawn P; Hantula, Donald A
2016-06-01
Monte Carlo simulations were performed to analyze the degree to which two-, three- and four-step learning histories of losses and gains correlated with escalation and persistence in extended extinction (continuous loss) conditions. Simulated learning histories were randomly generated at varying lengths and compositions and warranted probabilities were determined using Bayesian Updating methods. Bayesian Updating predicted instances where particular learning sequences were more likely to engender escalation and persistence under extinction conditions. All simulations revealed greater rates of escalation and persistence in the presence of heterogeneous (e.g., both Wins and Losses) lag sequences, with substantially increased rates of escalation when lags comprised predominantly of losses were followed by wins. These methods were then applied to human investment choices in earlier experiments. The Bayesian Updating models corresponded with data obtained from these experiments. These findings suggest that Bayesian Updating can be utilized as a model for understanding how and when individual commitment may escalate and persist despite continued failures.
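The Bayesian updating of warranted probabilities over a lag sequence of wins and losses can be sketched with a simple beta-binomial update; this generic scheme is a stand-in for, not a reproduction of, the authors' exact model:

```python
def bayesian_update_sequence(history, a=1.0, b=1.0):
    """Update a Beta(a, b) belief about the probability of a Win after
    each outcome in a lag sequence (True = Win, False = Loss), returning
    the warranted P(Win) after each step (the posterior mean)."""
    probs = []
    for win in history:
        a, b = (a + 1, b) if win else (a, b + 1)
        probs.append(a / (a + b))
    return probs

# A heterogeneous lag (losses followed by a win) leaves a higher
# warranted P(Win) than an all-loss lag, consistent with escalation
# persisting after mixed histories.
print(bayesian_update_sequence([False, False, True]))   # ends at 0.4
print(bayesian_update_sequence([False, False, False]))  # ends at 0.2
```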
Carvalho, Pedro; Marques, Rui Cunha
2016-02-15
This study aims to search for economies of size and scope in the Portuguese water sector applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). This study proves the usefulness and advantages of the application of Bayesian statistics for making inference in SFA over traditional SFA which just uses classical statistics. The resulting Bayesian methods allow overcoming some problems that arise in the application of the traditional SFA, such as the bias in small samples and skewness of residuals. In the present case study of the water sector in Portugal, these Bayesian methods provide more plausible and acceptable results. Based on the results obtained we found that there are important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing out to the huge advantages in undertaking mergers by joining the retail and wholesale components and by joining the drinking water and wastewater services.
Directory of Open Access Journals (Sweden)
Limin Wang
2015-06-01
Full Text Available As one of the most common types of graphical models, the Bayesian classifier has become an extremely popular approach to dealing with uncertainty and complexity. The scoring functions once proposed and widely used for a Bayesian network are not appropriate for a Bayesian classifier, in which class variable C is considered as a distinguished one. In this paper, we aim to clarify the working mechanism of Bayesian classifiers from the perspective of the chain rule of joint probability distribution. By establishing the mapping relationship between conditional probability distribution and mutual information, a new scoring function, Sum_MI, is derived and applied to evaluate the rationality of the Bayesian classifiers. To achieve global optimization and high dependence representation, the proposed learning algorithm, the flexible K-dependence Bayesian (FKDB) classifier, applies greedy search to extract more information from the K-dependence network structure. Meanwhile, during the learning procedure, the optimal attribute order is determined dynamically, rather than rigidly. In the experimental study, functional dependency analysis is used to improve model interpretability when the structure complexity is restricted.
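The mapping from conditional probability distributions to mutual information that underlies a Sum_MI-style score can be illustrated with a plug-in estimator on toy attribute/class data; the estimator and data are illustrative, not the paper's implementation:

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information I(X;Y) in nats between two discrete
    sequences, using the plug-in estimate from joint and marginal counts."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        # log( p(a,b) / (p(a) p(b)) ) with counts substituted in
        mi += p_ab * np.log(p_ab * n * n / (px[a] * py[b]))
    return mi

# Toy data: attribute X1 agrees with class C 90% of the time, X2 is noise.
rng = np.random.default_rng(3)
C = rng.integers(0, 2, 2000)
X1 = np.where(rng.random(2000) < 0.9, C, 1 - C)
X2 = rng.integers(0, 2, 2000)

print(mutual_information(X1.tolist(), C.tolist()))  # clearly positive
print(mutual_information(X2.tolist(), C.tolist()))  # near zero
```

Summing such terms over the attributes of a candidate structure gives a score that rewards edges carrying real information about the class.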
Evolvable synthetic neural system
Curtis, Steven A. (Inventor)
2009-01-01
An evolvable synthetic neural system includes an evolvable neural interface operably coupled to at least one neural basis function. Each neural basis function includes an evolvable neural interface operably coupled to a heuristic neural system to perform high-level functions and an autonomic neural system to perform low-level functions. In some embodiments, the evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy.
Neural fields theory and applications
Graben, Peter; Potthast, Roland; Wright, James
2014-01-01
With this book, the editors present the first comprehensive collection in neural field studies, authored by leading scientists in the field - among them are two of the founding-fathers of neural field theory. Up to now, research results in the field have been disseminated across a number of distinct journals from mathematics, computational neuroscience, biophysics, cognitive science and others. Starting with a tutorial for novices in neural field studies, the book comprises chapters on emergent patterns, their phase transitions and evolution, on stochastic approaches, cortical development, cognition, robotics and computation, large-scale numerical simulations, the coupling of neural fields to the electroencephalogram and phase transitions in anesthesia. The intended readership are students and scientists in applied mathematics, theoretical physics, theoretical biology, and computational neuroscience. Neural field theory and its applications have a long-standing tradition in the mathematical and computational ...
Smartphones Get Emotional: Mind Reading Images and Reconstructing the Neural Sources
DEFF Research Database (Denmark)
Petersen, Michael Kai; Stahlhut, Carsten; Stopczynski, Arkadiusz;
2011-01-01
Combining a 14-channel neuroheadset with a smartphone to capture and process brain imaging data, we demonstrate the ability to distinguish among emotional responses reflected in different scalp potentials when viewing pleasant and unpleasant pictures compared to neutral content. Clustering independent components across subjects, we are able to remove artifacts and identify common sources of synchronous brain activity, consistent with earlier findings based on conventional EEG equipment. Applying a Bayesian approach to reconstruct the neural sources not only facilitates differentiation of emotional responses but may also provide an intuitive interface for interacting with a 3D rendered model of brain activity. Integrating a wireless EEG set with a smartphone thus offers completely new opportunities for modeling the mental state of users as well as providing a basis for novel bio-feedback applications.
A Novel Method for Nonlinear Time Series Forecasting of Time-Delay Neural Network
Institute of Scientific and Technical Information of China (English)
JIANG Weijin; XU Yuhui
2006-01-01
Based on the idea of nonlinear prediction via phase space reconstruction, this paper presents a time-delay BP neural network model whose generalization capability is improved by Bayesian regularization. Furthermore, the model is applied to forecast the import and export trade of one industry. The results show that the improved model has excellent generalization capabilities: it not only learned the historical curve, but efficiently predicted the trend of the business. Comparing with common evaluations of forecasts, we conclude that nonlinear forecasting can not only focus on data combination and precision improvement, but can also vividly reflect the nonlinear characteristics of the forecasting system. While analyzing the forecasting precision of the model, we give a model judgment by calculating the nonlinear characteristic values of the combined series and the original series, proving that the forecasting model can reasonably capture the dynamic characteristics of the nonlinear system that produced the original series.
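The phase-space-reconstruction encoding that turns a scalar series into time-delay input vectors for such a network can be sketched as follows; the embedding dimension, delay, and toy series are illustrative:

```python
import numpy as np

def delay_embed(series, dim=3, tau=1):
    """Phase-space reconstruction: build input vectors
    (x_t, x_{t-tau}, ..., x_{t-(dim-1)tau}) and pair each with the next
    value x_{t+1} as the prediction target for a time-delay network."""
    X, y = [], []
    start = (dim - 1) * tau
    for t in range(start, len(series) - 1):
        X.append([series[t - k * tau] for k in range(dim)])
        y.append(series[t + 1])
    return np.array(X), np.array(y)

series = np.sin(0.3 * np.arange(50))  # stand-in for a trade series
X, y = delay_embed(series, dim=3, tau=2)
print(X.shape, y.shape)  # (45, 3) (45,)
```

The resulting (X, y) pairs are what a BP network with Bayesian regularization would be trained on.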
Bayesian probabilities of earthquake occurrences in Longmenshan fault system (China)
Wang, Ying; Zhang, Keyin; Gan, Qigang; Zhou, Wen; Xiong, Liang; Zhang, Shihua; Liu, Chao
2015-01-01
China has a long history of earthquake records, and the Longmenshan fault system (LFS) is a famous earthquake zone. We believed that the LFS could be divided into three seismogenic zones (north, central, and south zones) based on the geological structures and the earthquake catalog. We applied the Bayesian probability method using the extreme-value distribution of earthquake occurrences to estimate the seismic hazard in the LFS. The seismic moment, slip rate, earthquake recurrence rate, and magnitude were considered as the basic parameters for computing the Bayesian prior estimates of the seismicity. These estimates were then updated in terms of Bayes' theorem and historical estimates of seismicity in the LFS. Generally speaking, the north zone appears quite quiet compared with the central and south zones. The central zone is the most dangerous; however, the periodicity of earthquake occurrences for Ms = 8.0 is quite long (1,250 to 5,000 years). The selection of the upper bound probable magnitude influences the result, and the upper bound magnitude of the south zone may be 7.5. We obtained the empirical relationship of magnitude conversion for Ms and ML, the value of the magnitude of completeness Mc (3.5), and the Gutenberg-Richter b value before applying the Bayesian extreme-value distribution of earthquake occurrences method.
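The updating of a prior seismicity estimate by historical catalog data, and the resulting occurrence probability, can be sketched with a conjugate Gamma-Poisson model; this is a simplification of the paper's extreme-value formulation, and all numbers below are hypothetical:

```python
import math

def posterior_rate(prior_rate, prior_strength, n_events, t_years):
    """Conjugate Gamma-Poisson update: a prior annual occurrence rate
    (e.g., from slip-rate or seismic-moment arguments), weighted as
    prior_strength years of pseudo-observation, is combined with
    n_events observed in t_years of catalog."""
    a = prior_strength * prior_rate + n_events
    b = prior_strength + t_years
    return a / b  # posterior mean annual rate

def prob_at_least_one(rate, horizon_years):
    """P(one or more events within the horizon) under a Poisson model."""
    return 1.0 - math.exp(-rate * horizon_years)

# Hypothetical: geologic prior of 1/1000 per year for a large event,
# weighted as 500 years; one event in 300 years of catalog.
lam = posterior_rate(1e-3, 500.0, 1, 300.0)
print(lam)                         # 0.001875 per year
print(prob_at_least_one(lam, 50))  # ≈ 0.09 over 50 years
```

The catalog pulls the posterior rate above the geologic prior because one event in 300 years exceeds the prior expectation.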
Bayesian Concordance Correlation Coefficient with Application to Repeatedly Measured Data
Directory of Open Access Journals (Sweden)
Atanu BHATTACHARJEE
2015-10-01
Full Text Available Objective: In medical research, Lin's classical concordance correlation coefficient (CCC) is frequently applied to evaluate the similarity of the measurements produced by different raters or methods on the same subjects. It is particularly useful for continuous data. The objective of this paper is to propose a Bayesian counterpart for computing the CCC for continuous data. Material and Methods: A total of 33 patients with astrocytoma brain tumors treated in the Department of Radiation Oncology at Malabar Cancer Centre are enrolled in this work. The data consist of continuous measurements of tumor volume and tumor size taken repeatedly during the baseline pretreatment workup and post-surgery follow-ups for all patients. The tumor volume and tumor size are measured separately by MRI and CT scan, and the agreement between the MRI and CT measurements is quantified through the CCC. Statistical inference is performed through the Markov Chain Monte Carlo (MCMC) technique. Results: The Bayesian CCC is found suitable for obtaining prominent evidence for the test statistics used to explore the relation between concordance measurements. The posterior mean estimates and 95% credible intervals of the CCC on tumor size and tumor volume are 0.96 (0.87, 0.99) and 0.98 (0.95, 0.99), respectively. Conclusion: Bayesian inference is adopted for the development of the computational algorithm. The approach illustrated in this work provides researchers an opportunity to find the most appropriate model for specific data and to apply the CCC to test the desired hypothesis.
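Lin's classical CCC itself is a closed-form statistic; a minimal sketch is below. The paired measurements are made up for illustration and are not the paper's patient data.

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for paired continuous
    measurements on the same subjects (e.g. MRI vs CT tumor volume).
    CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx + sy + (mx - my) ** 2)

# Hypothetical paired measurements of the same quantity by two methods:
mri = [10.1, 12.3, 8.7, 15.2, 11.0]
ct  = [10.4, 12.0, 9.1, 14.8, 11.3]
ccc = lins_ccc(mri, ct)
```

The Bayesian counterpart in the paper replaces this point estimate with a posterior distribution over the CCC, summarized by its mean and a 95% credible interval.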
Bayesian Methods and Universal Darwinism
Campbell, John
2009-12-01
Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of natural selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints, in the form of adaptations, imposed on Maximum Entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.
Bayesian phylogeography finds its roots.
Directory of Open Access Journals (Sweden)
Philippe Lemey
2009-09-01
Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can easily be generalized to infer biogeography from genetic data for many organisms.
Extreme dry spells: Problem of rounding and Bayesian solution
Cindric Kalin, Ksenija; Pasaric, Zoran
2016-04-01
Two theoretically justified models of extremes are applied to dry spell (DS) series: the generalized Pareto distribution is applied to peak-over-threshold data (POT-GP), and the Generalized Extreme Value distribution is applied to the annual maxima (AM-GEV). DS data are categorized according to three precipitation-per-day thresholds (1, 5 and 10 mm). The well-known classical methods for parameter estimation (L-moments and Maximum Likelihood) are applied both to measured and to simulated DS time series. When applied within the GEV model, both methods yield very similar results. Somewhat surprisingly, in the case of the GP model, these methods lead to substantially different estimates of the parameters as well as the return values. This is found to be a consequence of the fact that DS values are recorded discretely, as a whole number of days, whereas the classical extreme-value distributions are intended for continuous data. The inference is further evaluated within the Bayesian paradigm, where the process of rounding can be incorporated in a straightforward manner. The study confirmed the precautionary nature of estimates from the AM-GEV model in comparison with the simpler AM-Gumbel model. Regarding POT-GP modelling, the Bayesian approach reveals the high uncertainty that can occur in parameter estimates when very high thresholds are considered. It is found that there are no clear criteria for choosing an optimal threshold, nor is there any necessity to detect one. Instead, Bayesian inference provides a reasonable overall picture of the range of thresholds compatible with the GP model. Furthermore, it is concluded that when using rounded data, all three GP parameters should be assessed. The location estimates should be compatible with the theoretical value of 0.5. Although the present study is performed mainly on the DS series from two stations in Croatia spanning the period of 1961-2010, the authors believe that the methodology developed here is applicable to
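The rounding correction described above can be sketched by replacing the continuous GP density with interval probabilities: a spell recorded as k whole days contributes P(k - 0.5 < X <= k + 0.5). The parameter values below are illustrative only, with the location fixed near the theoretical 0.5.

```python
import math

def gp_cdf(x, mu, sigma, xi):
    """Generalized Pareto CDF (xi != 0 branch), location mu, scale sigma."""
    z = (x - mu) / sigma
    if z < 0.0:
        return 0.0
    return 1.0 - (1.0 + xi * z) ** (-1.0 / xi)

def rounded_loglik(data, mu, sigma, xi):
    """Log-likelihood for dry-spell lengths recorded as whole days:
    each observed k contributes the interval probability
    F(k + 0.5) - F(k - 0.5) instead of the continuous density."""
    ll = 0.0
    for k in data:
        p = gp_cdf(k + 0.5, mu, sigma, xi) - gp_cdf(k - 0.5, mu, sigma, xi)
        ll += math.log(max(p, 1e-300))
    return ll

# Illustrative spell lengths (days) and parameters; mu near 0.5 as in the text.
ll = rounded_loglik([3, 7, 12], 0.5, 5.0, 0.1)
```

In a Bayesian treatment this likelihood is combined with priors on (mu, sigma, xi) and explored by MCMC, which is how the rounding enters the inference "in a straightforward manner".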
Bayesian Calibration of Generalized Pools of Predictive Distributions
Directory of Open Access Journals (Sweden)
Roberto Casarin
2016-03-01
Full Text Available Decision-makers often consult different experts to build reliable forecasts on variables of interest. Combining multiple opinions and calibrating them to maximize forecast accuracy is consequently a crucial issue in several economic problems. This paper applies a Bayesian beta mixture model to derive a combined and calibrated density function using random calibration functionals and random combination weights. In particular, it compares the application of linear, harmonic and logarithmic pooling in the Bayesian combination approach. The three combination schemes, i.e., linear, harmonic and logarithmic, are studied in simulation examples with multimodal densities and in an empirical application with a large database of stock data. All of the experiments show that in a beta mixture calibration framework, the three combination schemes are substantially equivalent, achieving calibration, and no clear preference for one of them appears. The financial application shows that linear pooling together with beta mixture calibration achieves the best results in terms of calibrated forecasts.
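The three pooling schemes compared in the paper can be sketched on a discrete grid, before any beta-mixture calibration is applied. The expert forecasts and weights below are hypothetical.

```python
import math

def linear_pool(ps, w):
    """Linear opinion pool: weighted arithmetic mean of expert densities."""
    return [sum(wi * p[j] for wi, p in zip(w, ps)) for j in range(len(ps[0]))]

def log_pool(ps, w):
    """Logarithmic pool: normalized weighted geometric mean."""
    raw = [math.prod(p[j] ** wi for wi, p in zip(w, ps)) for j in range(len(ps[0]))]
    s = sum(raw)
    return [r / s for r in raw]

def harmonic_pool(ps, w):
    """Harmonic pool: normalized weighted harmonic mean."""
    raw = [1.0 / sum(wi / p[j] for wi, p in zip(w, ps)) for j in range(len(ps[0]))]
    s = sum(raw)
    return [r / s for r in raw]

# Two hypothetical expert forecasts over a 2-point outcome grid, equal weights:
p1, p2 = [0.7, 0.3], [0.4, 0.6]
pools = {name: f([p1, p2], [0.5, 0.5])
         for name, f in [("linear", linear_pool),
                         ("log", log_pool),
                         ("harmonic", harmonic_pool)]}
```

The paper's contribution is to treat both the weights and a beta calibration functional of the pooled density as random quantities inferred in a Bayesian way; the pools above are only the deterministic building blocks.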
Bayesian phylogeny analysis via stochastic approximation Monte Carlo
Cheon, Sooyoung
2009-11-01
Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time. © 2009 Elsevier Inc. All rights reserved.
Photometric Redshift with Bayesian Priors on Physical Properties of Galaxies
Tanaka, Masayuki
2015-01-01
We present a proof-of-concept analysis of photometric redshifts with Bayesian priors on physical properties of galaxies. This concept is particularly suited for upcoming/on-going large imaging surveys, in which only several broad-band filters are available and it is hard to break some of the degeneracies in the multi-color space. We construct model templates of galaxies using a stellar population synthesis code and apply Bayesian priors on physical properties such as stellar mass and star formation rate. These priors are a function of redshift and they effectively evolve the templates with time in an observationally motivated way. We demonstrate that the priors help reduce the degeneracy and deliver significantly improved photometric redshifts. Furthermore, we show that a template error function, which corrects for systematic flux errors in the model templates as a function of rest-frame wavelength, delivers further improvements. One great advantage of our technique is that we simultaneously measure redshifts...
Efficient variational inference in large-scale Bayesian compressed sensing
Papandreou, George
2011-01-01
We study linear models under heavy-tailed priors from a probabilistic viewpoint. Instead of computing a single sparse most probable (MAP) solution as in standard compressed sensing, the focus in the Bayesian framework shifts towards capturing the full posterior distribution on the latent variables, which allows quantifying the estimation uncertainty and learning model parameters using maximum likelihood. The exact posterior distribution under the sparse linear model is intractable and we concentrate on a number of alternative variational Bayesian techniques to approximate it. Repeatedly computing Gaussian variances turns out to be a key requisite for all these approximations and constitutes the main computational bottleneck in applying variational techniques in large-scale problems. We leverage on the recently proposed Perturb-and-MAP algorithm for drawing exact samples from Gaussian Markov random fields (GMRF). The main technical contribution of our paper is to show that estimating Gaussian variances using a...
Bayesian modeling growth curves for quail assuming skewness in errors
Directory of Open Access Journals (Sweden)
Robson Marcelo Rossi
2014-06-01
Full Text Available Assuming normal distributions in data analysis is common in many areas of knowledge. However, other distributions that can model a skewness parameter are available for situations where the data have tails heavier than the normal. This article presents alternatives to the assumption of normality in the errors by adopting asymmetric distributions. A Bayesian approach is proposed to fit nonlinear models when the errors are not normal; the t, skew-normal and skew-t distributions are adopted. The methodology is applied to different growth curves for quail body weights. The Gompertz model assuming skew-normal errors and skew-t errors, for males and females respectively, provided the best fit to the data.
Bayesian compressive sensing for ultrawideband inverse scattering in random media
Fouda, A E
2014-01-01
We develop an ultrawideband (UWB) inverse scattering technique for reconstructing continuous random media based on Bayesian compressive sensing. In addition to providing maximum a posteriori estimates of the unknown weights, Bayesian inversion provides estimate of the confidence level of the solution, as well as a systematic approach for optimizing subsequent measurement(s) to maximize information gain. We impose sparsity priors directly on spatial harmonics to exploit the spatial correlation exhibited by continuous media, and solve for their posterior probability density functions efficiently using a fast relevance vector machine. We linearize the problem using the first-order Born approximation which enables us to combine, in a single inversion, measurements from multiple transmitters and ultrawideband frequencies. We extend the method to high-contrast media using the distorted-Born iterative method. We apply time-reversal strategies to adaptively focus the inversion effort onto subdomains of interest, and ...
Decision Support System for Maintenance Management Using Bayesian Networks
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
The maintenance process has undergone several major developments that have led to proactive considerations and the transformation from the traditional "fail and fix" practice into the "predict and prevent" proactive maintenance methodology. The anticipation that characterizes this proactive maintenance strategy is mainly based on monitoring, diagnosis, prognosis and decision-making modules. Oil monitoring is a key component of a successful condition monitoring program. It can be used as a proactive tool to identify the wear modes of rubbing parts and to diagnose faults in machinery. But diagnosis relying on oil analysis technology must deal with uncertain knowledge and fuzzy input data. Among other methods, Bayesian networks have been extensively applied to fault diagnosis, with the advantage of inference under uncertainty; in the area of oil monitoring, however, they are a new approach. This paper presents an integrated Bayesian-network-based decision support system for the maintenance of diesel engines.
Adaptive Non-Linear Bayesian Filter for ECG Denoising
Directory of Open Access Journals (Sweden)
Mitesh Kumar Sao
2014-06-01
Full Text Available The cycles of an electrocardiogram (ECG) signal contain three components: the P-wave, the QRS complex and the T-wave. The signal measured at the electrode is normally polluted by noise from biological sources (muscle contraction, baseline drift, motion noise) and environmental sources (power-line interference, electrode contact noise, instrumentation noise). VisuShrink thresholding and Bayesian thresholding are two wavelet-based filtering techniques for denoising an ECG signal corrupted by power-line interference (PLI). These thresholding techniques are applied over the ECG interval and the results are compared with the wavelet soft and hard thresholding methods. The outputs are evaluated by calculating the root mean square (RMS) error, signal-to-noise ratio (SNR), correlation coefficient (CC) and power spectral density (PSD) using MATLAB software. The denoised ECG signal shows that Bayesian thresholding is the more powerful denoising algorithm.
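Soft thresholding with a VisuShrink-style universal threshold can be sketched as follows. This is illustrative only: real implementations estimate the noise level from the median absolute deviation of the finest-scale wavelet detail coefficients, whereas here it is crudely estimated from all coefficients.

```python
import math

def soft_threshold(coeffs, thr):
    """Soft thresholding: shrink each coefficient toward zero by thr,
    zeroing anything whose magnitude falls below the threshold."""
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

def universal_threshold(coeffs):
    """VisuShrink universal threshold sigma * sqrt(2 log n); the noise
    level sigma is crudely estimated here via the median absolute
    coefficient divided by 0.6745."""
    n = len(coeffs)
    med = sorted(abs(c) for c in coeffs)[n // 2]
    return (med / 0.6745) * math.sqrt(2.0 * math.log(n))

# Hypothetical wavelet coefficients: mostly noise plus one strong feature.
noisy = [0.1, -0.2, 0.15, 3.0, -0.1, 0.05, 0.12, -0.08]
denoised = soft_threshold(noisy, universal_threshold(noisy))
```

Bayesian thresholding, as in the paper, instead derives a per-subband shrinkage rule from a prior on the coefficients rather than the fixed universal threshold shown here.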
A localization model to localize multiple sources using Bayesian inference
Dunham, Joshua Rolv
Accurate localization of a sound source in a room setting is important in both psychoacoustics and architectural acoustics. Binaural models have been proposed to explain how the brain processes and utilizes the interaural time differences (ITDs) and interaural level differences (ILDs) of sound waves arriving at the ears of a listener in determining source location. Recent work shows that applying Bayesian methods to this problem is proving fruitful. In this thesis, pink noise samples are convolved with head-related transfer functions (HRTFs) and compared to combinations of one and two anechoic speech signals convolved with different HRTFs or binaural room impulse responses (BRIRs) to simulate room positions. Through exhaustive calculation of Bayesian posterior probabilities and using a maximal likelihood approach, model selection will determine the number of sources present, and parameter estimation will result in azimuthal direction of the source(s).
A Bayesian nonlinear mixed-effects disease progression model
Kim, Seongho; Jang, Hyejeong; Wu, Dongfeng; Abrams, Judith
2016-01-01
A nonlinear mixed-effects approach is developed for disease progression models that incorporate variation in age in a Bayesian framework. We further generalize the probability model for sensitivity to depend on age at diagnosis, time spent in the preclinical state and sojourn time. The developed models are then applied to the Johns Hopkins Lung Project data and the Health Insurance Plan for Greater New York data using Bayesian Markov chain Monte Carlo and are compared with the estimation method that does not consider random-effects from age. Using the developed models, we obtain not only age-specific individual-level distributions, but also population-level distributions of sensitivity, sojourn time and transition probability. PMID:26798562
Numeracy, frequency, and Bayesian reasoning
Directory of Open Access Journals (Sweden)
Gretchen B. Chapman
2009-02-01
Full Text Available Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even the comprehension and manipulation of natural frequencies requires a certain threshold of numeracy ability, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
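The two presentation formats compute the same posterior; the difference is purely in how the numbers are framed. A minimal sketch with illustrative numbers (a 1% prevalence, 80% sensitivity, 9.6% false-positive rate, none taken from the study):

```python
def posterior_from_probs(prior, sens, fpr):
    """P(hypothesis | positive test) via Bayes' rule on
    single-event probabilities."""
    return prior * sens / (prior * sens + (1.0 - prior) * fpr)

def posterior_from_frequencies(n, prior, sens, fpr):
    """The same quantity phrased as natural frequencies out of n people:
    'of 1000 people, 10 have the condition; 8 of them test positive;
    95 of the 990 others also test positive...'"""
    affected = prior * n
    true_pos = affected * sens
    false_pos = (n - affected) * fpr
    return true_pos / (true_pos + false_pos)

p_prob = posterior_from_probs(0.01, 0.80, 0.096)
p_freq = posterior_from_frequencies(1000, 0.01, 0.80, 0.096)
```

The frequency version makes the reference class explicit, which is the usual explanation for why it helps human reasoners; mathematically the two are identical.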
Bayesian Query-Focused Summarization
Daumé, Hal
2009-01-01
We present BayeSum (for ``Bayesian summarization''), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.
Bayesian Sampling using Condition Indicators
DEFF Research Database (Denmark)
Faber, Michael H.; Sørensen, John Dalsgaard
2002-01-01
The problem of control quality of components is considered for the special case where the acceptable failure rate is low, the test costs are high and where it may be difficult or impossible to test the condition of interest directly. Based on the classical control theory and the concept of condition indicators introduced by Benjamin and Cornell (1970), a Bayesian approach to quality control is formulated. The formulation is then extended to the case where the quality control is based on sampling of indirect information about the condition of the components, i.e. condition indicators...
Bayesian analysis for extreme climatic events: A review
Chu, Pao-Shin; Zhao, Xin
2011-11-01
This article reviews Bayesian analysis methods applied to extreme climatic data. We particularly focus on applications to three different problems related to extreme climatic events: detection of abrupt regime shifts, clustering of tropical cyclone tracks, and statistical forecasting of seasonal tropical cyclone activity. For identifying potential change points in an extreme-event count series, a hierarchical Bayesian framework involving three layers - data, parameter, and hypothesis - is formulated to evaluate the posterior probability of shifts over time. For the data layer, a Poisson process with a gamma-distributed rate is presumed. For the hypothesis layer, multiple candidate hypotheses with different change points are considered. To calculate the posterior probability of each hypothesis and its associated parameters we developed an exact analytical formula, a Markov Chain Monte Carlo (MCMC) algorithm, and a more sophisticated reversible jump Markov Chain Monte Carlo (RJMCMC) algorithm. The algorithms are applied to several rare event series: the annual tropical cyclone or typhoon counts over the central, eastern, and western North Pacific; the annual extremely heavy rainfall event counts at Manoa, Hawaii; and the annual heat wave frequency in France. Using an Expectation-Maximization (EM) algorithm, a Bayesian clustering method built on a mixture Gaussian model is applied to objectively classify historical, spaghetti-like tropical cyclone tracks (1945-2007) over the western North Pacific and the South China Sea into eight distinct track types. A regression-based approach to forecasting seasonal tropical cyclone frequency in a region is developed. Specifically, by adopting large-scale environmental conditions prior to the tropical cyclone season, a Poisson regression model is built for predicting seasonal tropical cyclone counts, and a probit regression model is alternatively developed toward a binary classification problem. With a non
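The data/parameter/hypothesis layering for change-point detection admits an exact analytic treatment when the rate prior is conjugate. The sketch below, with a simulated count series and a uniform prior over a single change point, is an illustration of that idea, not the review's own code.

```python
import math

def log_marginal(counts, a=1.0, b=1.0):
    """Log marginal likelihood of a Poisson count segment under a
    conjugate Gamma(a, b) prior on the rate (rate integrated out
    analytically)."""
    s, n = sum(counts), len(counts)
    return (a * math.log(b) - math.lgamma(a)
            + math.lgamma(a + s) - (a + s) * math.log(b + n)
            - sum(math.lgamma(y + 1) for y in counts))

def change_point_posterior(counts, a=1.0, b=1.0):
    """Posterior over the change-point location tau (uniform prior):
    independent Poisson rates before and after tau."""
    logs = [log_marginal(counts[:t], a, b) + log_marginal(counts[t:], a, b)
            for t in range(1, len(counts))]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]
    s = sum(w)
    return [x / s for x in w]

# Simulated annual event counts with a rate shift halfway through:
series = [1, 0, 2, 1, 0, 1, 5, 6, 4, 7, 5, 6]
post = change_point_posterior(series)
```

Each entry of `post` is the posterior probability that the shift occurs just before that year; the RJMCMC machinery in the review generalizes this to an unknown number of change points.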
Neural Networks in Control Applications
DEFF Research Database (Denmark)
Sørensen, O.
The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all in a recursive form (sample updating). The simplest is the Back Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically
Bayesian models for comparative analysis integrating phylogenetic uncertainty
Directory of Open Access Journals (Sweden)
Villemereuil Pierre de
2012-06-01
Full Text Available Abstract Background Uncertainty in comparative analyses can come from at least two sources: (a) phylogenetic uncertainty in the tree topology or branch lengths, and (b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible
Using Bayesian Networks to Improve Knowledge Assessment
Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra
2013-01-01
In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…
Bayesian analysis of exoplanet and binary orbits
Schulze-Hartung, Tim; Launhardt, Ralf; Henning, Thomas
2012-01-01
We introduce BASE (Bayesian astrometric and spectroscopic exoplanet detection and characterisation tool), a novel program for the combined or separate Bayesian analysis of astrometric and radial-velocity measurements of potential exoplanet hosts and binary stars. The capabilities of BASE are demonstrated using all publicly available data of the binary Mizar A.
Bayesian credible interval construction for Poisson statistics
Institute of Scientific and Technical Information of China (English)
ZHU Yong-Sheng
2008-01-01
The construction of the Bayesian credible (confidence) interval for a Poisson observable including both the signal and background, with and without systematic uncertainties, is presented. Introducing the conditional probability satisfying the requirement that the background not be larger than the observed events to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
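A sketch of the basic construction, with a flat prior on the signal and a known background, computed numerically on a grid. This is an illustrative reconstruction of the standard Bayesian Poisson interval, not the BPOCI routine, and it omits the systematic uncertainties and the background conditioning discussed in the paper.

```python
import math

def signal_posterior(n, b, s_max=30.0, steps=3000):
    """Normalised (discretised) posterior for signal s >= 0 given n
    observed events and known background b, with a flat prior on s:
    p(s | n) is proportional to (s + b)^n * exp(-(s + b))."""
    grid = [s_max * i / steps for i in range(steps + 1)]
    dens = [(s + b) ** n * math.exp(-(s + b)) for s in grid]
    norm = sum(dens)
    return grid, [d / norm for d in dens]

def upper_limit(n, b, cl=0.90):
    """Smallest s_up on the grid with P(s <= s_up | n) >= cl."""
    grid, dens = signal_posterior(n, b)
    acc = 0.0
    for s, d in zip(grid, dens):
        acc += d
        if acc >= cl:
            return s
    return grid[-1]

limit = upper_limit(3, 1.0)
```

With `n = 3` observed events over an expected background of 1, `limit` is the 90% Bayesian upper limit on the signal; two-sided credible intervals follow by accumulating between two quantiles instead.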
Modeling Diagnostic Assessments with Bayesian Networks
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego
2007-01-01
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Advances in Bayesian Modeling in Educational Research
Levy, Roy
2016-01-01
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Learning dynamic Bayesian networks with mixed variables
DEFF Research Database (Denmark)
Bøttcher, Susanne Gammelgaard
This paper considers dynamic Bayesian networks for discrete and continuous variables. We only treat the case, where the distribution of the variables is conditional Gaussian. We show how to learn the parameters and structure of a dynamic Bayesian network and also how the Markov order can be learned...
Bayesian Network for multiple hypothesis tracking
W.P. Zajdel; B.J.A. Kröse
2002-01-01
For a flexible camera-to-camera tracking of multiple objects we model the objects' behavior with a Bayesian network and combine it with the multiple hypothesis framework that associates observations with objects. Bayesian networks offer a possibility to factor complex, joint distributions into a produ
Bayesian Nonparametric Clustering for Positive Definite Matrices.
Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos
2016-05-01
Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to the Euclidean geometry but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms. PMID:27046838
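The log-determinant (Burg) divergence used as the dissimilarity measure can be sketched for 2x2 SPD matrices with plain-Python linear algebra; the matrices below are illustrative.

```python
import math

def logdet_div(X, Y):
    """Log-determinant (Burg) matrix divergence
    D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - d
    for 2x2 SPD matrices given as [[a, b], [b, c]].  It is zero iff
    X == Y and positive otherwise, but not symmetric in its arguments."""
    d = 2
    detY = Y[0][0] * Y[1][1] - Y[0][1] * Y[1][0]
    Yinv = [[Y[1][1] / detY, -Y[0][1] / detY],
            [-Y[1][0] / detY, Y[0][0] / detY]]
    # M = X @ Y^{-1}
    M = [[sum(X[i][k] * Yinv[k][j] for k in range(d)) for j in range(d)]
         for i in range(d)]
    tr = M[0][0] + M[1][1]
    detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return tr - math.log(detM) - d

# Illustrative SPD matrices (e.g. tiny covariance descriptors):
A = [[2.0, 0.3], [0.3, 1.0]]
B = [[1.0, 0.0], [0.0, 1.0]]
```

In the paper, this divergence's connection to the Wishart distribution is what enables the conjugate Wishart-Inverse-Wishart DP mixture; the divergence itself is the geometric ingredient.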
Bayesian data assimilation in shape registration
Cotter, C J
2013-03-28
In this paper we apply a Bayesian framework to the problem of geodesic curve matching. Given a template curve, the geodesic equations provide a mapping from initial conditions for the conjugate momentum onto topologically equivalent shapes. Here, we aim to recover the well-defined posterior distribution on the initial momentum which gives rise to observed points on the target curve; this is achieved by explicitly including a reparameterization in the formulation. Appropriate priors are chosen for the functions which together determine this field and the positions of the observation points, the initial momentum p0 and the reparameterization vector field ν, informed by regularity results about the forward model. Having done this, we illustrate how maximum likelihood estimators can be used to find regions of high posterior density, but also how we can apply recently developed Markov chain Monte Carlo methods on function spaces to characterize the whole of the posterior density. These illustrative examples also include scenarios where the posterior distribution is multimodal and irregular, leading us to the conclusion that knowledge of a state of global maximal posterior density does not always give us the whole picture, and full posterior sampling can give better quantification of likely states and the overall uncertainty inherent in the problem. © 2013 IOP Publishing Ltd.
Experimental validation of a Bayesian model of visual acuity.
LENUS (Irish Health Repository)
Dalimier, Eugénie
2009-01-01
Based on standard procedures used in optometry clinics, we compare measurements of visual acuity for 10 subjects (11 eyes tested) in the presence of natural ocular aberrations and different degrees of induced defocus, with the predictions given by a Bayesian model customized with aberrometric data of the eye. The absolute predictions of the model, without any adjustment, show good agreement with the experimental data, in terms of correlation and absolute error. The efficiency of the model is discussed in comparison with image quality metrics and other customized visual process models. An analysis of the importance and customization of each stage of the model is also given; it stresses the potential high predictive power from precise modeling of ocular and neural transfer functions.
Bayesian inference for generalized linear models for spiking neurons
Directory of Open Access Journals (Sweden)
Sebastian Gerwinn
2010-05-01
Full Text Available Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.
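The contrast the abstract draws between the posterior mean and the MAP estimate can be seen even without Expectation Propagation. The toy conjugate Beta-Bernoulli sketch below is our own illustration, not the paper's GLM machinery:

```python
def beta_posterior_summaries(successes, failures, a0=1.0, b0=1.0):
    """Beta(a0, b0) prior + Bernoulli counts -> Beta posterior; returns (mean, MAP)."""
    a, b = a0 + successes, b0 + failures
    mean = a / (a + b)                 # posterior mean point estimate
    mode = (a - 1) / (a + b - 2)       # posterior mode (MAP); valid for a, b > 1
    return mean, mode

# 2 spikes in 10 trials under a flat prior: posterior is Beta(3, 9)
mean, map_est = beta_posterior_summaries(2, 8)
# mean = 0.25 while MAP = 0.2: the two Bayesian point estimates disagree
```

For a skewed posterior the two estimates can differ substantially, which is why the choice of point estimate matters alongside the choice of prior.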
2nd Bayesian Young Statisticians Meeting
Bitto, Angela; Kastner, Gregor; Posekany, Alexandra
2015-01-01
The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...
Stability prediction of berm breakwater using neural network
Digital Repository Service at National Institute of Oceanography (India)
Mandal, S.; Rao, S.; Manjunath, Y.R.
In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of breakwater. Training...
Wave transmission prediction of multilayer floating breakwater using neural network
Digital Repository Service at National Institute of Oceanography (India)
Mandal, S.; Patil, S.G.; Hegde, A.V.
In the present study, an artificial neural network method has been applied for wave transmission prediction of multilayer floating breakwater. Two neural network models are constructed based on the parameters which influence the wave transmission...
Bayesian refinement of protein functional site matching
Directory of Open Access Journals (Sweden)
Gold Nicola D
2007-07-01
Full Text Available Abstract Background Matching functional sites is a key problem for the understanding of protein function and evolution. The commonly used graph theoretic approach, and other related approaches, require adjustment of a matching distance threshold a priori according to the noise in atomic positions. This is difficult to pre-determine when matching sites related by varying evolutionary distances and crystallographic precision. Furthermore, sometimes the graph method is unable to identify alternative but important solutions in the neighbourhood of the distance based solution because of strict distance constraints. We consider the Bayesian approach to improve graph based solutions. In principle this approach applies to other methods with strict distance matching constraints. The Bayesian method can flexibly incorporate all types of prior information on specific binding sites (e.g. amino acid types), in contrast to combinatorial formulations. Results We present a new meta-algorithm for matching protein functional sites (active sites and ligand binding sites) based on an initial graph matching followed by refinement using a Markov chain Monte Carlo (MCMC) procedure. This procedure is an innovative extension to our recent work. The method accounts for the 3-dimensional structure of the site as well as the physico-chemical properties of the constituent amino acids. The MCMC procedure can lead to a significant increase in the number of significant matches compared to the graph method as measured independently by rigorously derived p-values. Conclusion The MCMC refinement step is able to significantly improve graph based matches. We apply the method to matching NAD(P)H binding sites within single Rossmann fold families, between different families in the same superfamily, and in different folds. Within families sites are often well conserved, but there are examples where significant shape based matches do not retain similar amino acid chemistry, indicating that
Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants
Jin, Ick Hoon
2014-03-01
Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.
Henry de-Graft Acquah; Joseph Acquah
2013-01-01
Alternative formulations of the Bayesian Information Criteria provide a basis for choosing between competing methods for detecting price asymmetry. However, very little is understood about their performance in the asymmetric price transmission modelling framework. In addressing this issue, this paper introduces and applies parametric bootstrap techniques to evaluate the ability of Bayesian Information Criteria (BIC) and Draper's Information Criteria (DIC) in discriminating between alternative...
Bayesian parameter estimation for nonlinear modelling of biological pathways
Directory of Open Access Journals (Sweden)
Ghasemi Omid
2011-12-01
Full Text Available Abstract Background The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
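The Runge-Kutta transformation from a differential to a difference equation can be sketched for a single Hill-type state variable. This is a hedged illustration: the rate form, parameter values, and function names are assumptions, not the authors' pathway model:

```python
def hill_rate(u, vmax, k, n):
    """Hill equation reaction rate: vmax * u^n / (k^n + u^n)."""
    return vmax * u**n / (k**n + u**n)

def rk4_step(x, u, dt, vmax, k, n, decay):
    """One RK4 step turning dx/dt = hill(u) - decay*x into a difference equation x_{t+1} = g(x_t)."""
    f = lambda xv: hill_rate(u, vmax, k, n) - decay * xv
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# iterate the difference equation toward the steady state x* = hill(u) / decay
x = 0.0
for _ in range(2000):
    x = rk4_step(x, u=1.0, dt=0.05, vmax=2.0, k=1.0, n=2.0, decay=0.5)
# with hill(1) = 1 and decay = 0.5, the trajectory settles at x* = 2
```

Once the dynamics are in this discrete form, each predicted state depends only on the previous state and the parameters, which is what lets an MCMC sampler evaluate a likelihood over the time series.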
Tools for investigating the prior distribution in Bayesian hydrology
Tang, Yating; Marshall, Lucy; Sharma, Ashish; Smith, Tyler
2016-07-01
Bayesian inference is one of the most popular tools for uncertainty analysis in hydrological modeling. While much emphasis has been placed on the selection of appropriate likelihood functions within Bayesian hydrology, few researchers have evaluated the importance of the prior distribution in deriving appropriate posterior distributions. This paper describes tools for the evaluation of parameter sensitivity to the prior distribution to provide guidelines for defining meaningful priors. The tools described here consist of two measurements, the Kullback-Leibler Divergence (KLD) and the prior information elasticity. The Kullback-Leibler Divergence (KLD) is applied to calculate differences between the prior and posterior distributions for different cases. The prior information elasticity is then used to quantify the responsiveness of the KLD values to the change of prior distributions and length of available data. The tools are demonstrated via a Bayesian framework using an MCMC algorithm for a conceptual hydrologic model with both synthetic and real cases. The results of the application of this toolkit suggest the prior distribution can have a significant impact on the posterior distribution and should be more routinely assessed in hydrologic studies.
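A minimal sketch of the KLD measurement on discretized prior and posterior distributions; the bin probabilities below are invented for illustration:

```python
import math

def kld(p, q):
    """Kullback-Leibler divergence KL(p || q) between two discretized densities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

prior = [0.25, 0.25, 0.25, 0.25]        # flat prior over four parameter bins
posterior = [0.10, 0.20, 0.40, 0.30]    # calibration data has concentrated the mass
shift = kld(posterior, prior)           # > 0: how far the data moved us off the prior
```

Tracking how `shift` responds as the prior or the data length changes is, in spirit, what the paper's prior information elasticity quantifies.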
Bayesian networks for evaluation of evidence from forensic entomology.
Andersson, M Gunnar; Sundström, Anders; Lindström, Anders
2013-09-01
In the aftermath of a CBRN incident, there is an urgent need to reconstruct events in order to bring the perpetrators to court and to take preventive actions for the future. The challenge is to discriminate, based on available information, between alternative scenarios. Forensic interpretation is used to evaluate to what extent results from the forensic investigation favor the prosecutors' or the defendants' arguments, using the framework of Bayesian hypothesis testing. Recently, several new scientific disciplines have been used in a forensic context. In the AniBioThreat project, the framework was applied to veterinary forensic pathology, tracing of pathogenic microorganisms, and forensic entomology. Forensic entomology is an important tool for estimating the postmortem interval in, for example, homicide investigations as a complement to more traditional methods. In this article we demonstrate the applicability of the Bayesian framework for evaluating entomological evidence in a forensic investigation through the analysis of a hypothetical scenario involving suspect movement of carcasses from a clandestine laboratory. Probabilities of different findings under the alternative hypotheses were estimated using a combination of statistical analysis of data, expert knowledge, and simulation, and entomological findings are used to update the beliefs about the prosecutors' and defendants' hypotheses and to calculate the value of evidence. The Bayesian framework proved useful for evaluating complex hypotheses using findings from several insect species, accounting for uncertainty about development rate, temperature, and precolonization. The applicability of the forensic statistic approach to evaluating forensic results from a CBRN incident is discussed.
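The value-of-evidence calculation described above reduces to a likelihood ratio that multiplies the prior odds. A hedged sketch with invented probabilities (not the study's estimates):

```python
def value_of_evidence(p_e_given_hp, p_e_given_hd):
    """Likelihood ratio V = P(E | Hp) / P(E | Hd) for one item of evidence."""
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds, *likelihood_ratios):
    """Odds form of Bayes' rule: multiply prior odds by each item of evidence."""
    odds = prior_odds
    for v in likelihood_ratios:
        odds *= v
    return odds

# hypothetical numbers: the entomological findings are 8x more probable
# under the prosecution hypothesis than under the defence hypothesis
v = value_of_evidence(0.4, 0.05)
odds = posterior_odds(1.0, v)   # prior odds 1:1 -> posterior odds 8:1
```

Combining several insect species amounts to multiplying in one likelihood ratio per (conditionally independent) finding.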
Two-Stage Bayesian Model Averaging in Endogenous Variable Models.
Lenkoski, Alex; Eicher, Theo S; Raftery, Adrian E
2014-01-01
Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
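The 2SLS estimator that 2SBMA extends has a simple scalar special case (one instrument, one endogenous regressor). The sketch below uses toy data of our own, not the paper's development-accounting application:

```python
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Sample covariance."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def iv_slope(z, x, y):
    """Single-instrument IV slope, the scalar special case of 2SLS: cov(z,y)/cov(z,x)."""
    return cov(z, y) / cov(z, x)

# toy data with structural slope 2 (noise omitted so the estimate is exact)
z = [1.0, 2.0, 3.0, 4.0]        # instrument
x = [1.1, 2.0, 2.9, 4.1]        # endogenous regressor, driven by z
y = [2.2, 4.0, 5.8, 8.2]        # outcome: y = 2 * x
beta_iv = iv_slope(z, x, y)
```

2SBMA's contribution sits on top of this: it averages over which instruments enter the first stage and which covariates enter the second, instead of committing to one full specification.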
Bayesian regression model for seasonal forecast of precipitation over Korea
Jo, Seongil; Lim, Yaeji; Lee, Jaeyong; Kang, Hyun-Suk; Oh, Hee-Seok
2012-08-01
In this paper, we apply three different Bayesian methods to the seasonal forecasting of the precipitation in a region around Korea (32.5°N-42.5°N, 122.5°E-132.5°E). We focus on the precipitation of summer season (June-July-August; JJA) for the period of 1979-2007 using the precipitation produced by the Global Data Assimilation and Prediction System (GDAPS) as predictors. Through cross-validation, we demonstrate improvement for seasonal forecast of precipitation in terms of root mean squared error (RMSE) and linear error in probability space score (LEPS). The proposed methods yield RMSE of 1.09 and LEPS of 0.31 between the predicted and observed precipitations, while the prediction using GDAPS output only produces RMSE of 1.20 and LEPS of 0.33 for CPC Merged Analyzed Precipitation (CMAP) data. For station-measured precipitation data, the RMSE and LEPS of the proposed Bayesian methods are 0.53 and 0.29, while GDAPS output is 0.66 and 0.33, respectively. The methods seem to capture the spatial pattern of the observed precipitation. The Bayesian paradigm incorporates the model uncertainty as an integral part of modeling in a natural way. We provide a probabilistic forecast integrating model uncertainty.
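The RMSE comparison reported above is straightforward to reproduce in form (the LEPS score is omitted here); the forecast and observation numbers below are invented for illustration:

```python
import math

def rmse(pred, obs):
    """Root mean squared error between forecasts and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

obs = [3.0, 5.0, 4.0, 6.0]           # observed seasonal precipitation (arbitrary units)
raw_model = [3.8, 4.1, 5.2, 5.0]     # uncorrected model output, GDAPS-style
bayes_post = [3.3, 4.8, 4.4, 5.7]    # hypothetical Bayesian post-processed forecast
improved = rmse(bayes_post, obs) < rmse(raw_model, obs)
```

The paper's comparison (1.09 vs 1.20 against CMAP, 0.53 vs 0.66 against stations) is exactly this kind of paired RMSE evaluation, done under cross-validation.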
A Bayesian Reflection on Surfaces
Directory of Open Access Journals (Sweden)
David R. Wolf
1999-10-01
Full Text Available Abstract: The topic of this paper is a novel Bayesian continuous-basis field representation and inference framework. Within this paper several problems are solved: the maximally informative inference of continuous-basis fields, that is, where the basis for the field is itself a continuous object and not representable in a finite manner; the tradeoff between accuracy of representation in terms of information learned, and memory or storage capacity in bits; the approximation of probability distributions so that a maximal amount of information about the object being inferred is preserved; and an information theoretic justification for multigrid methodology. The maximally informative field inference framework is described in full generality and denoted the Generalized Kalman Filter. The Generalized Kalman Filter allows the update of field knowledge from previous knowledge at any scale, and new data, to new knowledge at any other scale. An application example instance, the inference of continuous surfaces from measurements (for example, camera image data), is presented.
Quantum Bayesianism at the Perimeter
Fuchs, Christopher A
2010-01-01
The author summarizes the Quantum Bayesian viewpoint of quantum mechanics, developed originally by C. M. Caves, R. Schack, and himself. It is a view crucially dependent upon the tools of quantum information theory. Work at the Perimeter Institute for Theoretical Physics continues the development and is focused on the hard technical problem of finding a good representation of quantum mechanics purely in terms of probabilities, without amplitudes or Hilbert-space operators. The best candidate representation involves a mysterious entity called a symmetric informationally complete quantum measurement. Contemplation of it gives a way of thinking of the Born Rule as an addition to the rules of probability theory, applicable when one gambles on the consequences of interactions with physical systems. The article ends by outlining some directions for future work.
Hedging Strategies for Bayesian Optimization
Brochu, Eric; de Freitas, Nando
2010-01-01
Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It is able to do this by sampling the objective using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several different parameterized acquisition functions in the literature, and it is often unclear which one to use. Instead of using a single acquisition function, we adopt a portfolio of acquisition functions governed by an online multi-armed bandit strategy. We describe the method, which we call GP-Hedge, and show that this method almost always outperforms the best individual acquisition function.
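A minimal sketch of the Hedge-style portfolio selection behind GP-Hedge: each acquisition function is an arm, and arms are sampled with softmax probabilities over cumulative gains. Note the reward rule here is simplified (in the paper, every arm's gain is updated with the GP posterior mean at that arm's own nominated point); all names and reward values are illustrative:

```python
import math
import random

def hedge_select(gains, eta=1.0, rng=random.random):
    """Sample an arm index with probability proportional to exp(eta * cumulative gain)."""
    weights = [math.exp(eta * g) for g in gains]
    r, acc = rng() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

# hypothetical portfolio of three acquisition functions (say EI, PI, UCB)
gains = [0.0, 0.0, 0.0]
for _ in range(100):
    arm = hedge_select(gains)
    reward = [0.2, 0.5, 0.1][arm]   # stand-in for the GP mean at the proposed point
    gains[arm] += reward            # simplified: only the played arm is rewarded
```

Over the loop, the arm with the highest reward accumulates gain and is selected with increasing probability, which is the "almost always at least as good as the best single acquisition" behaviour the paper reports.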
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks, with application areas such as troubleshooting and data mining under uncertainty. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his/her level of understanding. The techniques and methods presented for knowledge elicitation, model construction and verification, modeling techniques and tricks, learning models from data, and analyses of models have all been developed and refined.
Bayesian Networks and Influence Diagrams
DEFF Research Database (Denmark)
Kjærulff, Uffe Bro; Madsen, Anders Læsø
Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new sections, in addition to fully-updated examples, tables, figures, and a revised appendix. Intended primarily for practitioners, this book does not require sophisticated mathematical skills or deep understanding of the underlying theory and methods, nor does it discuss alternative technologies for reasoning under uncertainty. The theory and methods presented are illustrated through more than 140 examples, and exercises are included for the reader to check his or her level of understanding. The techniques and methods presented on model construction and verification, modeling techniques and tricks, and learning models from data have all been developed and refined.
State Information in Bayesian Games
Cuff, Paul
2009-01-01
Two-player zero-sum repeated games are well understood. Computing the value of such a game is straightforward. Additionally, if the payoffs are dependent on a random state of the game known to one, both, or neither of the players, the resulting value of the game has been analyzed under the framework of Bayesian games. This investigation considers the optimal performance in a game when a helper is transmitting state information to one of the players. Encoding information for an adversarial setting (game) requires a different result than rate-distortion theory provides. Game theory has accentuated the importance of randomization (mixed strategy), which does not find a significant role in most communication modems and source coding codecs. Higher rates of communication, used in the right way, allow the message to include the necessary random component useful in games.
Multiview Bayesian Correlated Component Analysis
DEFF Research Database (Denmark)
Kamronn, Simon Due; Poulsen, Andreas Trier; Hansen, Lars Kai
2015-01-01
Correlated component analysis as proposed by Dmochowski, Sajda, Dias, and Parra (2012) is a tool for investigating brain process similarity in the responses to multiple views of a given stimulus. Correlated components are identified under the assumption that the involved spatial networks are identical. Here we propose a hierarchical probabilistic model that can infer the level of universality in such multiview data, from completely unrelated representations, corresponding to canonical correlation analysis, to identical representations as in correlated component analysis. This new model, which we denote Bayesian correlated component analysis, evaluates favorably against three relevant algorithms in simulated data. A well-established benchmark EEG data set is used to further validate the new model and infer the variability of spatial representations across multiple subjects.
Elvira, Clément; Dobigeon, Nicolas
2015-01-01
Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications such as digital communications. Anti-sparse regularization can be naturally expressed through an $\\ell_{\\infty}$-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties as well as three random variate generators for this distribution are derived. Then this probability distribution is used as a prior to promote anti-sparsity in a Gaussian linear inverse problem, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo (MCMC) algorithms are proposed to generate samples according to the posterior distribution. The first one is a standard Gibbs sampler. The seco...
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online. PMID:22523437
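The rounded-kernel idea can be sketched for the Gaussian case: a count pmf is obtained by slicing a latent normal at integer thresholds. This uses one common threshold convention and shows only the kernel, not the Gibbs sampler; the mixture components below are invented:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) from rounding a latent N(mu, sigma^2) at thresholds (-inf, 0, 1, 2, ...)."""
    hi = phi((j - mu) / sigma)
    lo = 0.0 if j == 0 else phi((j - 1 - mu) / sigma)
    return hi - lo

def mixture_pmf(j, comps):
    """Mixture of rounded Gaussian kernels: comps = [(weight, mu, sigma), ...]."""
    return sum(w * rounded_gaussian_pmf(j, m, s) for w, m, s in comps)

# a two-component mixture can be underdispersed or multimodal,
# which a Poisson mixture cannot capture
p3 = mixture_pmf(3, [(0.5, 2.0, 0.5), (0.5, 10.0, 2.0)])
```

Because the latent kernel's variance is free, a single rounded Gaussian can already have variance below its mean, addressing the restriction of Poisson mixtures noted in the abstract.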
Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring.
Carroll, Carlos; Johnson, Devin S; Dunk, Jeffrey R; Zielinski, William J
2010-12-01
Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their data's spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and invertebrate taxa of conservation concern (Church's sideband snails [Monadenia churchi], red tree voles [Arborimus longicaudus], and Pacific fishers [Martes pennanti pacifica]) that provide examples of a range of distributional extents and dispersal abilities. We used presence-absence data derived from regional monitoring programs to develop models with both landscape and site-level environmental covariates. We used Markov chain Monte Carlo algorithms and a conditional autoregressive or intrinsic conditional autoregressive model framework to fit spatial models. The fit of Bayesian spatial models was between 35 and 55% better than the fit of nonspatial analogue models. Bayesian spatial models outperformed analogous models developed with maximum entropy (Maxent) methods. Although the best spatial and nonspatial models included similar environmental variables, spatial models provided estimates of residual spatial effects that suggested how ecological processes might structure distribution patterns. Spatial models built from presence-absence data improved fit most for localized endemic species with ranges constrained by poorly known biogeographic factors and for widely distributed species suspected to be strongly affected by unmeasured environmental variables or population processes. By treating spatial effects as a variable of interest rather than a nuisance, hierarchical Bayesian spatial models, especially when they are based on a common broad-scale spatial lattice (here the national Forest Inventory and Analysis grid of 24 km(2) hexagons), can increase the relevance of habitat models to multispecies
Flood alert system based on bayesian techniques
Gulliver, Z.; Herrero, J.; Viesca, C.; Polo, M. J.
2012-04-01
The problem of floods in the Mediterranean regions is closely linked to the occurrence of torrential storms in dry regions, where even the water supply relies on adequate water management. Like other Mediterranean basins in Southern Spain, the Guadalhorce River Basin is a medium sized watershed (3856 km2) where recurrent yearly floods occur, mainly in autumn and spring, driven by cold front phenomena. The torrential character of the precipitation in such small basins, with a concentration time of less than 12 hours, produces flash flood events with catastrophic effects over the city of Malaga (600000 inhabitants). From this fact arises the need for specific alert tools which can forecast these kinds of phenomena. Bayesian networks (BN) have emerged in the last decade as a very useful and reliable computational tool for water resources and for the decision making process. The joint use of Artificial Neural Networks (ANN) and BN has allowed us to recognize and simulate the two different types of hydrological behaviour in the basin: natural and regulated. This led to the establishment of causal relationships between precipitation, discharge from upstream reservoirs, and water levels at a gauging station. It was seen that a recurrent ANN model working at an hourly scale, considering daily precipitation and the two previous hourly values of reservoir discharge and water level, could provide R2 values of 0.86. BN results slightly improve this fit, and additionally contribute uncertainty estimates to the prediction. In our current work to design a weather warning service based on Bayesian techniques, the first steps were carried out through an analysis of the correlations between the water level and rainfall at certain representative points in the basin, along with the upstream reservoir discharge. The lower correlation found between precipitation and water level emphasizes the highly regulated condition of the stream. The autocorrelations of the variables were also
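The correlation and autocorrelation diagnostics mentioned above can be sketched in a few lines; the water-level series below is invented for illustration:

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def autocorr(series, lag):
    """Lag-k autocorrelation: correlation of the series with itself shifted by k."""
    return pearson(series[:-lag], series[lag:])

# hypothetical hourly water levels at a gauging station
level = [1.0, 1.2, 1.5, 1.9, 2.0, 1.8, 1.6, 1.3, 1.1, 1.0]
r1 = autocorr(level, 1)   # smooth, regulated series -> strong lag-1 autocorrelation
```

A high water-level autocorrelation alongside a weak precipitation-to-level correlation is the signature of a regulated stream, as the abstract notes.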
Bayesian models a statistical primer for ecologists
Hobbs, N Thompson
2015-01-01
Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark
2006-01-01
We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing successful compilation and efficient inference on relational Bayesian networks, whose PRIMULA-generated propositional instances have thousands of variables, and whose jointrees have clusters
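The compile-then-evaluate idea can be illustrated on a two-node network: inference amounts to evaluating the network polynomial in evidence indicators lambda and parameters theta. This is a textbook-style sketch of the underlying representation, not PRIMULA's actual compilation:

```python
def network_polynomial(theta_a, theta_b_given_a, lam_a, lam_b):
    """Network polynomial of a two-node net A -> B; its value is P(evidence)."""
    return sum(
        theta_a[a] * lam_a[a] * theta_b_given_a[a][b] * lam_b[b]
        for a in range(2) for b in range(2)
    )

theta_a = [0.6, 0.4]                       # P(A)
theta_b_given_a = [[0.9, 0.1], [0.2, 0.8]] # P(B | A)

# evidence B = 1: clamp lambda_b = [0, 1]; A unobserved: lambda_a = [1, 1]
p_b1 = network_polynomial(theta_a, theta_b_given_a, [1, 1], [0, 1])
# P(B=1) = 0.6*0.1 + 0.4*0.8 = 0.38
```

A compiled arithmetic circuit is a factored form of exactly this polynomial; evaluating it gives the probability of evidence, and its partial derivatives with respect to the indicators yield posterior marginals.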
DIAMONDS: A new Bayesian nested sampling tool
Corsaro, Enrico
2015-01-01
In the context of high-quality asteroseismic data provided by the NASA Kepler mission, we developed a new code, termed Diamonds (high-DImensional And multi-MOdal NesteD Sampling), for fast Bayesian parameter estimation and model comparison by means of the Nested Sampling Monte Carlo (NSMC) algorithm, an efficient and powerful method well suited to high-dimensional problems (like the peak bagging analysis of solar-like oscillations) and multi-modal problems (i.e. problems with multiple solutions). We applied the code to the peak bagging analysis of solar-like oscillations observed in a challenging F-type star. By means of Diamonds one is able to detect the different backgrounds in the power spectrum of the star (e.g. stellar granulation and faculae activity) and to determine whether one or two oscillation peaks can be identified. In addition, we demonstrate a novel approach to peak bagging based on multimodality, which significantly reduces the number of free parameters involved in th...
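The nested sampling idea behind such codes can be sketched in miniature (this is not the Diamonds code): live points are drawn from the prior, the worst point's likelihood is repeatedly turned into an evidence increment while the prior volume shrinks geometrically, and the worst point is replaced by a new draw above the likelihood threshold. The toy problem, prior, and naive rejection-based constrained sampler below are all invented for illustration; real implementations use far smarter proposals.

```python
import math, random
random.seed(1)

def loglike(x):          # standard normal likelihood
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def nested_sampling(n_live=100, n_iter=800):
    # Live points drawn from a Uniform(-5, 5) prior.
    live = [random.uniform(-5, 5) for _ in range(n_live)]
    logL = [loglike(x) for x in live]
    logZ = -math.inf
    logX = 0.0                        # log prior volume remaining
    for i in range(n_iter):
        worst = min(range(n_live), key=lambda k: logL[k])
        logX_new = -(i + 1) / n_live
        # weight = L_worst * (X_old - X_new), accumulated in log space
        logw = logL[worst] + logX + math.log1p(-math.exp(logX_new - logX))
        hi, lo = max(logZ, logw), min(logZ, logw)
        logZ = hi + math.log1p(math.exp(lo - hi))
        logX = logX_new
        # Replace the worst point with a prior draw above the threshold
        # (plain rejection sampling, adequate only for this toy case).
        while True:
            x = random.uniform(-5, 5)
            if loglike(x) > logL[worst]:
                live[worst], logL[worst] = x, loglike(x)
                break
    return logZ

Z = math.exp(nested_sampling())
print(Z)   # ≈ 0.1, the integral of N(0,1) against the Uniform(-5,5) prior
```

The analytic evidence here is close to 0.1, so the estimate can be checked directly.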
Bayesian object classification of gold nanoparticles
Konomi, Bledar A.
2013-06-01
The properties of materials synthesized with nanoparticles (NPs) are highly correlated to the sizes and shapes of the nanoparticles. The transmission electron microscopy (TEM) imaging technique can be used to measure the morphological characteristics of NPs, which can be simple circles or more complex irregular polygons with varying degrees of scales and sizes. A major difficulty in analyzing the TEM images is the overlapping of objects, having different morphological properties with no specific information about the number of objects present. Furthermore, the objects lying along the boundary render automated image analysis much more difficult. To overcome these challenges, we propose a Bayesian method based on the marked-point process representation of the objects. We derive models, both for the marks which parameterize the morphological aspects and the points which determine the location of the objects. The proposed model is an automatic image segmentation and classification procedure, which simultaneously detects the boundaries and classifies the NPs into one of the predetermined shape families. We execute the inference by sampling the posterior distribution using Markov chain Monte Carlo (MCMC) since the posterior is doubly intractable. We apply our novel method to several TEM imaging samples of gold NPs, producing the needed statistical characterization of their morphology. © Institute of Mathematical Statistics, 2013.
Directory of Open Access Journals (Sweden)
Ildikó Ungvári
Full Text Available Genetic studies indicate a high number of potential factors related to asthma. Based on earlier linkage analyses we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods, and we also applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). This method uses a Bayesian network representation to provide a detailed characterization of the relevance of factors, such as joint significance, the type of dependency, and multi-target aspects. We estimated posteriors for these relations within the Bayesian statistical framework, in order to estimate whether a variable is directly relevant or its association is only mediated. With frequentist methods one SNP (rs3751464 in the FRMD6 gene) provided evidence for an association with asthma (OR = 1.43 (1.2-1.8); p = 3×10⁻⁴). The possible role of the FRMD6 gene in asthma was also confirmed in an animal model and in human asthmatics. In the BN-BMLA analysis altogether 5 SNPs in 4 genes were found relevant in connection with the asthma phenotype: PRPF19 on chromosome 11, and FRMD6, PTGER2 and PTGDR on chromosome 14. In a subsequent step a partial dataset containing rhinitis and further clinical parameters was used, which allowed the analysis of the relevance of SNPs for asthma and multiple targets. These analyses suggested that SNPs in the AHNAK and MS4A2 genes were indirectly associated with asthma. This paper indicates that BN-BMLA explores the relevant factors more comprehensively than traditional statistical methods and extends the scope of strong-relevance-based methods to include partial relevance, global characterization of relevance and multi-target relevance.
Handbook on neural information processing
Maggini, Marco; Jain, Lakhmi
2013-01-01
This handbook presents some of the most recent topics in neural information processing, covering both theoretical concepts and practical applications. The contributions include: Deep architectures Recurrent, recursive, and graph neural networks Cellular neural networks Bayesian networks Approximation capabilities of neural networks Semi-supervised learning Statistical relational learning Kernel methods for structured data Multiple classifier systems Self organisation and modal learning Applications to ...
Computational modeling of neural activities for statistical inference
Kolossa, Antonio
2016-01-01
This authored monograph supplies empirical evidence for the Bayesian brain hypothesis by modeling event-related potentials (ERP) of the human electroencephalogram (EEG) during successive trials in cognitive tasks. The employed observer models are useful to compute probability distributions over observable events and hidden states, depending on which are present in the respective tasks. Bayesian model selection is then used to choose the model which best explains the ERP amplitude fluctuations. Thus, this book constitutes a decisive step towards a better understanding of the neural coding and computing of probabilities following Bayesian rules. The target audience primarily comprises research experts in the field of computational neurosciences, but the book may also be beneficial for graduate students who want to specialize in this field. .
The Diagnosis of Reciprocating Machinery by Bayesian Networks
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2003-01-01
A Bayesian network is a reasoning tool based on probability theory and has many advantages that other reasoning tools do not have. This paper discusses the basic theory of Bayesian networks and studies the problems in constructing them. The paper also constructs a Bayesian diagnosis network of a reciprocating compressor. The example supports the conclusion that Bayesian diagnosis networks can diagnose reciprocating machinery effectively.
Gupta, Sanjay
2002-01-01
This paper initiates the study of quantum computing within the constraints of using a polylogarithmic (O(log^k n), k ≥ 1) number of qubits and a polylogarithmic number of computation steps. The current research in the literature has focused on using a polynomial number of qubits. A new mathematical model of computation called Quantum Neural Networks (QNNs) is defined, building on Deutsch's model of quantum computational network. The model introduces a nonlinear and irreversible gate, similar to the speculative operator defined by Abrams and Lloyd. The precise dynamics of this operator are defined and, while giving examples in which nonlinear Schrödinger's equations are applied, we speculate on its possible implementation. The many practical problems associated with the current model of quantum computing are alleviated in the new model. It is shown that QNNs of logarithmic size and constant depth have the same computational power as threshold circuits, which are used for modeling neural network...
Bayesian Uncertainty Analyses Via Deterministic Model
Krzysztofowicz, R.
2001-05-01
Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
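The BPO idea can be sketched in its simplest conjugate form (a hedged illustration, not Krzysztofowicz's general formulation, which handles arbitrary distributions via quantile transforms): the predictand W has a prior "climatic" normal distribution, and the deterministic model output Y is linked to W by a fitted linear-Gaussian likelihood Y | W=w ~ N(a + b·w, s2). All numbers below are invented.

```python
def bpo_posterior(mu0, var0, a, b, s2, y):
    # Conjugate normal update: posterior of the predictand W given model output y.
    post_var = 1.0 / (1.0 / var0 + b * b / s2)
    post_mean = post_var * (mu0 / var0 + b * (y - a) / s2)
    return post_mean, post_var

# Prior W ~ N(10, 4); calibrated likelihood Y = 1 + 0.9 W + N(0, 1).
mean, var = bpo_posterior(mu0=10.0, var0=4.0, a=1.0, b=0.9, s2=1.0, y=10.0)
print(mean, var)   # posterior variance is smaller than the prior variance
```

Here the observed output y = 10 is exactly what the model would produce at the prior mean, so the posterior mean stays at 10 while the variance shrinks, showing how the model output reduces, but never eliminates, the total uncertainty.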
Learning Bayesian networks for discrete data
Liang, Faming
2009-02-01
Bayesian networks have received much attention in the recent literature. In this article, we propose an approach to learn Bayesian networks using the stochastic approximation Monte Carlo (SAMC) algorithm. Our approach has two nice features. Firstly, it possesses a self-adjusting mechanism and thus essentially avoids the local-trap problem suffered by conventional MCMC simulation-based approaches to learning Bayesian networks. Secondly, it falls into the class of dynamic importance sampling algorithms; the network features can be inferred by dynamically weighted averaging of the samples generated in the learning process, and the resulting estimates can have much lower variation than single-model-based estimates. The numerical results indicate that our approach can mix much faster over the space of Bayesian networks than the conventional MCMC simulation-based approaches. © 2008 Elsevier B.V. All rights reserved.
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
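The point that finite model uncertainty reduces to parameter uncertainty can be made concrete with a minimal sketch: the "parameter" is a discrete model index, updated by Bayes' theorem. The priors and marginal likelihoods below are invented for illustration.

```python
def posterior_model_probs(priors, likelihoods):
    # Bayes' theorem over a finite model set: P(M_k | data) ∝ P(data | M_k) P(M_k)
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

priors = [0.5, 0.3, 0.2]          # prior beliefs over three alternative models
likelihoods = [0.02, 0.10, 0.01]  # marginal likelihood of the data under each
post = posterior_model_probs(priors, likelihoods)
print(post)   # the middle model dominates despite its smaller prior
```

Predictions are then averaged over the models, weighted by these posterior probabilities, which is exactly how one would treat any other discrete uncertain parameter.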
Bayesian Control for Concentrating Mixed Nuclear Waste
Welch, Robert L.; Smith, Clayton
2013-01-01
A control algorithm for batch processing of mixed waste is proposed based on conditional Gaussian Bayesian networks. The network is compiled during batch staging for real-time response to sensor input.
An Intuitive Dashboard for Bayesian Network Inference
International Nuclear Information System (INIS)
Current Bayesian network software packages provide good graphical interfaces for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user interaction more intuitive and friendly. In addition to performing various types of inferences, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
An Intuitive Dashboard for Bayesian Network Inference
Reddy, Vikas; Charisse Farr, Anna; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K. D. V.
2014-03-01
Current Bayesian network software packages provide good graphical interfaces for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user interaction more intuitive and friendly. In addition to performing various types of inferences, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
Directory of Open Access Journals (Sweden)
Schwindling Jerome
2010-04-01
Full Text Available This course presents an overview of the concepts of neural networks and their application in the framework of high-energy physics analyses. After a brief introduction to the concept of neural networks, the concept is explained in the frame of neurobiology, introducing the multi-layer perceptron, learning, and their use as data classifiers. The concept is then presented in a second part in more detail using a mathematical approach, focusing on typical use cases faced in particle physics. Finally, the last part presents the best way to use such statistical tools in view of event classifiers, putting the emphasis on the setup of the multi-layer perceptron. The full article (15 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.
A Bayesian Network View on Nested Effects Models
Directory of Open Access Journals (Sweden)
Fröhlich Holger
2009-01-01
Full Text Available Nested effects models (NEMs) are a class of probabilistic models that were designed to reconstruct a hidden signalling structure from a large set of observable effects caused by active interventions into the signalling pathway. We give a more flexible formulation of NEMs in the language of Bayesian networks. Our framework constitutes a natural generalization of the original NEM model, since it explicitly states the assumptions that are tacitly underlying the original version. Our approach gives rise to new learning methods for NEMs, which have been implemented in the Bioconductor package nem. We validate these methods in a simulation study and apply them to a synthetic lethality dataset in yeast.
Bayesian estimation of keyword confidence in Chinese continuous speech recognition
Institute of Scientific and Technical Information of China (English)
HAO Jie; LI Xing
2003-01-01
In a syllable-based speaker-independent Chinese continuous speech recognition system based on classical Hidden Markov Model (HMM), a Bayesian approach of keyword confidence estimation is studied, which utilizes both acoustic layer scores and syllable-based statistical language model (LM) score. The Maximum a posteriori (MAP) confidence measure is proposed, and the forward-backward algorithm calculating the MAP confidence scores is deduced. The performance of the MAP confidence measure is evaluated in keyword spotting application and the experiment results show that the MAP confidence scores provide high discriminability for keyword candidates. Furthermore, the MAP confidence measure can be applied to various speech recognition applications.
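The forward-backward computation underlying such posterior confidence scores can be sketched on a toy HMM (an illustration only; the transition, emission, and observation values below are invented, not the speech system's models). The normalized quantity gamma[t, s] plays the role of the MAP-style posterior score.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probabilities
B = np.array([[0.7, 0.3], [0.1, 0.9]])   # emission probabilities (2 symbols)
pi = np.array([0.6, 0.4])                # initial state distribution
obs = [0, 0, 1, 1]

def forward_backward(A, B, pi, obs):
    T, S = len(obs), len(pi)
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                         # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):                # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)   # P(state_t | all obs)

gamma = forward_backward(A, B, pi, obs)
print(gamma.round(3))   # each row sums to 1; early observations favor state 0
```

In the keyword-confidence setting, the same product of forward and backward scores, normalized by the total observation probability, gives the posterior probability of a keyword hypothesis given the whole utterance.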
A Bayesian approach to simultaneously quantify assignments and linguistic uncertainty
Energy Technology Data Exchange (ETDEWEB)
Chavez, Gregory M [Los Alamos National Laboratory; Booker, Jane M [BOOKER SCIENTIFIC FREDERICKSBURG; Ross, Timothy J [UNM
2010-10-07
Subject matter expert assessments can include both assignment and linguistic uncertainty. This paper examines assessments containing linguistic uncertainty associated with a qualitative description of a specific state of interest and the assignment uncertainty associated with assigning a qualitative value to that state. A Bayesian approach is examined to simultaneously quantify both assignment and linguistic uncertainty in the posterior probability. The approach is applied to a simplified damage assessment model involving both assignment and linguistic uncertainty. The utility of the approach and the conditions under which the approach is feasible are examined and identified.
CFAR Detection from Noncoherent Radar Echoes Using Bayesian Theory
Directory of Open Access Journals (Sweden)
Wataru Suganuma
2010-01-01
Full Text Available We propose a new constant false alarm rate (CFAR detection method from noncoherent radar echoes, considering heterogeneous sea clutter. It applies the Bayesian theory for adaptive estimation of the local clutter statistical distribution in the cell under test. The detection technique can be readily implemented in existing noncoherent marine radar systems, which makes it particularly attractive for economical CFAR detection systems. Monte Carlo simulations were used to investigate the detection performance and demonstrated that the proposed technique provides a higher probability of detection than conventional techniques, such as cell averaging CFAR (CA-CFAR, especially with a small number of reference cells.
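The conventional CA-CFAR baseline mentioned above is easy to sketch (this is the textbook cell-averaging scheme, not the paper's Bayesian detector; the data, window sizes, and threshold factor are invented): each cell under test is compared against a scaled average of surrounding reference cells, with guard cells excluded.

```python
import numpy as np

def ca_cfar(power, n_ref=8, n_guard=2, scale=4.0):
    # Cell-averaging CFAR over a 1-D power profile.
    detections = []
    half = n_ref // 2
    for i in range(half + n_guard, len(power) - half - n_guard):
        left = power[i - n_guard - half : i - n_guard]
        right = power[i + n_guard + 1 : i + n_guard + 1 + half]
        noise = (left.sum() + right.sum()) / n_ref   # local noise estimate
        if power[i] > scale * noise:
            detections.append(i)
    return detections

rng = np.random.default_rng(0)
power = rng.exponential(1.0, size=100)   # exponentially distributed clutter power
power[50] += 30.0                        # inject a strong target echo
hits = ca_cfar(power)
print(hits)   # index 50 should be detected; a few false alarms may also appear
```

Because the threshold adapts to the local noise average, the false alarm rate stays constant for homogeneous clutter; the paper's contribution is precisely the heterogeneous-clutter case, where this simple averaging breaks down.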
He, Bin
About the Series: Bioelectric Engineering presents state-of-the-art discussions on modern biomedical engineering with respect to applications of electrical engineering and information technology in biomedicine. This focus affirms Springer's commitment to publishing important reviews of the broadest interest to biomedical engineers, bioengineers, and their colleagues in affiliated disciplines. Recent volumes have covered modeling and imaging of bioelectric activity, neural engineering, biosignal processing, bionanotechnology, among other topics.
Nomograms for Visualization of Naive Bayesian Classifier
Možina, Martin; Demšar, Janez; Kattan, Michael W.; Zupan, Blaz
2004-01-01
Besides good predictive performance, the naive Bayesian classifier can also offer a valuable insight into the structure of the training data and effects of the attributes on the class probabilities. This structure may be effectively revealed through visualization of the classifier. We propose a new way to visualize the naive Bayesian model in the form of a nomogram. The advantages of the proposed method are simplicity of presentation, clear display of the effects of individual attribute value...
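The quantity such a nomogram plots can be sketched directly (an illustration of the general naive Bayes log-odds decomposition, not the paper's visualization code; all probabilities are invented): each attribute value contributes an additive log odds-ratio term, and the contributions plus the prior log odds sum to the total log odds of the class.

```python
import math

def log_odds_contribution(p_value_given_class, p_value_given_other):
    # The additive term one attribute value contributes in naive Bayes.
    return math.log(p_value_given_class / p_value_given_other)

prior_log_odds = math.log(0.3 / 0.7)        # P(class) = 0.3
contribs = [
    log_odds_contribution(0.8, 0.2),        # attribute 1 = "high": strong evidence for
    log_odds_contribution(0.4, 0.5),        # attribute 2 = "yes": weak evidence against
]
total = prior_log_odds + sum(contribs)
posterior = 1.0 / (1.0 + math.exp(-total))  # back to a class probability
print(round(posterior, 3))                  # → 0.578
```

Plotting each contribution on a common log-odds scale is what makes the nomogram readable: the length of each attribute's bar shows the strength and direction of its effect.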
Subjective Bayesian Analysis: Principles and Practice
Goldstein, Michael
2006-01-01
We address the position of subjectivism within Bayesian statistics. We argue, first, that the subjectivist Bayes approach is the only feasible method for tackling many important practical problems. Second, we describe the essential role of the subjectivist approach in scientific analysis. Third, we consider possible modifications to the Bayesian approach from a subjectivist viewpoint. Finally, we address the issue of pragmatism in implementing the subjectivist approach.
Fitness inheritance in the Bayesian optimization algorithm
Pelikan, Martin; Sastry, Kumara
2004-01-01
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions...
Kernel Bayesian Inference with Posterior Regularization
Song, Yang; Jun ZHU; Ren, Yong
2016-01-01
We propose a vector-valued regression problem whose solution is equivalent to the reproducing kernel Hilbert space (RKHS) embedding of the Bayesian posterior distribution. This equivalence provides a new understanding of kernel Bayesian inference. Moreover, the optimization problem induces a new regularization for the posterior embedding estimator, which is faster and has comparable performance to the squared regularization in kernel Bayes' rule. This regularization coincides with a former th...
Bayesian Variable Selection in Spatial Autoregressive Models
Jesus Crespo Cuaresma; Philipp Piribauer
2015-01-01
This paper compares the performance of Bayesian variable selection approaches for spatial autoregressive models. We present two alternative approaches which can be implemented using Gibbs sampling methods in a straightforward way and allow us to deal with the problem of model uncertainty in spatial autoregressive models in a flexible and computationally efficient way. In a simulation study we show that the variable selection approaches tend to outperform existing Bayesian model averaging tech...
Fuzzy Functional Dependencies and Bayesian Networks
Institute of Scientific and Technical Information of China (English)
LIU WeiYi(刘惟一); SONG Ning(宋宁)
2003-01-01
Bayesian networks have become a popular technique for representing and reasoning with probabilistic information. The fuzzy functional dependency is an important kind of data dependencies in relational databases with fuzzy values. The purpose of this paper is to set up a connection between these data dependencies and Bayesian networks. The connection is done through a set of methods that enable people to obtain the most information of independent conditions from fuzzy functional dependencies.
Bayesian Models of Brain and Behaviour
Penny, William
2012-01-01
This paper presents a review of Bayesian models of brain and behaviour. We first review the basic principles of Bayesian inference. This is followed by descriptions of sampling and variational methods for approximate inference, and forward and backward recursions in time for inference in dynamical models. The review of behavioural models covers work in visual processing, sensory integration, sensorimotor integration, and collective decision making. The review of brain models covers a range of...
Bayesian Modeling of a Human MMORPG Player
Synnaeve, Gabriel
2010-01-01
This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and to select which target in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions.
Bayesian Modeling of a Human MMORPG Player
Synnaeve, Gabriel; Bessière, Pierre
2011-03-01
This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and to select which target in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions.
Heterogeneous scaffold designs for selective neural regeneration
Wieringa, P.A.
2014-01-01
Over the past 5 decades, there has been a drive to apply technology to enhance neural regeneration in order to improve patient recovery after disease or injury. This has evolved into the field of Neural Engineering, with the aim to understand, control and exploit the development and function of neur
Self-organization of neural networks
Energy Technology Data Exchange (ETDEWEB)
Clark, J.W.; Winston, J.V.; Rafelski, J.
1984-05-14
The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (brainwashing) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena. 18 references, 2 figures.
Self-organization of neural networks
Clark, John W.; Winston, Jeffrey V.; Rafelski, Johann
1984-05-01
The plastic development of a neural-network model operating autonomously in discrete time is described by the temporal modification of interneuronal coupling strengths according to momentary neural activity. A simple algorithm (“brainwashing”) is found which, applied to nets with initially quasirandom connectivity, leads to model networks with properties conducive to the simulation of memory and learning phenomena.
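The general mechanism described, coupling strengths modified by momentary activity, can be sketched with a Hebbian-style toy network (an invented illustration in the spirit of the abstract, not the paper's algorithm or its "brainwashing" procedure): co-active neuron pairs have their coupling strengthened, anti-correlated pairs weakened.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20
W = rng.normal(0.0, 0.5, size=(n, n))        # quasirandom initial connectivity
np.fill_diagonal(W, 0.0)
state = (rng.random(n) < 0.5).astype(float)  # random initial activity pattern

eta = 0.05
for step in range(200):
    state = (W @ state > 0.0).astype(float)          # momentary activity (discrete time)
    W += eta * np.outer(2 * state - 1, 2 * state - 1)  # strengthen co-active pairs
    np.fill_diagonal(W, 0.0)                           # no self-coupling

print(state)   # binary activity pattern after plastic development
```

With this kind of rule the couplings gradually imprint the network's own activity patterns, which is the basic ingredient behind attractor-style memory models.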
Directory of Open Access Journals (Sweden)
Hassana Maigary Georges
2015-01-01
Full Text Available Among the inertial navigation system (INS) devices used in land vehicle navigation (LVN), low-cost microelectromechanical systems (MEMS) inertial sensors have received more interest for bridging global navigation satellite system (GNSS) signal failures because of their price and portability. Kalman filter (KF) based GNSS/INS integration has been widely used to provide a robust navigation solution. However, its prediction model cannot give satisfactory results in the presence of colored and time-varying noise. In order to achieve a reliable and accurate positional solution for LVN in urban areas surrounded by skyscrapers or under dense foliage and tunnels, a novel model combining a variational Bayesian adaptive Kalman smoother (VB-ACKS) as an alternative to the KF and an ensemble regularized extreme learning machine (ERELM) for bridging global positioning system outages is proposed. The ERELM is applied to reduce the fluctuating performance of GNSS during an outage. We show that a well-organized collection of predictors using ensemble learning yields a more accurate positional result when compared with conventional artificial neural network (ANN) predictors. Experimental results show that the performance of VB-ACKS is more robust compared with the KF solution, and the prediction of ERELM yields the smallest error compared with other ANN solutions.
Directory of Open Access Journals (Sweden)
Dongsheng Chen
2016-01-01
Full Text Available Accurate biomass estimations are important for assessing and monitoring forest carbon storage. Bayesian theory has been widely applied to tree biomass models. Recently, a hierarchical Bayesian approach has received increasing attention for improving biomass models. In this study, tree biomass data were obtained by sampling 310 trees from 209 permanent sample plots in larch plantations in six regions across China. Non-hierarchical and hierarchical Bayesian approaches were used to model allometric biomass equations. We found that the total, root, stem wood, stem bark, branch and foliage biomass model relationships were statistically significant (p-values < 0.001) for both the non-hierarchical and hierarchical Bayesian approaches, but the hierarchical Bayesian approach improved the goodness-of-fit statistics over the non-hierarchical Bayesian approach. The R2 values of the hierarchical approach were higher than those of the non-hierarchical approach by 0.008, 0.018, 0.020, 0.003, 0.088 and 0.116 for the total tree, root, stem wood, stem bark, branch and foliage models, respectively. The hierarchical Bayesian approach significantly improved the accuracy of the biomass model (except for stem bark) and can reflect regional differences by using random parameters to improve the regional-scale model accuracy.
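The modeling idea can be sketched with simulated data (a crude stand-in, not the paper's MCMC fit or its larch measurements; all parameter values are invented): an allometric equation ln(B) = a_r + b·ln(D) with a region-specific intercept a_r that is partially pooled toward a common mean, which is the hierarchical effect the abstract reports. Here the pooling is done with a simple empirical shrinkage factor rather than a full hierarchical Bayesian posterior.

```python
import numpy as np

rng = np.random.default_rng(7)
true_b, regions = 2.4, 3
true_a = np.array([-2.0, -1.8, -2.3])            # regional intercepts (invented)

D = rng.uniform(5, 40, size=(regions, 30))       # tree diameters per region
logB = true_a[:, None] + true_b * np.log(D) + rng.normal(0, 0.2, D.shape)

# Fit the common slope by pooled least squares on the log-log scale.
b_hat, _ = np.polyfit(np.log(D).ravel(), logB.ravel(), 1)

# Per-region intercepts, then shrink toward the grand mean in proportion to
# the between-region vs within-region variance (empirical-Bayes style).
a_r = np.array([(logB[r] - b_hat * np.log(D[r])).mean() for r in range(regions)])
tau2, s2 = a_r.var(), 0.2**2 / 30
shrunk = a_r.mean() + tau2 / (tau2 + s2) * (a_r - a_r.mean())
print(round(float(b_hat), 2), shrunk.round(2))
```

The shrinkage factor tau2/(tau2 + s2) lies between 0 and 1, so each regional intercept moves toward the grand mean, less so when regions genuinely differ, which mirrors how hierarchical random parameters capture regional differences without overfitting them.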
Directory of Open Access Journals (Sweden)
Archana Venkataraman
2015-01-01
Full Text Available Resting-state functional magnetic resonance imaging (rsfMRI) studies reveal a complex pattern of hyper- and hypo-connectivity in children with autism spectrum disorder (ASD). Whereas rsfMRI findings tend to implicate the default mode network and subcortical areas in ASD, task fMRI and behavioral experiments point to social dysfunction as a unifying impairment of the disorder. Here, we leverage a novel Bayesian framework for whole-brain functional connectomics that aggregates population differences in connectivity to localize a subset of foci that are most affected by ASD. Our approach is entirely data-driven and does not impose spatial constraints on the region foci or dictate the trajectory of altered functional pathways. We apply our method to data from the openly shared Autism Brain Imaging Data Exchange (ABIDE) and pinpoint two intrinsic functional networks that distinguish ASD patients from typically developing controls. One network involves foci in the right temporal pole, left posterior cingulate cortex, left supramarginal gyrus, and left middle temporal gyrus. Automated decoding of this network by the Neurosynth meta-analytic database suggests high-level concepts of “language” and “comprehension” as the likely functional correlates. The second network consists of the left banks of the superior temporal sulcus, right posterior superior temporal sulcus extending into the temporo-parietal junction, and right middle temporal gyrus. Associated functionality of these regions includes “social” and “person”. The abnormal pathways emanating from the above foci indicate that ASD patients simultaneously exhibit reduced long-range or inter-hemispheric connectivity and increased short-range or intra-hemispheric connectivity. Our findings reveal new insights into ASD and highlight possible neural mechanisms of the disorder.
Venkataraman, Archana; Duncan, James S; Yang, Daniel Y-J; Pelphrey, Kevin A
2015-01-01
Resting-state functional magnetic resonance imaging (rsfMRI) studies reveal a complex pattern of hyper- and hypo-connectivity in children with autism spectrum disorder (ASD). Whereas rsfMRI findings tend to implicate the default mode network and subcortical areas in ASD, task fMRI and behavioral experiments point to social dysfunction as a unifying impairment of the disorder. Here, we leverage a novel Bayesian framework for whole-brain functional connectomics that aggregates population differences in connectivity to localize a subset of foci that are most affected by ASD. Our approach is entirely data-driven and does not impose spatial constraints on the region foci or dictate the trajectory of altered functional pathways. We apply our method to data from the openly shared Autism Brain Imaging Data Exchange (ABIDE) and pinpoint two intrinsic functional networks that distinguish ASD patients from typically developing controls. One network involves foci in the right temporal pole, left posterior cingulate cortex, left supramarginal gyrus, and left middle temporal gyrus. Automated decoding of this network by the Neurosynth meta-analytic database suggests high-level concepts of "language" and "comprehension" as the likely functional correlates. The second network consists of the left banks of the superior temporal sulcus, right posterior superior temporal sulcus extending into temporo-parietal junction, and right middle temporal gyrus. Associated functionality of these regions includes "social" and "person". The abnormal pathways emanating from the above foci indicate that ASD patients simultaneously exhibit reduced long-range or inter-hemispheric connectivity and increased short-range or intra-hemispheric connectivity. Our findings reveal new insights into ASD and highlight possible neural mechanisms of the disorder.
Inherently stochastic spiking neurons for probabilistic neural computation
Al-Shedivat, Maruan
2015-04-01
Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.
Inventory control of spare parts using a Bayesian approach: a case study
K-P. Aronis; I. Magou (Ioulia); R. Dekker (Rommert); G. Tagaras (George)
1999-01-01
This paper presents a case study of applying a Bayesian approach to forecast demand and subsequently determine the appropriate parameter S of an (S-1,S) inventory system for controlling spare parts of electronic equipment. First, the problem and the current policy are described. Then, t
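The general idea of a Bayesian demand forecast feeding a base-stock decision can be sketched with a conjugate gamma-Poisson model. This is a hypothetical illustration, not the paper's actual model: the prior parameters, demand history, and service-level target below are all invented for the example.

```python
from scipy import stats

# Hypothetical Gamma(a0, rate=b0) prior on the Poisson demand rate,
# updated with observed period demands (conjugate: a += sum(x), b += n).
a0, b0 = 2.0, 1.0
demands = [0, 1, 0, 2, 1]              # illustrative spare-part demand history
a, b = a0 + sum(demands), b0 + len(demands)

# The posterior predictive demand is negative binomial; pick the smallest
# base-stock level S whose non-stockout probability meets the target.
target = 0.95
pred = stats.nbinom(a, b / (b + 1.0))  # gamma-Poisson predictive distribution
S = 0
while pred.cdf(S) < target:
    S += 1
print("base-stock level S =", S)
```

In an (S-1,S) policy, each demand triggers a replenishment order, so S caps the on-hand plus on-order inventory; the predictive distribution would normally be taken over the replenishment lead time rather than a single period.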
Historical developments in Bayesian econometrics after Cowles Foundation Monographs 10, 14
N. Basturk; C. Cakmakli; S.P. Ceyhan; H.K. van Dijk
2013-01-01
After a brief description of the first Bayesian steps into econometrics in the 1960s and early 70s, publication and citation patterns are analyzed in ten major econometric journals until 2012. The results indicate that journals which contain both theoretical and applied papers, such as Journal of Ec
Bayesian Integration of Large Scale SNA Data Frameworks with an Application to Guatemala
Van Tongeren, J.W.; Magnus, J.R.
2011-01-01
We present a Bayesian estimation method applied to an extended set of national accounts data and estimates of approximately 2500 variables. The method is based on conventional national accounts frameworks as compiled by countries in Central America, in particular Guatemala, and on concepts that are
AGM-consistency and perfect Bayesian equilibrium. Part I: Definition and properties
Bonanno, Giacomo
2010-01-01
We provide a general notion of perfect Bayesian equilibrium which can be applied to arbitrary extensive-form games and is intermediate between subgame-perfect equilibrium and sequential equilibrium. The essential ingredient of the proposed definition is the qualitative notion of AGM-consistency, which has an epistemic justification based on the AGM theory of belief revision.
Festa, Roberto
1992-01-01
According to the Bayesian view, scientific hypotheses must be appraised in terms of their posterior probabilities relative to the available experimental data. Such posterior probabilities are derived from the prior probabilities of the hypotheses by applying Bayes' theorem. One of the most important
DEFF Research Database (Denmark)
Heller, Rasmus; Chikhi, Lounes; Siegismund, Hans
2013-01-01
when it is violated. Among the most widely applied demographic inference methods are Bayesian skyline plots (BSPs), which are used across a range of biological fields. Violations of the panmixia assumption are to be expected in many biological systems, but the consequences for skyline plot inferences...
Pan, Yilin
2016-01-01
Given the necessity to bridge the gap between what happened and what is likely to happen, this paper aims to explore how to apply Bayesian inference to cost-effectiveness analysis so as to capture the uncertainty of a ratio-type efficiency measure. The first part of the paper summarizes the characteristics of the evaluation data that are commonly…
Gerven, M.A.J. van
2007-01-01
This dissertation deals with decision support in the context of clinical oncology. (Dynamic) Bayesian networks are used as a framework for (dynamic) decision-making under uncertainty and applied to a variety of diagnostic, prognostic, and treatment problems in medicine. It is shown that the proposed
Institute of Scientific and Technical Information of China (English)
张瑞成; 李冲
2011-01-01
To find the optimal neural network structure with respect to both speed and precision, a new model is proposed based on complex-network research methods applied to multi-layer feed-forward neural networks: the NW-type multi-layer feed-forward small-world artificial neural network, whose layer structure lies between the regular and the random network models. Starting from a regular multi-layer feed-forward network, neurons are given cross-layer connections to later layers with probability p, yielding the new network model. Small-world networks constructed under different rewiring probabilities are then applied to function approximation, and the number of iterations to convergence is compared at a fixed target precision. Simulations show that at around p = 0.08 the small-world network converges faster than regular and random networks of the same size, confirming that the NW-type small-world multi-layer feed-forward model improves both accuracy and convergence speed.
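The NW-style construction described above (shortcut edges added with probability p, rather than rewired as in the Watts-Strogatz model) can be sketched for a layered topology. The layer sizes and helper name below are illustrative assumptions, not taken from the paper.

```python
import random

def nw_cross_layer_edges(layer_sizes, p, seed=0):
    """Build the regular feed-forward edges (adjacent layers, fully connected),
    then add a shortcut from each neuron to each neuron in later,
    non-adjacent layers with probability p (NW model: edges added, not moved)."""
    rng = random.Random(seed)
    regular, shortcuts = [], []
    for l in range(len(layer_sizes) - 1):
        for i in range(layer_sizes[l]):
            for j in range(layer_sizes[l + 1]):
                regular.append(((l, i), (l + 1, j)))       # regular edge
            for m in range(l + 2, len(layer_sizes)):       # cross-layer candidates
                for j in range(layer_sizes[m]):
                    if rng.random() < p:
                        shortcuts.append(((l, i), (m, j)))  # small-world shortcut
    return regular, shortcuts

reg, short = nw_cross_layer_edges([2, 4, 4, 1], p=0.08)
print(len(reg), "regular edges,", len(short), "shortcuts")
```

At p = 0 the construction reduces to the regular feed-forward network; as p grows toward 1 every cross-layer pair is connected, approaching the random end of the spectrum.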
Automatic classification of eclipsing binaries light curves using neural networks
Sarro, L M; Giménez, A
2005-01-01
In this work we present a system for the automatic classification of the light curves of eclipsing binaries. This system is based on a classification scheme that aims to separate eclipsing binary systems according to their geometrical configuration in a modified version of the traditional classification scheme. The classification is performed by a Bayesian ensemble of neural networks trained with Hipparcos data of seven different categories including eccentric binary systems and two types of pulsating light curve morphologies.
Bayesian analysis of cosmic structures
Kitaura, Francisco-Shu
2011-01-01
We review the Bayesian inference steps required to analyse the cosmological large-scale structure. Here we place special emphasis on the complications which arise due to the non-Gaussian character of the galaxy and matter distribution. In particular we investigate the advantages and limitations of the Poisson-lognormal model and discuss how to extend this work. With the lognormal prior, using the Hamiltonian sampling technique and on scales of about 4 h^{-1} Mpc, we find that the over-dense regions are excellently reconstructed; however, under-dense regions (void statistics) are quantitatively poorly recovered. Contrary to the maximum a posteriori (MAP) solution, which was shown to over-estimate the density in the under-dense regions, we obtain lower densities than in N-body simulations. This is due to the fact that the MAP solution is conservative, whereas the full posterior yields samples which are consistent with the prior statistics. The lognormal prior is not able to capture the full non-linear regime at scales ...
Bayesian analysis of volcanic eruptions
Ho, Chih-Hsiang
1990-10-01
The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and the simple Poisson model are discussed based on the historical eruptive count data of the volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
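The gamma-Poisson mixture described in the abstract can be checked numerically: drawing the eruptive rate λ from a gamma prior and the eruption count from Poisson(λ) reproduces the negative binomial frequency distribution. The shape and rate parameters below are arbitrary illustrative values, not fitted to the Mauna Loa or Etna data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
shape, rate = 3.0, 0.5           # hypothetical gamma prior on eruptive rate λ

# Monte Carlo from the compound model: λ ~ Gamma(shape, rate), N | λ ~ Poisson(λ)
lam = rng.gamma(shape, 1.0 / rate, size=200_000)
counts = rng.poisson(lam)

# Matching negative binomial: r = shape, p = rate / (rate + 1)
nb = stats.nbinom(shape, rate / (rate + 1.0))
for k in range(4):
    print(k, round((counts == k).mean(), 3), round(nb.pmf(k), 3))
```

Each printed pair (empirical frequency vs. NBD probability) should agree to within Monte Carlo error, confirming that mixing the Poisson rate over a gamma prior yields negative binomial counts.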
Bayesian demography 250 years after Bayes.
Bijak, Jakub; Bryant, John
2016-01-01
Bayesian statistics offers an alternative to classical (frequentist) statistics. It is distinguished by its use of probability distributions to describe uncertain quantities, which leads to elegant solutions to many difficult statistical problems. Although Bayesian demography, like Bayesian statistics more generally, is around 250 years old, only recently has it begun to flourish. The aim of this paper is to review the achievements of Bayesian demography, address some misconceptions, and make the case for wider use of Bayesian methods in population studies. We focus on three applications: demographic forecasts, limited data, and highly structured or complex models. The key advantages of Bayesian methods are the ability to integrate information from multiple sources and to describe uncertainty coherently. Bayesian methods also allow for including additional (prior) information next to the data sample. As such, Bayesian approaches are complementary to many traditional methods, which can be productively re-expressed in Bayesian terms. PMID:26902889