WorldWideScience

Sample records for applying bayesian neural

  1. Applying Hierarchical Bayesian Neural Network in Failure Time Prediction

    Directory of Open Access Journals (Sweden)

    Ling-Jing Kao

    2012-01-01

    Full Text Available With rapid technological development and improvement, predicting product failure times has become an even harder task because only a few failures are recorded in product life tests. Classical statistical models rely on asymptotic theory and cannot guarantee that the estimator has finite-sample properties. To solve this problem, we apply the hierarchical Bayesian neural network (HBNN) approach to predict failure times and use the Gibbs sampler, a Markov chain Monte Carlo (MCMC) method, to estimate model parameters. In the proposed method, the hierarchical structure is specified to study the heterogeneity among products. Engineers can use the heterogeneity estimates to identify the causes of quality differences and further enhance product quality. To demonstrate the effectiveness of the proposed hierarchical Bayesian neural network model, its prediction performance is evaluated using multiple performance measurement criteria. A sensitivity analysis of the proposed model is also conducted using different numbers of hidden nodes and training sample sizes. The results show that HBNN can provide not only the predictive distribution but also heterogeneous parameter estimates for each path.
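
    A generic hierarchical BNN of the kind described above can be written schematically as follows; the notation is illustrative, not taken from the paper. Each product (path) j gets its own network weights, tied together by a population-level prior:

    \[ y_{ij} = f(x_{ij}; w_j) + \varepsilon_{ij}, \qquad w_j \sim N(\mu, \Sigma), \qquad \mu \sim N(\mu_0, \Sigma_0) \]

    where f is the network mapping and the Gibbs sampler draws from the joint posterior over the w_j and the population-level parameters; the spread of the w_j across products is the heterogeneity estimate.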

  2. Bayesian Neural Word Embedding

    OpenAIRE

    Barkan, Oren

    2016-01-01

    Recently, several works in the domain of natural language processing presented successful methods for word embedding. Among them, the Skip-gram (SG) with negative sampling, known also as Word2Vec, advanced the state-of-the-art of various linguistics tasks. In this paper, we propose a scalable Bayesian neural word embedding algorithm that can be beneficial to general item similarity tasks as well. The algorithm relies on a Variational Bayes solution for the SG objective and a detailed step by ...
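
    For context, the Skip-gram negative-sampling objective that the variational solution targets is, per (center word, context word) pair, in standard Word2Vec notation (not copied from the paper):

    \[ \ell = \log\sigma(u_o^{\top} v_c) + \sum_{k=1}^{K} E_{n_k \sim P_n}\left[\log\sigma(-u_{n_k}^{\top} v_c)\right] \]

    where v_c and u_o are the input and output embeddings, \sigma is the logistic function and P_n is the negative-sampling noise distribution; a Variational Bayes treatment places distributions over the embedding vectors instead of point estimates.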

  3. Bayesian model selection applied to artificial neural networks used for water resources modeling

    Science.gov (United States)

    Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.

    2008-04-01

    Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.
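
    The quantity at the heart of this BMS scheme is the marginal likelihood (evidence) of each candidate network structure, written here in standard notation rather than the paper's:

    \[ p(D \mid M_k) = \int p(D \mid w, M_k)\, p(w \mid M_k)\, dw, \qquad p(M_k \mid D) \propto p(D \mid M_k)\, p(M_k) \]

    so that integrating over the weights w automatically penalizes superfluous hidden nodes; the MCMC estimators compared in the paper are different ways of approximating this integral.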

  4. Bayesian modeling and classification of neural signals

    OpenAIRE

    Lewicki, Michael S.

    1994-01-01

    Signal processing and classification algorithms often have limited applicability resulting from an inaccurate model of the signal's underlying structure. We present here an efficient, Bayesian algorithm for modeling a signal composed of the superposition of brief, Poisson-distributed functions. This methodology is applied to the specific problem of modeling and classifying extracellular neural waveforms, which are composed of a superposition of an unknown number of action potentials (APs). ...

  5. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU

  6. Option Pricing Using Bayesian Neural Networks

    CERN Document Server

    Pires, Michael Maio

    2007-01-01

    Options have provided a field of much study because of the complexity involved in pricing them. The Black-Scholes equations were developed to price options, but they are only valid for European-style options. There is added complexity when trying to price American-style options, and this is why the use of neural networks has been proposed. Neural networks are able to predict outcomes based on past data. The inputs to the networks here are stock volatility, strike price and time to maturity, with the output of the network being the call option price. Two Bayesian neural network techniques are used: one is Automatic Relevance Determination (with a Gaussian approximation) and the other is a hybrid Monte Carlo method, both used with multi-layer perceptrons.

  7. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

    OpenAIRE

    Hernández-Lobato, José Miguel; Adams, Ryan P.

    2015-01-01

    Large multilayer neural networks trained with backpropagation have recently achieved state-of-the-art results in a wide range of problems. However, using backprop for neural net learning still has some disadvantages, e.g., having to tune a large number of hyperparameters to the data, lack of calibrated probabilistic predictions, and a tendency to overfit the training data. In principle, the Bayesian approach to learning neural networks does not have these problems. However, existing Bayesian ...

  8. Bayesian estimation applied to multiple species

    International Nuclear Information System (INIS)

    Observed data are often contaminated by undiscovered interlopers, leading to biased parameter estimation. Here we present BEAMS (Bayesian estimation applied to multiple species), which significantly improves on the standard maximum likelihood approach in the case where the probability of each data point being ''pure'' is known. We discuss the application of BEAMS to future type-Ia supernova (SNIa) surveys, such as LSST, which are projected to deliver over a million supernova light curves without spectra. The multiband light curves for each candidate will provide a probability of being Ia (pure), but the full sample will be significantly contaminated with other types of supernovae and transients. Given a sample of N supernovae with mean probability ⟨P⟩ of being Ia, BEAMS delivers parameter constraints equal to those from ⟨P⟩N spectroscopically confirmed SNIa. In addition, BEAMS can simultaneously be used to tease apart different families of data and to recover the properties of the underlying distributions of those families (e.g. the type-Ibc and II distributions). Hence BEAMS provides a unified classification and parameter estimation methodology, which may be useful in a diverse range of problems such as photometric redshift estimation or, indeed, any parameter estimation problem where contamination is an issue.
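
    The core of BEAMS can be summarized as a mixture likelihood; the following schematic form uses assumed notation rather than the paper's:

    \[ \mathcal{L}(\theta) = \prod_{i=1}^{N} \left[ P_i\, \mathcal{L}_{\rm Ia}(D_i \mid \theta) + (1 - P_i)\, \mathcal{L}_{\rm other}(D_i \mid \theta) \right] \]

    where P_i is the probability that point i is pure (type Ia); weighting each point by its purity probability is what recovers nearly the full constraining power of a spectroscopically confirmed sample.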

  9. Nuclear charge radii: Density functional theory meets Bayesian neural networks

    CERN Document Server

    Utama, Raditya; Piekarewicz, Jorge

    2016-01-01

    The distribution of electric charge in atomic nuclei is fundamental to our understanding of the complex nuclear dynamics and a quintessential observable to validate nuclear structure models. We explore a novel approach that combines sophisticated models of nuclear structure with Bayesian neural networks (BNN) to generate predictions for the charge radii of thousands of nuclei throughout the nuclear chart. A class of relativistic energy density functionals is used to provide robust predictions for nuclear charge radii. In turn, these predictions are refined through Bayesian learning for a neural network that is trained using residuals between theoretical predictions and the experimental data. Although predictions obtained with density functional theory provide a fairly good description of experiment, our results show significant improvement (better than 40%) after BNN refinement. Moreover, these improved results for nuclear charge radii are supplemented with theoretical error bars. We have successfully demonst...

  10. Bayesian Methods for Neural Networks and Related Models

    OpenAIRE

    Titterington, D.M.

    2004-01-01

    Models such as feed-forward neural networks and certain other structures investigated in the computer science literature are not amenable to closed-form Bayesian analysis. The paper reviews the various approaches taken to overcome this difficulty, involving the use of Gaussian approximations, Markov chain Monte Carlo simulation routines and a class of non-Gaussian but “deterministic” approximations called variational approximations.
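
    The variational approximations mentioned here maximize a lower bound on the log evidence; in standard notation (not the paper's):

    \[ \log p(D) \geq E_{q(w)}[\log p(D, w)] - E_{q(w)}[\log q(w)] = \mathcal{F}(q) \]

    where q(w) is a tractable (''deterministic'') approximation to the posterior over network weights, chosen to maximize \mathcal{F}.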

  11. Recurrent Bayesian Reasoning in Probabilistic Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Grim, Jiří; Hora, Jan

    Vol. Part I. Berlin: Springer, 2007 - (Marques de Sá, J.; Alexandre, L.; Duch, W.; Mandic, D.), s. 129-138. (Lecture Notes in Computer Science. SL 1 - Theoretical Computer Science and General Issues. 4669). ISBN 3-540-74693-5. [International Conference on Artificial Neural Networks /17./. Porto (PT), 09.09.2007-13.09.2007] R&D Projects: GA MŠk 1M0572; GA ČR GA102/07/1594 EU Projects: European Commission(XE) 507752 - MUSCLE Grant ostatní: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords: neural networks * probabilistic approach * distribution mixtures Subject RIV: BD - Theory of Information

  12. Bayesian and neural networks for preliminary ship design

    DEFF Research Database (Denmark)

    Clausen, H. B.; Lützen, Marie; Friis-Hansen, Andreas; Bjørneboe, Nanna Katrine

    2001-01-01

    … 000 ships is acquired and various methods for derivation of empirical relations are employed. A regression analysis is carried out to fit functions to the data. Further, the data are used to train Bayesian and neural networks to encode the relations between the characteristics. On the basis of … examples, the three methods are evaluated in terms of accuracy and limitations of use. For different types of ships, the methods provide information on the relations between length, breadth, height, draft, speed, displacement, block coefficient and loading capacity. Thus, useful tools are available to the …

  13. Introduction to applied Bayesian statistics and estimation for social scientists

    CERN Document Server

    Lynch, Scott M

    2007-01-01

    ""Introduction to Applied Bayesian Statistics and Estimation for Social Scientists"" covers the complete process of Bayesian statistical analysis in great detail from the development of a model through the process of making statistical inference. The key feature of this book is that it covers models that are most commonly used in social science research - including the linear regression model, generalized linear models, hierarchical models, and multivariate regression models - and it thoroughly develops each real-data example in painstaking detail.The first part of the book provides a detailed

  14. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    Science.gov (United States)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is typically also necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we develop a methodology for estimating the full residual uncertainty in network weights, and therefore in network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov chain Monte Carlo method.
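
    As a concrete illustration of this style of learning, here is a minimal sketch, not the paper's implementation: a random-walk Metropolis sampler over the weights of a tiny one-hidden-layer network, with a plain Gaussian prior standing in for the paper's modified Jeffreys prior. The network size, step size, and noise level are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy regression data
      X = np.linspace(-3.0, 3.0, 40)[:, None]
      y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

      H = 8                                    # hidden units (assumption)
      n_w = H + H + H + 1                      # W1 (1 x H), b1, W2 (H x 1), b2

      def unpack(w):
          W1 = w[0:H].reshape(1, H)
          b1 = w[H:2 * H]
          W2 = w[2 * H:3 * H].reshape(H, 1)
          b2 = w[3 * H]
          return W1, b1, W2, b2

      def predict(w, Xq):
          W1, b1, W2, b2 = unpack(w)
          return (np.tanh(Xq @ W1 + b1) @ W2).ravel() + b2

      def log_post(w, noise_sd=0.1, prior_sd=2.0):
          # Gaussian likelihood plus a Gaussian prior over all weights
          resid = y - predict(w, X)
          return (-0.5 * np.sum(resid ** 2) / noise_sd ** 2
                  - 0.5 * np.sum(w ** 2) / prior_sd ** 2)

      w = 0.1 * rng.standard_normal(n_w)
      lp = log_post(w)
      samples = []
      for step in range(20000):
          w_prop = w + 0.02 * rng.standard_normal(n_w)   # random-walk proposal
          lp_prop = log_post(w_prop)
          if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
              w, lp = w_prop, lp_prop
          if step % 20 == 0:
              samples.append(w.copy())

      # The spread of the sampled predictions reflects the residual weight uncertainty
      preds = np.array([predict(s, np.array([[0.5]]))[0] for s in samples[200:]])
      print(f"predictive mean {preds.mean():.3f} +/- {preds.std():.3f}")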

  15. A novel Bayesian learning method for information aggregation in modular neural networks

    DEFF Research Database (Denmark)

    Wang, Pan; Xu, Lida; Zhou, Shang-Ming; Fan, Zhun; Li, Youfeng; Feng, Shan

    2010-01-01

    The modular neural network is a popular neural network model with many successful applications. In this paper, a sequential Bayesian learning (SBL) method is proposed for modular neural networks, aiming at efficiently aggregating the outputs of members of the ensemble. The experimental results on eight … benchmark problems demonstrate that the proposed method can perform information aggregation efficiently in data modeling…

  16. Evidence for single top quark production using Bayesian neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Kau, Daekwang; /Florida State U.

    2007-08-01

    We present results of a search for single top quark production in pp̄ collisions using a dataset of approximately 1 fb⁻¹ collected with the D0 detector. This analysis considers the muon+jets and electron+jets final states and makes use of Bayesian neural networks to separate the expected signals from backgrounds. The observed excess is associated with a p-value of 0.081%, assuming the background-only hypothesis, which corresponds to an excess over background of 3.2 standard deviations for a Gaussian density. The p-value computed using the SM signal cross section of 2.9 pb is 1.6%, corresponding to an expected significance of 2.2 standard deviations. Assuming the observed excess is due to single top production, we measure a single top quark production cross section of σ(pp̄ → tb + X, tqb + X) = 4.4 ± 1.5 pb.
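
    The quoted conversions between p-values and standard deviations are one-sided Gaussian tail probabilities; a quick check with SciPy (the small differences from the quoted values are presumably rounding in the quoted p-values):

      from scipy.stats import norm

      for p in (0.00081, 0.016):          # 0.081% observed, 1.6% expected
          print(f"p = {p:.5f}  ->  {norm.isf(p):.2f} sigma")
      # p = 0.00081  ->  3.15 sigma   (quoted as 3.2)
      # p = 0.01600  ->  2.14 sigma   (quoted as 2.2)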

  17. Search for predictive generic model of aqueous solubility using Bayesian neural nets.

    Science.gov (United States)

    Bruneau, P

    2001-01-01

    Several predictive models of aqueous solubility have been published. They perform well on the data sets used to train them, but these data sets usually do not contain many structures similar to those of interest in drug research, so their applicability in drug hunting is questionable. A very diverse data set has been gathered, with compounds drawn from literature reports and proprietary compounds. These compounds have been grouped into a so-called literature data set I, a proprietary data set II, and a mixed data set III formed by I and II. About 100 descriptors emphasizing surface properties were calculated for every compound. Bayesian learning of neural nets, which combines the advantages of neural nets without their weaknesses, was used to select the most parsimonious models and train them from I, II, and III. The models were established either by selecting the most efficient descriptors one by one using a modified Gram-Schmidt procedure (GS) or by simplifying a most complete model using an automatic relevance determination procedure (ARD). The predictive ability of the models was assessed using validation data sets as unrelated to the training sets as possible, using two new parameters: NDD(x,ref), the normalized smallest descriptor distance of a compound x to a reference data set, and CD(x,mod), the combination of NDD(x,ref) with the dispersion of the Bayesian neural net calculations. The results show that it is possible to obtain a generic predictive model from database I, but that the diversity of database II is too restricted to give a model with good generalization ability, and that the ARD method applied to the mixed database III gives the best predictive model. PMID:11749587
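
    A minimal sketch of an applicability-domain check in the spirit of the abstract's NDD(x,ref): the smallest distance from a query compound to a reference set, computed in a descriptor space normalized by the reference statistics. The exact normalization in the paper may differ; the data shapes are illustrative assumptions.

      import numpy as np

      def ndd(x, ref):
          """Normalized smallest descriptor distance of compound x to a reference set."""
          mu, sd = ref.mean(axis=0), ref.std(axis=0)
          ref_n = (ref - mu) / sd
          x_n = (x - mu) / sd
          d = np.linalg.norm(ref_n - x_n, axis=1)   # distance to every reference compound
          return d.min() / np.sqrt(ref.shape[1])    # scale by descriptor count

      ref = np.random.default_rng(2).normal(size=(500, 100))  # 500 compounds x 100 descriptors
      query = np.zeros(100)
      print(f"NDD = {ndd(query, ref):.3f}   (small => inside the training domain)")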

  18. Advanced Neural Network Applied In Engineering Science

    Directory of Open Access Journals (Sweden)

    Nikita Patel*

    2014-11-01

    Full Text Available The basic idea behind a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer so you can get it to learn things, recognize patterns, and make decisions in a humanlike way. The amazing thing about a neural network is that you don't have to program it to learn explicitly: it learns all by itself, just like a brain! But it isn't a brain. It's important to note that neural networks are (generally) software simulations: they're made by programming very ordinary computers, working in a very traditional fashion with their ordinary transistors and serially connected logic gates, to behave as though they're built from billions of highly interconnected brain cells working in parallel. This paper surveys applications of neural networks in engineering science: robots that can see, feel, and predict the world around them, improved stock prediction, self-driving cars, and much more.

  19. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    Full Text Available This paper introduces some novel models for all steps of a face recognition system. In the face detection step, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN are aligned by an Active Shape Model and a Multi-Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on a Multi-Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving efficiency by the association of two methods: a geometric-feature-based method and Independent Component Analysis. In the face matching step, we apply a model combining many neural networks for matching geometric features of the human face. The model links many neural networks together, so we call it a Multi Artificial Neural Network. The MIT + CMU database is used to evaluate our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the Caltech database show the feasibility of our proposed model.

  20. Current trends in Bayesian methodology with applications

    CERN Document Server

    Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia

    2015-01-01

    Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics.Each chapter is self-contained and focuses on

  1. Bayesian estimation inherent in a Mexican-hat-type neural network

    Science.gov (United States)

    Takiyama, Ken

    2016-05-01

    Brain functions, such as perception, motor control and learning, and decision making, have been explained based on a Bayesian framework, i.e., to decrease the effects of noise inherent in the human nervous system or external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which was used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.
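
    The ''Bayesian optimal manner'' referred to here is, in the simplest linear-Gaussian case, precision-weighted averaging of a sensory observation x with a prior (standard textbook form, not the paper's notation):

    \[ \mu_{\rm post} = \frac{\sigma_x^{-2}\, x + \sigma_0^{-2}\, \mu_0}{\sigma_x^{-2} + \sigma_0^{-2}}, \qquad \sigma_{\rm post}^{-2} = \sigma_x^{-2} + \sigma_0^{-2} \]

    and the paper's claim is that the order-parameter dynamics of the Mexican-hat network implement the state-space (sequential) generalization of exactly this update.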

  2. Nested sampling applied in Bayesian room-acoustics decay analysis.

    Science.gov (United States)

    Jasa, Tomislav; Xiang, Ning

    2012-11-01

    Room-acoustic energy decays often exhibit single-rate or multiple-rate characteristics in a wide variety of rooms/halls. Both the energy decay order and decay parameter estimation are of practical significance in architectural acoustics applications, representing two different levels of Bayesian probabilistic inference. This paper discusses a model-based sound energy decay analysis within a Bayesian framework utilizing the nested sampling algorithm. The nested sampling algorithm is specifically developed to evaluate the Bayesian evidence required for determining the energy decay order with decay parameter estimates as a secondary result. Taking the energy decay analysis in architectural acoustics as an example, this paper demonstrates that two different levels of inference, decay model-selection and decay parameter estimation, can be cohesively accomplished by the nested sampling algorithm. PMID:23145609
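
    A minimal sketch of the nested sampling idea on a toy one-dimensional problem (not the paper's room-acoustics model; the prior range, peak location, and point counts are assumptions): live points are repeatedly replaced by a higher-likelihood prior draw, and the evidence Z is accumulated from the shrinking prior volume.

      import numpy as np

      rng = np.random.default_rng(1)

      def loglike(theta):
          # unnormalized Gaussian "decay model" likelihood, purely illustrative
          return -0.5 * ((theta - 2.0) / 0.3) ** 2

      lo, hi = -10.0, 10.0                       # uniform prior support
      N, iters = 200, 1200                       # live points, iterations
      live = rng.uniform(lo, hi, N)
      live_ll = loglike(live)

      logZ = -np.inf
      log_w = np.log(1.0 - np.exp(-1.0 / N))     # width of the first prior shell
      for i in range(iters):
          worst = np.argmin(live_ll)
          logZ = np.logaddexp(logZ, log_w + live_ll[worst])
          # replace the worst live point with a prior draw above its likelihood
          while True:
              cand = rng.uniform(lo, hi)
              if loglike(cand) > live_ll[worst]:
                  live[worst], live_ll[worst] = cand, loglike(cand)
                  break
          log_w -= 1.0 / N                       # prior volume shrinks each step

      # add the mass still held by the live points, then compare to the analytic Z
      logZ = np.logaddexp(logZ, -iters / N + np.log(np.mean(np.exp(live_ll))))
      print(f"log Z = {logZ:.2f}  (analytic: {np.log(0.3 * np.sqrt(2 * np.pi) / (hi - lo)):.2f})")

    Running the decay-order selection of the paper amounts to computing this evidence once per candidate decay model and comparing; the decay parameter posteriors fall out of the same run as a by-product.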

  3. A Bayesian framework for simultaneously modeling neural and behavioral data

    NARCIS (Netherlands)

    B.M. Turner; B.U. Forstmann; E.-J. Wagenmakers; S.D. Brown; P.B. Sederberg; M. Steyvers

    2013-01-01

    Scientists who study cognition infer underlying processes either by observing behavior (e.g., response times, percentage correct) or by observing neural activity (e.g., the BOLD response). These two types of observations have traditionally supported two separate lines of study. The first is led by c

  4. Identification of information tonality based on Bayesian approach and neural networks

    OpenAIRE

    Lande, D. V.

    2008-01-01

    A model of the identification of information tonality, based on a Bayesian approach and neural networks, is described. In the context of this paper, tonality means the positive or negative tone of both the whole information and its parts which are related to particular concepts. The method, whose application is presented in the paper, is based on statistical regularities connected with the presence of definite lexemes in the texts. A distinctive feature of the method is its simplicity and versatility. A...

  5. Applying Bayesian belief networks in rapid response situations

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, William L [Los Alamos National Laboratory; Deborah, Leishman, A. [Los Alamos National Laboratory; Van Eeckhout, Edward [Los Alamos National Laboratory

    2008-01-01

    The authors have developed an enhanced Bayesian analysis tool called the Integrated Knowledge Engine (IKE) for monitoring and surveillance. The enhancements are suited for Rapid Response Situations where decisions must be made based on uncertain and incomplete evidence from many diverse and heterogeneous sources. The enhancements extend the probabilistic results of the traditional Bayesian analysis by (1) better quantifying uncertainty arising from model parameter uncertainty and uncertain evidence, (2) optimizing the collection of evidence to reach conclusions more quickly, and (3) allowing the analyst to determine the influence of the remaining evidence that cannot be obtained in the time allowed. These extended features give the analyst and decision maker a better comprehension of the adequacy of the acquired evidence and hence the quality of the hurried decisions. They also describe two example systems where the above features are highlighted.

  6. Advanced Neural Network Applied In Engineering Science

    OpenAIRE

    Nikita Patel*; Rakesh Patel,

    2014-01-01

    The basic idea behind a neural network is to simulate (copy in a simplified but reasonably faithful way) lots of densely interconnected brain cells inside a computer so you can get it to learn things, recognize patterns, and make decisions in a humanlike way. The amazing thing about a neural network is that you don't have to program it to learn explicitly: it learns all by itself, just like a brain! But it isn't a brain. It's important to note that neural networks are (generally) ...

  7. Bayesian neural networks for bivariate binary data: an application to prostate cancer study.

    Science.gov (United States)

    Chakraborty, Sounak; Ghosh, Malay; Maiti, Tapabrata; Tewari, Ashutosh

    2005-12-15

    Prostate cancer is one of the most common cancers in American men. The cancer can either be locally confined, or it can spread outside the organ. When locally confined, there are several options for treating and curing this disease. Otherwise, surgery is the only option, and in extreme cases of outside spread, it can easily recur within a short time even after surgery and subsequent radiation therapy. Hence, it is important to know, based on pre-surgery biopsy results, how likely the cancer is to be organ-confined. This paper considers a hierarchical Bayesian neural network approach for computing posterior prediction probabilities of certain features indicative of non-organ-confined prostate cancer. In particular, we find such probabilities for margin positivity (MP) and seminal vesicle (SV) positivity jointly. The available training set consists of bivariate binary outcomes indicating the presence or absence of the two. In addition, we have certain covariates such as prostate-specific antigen (PSA), Gleason score and the indicator for the cancer being unilateral or bilateral (i.e. spread on one or both sides) in one data set, and gene expression microarrays in another data set. We take a hierarchical Bayesian neural network approach to find the posterior prediction probabilities for a test and validation set, and compare these with the actual outcomes for the first data set. In the case of the microarray data we use leave-one-out cross-validation to assess the accuracy of our method. We also demonstrate the superiority of our method over competing methods through a simulation study. The Bayesian procedure is implemented by an application of the Markov chain Monte Carlo numerical integration technique. For the problem at hand, our Bayesian bivariate neural network procedure is shown to be superior to the classical neural network, Radford Neal's Bayesian neural network, as well as bivariate logistic models, in predicting jointly the MP and SV in a patient in both the

  8. Bayesian Regularization in a Neural Network Model to Estimate Lines of Code Using Function Points

    Directory of Open Access Journals (Sweden)

    K. K. Aggarwal

    2005-01-01

    Full Text Available It is a well known fact that at the beginning of any project, the software industry needs to know how much it will cost to develop and how much time will be required. This paper examines the potential of using a neural network model for estimating the lines of code once the functional requirements are known. Using the International Software Benchmarking Standards Group (ISBSG) repository data (release 9) for the experiment, this paper examines the performance of a back-propagation feed-forward neural network in estimating source lines of code. Multiple training algorithms are used in the experiments. Results demonstrate that the neural network models trained using Bayesian regularization provide the best results and are suitable for this purpose.
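
    Bayesian regularization, in MacKay's evidence framework (a standard formulation, not details taken from this paper), trains the network by minimizing a weighted sum of the data error and a weight penalty,

    \[ F(w) = \beta E_D + \alpha E_W = \beta \sum_i (y_i - \hat{y}_i)^2 + \alpha \sum_j w_j^2 \]

    with the hyperparameters \alpha and \beta re-estimated from the data during training, which is what gives the approach its resistance to overfitting on small effort-estimation data sets.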

  9. Forecasting Rainfall Time Series with stochastic output approximated by neural networks Bayesian approach

    Directory of Open Access Journals (Sweden)

    Cristian Rodriguez Rivero

    2014-07-01

    Full Text Available The annual estimate of the amount of water available to the agricultural sector has become essential in places where rainfall is scarce, as is the case in northwestern Argentina. This work proposes to model and simulate monthly rainfall time series from one geographical location in Catamarca, Valle El Viejo Portezuelo. The prediction is based on mathematical and computational modelling of the monthly cumulative rainfall series, whose stochastic output is approximated by neural networks under a Bayesian approach. We propose an algorithm based on artificial neural networks (ANNs) using Bayesian inference. The prediction is evaluated on 20% of the provided data, which cover 2000 to 2010. A new analysis for modelling, simulation and computational prediction of cumulative rainfall from one geographical location is presented. Only the historical time series of daily flows, measured in mmH2O, are used as input information. Preliminary results of the annual forecast in mmH2O with a prediction horizon of one and a half years (18 months) are presented. The methodology employs artificial-neural-network-based tools, statistical analysis and computation to complete the missing information and to characterize the qualitative and quantitative behavior. Preliminary results with different prediction horizons of the proposed filter, and its comparison with the performance of the Gaussian process filter used in the literature, are also shown.

  10. Applying neural networks to optimize instrumentation performance

    International Nuclear Information System (INIS)

    Well-calibrated instrumentation is essential in providing meaningful information about the status of a plant. Signals from plant instrumentation frequently have inherent non-linearities, may be affected by environmental conditions, and can therefore cause calibration difficulties for the people who maintain them. Two neural network approaches are described in this paper for improving the accuracy of a non-linear, temperature-sensitive level probe used in Experimental Breeder Reactor II (EBR-II) that was difficult to calibrate.

  11. Identification of information tonality based on Bayesian approach and neural networks

    CERN Document Server

    Lande, D V

    2008-01-01

    A model of the identification of information tonality, based on a Bayesian approach and neural networks, is described. In the context of this paper, tonality means the positive or negative tone of both the whole information and its parts which are related to particular concepts. The method, whose application is presented in the paper, is based on statistical regularities connected with the presence of definite lexemes in the texts. A distinctive feature of the method is its simplicity and versatility. At present, ideologically similar approaches are widely used to control spam.

  12. Artificial Neural Network applied to lightning flashes

    Science.gov (United States)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in C with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: brightness and shape algorithms. These algorithms detect both the shape and the brightness of the event, removing irrelevant events like birds, as well as detecting the exact position of relevant events, allowing the system to track them over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the events' numbers of discharges were correctly computed. The neural network used in this project achieved a

  13. The classification of oximetry signals using Bayesian neural networks to assist in the detection of obstructive sleep apnoea syndrome

    International Nuclear Information System (INIS)

    In the present study, multilayer perceptron (MLP) neural networks were applied to help in the diagnosis of obstructive sleep apnoea syndrome (OSAS). Oxygen saturation (SaO2) recordings from nocturnal pulse oximetry were used for this purpose. We performed time and spectral analysis of these signals to extract 14 features related to OSAS. The performance of two different MLP classifiers was compared: maximum likelihood (ML) and Bayesian (BY) MLP networks. A total of 187 subjects suspected of suffering from OSAS took part in the study. Their SaO2 signals were divided into a training set with 74 recordings and a test set with 113 recordings. BY-MLP networks achieved the best performance on the test set with 85.58% accuracy (87.76% sensitivity and 82.39% specificity). These results were substantially better than those provided by ML-MLP networks, which were affected by overfitting and achieved an accuracy of 76.81% (86.42% sensitivity and 62.83% specificity). Our results suggest that the Bayesian framework is preferred to implement our MLP classifiers. The proposed BY-MLP networks could be used for early OSAS detection. They could contribute to overcome the difficulties of nocturnal polysomnography (PSG) and thus reduce the demand for these studies

  14. Applying neural networks to ultrasonographic texture recognition

    Science.gov (United States)

    Gallant, Jean-Francois; Meunier, Jean; Stampfler, Robert; Cloutier, Jocelyn

    1993-09-01

    A neural network was trained to classify ultrasound image samples of normal, adenomatous (benign tumor) and carcinomatous (malignant tumor) thyroid gland tissue. The samples themselves, as well as their Fourier spectrum, miscellaneous cooccurrence matrices and 'generalized' cooccurrence matrices, were successively submitted to the network, to determine if it could be trained to identify discriminating features of the texture of the image, and if not, which feature extractor would give the best results. Results indicate that the network could indeed extract some distinctive features from the textures, since it could accomplish a partial classification when trained with the samples themselves. But a significant improvement both in learning speed and performance was observed when it was trained with the generalized cooccurrence matrices of the samples.

  15. Delayed switching applied to memristor neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Frank Z.; Yang Xiao; Lim Guan [Future Computing Group, School of Computing, University of Kent, Canterbury (United Kingdom); Helian Na [School of Computer Science, University of Hertfordshire, Hatfield (United Kingdom); Wu Sining [Xyratex, Havant (United Kingdom); Guo Yike [Department of Computing, Imperial College, London (United Kingdom); Rashid, Md Mamunur [CERN, Geneva (Switzerland)

    2012-04-01

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the ''delayed switching effect.'' In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.

  16. Delayed switching applied to memristor neural networks

    International Nuclear Information System (INIS)

    Magnetic flux and electric charge are linked in a memristor. We reported recently that a memristor has a peculiar effect in which the switching takes place with a time delay because a memristor possesses a certain inertia. This effect was named the ''delayed switching effect.'' In this work, we elaborate on the importance of delayed switching in a brain-like computer using memristor neural networks. The effect is used to control the switching of a memristor synapse between two neurons that fire together (the Hebbian rule). A theoretical formula is found, and the design is verified by a simulation. We have also built an experimental setup consisting of electronic memristive synapses and electronic neurons.

  17. Nuclear mass predictions for the crustal composition of neutron stars: A Bayesian neural network approach

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.; Prosper, H. B.

    2016-01-01

    Background: Besides their intrinsic nuclear-structure value, nuclear mass models are essential for astrophysical applications, such as r-process nucleosynthesis and neutron-star structure. Purpose: To overcome the intrinsic limitations of existing "state-of-the-art" mass models through a refinement based on a Bayesian neural network (BNN) formalism. Methods: A novel BNN approach is implemented with the goal of optimizing mass residuals between theory and experiment. Results: A significant improvement (of about 40%) in the mass predictions of existing models is obtained after BNN refinement. Moreover, these improved results are now accompanied by proper statistical errors. Finally, by constructing a "world average" of these predictions, a mass model is obtained that is used to predict the composition of the outer crust of a neutron star. Conclusions: The power of the Bayesian neural network method has been successfully demonstrated by a systematic improvement in the accuracy of the predictions of nuclear masses. Extension to other nuclear observables is a natural next step that is currently under investigation.
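
    Schematically, the refinement strategy described here (with assumed notation) is

    \[ M_{\rm BNN}(Z, N) = M_{\rm th}(Z, N) + \delta(Z, N; \omega) \]

    where the correction \delta is a Bayesian neural network trained on the residuals M_{\rm exp} - M_{\rm th}, and the posterior over the network parameters \omega is what supplies the statistical error bars on the refined predictions.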

  18. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    KAUST Repository

    Zhang, Xuesong

    2011-11-01

    Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework (BNN-PIS) to incorporate the uncertainties associated with parameters, inputs, and structures into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structures. Critical evaluation of the posterior distributions of neural network weights, the number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of, and interactions among, different uncertainty sources is expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting. © 2011 Elsevier B.V.

  19. Predicting complex quantitative traits with Bayesian neural networks: a case study with Jersey cows and wheat

    Directory of Open Access Journals (Sweden)

    Okut Hayrettin

    2011-10-01

    Full Text Available Abstract Background In the study of associations between genomic data and complex phenotypes there may be relationships that are not amenable to parametric statistical modeling. Such associations have been investigated mainly using single-marker and Bayesian linear regression models that differ in their distributions, but that assume additive inheritance while ignoring interactions and non-linearity. When interactions have been included in the model, their effects have entered linearly. There is a growing interest in non-parametric methods for predicting quantitative traits based on reproducing kernel Hilbert spaces regressions on markers and radial basis functions. Artificial neural networks (ANN) provide an alternative, because these act as universal approximators of complex functions and can capture non-linear relationships between predictors and responses, with the interplay among variables learned adaptively. ANNs are interesting candidates for the analysis of traits affected by cryptic forms of gene action. Results We investigated various Bayesian ANN architectures for predicting phenotypes in two data sets consisting of milk production in Jersey cows and yield of inbred lines of wheat. For the Jerseys, predictor variables were derived from pedigree and molecular marker (35,798 single nucleotide polymorphisms, SNPs) information on 297 individual cows. The wheat data represented 599 lines, each genotyped with 1,279 markers. The ability to predict fat, milk and protein yield was low when using pedigrees, but it was better when SNPs were employed, irrespective of the ANN trained. Predictive ability was even better in wheat because the trait was a mean, as opposed to an individual phenotype in cows. Non-linear neural networks outperformed a linear model in predictive ability in both data sets, but more clearly in wheat. Conclusion Results suggest that neural networks may be useful for predicting complex traits using high

  20. Robust Bayesian decision theory applied to optimal dosage.

    Science.gov (United States)

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(θ, d) in a dose prescription problem, where θ and d denote respectively the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables, described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment for lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than through the decisions themselves. PMID:15057878

  1. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    Science.gov (United States)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate the concentration of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models, with model weights based on the evidence of the data. The HBMA therefore avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through the aggregation of ANN models in a hierarchical framework. This method is applied to the estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentrations in the Poldasht and Bazargan plains have caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in

  2. Technical note: An R package for fitting Bayesian regularized neural networks with applications in animal breeding.

    Science.gov (United States)

    Pérez-Rodríguez, P; Gianola, D; Weigel, K A; Rosa, G J M; Crossa, J

    2013-08-01

    In recent years, several statistical models have been developed for predicting genetic values for complex traits using information on dense molecular markers, pedigrees, or both. These models include, among others, the Bayesian regularized neural networks (BRNN) that have been widely used in prediction problems in other fields of application and, more recently, for genome-enabled prediction. The R package described here (brnn) implements BRNN models and extends these to include both additive and dominance effects. The implementation takes advantage of multicore architectures via a parallel computing approach using openMP (Open Multiprocessing) for the computations. This note briefly describes the classes of models that can be fitted using the brnn package, and it also illustrates its use through several real examples. PMID:23658327

  3. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed; for this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems. © 2013 Elsevier Inc.

  4. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  5. Assessing Vermont's stream health and biological integrity using artificial neural networks and Bayesian methods

    Science.gov (United States)

    Rizzo, D. M.; Fytilis, N.; Stevens, L.

    2012-12-01

    Environmental managers are increasingly required to monitor and forecast long-term effects and vulnerability of biophysical systems to human-generated stresses. Ideally, a study involving both physical and biological assessments conducted concurrently (in space and time) could provide a better understanding of the mechanisms and complex relationships. However, costs and resources associated with monitoring the complex linkages between the physical, geomorphic and habitat conditions and the biological integrity of stream reaches are prohibitive. Researchers have used classification techniques to place individual streams and rivers into a broader spatial context (hydrologic or health condition). Such efforts require environmental managers to gather multiple forms of information - quantitative, qualitative and subjective. We research and develop a novel classification tool that combines self-organizing maps with a Naïve Bayesian classifier to direct resources to stream reaches most in need. The Vermont Agency of Natural Resources has developed and adopted protocols for physical stream geomorphic and habitat assessments throughout the state of Vermont. Separate from these assessments, the Vermont Department of Environmental Conservation monitors the biological communities and the water quality in streams. Our initial hypothesis is that the geomorphic reach assessments and water quality data may be leveraged to reduce error and uncertainty associated with predictions of biological integrity and stream health. We test our hypothesis using over 2500 Vermont stream reaches (~1371 stream miles) assessed by the two agencies. In the development of this work, we combine a Naïve Bayesian classifier with a modified Kohonen Self-Organizing Map (SOM). The SOM is an unsupervised artificial neural network that autonomously analyzes inherent dataset properties using input data only. It is typically used to cluster data into similar categories when a priori classes do not exist. The
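
    A minimal sketch of the idea of pairing an unsupervised SOM with a Naive Bayes classifier, as described above: the SOM clusters stream reaches by their physical features, and the winning node index becomes an extra feature for a Gaussian Naive Bayes prediction of biological condition. The data, sizes, and the exact way the two models are coupled are illustrative assumptions, not the paper's implementation.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(3)

      def train_som(X, rows=5, cols=5, iters=3000, lr0=0.5, sigma0=2.0):
          """Tiny Kohonen SOM: returns the (rows*cols, n_features) codebook."""
          grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
          W = rng.normal(size=(rows * cols, X.shape[1]))
          for t in range(iters):
              x = X[rng.integers(len(X))]
              bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
              frac = t / iters
              lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
              h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
              W += lr * h[:, None] * (x - W)                     # pull neighborhood toward x
          return W

      def bmu_index(W, X):
          return np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])

      # Synthetic "reach" data: geomorphic/habitat features and a health label
      X = rng.normal(size=(400, 6))
      y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.standard_normal(400) > 0).astype(int)

      W = train_som(X[:300])
      train_feats = np.column_stack([X[:300], bmu_index(W, X[:300])])
      test_feats = np.column_stack([X[300:], bmu_index(W, X[300:])])

      clf = GaussianNB().fit(train_feats, y[:300])
      print(f"held-out accuracy: {clf.score(test_feats, y[300:]):.2f}")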

  6. Forecasting Baltic Dirty Tanker Index by Applying Wavelet Neural Networks

    DEFF Research Database (Denmark)

    Fan, Shuangrui; JI, TINGYUN; Bergqvist, Rickard; Wilmsmeier, Gordon

    2013-01-01

    The Baltic Exchange Dirty Tanker Index (BDTI) is an important assessment index in the world dirty tanker shipping industry. Actors in the industry can gain numerous benefits from accurate forecasting of the BDTI. However, limitations exist in the traditional stochastic and econometric … modeling techniques used in freight rate forecasting. At the same time, research in shipping index forecasting (e.g. the BDTI) applying artificial intelligence techniques is scarce. This paper analyses the possibilities of forecasting the BDTI by applying Wavelet Neural Networks (WNN). Firstly, the characteristics of…

  7. Bayesian survival analysis modeling applied to sensory shelf life of foods

    OpenAIRE

    Calle, M. Luz; Hough, Guillermo; Curia, Ana; Gómez, Guadalupe

    2006-01-01

    Data from sensory shelf-life studies can be analyzed using survival statistical methods. The objective of this research was to introduce Bayesian methodology to sensory shelf-life studies and discuss its advantages in relation to classical (frequentist) methods. A specific algorithm which incorporates the interval censored data from shelf-life studies is presented. Calculations were applied to whole-fat and fat-free yogurt, each tasted by 80 consumers who answered ‘‘yes’’ or ‘‘no’’ t...

  8. Bayesian Inference for Neural Electromagnetic Source Localization: Analysis of MEG Visual Evoked Activity

    International Nuclear Information System (INIS)

    We have developed a Bayesian approach to the analysis of neural electromagnetic (MEG/EEG) data that can incorporate or fuse information from other imaging modalities and addresses the ill-posed inverse problem by sampling the many different solutions which could have produced the given data. From these samples one can draw probabilistic inferences about regions of activation. Our source model assumes a variable number of variable-size cortical regions of stimulus-correlated activity. An active region consists of locations on the cortical surface within a sphere centered on some location in cortex. The number and radii of active regions can vary up to defined maximum values. The goal of the analysis is to determine the posterior probability distribution for the set of parameters that govern the number, location, and extent of active regions. Markov Chain Monte Carlo is used to generate a large sample of sets of parameters distributed according to the posterior distribution. This sample is representative of the many different source distributions that could account for the given data, and allows identification of probable (i.e. consistent) features across solutions. Examples of the use of this analysis technique with both simulated and empirical MEG data are presented

  9. Bayesian Inference for Neural Electromagnetic Source Localization: Analysis of MEG Visual Evoked Activity

    Energy Technology Data Exchange (ETDEWEB)

    George, J.S.; Schmidt, D.M.; Wood, C.C.

    1999-02-01

    We have developed a Bayesian approach to the analysis of neural electromagnetic (MEG/EEG) data that can incorporate or fuse information from other imaging modalities and addresses the ill-posed inverse problem by sampling the many different solutions which could have produced the given data. From these samples one can draw probabilistic inferences about regions of activation. Our source model assumes a variable number of variable-size cortical regions of stimulus-correlated activity. An active region consists of locations on the cortical surface within a sphere centered on some location in cortex. The number and radii of active regions can vary up to defined maximum values. The goal of the analysis is to determine the posterior probability distribution for the set of parameters that govern the number, location, and extent of active regions. Markov Chain Monte Carlo is used to generate a large sample of sets of parameters distributed according to the posterior distribution. This sample is representative of the many different source distributions that could account for the given data, and allows identification of probable (i.e. consistent) features across solutions. Examples of the use of this analysis technique with both simulated and empirical MEG data are presented.

  10. Applying a Bayesian Approach to Identification of Orthotropic Elastic Constants from Full Field Displacement Measurements

    Directory of Open Access Journals (Sweden)

    Le Riche R.

    2010-06-01

    Full Text Available A major challenge in the identification of material properties is handling different sources of uncertainty in the experiment and in the modelling of the experiment, and estimating the resulting uncertainty in the identified properties. Numerous improvements in identification methods have provided increasingly accurate estimates of various material properties. However, characterizing the uncertainty in the identified properties is still relatively crude. Different material properties obtained from a single test are not obtained with the same confidence. Typically the highest uncertainty is associated with the properties to which the experiment is least sensitive. In addition, the uncertainty in different properties can be strongly correlated, so that obtaining only variance estimates may be misleading. A possible approach for handling the different sources of uncertainty and estimating the uncertainty in the identified properties is the Bayesian method. This method was introduced in the late 1970s in the context of identification [1] and has been applied since to different problems, notably identification of elastic constants from plate vibration experiments [2]-[4]. The applications of the method to these classical pointwise tests involved only a small number of measurements (typically ten natural frequencies in the previously cited vibration tests), which facilitated the application of the Bayesian approach. For identifying elastic constants, full field strain or displacement measurements provide a high number of measured quantities (one measurement per image pixel) and hence a promise of smaller uncertainties in the properties. However, the high number of measurements also represents a major computational challenge in applying the Bayesian approach to full field measurements. To address this challenge we propose an approach based on the proper orthogonal decomposition (POD) of the full fields in order to drastically reduce their
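    As an illustration of the reduction step, a POD basis can be computed from a snapshot matrix by a singular value decomposition; each full-field measurement is then summarized by a handful of POD coefficients instead of one value per pixel. The sketch below uses synthetic low-rank fields (an illustrative assumption; the paper's fields come from the actual experiment and its model).

      import numpy as np

      rng = np.random.default_rng(1)
      n_pixels, n_snapshots = 10000, 50

      # Hypothetical full-field snapshots with low-rank structure:
      # 3 underlying spatial modes plus measurement noise.
      modes = rng.normal(size=(n_pixels, 3))
      snapshots = modes @ rng.normal(size=(3, n_snapshots))
      snapshots += 0.01 * rng.normal(size=(n_pixels, n_snapshots))

      mean_field = snapshots.mean(axis=1, keepdims=True)
      U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

      # Keep the few modes capturing 99% of the energy.
      k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99)) + 1
      basis = U[:, :k]

      # Each measured field is now summarized by k coefficients.
      coeffs = basis.T @ (snapshots[:, [0]] - mean_field)
      print(f"{n_pixels} pixels compressed to {k} POD coefficients")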

  11. Fuzzy neural network methodology applied to medical diagnosis

    Science.gov (United States)

    Gorzalczany, Marian B.; Deutsch-Mcleish, Mary

    1992-01-01

    This paper presents a technique for building expert systems that combines the fuzzy-set approach with artificial neural network structures. This technique can effectively deal with the two types of medical knowledge, nonfuzzy and fuzzy, that usually contribute to the process of medical diagnosis. Nonfuzzy numerical data is obtained from medical tests. Fuzzy linguistic rules describing the diagnosis process are provided by a human expert. The proposed method has been successfully applied in veterinary medicine as a support system in the diagnosis of canine liver diseases.

  12. Study of Single Top Quark Production Using Bayesian Neural Networks With D0 Detector at the Tevatron

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, Jyoti [Panjab Univ., Chandigarh (India)

    2012-01-01

    The top quark, the heaviest and most intriguing among the six known quarks, can be created via two independent production mechanisms in $p\bar{p}$ collisions. The primary mode, strong $t\bar{t}$ pair production from a $gt\bar{t}$ vertex, was used by the D0 and CDF collaborations to establish the existence of the top quark in March 1995. The second mode is the electroweak production of a single top quark or antiquark, which was first observed in March 2009. Since single top quarks are produced at hadron colliders through a $Wtb$ vertex, they provide a direct probe of the nature of the $Wtb$ coupling and of the Cabibbo-Kobayashi-Maskawa matrix element $V_{tb}$. This mechanism is therefore a sensitive probe of several standard-model and beyond-standard-model parameters, such as anomalous $Wtb$ couplings. In this thesis, we measure the cross section of electroweak top quark production in three different modes, the $s+t$, $s$ and $t$-channels, using a technique based on Bayesian neural networks. This technique is applied to the analysis of 5.4 fb$^{-1}$ of data collected by the D0 detector. From a comparison of the Bayesian neural network discriminants between data and the signal-background model using Bayesian statistics, the cross sections of electroweak top quark production have been measured as: \[\sigma(p\bar{p}\to tb+X, tqb+X) = 3.11^{+0.77}_{-0.71}\;\rm pb\] \[\sigma(p\bar{p}\to tb+X) = 0.72^{+0.44}_{-0.43}\;\rm pb\] \[\sigma(p\bar{p}\to tqb+X) = 2.92^{+0.87}_{-0.73}\;\rm pb\] The $s+t$-channel has a Gaussian significance of $4.7\sigma$, the $s$-channel $0.9\sigma$ and the $t$-channel $4.7\sigma$. The results are consistent with the standard model predictions within one standard deviation. Combining these results with those of two other analyses (using different MVA techniques) gives the improved results \[\sigma(p\bar{p}\to tb+X, tqb+X) = 3.43^{+0.73}_{-0.74}\;\rm pb\] \[\sigma

  13. Bayesian Estimation Applied to Multiple Species: Towards cosmology with a million supernovae

    CERN Document Server

    Kunz, M; Hlozek, R; Kunz, Martin; Bassett, Bruce A.; Hlozek, Renee

    2006-01-01

    Observed data is often contaminated by undiscovered interlopers, leading to biased parameter estimation. Here we present BEAMS (Bayesian Estimation Applied to Multiple Species), which significantly improves on the standard maximum likelihood approach in the case where the probability for each data point being `pure' is known. We discuss the application of BEAMS to future Type Ia supernovae (SNIa) surveys, such as LSST, which are projected to deliver over a million supernovae lightcurves without spectra. The multi-band lightcurves for each candidate will provide a probability of being Ia (pure), but the full sample will be significantly contaminated with other types of supernovae and transients. Given a sample of N supernovae with mean probability P of being Ia, BEAMS delivers parameter constraints equivalent to those from NP spectroscopically-confirmed SNIa. In addition BEAMS can be used simultaneously to tease apart different families of data and to recover properties of the underlying distributions of those families (e.g. ...
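    The heart of BEAMS is a mixture likelihood in which each point's known purity probability weights the 'pure' and 'contaminant' population densities. A minimal sketch, assuming illustrative Gaussian populations and toy typing probabilities (the real application uses SNIa lightcurve fits):

      import numpy as np

      def norm_pdf(x, m, s):
          return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

      def beams_loglike(theta, mu_obs, p_ia):
          # Each point is Ia with probability p_ia, else a contaminant:
          # toy populations N(theta, 0.1) and N(theta + 1, 0.5).
          like = (p_ia * norm_pdf(mu_obs, theta, 0.1)
                  + (1.0 - p_ia) * norm_pdf(mu_obs, theta + 1.0, 0.5))
          return np.sum(np.log(like))

      rng = np.random.default_rng(2)
      n = 500
      is_ia = rng.random(n) < 0.7
      mu_obs = np.where(is_ia, rng.normal(0.0, 0.1, n), rng.normal(1.0, 0.5, n))
      p_ia = np.where(is_ia, 0.9, 0.2)   # typing probabilities from lightcurves

      thetas = np.linspace(-0.5, 0.5, 201)
      best = thetas[np.argmax([beams_loglike(t, mu_obs, p_ia) for t in thetas])]
      print("maximum-likelihood theta:", best)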

  14. Application of Bayesian Neural Networks to Energy Reconstruction in EAS Experiments for ground-based TeV Astrophysics

    CERN Document Server

    Bai, Ying; Lan, JieQin; Gao, WeiWei

    2016-01-01

    A toy detector array has been designed to simulate the detection of cosmic rays in Extended Air Shower (EAS) experiments for ground-based TeV astrophysics. The primary energies of protons from the Monte-Carlo simulation have been reconstructed with a Bayesian neural network (BNN) algorithm and with a standard method similar to that of the LHAASO experiment [lhaaso-ma], respectively. The energy reconstruction using BNNs has been compared with that using the standard method: the energy resolution is significantly improved with BNNs, and the improvement is more pronounced for high-energy protons than for low-energy ones.

  15. Application of Bayesian neural networks to energy reconstruction in EAS experiments for ground-based TeV astrophysics

    Science.gov (United States)

    Bai, Y.; Xu, Y.; Pan, J.; Lan, J. Q.; Gao, W. W.

    2016-07-01

    A toy detector array is designed to detect a shower generated by the interaction between a TeV cosmic ray and the atmosphere. In the present paper, the primary energies of showers detected by the detector array are reconstructed with a Bayesian neural network (BNN) algorithm and with a standard method similar to that of the LHAASO experiment [1], respectively. Compared to the standard method, the energy resolution is significantly improved with the BNNs, and the improvement is more pronounced for high-energy showers than for low-energy ones.
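    A Bayesian neural network places a prior over the network weights and averages predictions over posterior weight samples, which is what yields energy estimates with calibrated uncertainties. A minimal sketch with a one-hidden-layer network and random-walk Metropolis on toy data (the observable, network size and sampler are illustrative assumptions, not the papers' setup):

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy calibration set: one shower observable x vs. a target log-energy y.
      x = rng.uniform(-2.0, 2.0, 200)
      y = np.tanh(x) + 0.1 * rng.normal(size=x.size)

      H = 8  # hidden units

      def net(w, x):
          # One-hidden-layer tanh network; w packs [w1 (H), b1 (H), w2 (H), b2 (1)].
          w1, b1, w2, b2 = np.split(w, [H, 2 * H, 3 * H])
          return np.tanh(np.outer(x, w1) + b1) @ w2 + b2[0]

      def log_post(w):
          # Gaussian likelihood (sigma = 0.1) plus a standard-normal weight prior.
          resid = y - net(w, x)
          return -0.5 * np.sum(resid**2) / 0.1**2 - 0.5 * np.sum(w**2)

      # Random-walk Metropolis over the weight vector.
      w = 0.1 * rng.normal(size=3 * H + 1)
      lp = log_post(w)
      samples = []
      for i in range(15000):
          w_prop = w + 0.02 * rng.normal(size=w.size)
          lp_prop = log_post(w_prop)
          if np.log(rng.random()) < lp_prop - lp:
              w, lp = w_prop, lp_prop
          if i >= 5000 and i % 100 == 0:   # keep thinned post-burn-in samples
              samples.append(w.copy())

      # Posterior-averaged prediction with an uncertainty band at one test point.
      preds = np.array([net(s, np.array([0.5])) for s in samples]).ravel()
      print("prediction at x=0.5:", preds.mean(), "+/-", preds.std())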

  16. EDITORIAL: Special issue on applied neurodynamics: from neural dynamics to neural engineering

    Science.gov (United States)

    Chiel, Hillel J.; Thomas, Peter J.

    2011-12-01

    The three-body problem (e.g., the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949). The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006). The FitzHugh-Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools

  17. GMDH and neural networks applied in temperature sensors monitoring

    International Nuclear Information System (INIS)

    In this work a monitoring system was developed based on the Group Method of Data Handling (GMDH) and artificial neural network (ANN) methodologies. The methodology was applied to the IEA-R1 research reactor at IPEN, using a database obtained from a theoretical model of the reactor. The IEA-R1 research reactor is a 5 MW pool-type reactor, cooled and moderated by light water, which uses graphite and beryllium as reflectors. The theoretical model was developed using the Matlab GUIDE toolbox. The equations are based on the IEA-R1 mass and energy inventory balance, and physical as well as operational aspects are taken into consideration. In this methodology, the outputs of the GMDH algorithm are used as input variables to the ANNs. The results obtained using the GMDH and ANNs together were better than those obtained using only ANNs. (author)

  18. Assessing uncertainty in climate change impacts on water resources: Bayesian neural network approach

    International Nuclear Information System (INIS)

    Climate change impact studies on water resources have so far provided results difficult to use for policy decisions and the planning of adaptation measures because of the lack of robust uncertainty estimates. There are various sources of uncertainty due to the global circulation models (GCMs) or the regional climate models (RCMs), the emission scenarios, the downscaling techniques, and the hydrological models. The estimation of the overall impact of those uncertainties on future streamflow or reservoir inflow simulations at the watershed scale remains a difficult and challenging task. The use of multi-model super-ensembles in order to capture the wide range of uncertainties is cumbersome and requires large computational and human resources. As an alternative, a Bayesian Neural Network (BNN) approach is proposed as an effective hydrologic modeling tool for simulating future flows with uncertainty estimates. The BNN model is used with two versions of Canadian GCMs (CGCM1 and CGCM2), two emission scenarios (SRES B2 and IPCC IS92a), and one well established statistical downscaling model (SDSM) to simulate daily river flow and reservoir inflow in the Serpent River and the Chute-du-Diable watersheds in northern Quebec. It is found that the 95% uncertainty bands of the BNN mean ensemble flow (i.e. the flow simulated using the mean ensemble of downscaled meteorological variables) are capable of encompassing all other possible flows corresponding to the various individual downscaled meteorological ensembles, whatever the CGCM and emission scenario used. Specifically, this indicates that the BNN model confidence intervals are capable of including all possible flow variations due to the various ensembles of downscaled meteorological variables from two different CGCMs and emission scenarios. Furthermore, the confidence limits of the BNN model also encompass the flows simulated using another conceptual hydrologic model (namely HBV), whatever the GCM and the emission scenario.

  19. A Simplified Bayesian Network Model Applied in Crop or Animal Disease Diagnosis

    Science.gov (United States)

    Yu, Helong; Chen, Guifen; Liu, Dayou

    Bayesian networks are a powerful tool to represent and deal with uncertain knowledge, and there is much uncertainty in crop and animal disease. The construction of a Bayesian network needs much data and knowledge, so when data are scarce, other methods must be adopted to construct an effective network. This paper introduces a disease diagnosis model based on a Bayesian network which is two-layered and obeys the noisy-OR assumption. Based on the two-layered structure, the relationships between nodes are obtained from domain knowledge. Based on the noisy-OR model, the conditional probability tables are elicited by three methods: parameter learning, domain experts, and the existing certainty factor model. In order to implement this model, a Bayesian network tool was developed. Finally, an example of cow disease diagnosis was implemented, showing that the model discussed in this paper is an effective tool for simple disease diagnosis in the crop and animal fields.
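    Under the noisy-OR assumption the paper relies on, the probability that a symptom is absent is the product of the independent inhibition probabilities of its present parent diseases (times a leak term for unmodeled causes), so the full conditional probability table never has to be elicited. A minimal sketch with hypothetical link probabilities:

      def noisy_or(present_diseases, link_prob, leak=0.0):
          """P(symptom present | the given parent diseases are present).

          Noisy-OR: the symptom is absent only if every present cause
          (and the leak) independently fails to produce it."""
          p_absent = 1.0 - leak
          for d in present_diseases:
              p_absent *= 1.0 - link_prob[d]
          return 1.0 - p_absent

      # Hypothetical link probabilities, as elicited from experts or learned.
      link_prob = {"foot_and_mouth": 0.8, "mastitis": 0.3}
      print(noisy_or(["foot_and_mouth", "mastitis"], link_prob, leak=0.05))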

  20. Neural Networks Applied to Thermal Damage Classification in Grinding Process

    OpenAIRE

    Spadotto, Marcelo M.; Aguiar, Paulo Roberto de; Sousa, Carlos C. P.; Bianchi, Eduardo C.

    2008-01-01

    The use of a multi-layer perceptron neural network trained with the back-propagation algorithm produced very good results. Tests carried out to optimize the learning capacity of the networks were of utmost importance in the training phase, where the optimum values for the number of hidden-layer neurons, the learning rate and the momentum were determined for each structure. Once the architecture of the neural network was established with those optimum values, the mean squar...

  1. Artificial neural networks applied to forecasting time series

    OpenAIRE

    Montaño Moreno, Juan José; Palmer Pol, Alfonso; Muñoz Gracia, María del Pilar

    2011-01-01

    This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, and also a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparativ...

  2. Are Student Evaluations of Teaching Effectiveness Valid for Measuring Student Learning Outcomes in Business Related Classes? A Neural Network and Bayesian Analyses

    Science.gov (United States)

    Galbraith, Craig S.; Merrill, Gregory B.; Kline, Doug M.

    2012-01-01

    In this study we investigate the underlying relational structure between student evaluations of teaching effectiveness (SETEs) and achievement of student learning outcomes in 116 business related courses. Utilizing traditional statistical techniques, a neural network analysis and a Bayesian data reduction and classification algorithm, we find…

  3. Applying artificial neural networks in nuclear power plant diagnostics

    International Nuclear Information System (INIS)

    Artificial neural networks are very effective tools in solving failure detection problems in complex plants such as nuclear power reactors and their subsidiary equipment, as they can perform parallel realizations of complicated classification processes. In the paper, after a brief historical and methodological introduction, a neural network based failure detection system is presented which has been developed for use in the PWR units of the Nuclear Power Plant Paks (Hungary). A cellular processor array has been used to realize a back-propagation type neural network which can detect changes in the spectral features of the measured signals through off-line supervised learning processes. (authors)

  4. Can Artificial Neural Networks be Applied in Seismic Prediction? Preliminary Analysis Applying Radial Topology. Case: Mexico

    CERN Document Server

    Mota-Hernandez, Cinthya; Alvarado-Corona, Rafael

    2014-01-01

    Tectonic earthquakes of high magnitude can cause considerable losses in terms of human lives, economy and infrastructure, among others. According to an evaluation published by the U.S. Geological Survey, 30 earthquakes have greatly impacted Mexico from the end of the XIX century to the present. Based upon data from the National Seismological Service, in the period between January 1, 2006 and May 1, 2013 there occurred 5,826 earthquakes whose magnitude was greater than 4.0 on the Richter scale (25.54% of the total of earthquakes registered on the national territory), with the Pacific Plate and the Cocos Plate being the most important ones. This document describes the development of an Artificial Neural Network (ANN) based on radial topology which seeks to generate a prediction, with an error margin lower than 20%, of the probability of a future earthquake. One of the main questions is: can artificial neural networks be applied in seismic forecast...

  5. Topographic factor analysis: a Bayesian model for inferring brain networks from neural data.

    Directory of Open Access Journals (Sweden)

    Jeremy R Manning

    Full Text Available The neural patterns recorded during a neuroscientific experiment reflect complex interactions between many brain regions, each comprising millions of neurons. However, the measurements themselves are typically abstracted from that underlying structure. For example, functional magnetic resonance imaging (fMRI) datasets comprise a time series of three-dimensional images, where each voxel in an image roughly reflects the activity of the brain structure(s) located at the corresponding point in space at the time the image was collected. fMRI data often exhibit strong spatial correlations, whereby nearby voxels behave similarly over time as the underlying brain structure modulates its activity. Here we develop topographic factor analysis (TFA), a technique that exploits spatial correlations in fMRI data to recover the underlying structure that the images reflect. Specifically, TFA casts each brain image as a weighted sum of spatial functions. The parameters of those spatial functions, which may be learned by applying TFA to an fMRI dataset, reveal the locations and sizes of the brain structures activated while the data were collected, as well as the interactions between those structures.
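    The TFA forward model is compact: an image is a weighted sum of parameterized spatial functions, for example Gaussian "blobs". The sketch below renders one toy image from that model (fitting the centers, widths and weights to real fMRI data is the actual inference problem and is not reproduced here):

      import numpy as np

      def rbf_image(grid, centers, widths, weights):
          """Render one 'brain image' as a weighted sum of Gaussian spatial sources."""
          # grid: (n_voxels, 3); centers: (K, 3); widths, weights: (K,)
          d2 = ((grid[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * widths**2)) @ weights

      # Toy voxel grid and two hypothetical sources.
      ax = np.linspace(-1.0, 1.0, 10)
      grid = np.stack(np.meshgrid(ax, ax, ax), -1).reshape(-1, 3)
      centers = np.array([[0.5, 0.0, 0.0], [-0.4, 0.3, 0.2]])
      image = rbf_image(grid, centers,
                        widths=np.array([0.3, 0.2]),
                        weights=np.array([1.0, -0.5]))
      print(image.shape)   # one value per voxel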

  6. ECO INVESTMENT PROJECT MANAGEMENT THROUGH TIME APPLYING ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    Tamara Gvozdenović

    2007-06-01

    Full Text Available The concept of project management expresses an indispensable approach to investment projects. Time is often the most important factor in these projects. An artificial neural network is a data-processing paradigm inspired by the biological brain, and it is used in numerous different fields, among them project management. This research is oriented to the application of artificial neural networks in managing the time of investment projects. The artificial neural networks are used to define the optimistic, the most probable and the pessimistic times in the PERT method. The Matlab Neural Network Toolbox program package is used for data simulation. A feed-forward back-propagation network is chosen.
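    For reference, PERT turns the three time estimates into an expected activity duration and variance as te = (a + 4m + b)/6 and var = ((b - a)/6)^2; in the record's approach the ANN supplies a, m and b. A minimal sketch with hypothetical activity times:

      def pert(a, m, b):
          """Expected duration and variance from PERT three-point estimates."""
          te = (a + 4.0 * m + b) / 6.0
          var = ((b - a) / 6.0) ** 2
          return te, var

      # Hypothetical activity times (days), e.g. as produced by the trained ANN.
      print(pert(a=4.0, m=6.0, b=11.0))   # -> (6.5, ~1.36)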

  7. Artificial Neural Networks Applied To Landslide Hazard Assessment

    Science.gov (United States)

    Casagli, N.; Catani, F.; Ermini, L.

    Landslide hazard mapping is often performed through the identification and analysis of hillslope instability factors. GIS techniques are widely applied for the management of hillslope factors as thematic data, rated by the attribution of scores based on the assumed role played by each factor in controlling the development of a sliding process. Other more refined methods, based on the principle that the present and the past are keys to the future, have also been developed, allowing less subjective analyses in which landslide susceptibility is assessed by statistical relationships between past landslides and the hillslope instability factors. The objective of this research is to define a method able to foresee landslide susceptibility through the application of Artificial Neural Networks (ANN). The Riomaggiore catchment, a sub-watershed of the Reno River basin located in the Northern Apennines half way between Florence and Bologna, was chosen as the test site. The utilized ANN (AiNet 1.25) was trained on vector-based GIS data corresponding to five hillslope factors: a) geology, b) slope, c) curvature, d) land cover, e) contributing area. The intersection of the hillslope factors, all ranked in nominal scales, singled out 3263 homogeneous domains (Unique Condition Units) containing unique combinations of hillslope factors. The final model was formed by vectors in which the hillslope factors, organized as Boolean variables, are represented by 20 binary numbers. The comparison between the most recent inventory of landslides in the Riomaggiore catchment and the hazardous areas predicted by the ANN showed very satisfactory results and allowed us to validate the methodology.

  8. EXONEST: Bayesian model selection applied to the detection and characterization of exoplanets via photometric variations

    Energy Technology Data Exchange (ETDEWEB)

    Placek, Ben; Knuth, Kevin H. [Physics Department, University at Albany (SUNY), Albany, NY 12222 (United States); Angerhausen, Daniel, E-mail: bplacek@albany.edu, E-mail: kknuth@albany.edu, E-mail: daniel.angerhausen@gmail.com [Department of Physics, Applied Physics, and Astronomy, Rensselaer Polytechnic Institute, Troy, NY 12180 (United States)

    2014-11-10

    EXONEST is an algorithm dedicated to detecting and characterizing the photometric signatures of exoplanets, which include reflection and thermal emission, Doppler boosting, and ellipsoidal variations. Using Bayesian inference, we can test between competing models that describe the data as well as estimate model parameters. We demonstrate this approach by testing circular versus eccentric planetary orbital models, as well as testing for the presence or absence of four photometric effects. In addition to using Bayesian model selection, a unique aspect of EXONEST is the potential capability to distinguish between reflective and thermal contributions to the light curve. A case study is presented using Kepler data recorded from the transiting planet KOI-13b. By considering only the nontransiting portions of the light curve, we demonstrate that it is possible to estimate the photometrically relevant model parameters of KOI-13b. Furthermore, Bayesian model testing confirms that the orbit of KOI-13b has a detectable eccentricity.

  9. Prediction of fracture toughness temperature dependence applying neural network

    Czech Academy of Sciences Publication Activity Database

    Dlouhý, Ivo; Hadraba, Hynek; Chlup, Zdeněk; Šmída, T.

    2011-01-01

    Roč. 11, č. 1 (2011), s. 9-14. ISSN 1451-3749 R&D Projects: GA ČR(CZ) GAP108/10/0466 Institutional research plan: CEZ:AV0Z20410507 Keywords : brittle to ductile transition * fracture toughness * artificial neural network * steels Subject RIV: JL - Materials Fatigue, Friction Mechanics

  10. LVQ and backpropagation neural networks applied to NASA SSME data

    Science.gov (United States)

    Doniere, Timothy F.; Dhawan, Atam P.

    1993-01-01

    Feedfoward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.

  11. Convolutional Neural Networks Applied to House Numbers Digit Classification

    CERN Document Server

    Sermanet, Pierre; LeCun, Yann

    2012-01-01

    We classify digits of real-world house numbers using convolutional neural networks (ConvNets). ConvNets are hierarchical feature learning neural networks whose structure is biologically inspired. Unlike many popular vision approaches that are hand-designed, ConvNets can automatically learn a unique set of features optimized for a given task. We augmented the traditional ConvNet architecture by learning multi-stage features and by using Lp pooling and establish a new state-of-the-art of 94.85% accuracy on the SVHN dataset (45.2% error improvement). Furthermore, we analyze the benefits of different pooling methods and multi-stage features in ConvNets. The source code and a tutorial are available at eblearn.sf.net.
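    Lp pooling, one of the ingredients analyzed in the paper, replaces max or average pooling by the Lp norm of each window: p = 1 recovers (scaled) average pooling and large p approaches max pooling. A minimal numpy sketch over non-overlapping windows (the paper's networks pool learned feature maps, not raw arrays):

      import numpy as np

      def lp_pool(x, p=2.0, k=2):
          """Lp pooling over non-overlapping k x k windows of a 2D feature map."""
          h, w = x.shape
          x = np.abs(x[:h - h % k, :w - w % k]) ** p
          x = x.reshape(h // k, k, w // k, k).sum(axis=(1, 3))
          return x ** (1.0 / p)

      fmap = np.arange(16, dtype=float).reshape(4, 4)
      print(lp_pool(fmap, p=2.0))    # 2x2 pooled output
      print(lp_pool(fmap, p=16.0))   # larger p approaches max pooling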

  12. Radial basis function neural networks applied to NASA SSME data

    Science.gov (United States)

    Wheeler, Kevin R.; Dhawan, Atam P.

    1993-01-01

    This paper presents a brief report on the application of Radial Basis Function Neural Networks (RBFNN) to the prediction of sensor values for fault detection and diagnosis of the Space Shuttle's Main Engines (SSME). The location of the Radial Basis Function (RBF) node centers was determined with a K-means clustering algorithm. A neighborhood operation about these center points was used to determine the variances of the individual processing nodes.
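    A minimal sketch of that construction: centers from k-means, widths from a neighborhood operation (here, the distance to the nearest other center, an illustrative choice), and output weights by linear least squares on the RBF activations. The data are synthetic stand-ins for the SSME sensor signals.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      rng = np.random.default_rng(4)

      # Synthetic stand-in for a sensor signal: predict y from x.
      x = rng.uniform(0.0, 10.0, (300, 1))
      y = np.sin(x[:, 0]) + 0.05 * rng.normal(size=300)

      # 1) RBF centers from k-means clustering.
      centers, _ = kmeans2(x, 15, minit="++")

      # 2) Widths from a neighborhood operation: distance to the nearest other center.
      d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
      np.fill_diagonal(d, np.inf)
      widths = d.min(axis=1)

      # 3) Output weights by linear least squares on the RBF activations.
      phi = np.exp(-np.linalg.norm(x[:, None] - centers[None, :], axis=-1) ** 2
                   / (2.0 * widths**2))
      w, *_ = np.linalg.lstsq(phi, y, rcond=None)
      print("training RMSE:", np.sqrt(np.mean((phi @ w - y) ** 2)))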

  13. The Theory of Neural Cognition Applied to Robotics

    OpenAIRE

    Claude F. Touzet

    2015-01-01

    The Theory of neural Cognition (TnC) states that the brain does not process information, it only represents information (i.e., it is 'only' a memory). The TnC explains how a memory can become an actor pursuing various goals, and proposes explanations concerning the implementation of a large variety of cognitive abilities, such as attention, memory, language, planning, intelligence, emotions, motivation, pleasure, consciousness and personality. The explanatory power of this new framework exten...

  14. Proteomics Applied to Porcine and Human Neural Stem Cell Differentiation

    Czech Academy of Sciences Publication Activity Database

    Mairychová, Kateřina; Skalníková, Helena; Tylečková, Jiřina; Halada, Petr; Marsala, M.; Kovářová, Hana

    Liběchov : Institute of Animal Physiology and Genetics AS CR, v.v.i, 2010. s. 61-61. [Informal Proteomic Meeting 2010. 09.11.2010-10.11.2010, Liblice] R&D Projects: GA MŠk 1M0538; GA MŠk(CZ) ME10044 Institutional research plan: CEZ:AV0Z50450515; CEZ:AV0Z50200510 Keywords : proteomics * cell differentiation * neural stem cells Subject RIV: FH - Neurology

  15. Prediction of fracture toughness temperature dependence applying neural network

    Czech Academy of Sciences Publication Activity Database

    Dlouhý, Ivo; Hadraba, Hynek; Chlup, Zdeněk; Šmida, T.

    Metz: LaBPS, 2010, s. 1-9. [NT2F10 – New Trends in Fatigue and Fracture Congress. Metz (FR), 30.08.2010-01.09.2010] R&D Projects: GA ČR(CZ) GAP107/10/0361 Institutional research plan: CEZ:AV0Z20410507 Keywords : brittle to ductile transition * fracture toughness * artificial neural network

  16. A hybrid approach to monthly streamflow forecasting: Integrating hydrological model outputs into a Bayesian artificial neural network

    Science.gov (United States)

    Humphrey, Greer B.; Gibbs, Matthew S.; Dandy, Graeme C.; Maier, Holger R.

    2016-09-01

    Monthly streamflow forecasts are needed to support water resources decision making in the South East of South Australia, where baseflow represents a significant proportion of the total streamflow and soil moisture and groundwater are important predictors of runoff. To address this requirement, the utility of a hybrid monthly streamflow forecasting approach is explored, whereby simulated soil moisture from the GR4J conceptual rainfall-runoff model is used to represent initial catchment conditions in a Bayesian artificial neural network (ANN) statistical forecasting model. To assess the performance of this hybrid forecasting method, a comparison is undertaken of the relative performances of the Bayesian ANN, the GR4J conceptual model and the hybrid streamflow forecasting approach for producing 1-month ahead streamflow forecasts at three key locations in the South East of South Australia. Particular attention is paid to the quantification of uncertainty in each of the forecast models and the potential for reducing forecast uncertainty by using the hybrid approach is considered. Case study results suggest that the hybrid models developed in this study are able to take advantage of the complementary strengths of both the ANN models and the GR4J conceptual models. This was particularly the case when forecasting high flows, where the hybrid models were shown to outperform the two individual modelling approaches in terms of the accuracy of the median forecasts, as well as reliability and resolution of the forecast distributions. In addition, the forecast distributions generated by the hybrid models were up to 8 times more precise than those based on climatology; thus, providing a significant improvement on the information currently available to decision makers.

  17. Bayesian nonlinear filtering using quadrature and cubature rules applied to sensor data fusion for positioning

    OpenAIRE

    Fernandez Prades, Carles; Vilà Valls, Jordi

    2010-01-01

    This paper shows the applicability of recently-developed Gaussian nonlinear filters to sensor data fusion for positioning purposes. After providing a brief review of Bayesian nonlinear filtering, we specifically address square-root, derivative-free algorithms based on the Gaussian assumption and approximation rules for numerical integration, namely the Gauss-Hermite quadrature rule and the cubature rule. Then, we propose a motion model based on the observations taken by an Inertial Measurement U...
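    The quadrature idea can be shown in one dimension: a Gaussian expectation is approximated by a weighted sum of the nonlinearity evaluated at Gauss-Hermite sigma points, E[f(x)] ~ (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*t_i). A minimal sketch (the filters in the paper apply this, and its cubature analogue, inside the prediction and update steps):

      import numpy as np

      def gh_expectation(f, mu, sigma, order=10):
          """E[f(x)] for x ~ N(mu, sigma^2) via Gauss-Hermite quadrature."""
          t, w = np.polynomial.hermite.hermgauss(order)
          return (w * f(mu + np.sqrt(2.0) * sigma * t)).sum() / np.sqrt(np.pi)

      # Propagating a Gaussian through a nonlinearity, as in a filter update.
      print(gh_expectation(np.sin, mu=0.3, sigma=0.5))   # quadrature
      print(np.sin(0.3) * np.exp(-0.5 * 0.5**2))         # exact E[sin(x)]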

  18. Bayesian estimation for a parametric Markov Renewal model applied to seismic data

    OpenAIRE

    Epifani, I.; Ladelli, L.; Pievatolo, A.

    2014-01-01

    This paper presents a complete methodology for Bayesian inference on a semi-Markov process, from the elicitation of the prior distribution, to the computation of posterior summaries, including a guidance for its implementation. The inter-occurrence times (conditional on the transition between two given states) are assumed to be Weibull-distributed. We examine the elicitation of the joint prior density of the shape and scale parameters of the Weibull distributions, deriving a specific class of...

  19. Bayesian Statistical Analysis Applied to NAA Data for Neutron Flux Spectrum Determination

    Science.gov (United States)

    Chiesa, D.; Previtali, E.; Sisti, M.

    2014-04-01

    In this paper, we present a statistical method, based on Bayesian statistics, to evaluate the neutron flux spectrum from the activation data of different isotopes. The experimental data were acquired during a neutron activation analysis (NAA) experiment [A. Borio di Tigliole et al., Absolute flux measurement by NAA at the Pavia University TRIGA Mark II reactor facilities, ENC 2012 - Transactions Research Reactors, ISBN 978-92-95064-14-0, 22 (2012)] performed at the TRIGA Mark II reactor of Pavia University (Italy). In order to evaluate the neutron flux spectrum, subdivided into energy groups, we must solve a system of linear equations containing the grouped cross sections and the activation rate data. We solve this problem with a Bayesian statistical analysis, including the uncertainties of the coefficients and the a priori information about the neutron flux. A program for the analysis of Bayesian hierarchical models, based on Markov Chain Monte Carlo (MCMC) simulations, is used to define the statistical model of the problem and solve it. The energy group fluxes and their uncertainties are then determined with great accuracy and the correlations between the groups are analyzed. Finally, the dependence of the results on the choice of prior distribution and on the group cross section data is investigated to confirm the reliability of the analysis.
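    A simplified stand-in for that calculation: with activation rates R = A*phi (grouped cross sections times group fluxes), Gaussian noise and a Gaussian prior on phi, the posterior is available in closed form, which makes the role of the a priori flux information easy to see. The paper's hierarchical MCMC model generalizes this; all numbers below are toy assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy setup: 6 activation-rate measurements, 3 energy-group fluxes.
      A = rng.uniform(0.1, 1.0, (6, 3))        # grouped cross sections (toy)
      phi_true = np.array([5.0, 2.0, 0.5])     # "true" group fluxes
      R = A @ phi_true + 0.05 * rng.normal(size=6)

      noise_var = 0.05**2
      prior_mean = np.array([4.0, 3.0, 1.0])   # a priori flux information
      prior_cov = np.diag([4.0, 4.0, 4.0])

      # Conjugate Gaussian update: closed-form posterior over the group fluxes.
      post_cov = np.linalg.inv(A.T @ A / noise_var + np.linalg.inv(prior_cov))
      post_mean = post_cov @ (A.T @ R / noise_var
                              + np.linalg.inv(prior_cov) @ prior_mean)
      print("posterior mean:", post_mean)
      print("posterior std: ", np.sqrt(np.diag(post_cov)))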

  20. Neural net classification of the γ- and p-images registered with the atmospheric Cherenkov technique; random search learning in feed-forward networks

    International Nuclear Information System (INIS)

    A new method of data analysis, based on mathematical models of neural nets (artificial neural nets), is applied for background rejection in γ-ray astronomy experiments. The results prove the advantages of the proposed technique compared to a Bayesian approach.

  1. Neural Implementation of Probabilistic Models of Cognition

    OpenAIRE

    Kharratzadeh, Milad; Shultz, Thomas R.

    2015-01-01

    Bayesian models of cognition hypothesize that human brains make sense of data by representing probability distributions and applying Bayes' rule to find the best explanation for available data. Understanding the neural mechanisms underlying probabilistic models remains important because Bayesian models provide a computational framework, rather than specifying mechanistic processes. Here, we propose a deterministic neural-network model which estimates and represents probability distributions f...

  2. A neural network applied to estimate Burr XII distribution parameters

    International Nuclear Information System (INIS)

    The Burr XII distribution can closely approximate many other well-known probability density functions such as the normal, gamma, lognormal, exponential distributions as well as Pearson type I, II, V, VII, IX, X, XII families of distributions. Considering a wide range of shape and scale parameters of the Burr XII distribution, it can have an important role in reliability modeling, risk analysis and process capability estimation. However, estimating parameters of the Burr XII distribution can be a complicated task and the use of conventional methods such as maximum likelihood estimation (MLE) and moment method (MM) is not straightforward. Some tables to estimate Burr XII parameters have been provided by Burr (1942) but they are not adequate for many purposes or data sets. Burr tables contain specific values of skewness and kurtosis and their corresponding Burr XII parameters. Using interpolation or extrapolation to estimate other values may provide inappropriate estimations. In this paper, we present a neural network to estimate Burr XII parameters for different values of skewness and kurtosis as inputs. A trained network is presented, and one can use it without previous knowledge about neural networks to estimate Burr XII distribution parameters. Accurate estimation of the Burr parameters is an extension of simulation studies.

  3. Neural network applied to elemental archaeological Marajoara ceramic compositions

    International Nuclear Information System (INIS)

    In the last decades several analytical techniques have been used in archaeological ceramics studies. However, instrumental neutron activation analysis, INAA, employing gamma-ray spectrometry seems to be the most suitable technique because it is a simple analytical method in its purely instrumental form. The purpose of this work was to determine the concentration of Ce, Co, Cr, Cs, Eu, Fe, Hf, K, La, Lu, Na, Nd, Rb, Sb, Sc, Sm, Ta, Tb, Th, U, Yb, and Zn in 160 original Marajoara ceramic fragments by INAA. The Marajoara ceramic culture was sophisticated and well developed. This culture reached its peak between the 5th and 14th centuries on Marajo Island, located in the Amazon River delta area in Brazil. The purpose of the quantitative data was to identify compositionally homogeneous groups within the database. Having this in mind, the data set was first converted to base-10 logarithms to compensate for the differences in magnitude between major elements and trace elements, and also to yield a closer to normal distribution for several trace elements. After that, the data were analyzed using the Mahalanobis distance, with Wilks' lambda as the critical value to identify outliers. The similarities among the samples were studied by means of cluster analysis, principal components analysis and discriminant analysis. Additional confirmation of these groups was made by using elemental concentration bivariate plots. The results showed that there were two very well defined groups in the data set. In addition, the database was studied using an artificial neural network with an unsupervised learning strategy known as self-organizing maps to classify the Marajoara ceramics. The experiments carried out showed that the self-organizing map artificial neural network is capable of discriminating ceramic fragments like multivariate statistical methods, and, again, the results showed that the database was formed by two groups. (author)

  4. Neural network applied to elemental archaeological Marajoara ceramic compositions

    Energy Technology Data Exchange (ETDEWEB)

    Toyota, Rosimeiri G.; Munita, Casimiro S., E-mail: rosimeiritoy@yahoo.com.b, E-mail: camunita@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Boscarioli, Clodis, E-mail: boscarioli@gmail.co [Universidade Estadual do Oeste do Parana, Cascavel, PR (Brazil). Centro de Ciencias Exatas e Tecnologicas. Colegiado de Informatica; Hernandez, Emilio D.M., E-mail: boscarioli@gmail.co [Universidade de Sao Paulo (USP), SP (Brazil). Escola Politecnica; Neves, Eduardo G.; Demartini, Celia C., E-mail: eduardo@pq.cnpq.b [Museu de Arqueologia e Etnologia (MAE/USP), Sao Paulo, SP (Brazil)

    2009-07-01

    In the last decades several analytical techniques have been used in archaeological ceramics studies. However, instrumental neutron activation analysis, INAA, employing gamma-ray spectrometry seems to be the most suitable technique because it is a simple analytical method in its purely instrumental form. The purpose of this work was to determine the concentration of Ce, Co, Cr, Cs, Eu, Fe, Hf, K, La, Lu, Na, Nd, Rb, Sb, Sc, Sm, Ta, Tb, Th, U, Yb, and Zn in 160 original Marajoara ceramic fragments by INAA. The Marajoara ceramic culture was sophisticated and well developed. This culture reached its peak between the 5th and 14th centuries on Marajo Island, located in the Amazon River delta area in Brazil. The purpose of the quantitative data was to identify compositionally homogeneous groups within the database. Having this in mind, the data set was first converted to base-10 logarithms to compensate for the differences in magnitude between major elements and trace elements, and also to yield a closer to normal distribution for several trace elements. After that, the data were analyzed using the Mahalanobis distance, with Wilks' lambda as the critical value to identify outliers. The similarities among the samples were studied by means of cluster analysis, principal components analysis and discriminant analysis. Additional confirmation of these groups was made by using elemental concentration bivariate plots. The results showed that there were two very well defined groups in the data set. In addition, the database was studied using an artificial neural network with an unsupervised learning strategy known as self-organizing maps to classify the Marajoara ceramics. The experiments carried out showed that the self-organizing map artificial neural network is capable of discriminating ceramic fragments like multivariate statistical methods, and, again, the results showed that the database was formed by two groups. (author)

  5. Applying deep neural networks to HEP job classification

    Science.gov (United States)

    Wang, L.; Shi, J.; Yan, X.

    2015-12-01

    The cluster of the IHEP computing center is a middle-sized computing system which provides 10 thousand CPU cores, 5 PB of disk storage, and 40 GB/s of IO throughput. Its 1000+ users come from a variety of HEP experiments. In such a system, job classification is an indispensable task. Although an experienced administrator can classify a HEP job by its IO pattern, it is impractical to classify millions of jobs manually. We present how to solve this problem with deep neural networks in a supervised learning way. Firstly, we built a training data set of 320K samples with an IO pattern collection agent and a semi-automatic process of sample labelling. Then we implemented and trained DNN models with Torch. During the process of model training, several meta-parameters were tuned with cross-validations. Test results show that a 5-hidden-layer DNN model achieves 96% precision on the classification task. By comparison, it outperforms a linear model by 8% in precision.

  6. A SIMULATION OF THE PENICILLIN G PRODUCTION BIOPROCESS APPLYING NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    A.J.G. da Cruz

    1997-12-01

    Full Text Available The production of penicillin G by Penicillium chrysogenum IFO 8644 was simulated employing a feedforward neural network with three layers. The neural network training procedure used an algorithm combining two procedures: random search and backpropagation. The results of this approach were very promising, and it was observed that the neural network was able to accurately describe the nonlinear behavior of the process. Moreover, the results showed that this technique can be successfully applied in process control algorithms, given its fast processing time and its flexibility in incorporating new data

  7. Bayesian Source Separation Applied to Identifying Complex Organic Molecules in Space

    CERN Document Server

    Knuth, Kevin H; Choinsky, Joshua; Maunu, Haley A; Carbon, Duane F

    2014-01-01

    Emission from a class of benzene-based molecules known as Polycyclic Aromatic Hydrocarbons (PAHs) dominates the infrared spectrum of star-forming regions. The observed emission appears to arise from the combined emission of numerous PAH species, each with its unique spectrum. Linear superposition of the PAH spectra identifies this problem as a source separation problem. It is, however, of a formidable class of source separation problems given that different PAH sources potentially number in the hundreds, even thousands, and there is only one measured spectral signal for a given astrophysical site. Fortunately, the source spectra of the PAHs are known, but the signal is also contaminated by other spectral sources. We describe our ongoing work in developing Bayesian source separation techniques relying on nested sampling in conjunction with an ON/OFF mechanism enabling simultaneous estimation of the probability that a particular PAH species is present and its contribution to the spectrum.

  8. Bayesian artificial intelligence

    CERN Document Server

    Korb, Kevin B

    2010-01-01

    Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition: new chapter on Bayesian network classifiers; new section on object-oriente

  9. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Cai, C. [CEA, LIST, 91191 Gif-sur-Yvette, France and CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Rodet, T.; Mohammad-Djafari, A. [CNRS, SUPELEC, UNIV PARIS SUD, L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (France); Legoupil, S. [CEA, LIST, 91191 Gif-sur-Yvette (France)

    2013-11-15

    Purpose: Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  10. Apply Multi-Layer Perceptrons Neural Network for Off-Line Signature Verification and Recognition

    Directory of Open Access Journals (Sweden)

    Suhail Odeh

    2011-11-01

    Full Text Available This paper discusses the application of multi-layer perceptrons to signature verification and recognition, using a new approach that enables the user to recognize whether a signature is original or a forgery. The approach starts by scanning images into the computer, then modifying their quality through image enhancement and noise reduction, followed by feature extraction and neural network training, and finally verifying the authenticity of the signature. The paper discusses the different stages of the process, including image pre-processing, feature extraction and pattern recognition through neural networks.

  11. The Bayesian statistical decision theory applied to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    The difficulty in the RCM methodology is the allocation of a new preventive maintenance periodicity for a piece of equipment when a critical failure has been identified: until now this new allocation has been based on the engineer's judgment, and one must wait for a full cycle of operating experience before validating it. Statistical decision theory could be a more rational alternative for the optimization of preventive maintenance periodicity. This methodology has been applied to the inspection and maintenance optimization of the cylinders of diesel generator engines in 900 MW nuclear plants, and has shown that the previous preventive maintenance periodicity can be extended. (authors). 8 refs., 5 figs

  12. Artificial neural networks applied to quantitative elemental analysis of organic material using PIXE

    International Nuclear Information System (INIS)

    An artificial neural network (ANN) has been trained with real-sample PIXE (particle-induced X-ray emission) spectra of organic substances. Following the training stage, the ANN was applied to a subset of similar samples, thus obtaining the elemental concentrations in muscle, liver and gills of Cyprinus carpio. Concentrations obtained with the ANN method are in full agreement with results from a standard analytical procedure, showing the high potential of ANNs in quantitative PIXE analyses

  13. Artificial neural networks applied to quantitative elemental analysis of organic material using PIXE

    Energy Technology Data Exchange (ETDEWEB)

    Correa, R. [Universidad Tecnologica Metropolitana, Departamento de Fisica, Av. Jose Pedro Alessandri 1242, Nunoa, Santiago (Chile)]. E-mail: rcorrea@utem.cl; Chesta, M.A. [Universidad Nacional de Cordoba, Facultad de Matematica, Astronomia y Fisica, Medina Allende s/n Ciudad Universitaria, 5000 Cordoba (Argentina)]. E-mail: chesta@famaf.unc.edu.ar; Morales, J.R. [Universidad de Chile, Facultad de Ciencias, Departamento de Fisica, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: rmorales@uchile.cl; Dinator, M.I. [Universidad de Chile, Facultad de Ciencias, Departamento de Fisica, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: mdinator@uchile.cl; Requena, I. [Universidad de Granada, Departamento de Ciencias de la Computacion e Inteligencia Artificial, Daniel Saucedo Aranda s/n, 18071 Granada (Spain)]. E-mail: requena@decsai.ugr.es; Vila, I. [Universidad de Chile, Facultad de Ciencias, Departamento de Ecologia, Las Palmeras 3425, Nunoa, Santiago (Chile)]. E-mail: limnolog@uchile.cl

    2006-08-15

    An artificial neural network (ANN) has been trained with real-sample PIXE (particle-induced X-ray emission) spectra of organic substances. Following the training stage, the ANN was applied to a subset of similar samples, thus obtaining the elemental concentrations in muscle, liver and gills of Cyprinus carpio. Concentrations obtained with the ANN method are in full agreement with results from a standard analytical procedure, showing the high potential of ANNs in quantitative PIXE analyses.

  14. Applying the multivariate time-rescaling theorem to neural population models.

    Science.gov (United States)

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-06-01

    Statistical models of neural activity are integral to modern neuroscience. Recently interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based on the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to testing the sufficiency of neural population models. Using several simple analytically tractable models and more complex simulated and real data sets, we demonstrate that important features of the population activity can be detected only using the multivariate extension of the test. PMID:21395436
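    The univariate version of the test can be sketched briefly: mapping spike times through the integrated model intensity rescales the inter-spike intervals to unit-rate exponentials when the model is correct, which a KS test can check. The toy example below uses a known inhomogeneous Poisson intensity (the multivariate extension adds the cross-neuron conditioning discussed in the paper and is not reproduced here).

      import numpy as np
      from scipy.stats import kstest

      rng = np.random.default_rng(6)

      # Simulate an inhomogeneous Poisson spike train by thinning.
      rate = lambda t: 5.0 + 4.0 * np.sin(2 * np.pi * t)   # model intensity (Hz)
      t_max, lam_max = 100.0, 9.0
      cand = np.cumsum(rng.exponential(1.0 / lam_max, size=2000))
      cand = cand[cand < t_max]
      spikes = cand[rng.random(cand.size) < rate(cand) / lam_max]

      # Time-rescaling: tau_k is the integrated intensity between spikes.
      def Lambda(t):  # closed-form integral of rate() from 0 to t
          return 5.0 * t - (4.0 / (2 * np.pi)) * (np.cos(2 * np.pi * t) - 1.0)

      tau = np.diff(Lambda(spikes))
      u = 1.0 - np.exp(-tau)          # Uniform(0,1) if the model is correct
      print(kstest(u, "uniform"))     # large p-value = model passes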

  15. Identification of the neural component of torque during manually-applied spasticity assessments in children with cerebral palsy

    OpenAIRE

    Bar-On, Lynn; Desloovere, Kaat; Molenaers, Guy; Harlaar, J.; Kindt, T; Aertbeliën, Erwin

    2014-01-01

    Clinical assessment of spasticity is compromised by the difficulty to distinguish neural from non-neural components of increased joint torque. Quantifying the contributions of each of these components is crucial to optimize the selection of anti-spasticity treatments such as botulinum toxin (BTX). The aim of this study was to compare different biomechanical parameters that quantify the neural contribution to ankle joint torque measured during manually-applied passive stretches to the gastrocs...

  16. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Gean Ribeiro dos; Andrade, Delvonei Alves de; Pereira, Iraci Martinez, E-mail: gean@usp.br, E-mail: delvonei@ipen.br, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected and have no improvements to their topology. This work intends to use an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm will use Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN will be applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  17. Ant colony optimization and neural networks applied to nuclear power plant monitoring

    International Nuclear Information System (INIS)

    A recurring challenge in production processes is the development of monitoring and diagnosis systems. Such systems help detect unexpected changes and interruptions, preventing losses and mitigating risks. Artificial Neural Networks (ANNs) have been extensively used in creating monitoring systems. Usually the ANNs created to solve this kind of problem take into account only parameters such as the number of inputs, outputs, and hidden layers. The resulting networks are generally fully connected and have no improvements to their topology. This work intends to use an Ant Colony Optimization (ACO) algorithm to create a tuned neural network. The ACO search algorithm will use Back Error Propagation (BP) to optimize the network topology by suggesting the best neuron connections. The resulting ANN will be applied to monitoring the IEA-R1 research reactor at IPEN. (author)

  18. Classification of brain compartments and head injury lesions by neural networks applied to MRI

    International Nuclear Information System (INIS)

    An automatic, neural network-based approach was applied to segment normal brain compartments and lesions on MR images. Two supervised networks, backpropagation (BPN) and counterpropagation, and two unsupervised networks, the Kohonen learning vector quantizer and analog adaptive resonance theory, were trained on registered T2-weighted and proton density images. The classes of interest were background, gray matter, white matter, cerebrospinal fluid, macrocystic encephalomalacia, gliosis, and 'unknown'. A comprehensive feature vector was chosen to discriminate these classes. The BPN, combined with feature conditioning (multiple discriminant analysis followed by a Hotelling transform), produced the most accurate and consistent classification results. Classifications of normal brain compartments were generally in agreement with expert interpretation of the images. Macrocystic encephalomalacia and gliosis were recognized and, except around the periphery, classified in agreement with the clinician's report used to train the neural network. (orig.)

  19. Classical and Bayesian estimation in the logistic regression model applied to diagnosis of child attention deficit hyperactivity disorder.

    Science.gov (United States)

    Gordóvil-Merino, Amalia; Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel; de la Fuente-Solanas, Emilia I

    2010-04-01

    The limitations inherent in classical estimation of logistic regression models are well known. The Bayesian approach to statistical analysis is an alternative worth considering, given that it makes it possible to introduce prior information about the phenomenon under study. The aim of the present work is to analyze simple binary and multinomial logistic regression models estimated by means of a Bayesian approach in comparison to classical estimation. To that effect, Child Attention Deficit Hyperactivity Disorder (ADHD) clinical data were analyzed. The sample included 286 participants aged 6-12 years (78% boys, 22% girls), with a positive ADHD diagnosis in 86.7% of the cases. The results show a reduction in the standard errors associated with the coefficients obtained from the Bayesian analysis, thus bringing greater stability to the coefficients. Complex models, where parameter estimation may easily be compromised, could benefit from this advantage. PMID:20524554
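
    For readers who want to reproduce the flavor of this comparison, below is a minimal random-walk Metropolis sampler for Bayesian binary logistic regression with a Gaussian prior (NumPy only); the prior scale, step size and iteration counts are illustrative choices, not those of the paper:

        import numpy as np

        def bayes_logistic_mh(X, y, n_iter=20000, step=0.05, prior_sd=10.0, seed=0):
            """Random-walk Metropolis sampler for logistic regression coefficients."""
            rng = np.random.default_rng(seed)
            d = X.shape[1]

            def log_post(beta):
                eta = X @ beta
                # Log-likelihood sum(y*eta - log(1 + exp(eta))), in stable form.
                loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
                logprior = -0.5 * np.sum(beta ** 2) / prior_sd ** 2
                return loglik + logprior

            beta, lp = np.zeros(d), log_post(np.zeros(d))
            draws = np.empty((n_iter, d))
            for i in range(n_iter):
                prop = beta + step * rng.standard_normal(d)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
                    beta, lp = prop, lp_prop
                draws[i] = beta
            return draws[n_iter // 2:]                    # discard burn-in

    The posterior standard deviations of the retained draws play the role of the (often smaller and more stable) standard errors discussed in the abstract.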

  20. APPLYING ARTIFICIAL NEURAL NETWORK OPTIMIZED BY FIREWORKS ALGORITHM FOR STOCK PRICE ESTIMATION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-04-01

    Full Text Available Stock prediction is the task of determining the future value of a company's stock traded on an exchange. It plays a crucial role in raising the profits gained by firms and investors. Over the past few years many methods have been developed, with much effort focused on machine learning frameworks that achieve promising results. In this paper, an approach based on an Artificial Neural Network (ANN) optimized by the Fireworks algorithm, with data preprocessing by the Haar wavelet, is applied to estimate stock prices. The system was trained and tested with real data of various companies collected from Yahoo Finance. The obtained results are encouraging.

  1. The Bayesian Bootstrap

    OpenAIRE

    Rubin, Donald B.

    1981-01-01

    The Bayesian bootstrap is the Bayesian analogue of the bootstrap. Instead of simulating the sampling distribution of a statistic estimating a parameter, the Bayesian bootstrap simulates the posterior distribution of the parameter; operationally and inferentially the methods are quite similar. Because both methods of drawing inferences are based on somewhat peculiar model assumptions and the resulting inferences are generally sensitive to these assumptions, neither method should be applied wit...
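
    The mechanics are short enough to state in code. A minimal sketch of the Bayesian bootstrap, with the posterior of a weighted mean as the usage example (function names are ours):

        import numpy as np

        def bayesian_bootstrap(data, stat, n_rep=5000, seed=0):
            """Posterior draws of a statistic via the Bayesian bootstrap.

            Instead of resampling observations, draw Dirichlet(1, ..., 1)
            weights over the n observations and recompute a weighted statistic.
            """
            rng = np.random.default_rng(seed)
            weights = rng.dirichlet(np.ones(len(data)), size=n_rep)
            return np.array([stat(data, w) for w in weights])

        x = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=50)
        post_mean = bayesian_bootstrap(x, lambda d, w: np.sum(w * d))
        print(post_mean.mean(), post_mean.std())   # posterior of the mean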

  2. A variational Bayesian approach for unsupervised super-resolution using mixture models of point and smooth sources applied to astrophysical map-making

    International Nuclear Information System (INIS)

    We present, in this paper, a new unsupervised method for joint image super-resolution and separation between smooth and point sources. For this purpose, we propose a Bayesian approach with a Markovian model for the smooth part and Student’s t-distribution for point sources. All model and noise parameters are considered unknown and are estimated jointly with the images. However, joint estimators (joint MAP or posterior mean) are intractable and an approximation is needed. Therefore, a new gradient-like variational Bayesian method is applied to approximate the true posterior by a free-form separable distribution. A parametric form is obtained by approximating marginals, but with form parameters that are mutually dependent. Their optimal values are achieved by iterating until convergence. The method was tested on model-generated data and a real dataset from the Herschel space observatory. (paper)

  3. Projection of future climate change conditions using IPCC simulations, neural networks and Bayesian statistics. Part 2: Precipitation mean state and seasonal cycle in South America

    Science.gov (United States)

    Boulanger, Jean-Philippe; Martinez, Fernando; Segura, Enrique C.

    2007-02-01

    Evaluating the response of climate to greenhouse gas forcing is a major objective of the climate community, and the use of large ensembles of simulations is considered a significant step toward that goal. The present paper discusses a new methodology based on neural networks for mixing ensembles of climate model simulations. Our analysis consists of one simulation from each of seven Atmosphere-Ocean Global Climate Models, which participated in the IPCC Project and provided at least one simulation for the twentieth century (20c3m) and one simulation for each of three SRES scenarios: A2, A1B and B1. Our statistical method based on neural networks and Bayesian statistics computes a transfer function between models and observations. This transfer function is then used to project future conditions and to derive what we call the optimal ensemble combination for twenty-first century climate change projections. Our approach is based on one statement and one hypothesis. The statement is that an optimal ensemble projection should be built by giving larger weights to models that have more skill in representing present climate conditions. The hypothesis is that our neural network method actually weights the models that way. While the statement is an open question, whose answer may vary according to the region or climate signal under study, our results demonstrate that the neural network approach indeed allows models to be weighted according to their skill. As such, our method is an improvement on existing Bayesian methods developed to mix ensembles of simulations. However, the generally low skill of climate models in simulating the precipitation mean climatology implies that the final projection maps (whatever the method used to compute them) may change significantly in the future as models improve. Therefore, the projection results for late twenty-first century conditions are presented as possible projections based on the “state-of-the-art” of...

  4. On Fuzzy Bayesian Inference

    OpenAIRE

    Frühwirth-Schnatter, Sylvia

    1990-01-01

    In the paper at hand we apply fuzzy set theory to Bayesian statistics to obtain "Fuzzy Bayesian Inference". In the subsequent sections we discuss a fuzzy-valued likelihood function, Bayes' theorem for both fuzzy data and fuzzy priors, a fuzzy Bayes estimator, fuzzy predictive densities and distributions, and fuzzy H.P.D. regions. (author's abstract)

  5. An Approximate Bayesian Method Applied to Estimating the Trajectories of Four British Grey Seal (Halichoerus grypus) Populations from Pup Counts

    Directory of Open Access Journals (Sweden)

    Mike Lonergan

    2011-01-01

    Full Text Available For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. We present an approximate Bayesian method for fitting pup trajectories, estimating adult population size and investigating alternative biological models. The method is equivalent to fitting a density-dependent Leslie matrix model, within a Bayesian framework, but with the forms of the density-dependent effects as outputs rather than assumptions. It requires fewer assumptions than the state space models currently used and produces similar estimates. We discuss the potential and limitations of the method and suggest that this approach provides a useful tool for at least the preliminary analysis of similar datasets.
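
    To make the model class concrete, below is a minimal sketch of a female-only, density-dependent Leslie-type projection in which pup survival declines with pup numbers; the age structure, the Beverton-Holt form and all parameter values are illustrative assumptions, not the paper's fitted quantities:

        import numpy as np

        def project_pups(n0, fecundity, s_adult, s_pup_max, beta, n_years=30):
            """Project a female-only, density-dependent Leslie-type model.

            Pup survival declines with pup numbers: s_pup = s_pup_max / (1 + beta * pups).
            n0 is the initial vector of age classes 0..5+ (last class is adult).
            """
            n = np.asarray(n0, dtype=float)
            pups = []
            for _ in range(n_years):
                s_pup = s_pup_max / (1.0 + beta * n[0])
                new = np.empty_like(n)
                new[0] = fecundity * n[-1]            # pups born to adults
                new[1] = s_pup * n[0]                 # density-dependent pup survival
                new[2:-1] = s_adult * n[1:-2]         # juvenile classes age forward
                new[-1] = s_adult * (n[-2] + n[-1])   # adults accumulate
                n = new
                pups.append(n[0])
            return np.array(pups)

        print(project_pups([500, 400, 350, 300, 250, 2000],
                           fecundity=0.9, s_adult=0.95,
                           s_pup_max=0.6, beta=2e-4)[-5:])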

  6. An Approximate Bayesian Method Applied to Estimating the Trajectories of Four British Grey Seal (Halichoerus grypus) Populations from Pup Counts

    OpenAIRE

    Mike Lonergan; Dave Thompson; Len Thomas; Callan Duck

    2011-01-01

    1. For British grey seals, as with many pinniped species, population monitoring is implemented by aerial surveys of pups at breeding colonies. Scaling pup counts up to population estimates requires assumptions about population structure; this is straightforward when populations are growing exponentially, but not when growth slows, since it is unclear whether density dependence affects pup survival or fecundity. 2. We present an approximate Bayesian method for fitting pup trajectories, estimat...

  7. Projection of future climate change conditions using IPCC simulations, neural networks and Bayesian statistics. Part 2: Precipitation mean state and seasonal cycle in South America

    Energy Technology Data Exchange (ETDEWEB)

    Boulanger, Jean-Philippe [LODYC, UMR CNRS/IRD/UPMC, Tour 45-55/Etage 4/Case 100, UPMC, Paris Cedex 05 (France); University of Buenos Aires, Departamento de Ciencias de la Atmosfera y los Oceanos, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina); Martinez, Fernando; Segura, Enrique C. [University of Buenos Aires, Departamento de Computacion, Facultad de Ciencias Exactas y Naturales, Buenos Aires (Argentina)

    2007-02-15

    Evaluating the response of climate to greenhouse gas forcing is a major objective of the climate community, and the use of large ensembles of simulations is considered a significant step toward that goal. The present paper discusses a new methodology based on neural networks for mixing ensembles of climate model simulations. Our analysis consists of one simulation from each of seven Atmosphere-Ocean Global Climate Models, which participated in the IPCC Project and provided at least one simulation for the twentieth century (20c3m) and one simulation for each of three SRES scenarios: A2, A1B and B1. Our statistical method based on neural networks and Bayesian statistics computes a transfer function between models and observations. This transfer function is then used to project future conditions and to derive what we call the optimal ensemble combination for twenty-first century climate change projections. Our approach is based on one statement and one hypothesis. The statement is that an optimal ensemble projection should be built by giving larger weights to models that have more skill in representing present climate conditions. The hypothesis is that our neural network method actually weights the models that way. While the statement is an open question, whose answer may vary according to the region or climate signal under study, our results demonstrate that the neural network approach indeed allows models to be weighted according to their skill. As such, our method is an improvement on existing Bayesian methods developed to mix ensembles of simulations. However, the generally low skill of climate models in simulating the precipitation mean climatology implies that the final projection maps (whatever the method used to compute them) may change significantly in the future as models improve. Therefore, the projection results for late twenty-first century conditions are presented as possible projections based on the “state-of-the-art” of...

  8. Calcium dependent plasticity applied to repetitive transcranial magnetic stimulation with a neural field model.

    Science.gov (United States)

    Wilson, M T; Fung, P K; Robinson, P A; Shemmell, J; Reynolds, J N J

    2016-08-01

    The calcium dependent plasticity (CaDP) approach to the modeling of synaptic weight change is applied using a neural field approach to realistic repetitive transcranial magnetic stimulation (rTMS) protocols. A spatially-symmetric nonlinear neural field model consisting of populations of excitatory and inhibitory neurons is used. The plasticity between excitatory cell populations is then evaluated using a CaDP approach that incorporates metaplasticity. The direction and size of the plasticity (potentiation or depression) depends on both the amplitude of stimulation and duration of the protocol. The breaks in the inhibitory theta-burst stimulation protocol are crucial to ensuring that the stimulation bursts are potentiating in nature. Tuning the parameters of a spike-timing dependent plasticity (STDP) window with a Monte Carlo approach to maximize agreement between STDP predictions and the CaDP results reproduces a realistically-shaped window with two regions of depression in agreement with the existing literature. Developing understanding of how TMS interacts with cells at a network level may be important for future investigation. PMID:27259518

  9. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    Full Text Available A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural network blind equalization algorithm is derived and used in conjunction with zigzag coding to restore the original image. As a result, the effect of the point spread function (PSF) can be removed by the proposed algorithm, which helps eliminate intersymbol interference (ISI). To estimate the original image, the method optimizes a constant modulus blind equalization cost function applied to the grayscale CT image using the conjugate gradient method. Analysis of the convergence performance of the algorithm verifies the feasibility of the method theoretically; meanwhile, simulation results and evaluations with recent image quality metrics are provided to assess the effectiveness of the proposed method.
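
    The constant modulus criterion at the core of this method can be illustrated with the classic one-dimensional stochastic-gradient update; the paper's neural network variant with conjugate gradients is more elaborate, so treat this as the underlying building block only (tap count and step size are arbitrary):

        import numpy as np

        def cma_equalizer(received, n_taps=11, mu=1e-3, radius=1.0,
                          n_iter=5000, seed=0):
            """Stochastic-gradient constant modulus algorithm (CMA) for a 1-D signal.

            Minimizes E[(|y|^2 - R)^2] where y = w^T x is the equalizer output.
            """
            rng = np.random.default_rng(seed)
            w = np.zeros(n_taps)
            w[n_taps // 2] = 1.0                     # center-spike initialization
            for _ in range(n_iter):
                k = rng.integers(n_taps, len(received))
                x = received[k - n_taps:k][::-1]     # current tap-delay window
                y = w @ x
                # Gradient step; the constant factor 4 is absorbed into mu.
                w -= mu * (y * y - radius) * y * x
            return w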

  10. Linear and nonlinear modeling of antifungal activity of some heterocyclic ring derivatives using multiple linear regression and Bayesian-regularized neural networks.

    Science.gov (United States)

    Caballero, Julio; Fernández, Michael

    2006-01-01

    Antifungal activity was modeled for a set of 96 heterocyclic ring derivatives (2,5,6-trisubstituted benzoxazoles, 2,5-disubstituted benzimidazoles, 2-substituted benzothiazoles and 2-substituted oxazolo(4,5-b)pyridines) using multiple linear regression (MLR) and Bayesian-regularized artificial neural network (BRANN) techniques. Inhibitory activity against Candida albicans (log(1/C)) was correlated with 3D descriptors encoding the chemical structures of the heterocyclic compounds. Training and test sets were chosen by means of k-means clustering. The most appropriate variables for linear and nonlinear modeling were selected using a genetic algorithm (GA) approach. In addition to the MLR equation (MLR-GA), two nonlinear models were built: model BRANN, employing the linear variable subset, and an optimum model BRANN-GA obtained by a hybrid method that combined the BRANN and GA approaches. The linear model fit the training set (n = 80) with r2 = 0.746, while BRANN and BRANN-GA gave higher values of r2 = 0.889 and r2 = 0.937, respectively. Beyond the improvement in training set fitting, the BRANN-GA model was superior to the others in being able to describe 87% of test set (n = 16) variance, compared with 78 and 81% for the MLR-GA and BRANN models, respectively. Our quantitative structure-activity relationship study suggests that the distributions of atomic mass, volume and polarizability have relevant relationships with the antifungal potency of the compounds studied. Furthermore, the ability of the six nonlinearly selected variables to differentiate the data was demonstrated when the total data set was well distributed in a Kohonen self-organizing neural network (KNN). PMID:16205958

  11. Bayesian exploratory factor analysis

    OpenAIRE

    Gabriella Conti; Sylvia Frühwirth-Schnatter; James Heckman; Rémi Piatek

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identifi cation criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study c...

  12. Bayesian Exploratory Factor Analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...

  13. Bayesian Exploratory Factor Analysis

    OpenAIRE

    Gabriella Conti; Sylvia Fruehwirth-Schnatter; Heckman, James J.; Remi Piatek

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo s...

  14. Bayesian exploratory factor analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo st...

  15. Bayesian exploratory factor analysis

    OpenAIRE

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study co...

  16. GMDH and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    International Nuclear Information System (INIS)

    In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first was dedicated to preprocessing information using the GMDH algorithm, and the second to processing information using ANNs. The preprocessing stage was divided into two parts. In the first part, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second part, the GMDH was used to study the best set of variables for training the ANNs, resulting in the best estimate of the monitored variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database using deviations of +5%, +10%, +15% and +20%. The good results obtained with the present methodology show the viability of using the GMDH algorithm in the study of the best input variables for the ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)

  17. GMDH and neural networks applied in monitoring and fault detection in sensors in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil); Pereira, Iraci Martinez; Silva, Antonio Teixeira e, E-mail: martinez@ipen.b, E-mail: teixeira@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    In this work a new monitoring and fault detection methodology was developed using the GMDH (Group Method of Data Handling) algorithm and artificial neural networks (ANNs), and applied to the IEA-R1 research reactor at IPEN. The monitoring and fault detection system was developed in two parts: the first was dedicated to preprocessing information using the GMDH algorithm, and the second to processing information using ANNs. The preprocessing stage was divided into two parts. In the first part, the GMDH algorithm was used to generate a better database estimate, called matrix z, which was used to train the ANNs. In the second part, the GMDH was used to study the best set of variables for training the ANNs, resulting in the best estimate of the monitored variables. The methodology was developed and tested using five different models: one theoretical model and four models using different sets of reactor variables. After an exhaustive study dedicated to sensor monitoring, fault detection in sensors was developed by simulating faults in the sensor database using deviations of +5%, +10%, +15% and +20%. The good results obtained with the present methodology show the viability of using the GMDH algorithm in the study of the best input variables for the ANNs, thus making possible the use of these methods in the implementation of a new monitoring and fault detection methodology applied to sensors. (author)

  18. Identification of thin elastic isotropic plate parameters applying Guided Wave Measurement and Artificial Neural Networks

    Science.gov (United States)

    Pabisek, Ewa; Waszczyszyn, Zenon

    2015-12-01

    A new hybrid computational system for material identification (HCSMI) is presented, developed for the identification of homogeneous, elastic, isotropic plate parameters. Attention is focused on the construction of dispersion curves related to Lamb waves. The main idea of HCSMI lies in the separation of two essential computational stages, corresponding to the direct and inverse analyses. In the first stage, an experimental dispersion curve DCexp is constructed by applying the Guided Wave Measurement (GWM) technique. Then, in the second stage, corresponding to the inverse analysis, an Artificial Neural Network (ANN) is trained 'off line'. Substituting the results of the first stage as inputs to the ANN yields the values of the identified plate parameters. In this way no iteration is needed, unlike in the classical approach, in which the "distance" between the approximate experimental curves DCexp and the dispersion curves DCnum obtained in the direct analysis is iteratively minimized. Two case studies are presented, corresponding either to measurements in laboratory tests or to pseudo-experimental noisy data from computer simulations. The obtained results prove the high numerical efficiency of HCSMI applied to the identification of aluminum plate parameters.

  19. A Hybrid Applied Optimization Algorithm for Training Multi-Layer Neural Networks in the Data Classification

    OpenAIRE

    ÖRKÇÜ, H. Hasan; Mustafa İsa DOĞAN; Örkçü, Mediha

    2015-01-01

    The backpropagation algorithm is the classical technique used in the training of artificial neural networks. Since this algorithm has many disadvantages, the training of neural networks has been implemented with various optimization methods. In this paper, a hybrid intelligent model, i.e., hybridGSA, is developed for training artificial neural networks (ANN) and undertaking data classification problems. The hybrid intelligent system aims to exploit the advantages of genetic and simulated annea...

  20. Applying long short-term memory recurrent neural networks to intrusion detection

    Directory of Open Access Journals (Sweden)

    Ralf C. Staudemeyer

    2015-07-01

    Full Text Available We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA / KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found that networks with four memory blocks containing two cells each offer a good compromise between computational cost and detection performance. We applied forget gates and shortcut connections respectively. A learning rate of 0.1 and up to 1,000 epochs showed good results. We tested the performance on all features and on extracted minimal feature sets respectively. We evaluated different feature sets for the detection of all attacks within one network and also trained networks specialised on individual attack classes. Our results show that the LSTM classifier provides superior performance in comparison to previously published results of strong static classifiers. With 93.82% accuracy and a cost of 22.13, LSTM outperforms the winning entries of the KDD Cup ’99 challenge by far. This is due to the fact that LSTM learns to look back in time and correlate consecutive connection records. For the first time, we have demonstrated the usefulness of LSTM networks for intrusion detection.
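
    A minimal PyTorch sketch of this kind of classifier is given below; the 41-feature input, window length and five attack classes echo common KDD Cup '99 preprocessing but are placeholders, and the topology is much smaller than the four-block networks the authors selected:

        import torch
        import torch.nn as nn

        class LSTMDetector(nn.Module):
            """Tiny LSTM classifier over windows of connection-record features."""
            def __init__(self, n_features=41, hidden=8, n_classes=5):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):                 # x: (batch, time, n_features)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])      # classify from the last step

        model = LSTMDetector()
        x = torch.randn(32, 20, 41)               # 32 windows of 20 records each
        loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 5, (32,)))
        loss.backward()                           # standard supervised training step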

  1. Towards a Supra-Bayesian Approach to Merging of Information

    Czech Academy of Sciences Publication Activity Database

    Sečkárová, Vladimíra

    Prague: Institute of Information Theory and Automation, 2011, s. 81-86. ISBN 978-80-903834-6-3. [The 2nd International Workshop on Decision Making with Multiple Imperfect Decision Makers. Held in Conjunction with the 25th Annual Conference on Neural Information Processing Systems (NIPS 2011). Sierra Nevada (ES), 16.12.2011-16.12.2011] R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords: decision makers * Supra-Bayesian * Bayesian solution * Merging Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/seckarova-towards a supra-bayesian approach to merging of information.pdf

  2. A Bayesian two part model applied to analyze risk factors of adult mortality with application to data from Namibia.

    Directory of Open Access Journals (Sweden)

    Lawrence N Kazembe

    Full Text Available Despite remarkable gains in life expectancy and declining mortality in the 21st century, in many places, mostly in developing countries, adult mortality has increased, in part due to HIV/AIDS or continued abject poverty levels. Moreover, many factors including behavioural, socio-economic and demographic variables work simultaneously to impact the risk of mortality. Understanding risk factors of adult mortality is crucial for designing appropriate public health interventions. In this paper we propose a structured additive two-part random effects regression model for adult mortality data. Our proposal assumes two processes: (i) whether death occurred in the household (prevalence part), and (ii) the number of reported deaths, if death did occur (severity part). The proposed model specification therefore consists of two generalized linear mixed models (GLMMs) with correlated random effects that permit structured and unstructured spatial components at the regional level. Specifically, the first part assumes a GLMM with a logistic link and the second part explores a count model following either a Poisson or negative binomial distribution. The model was used to analyse adult mortality data of 25,793 individuals from the 2006/2007 Namibian DHS data. Inference is based on the Bayesian framework with appropriate priors discussed.

  3. Implementation of Artificial Neural Network applied for the solution of inverse kinematics of 2-link serial chain manipulator.

    Directory of Open Access Journals (Sweden)

    Satish Kumar

    2012-09-01

    Full Text Available In this study, a method of applying an artificial neural network to the solution of the inverse kinematics of a 2-link serial chain manipulator is presented. A multilayer perceptron (MLP) neural network is applied. This unsupervised method learns the functional relationship between input (Cartesian space) and output (joint space) based on a localized adaptation of the mapping, by using the manipulator itself under joint control and adapting the solution based on a comparison between the resulting location of the manipulator's end effector in Cartesian space and the desired location. Even when a manipulator is not available, the approach is still valid if the forward kinematic equations are used as a model of the manipulator. The forward kinematic equations always have a unique solution, and the resulting neural net can be used as a starting point for further refinement when the manipulator does become available. Artificial neural networks, especially MLPs, are used to learn the forward and inverse kinematic equations of a two-degree-of-freedom robot arm. A set of data points was first generated from the kinematic equations, with the X and Y coordinates (in inches) as input parameters. These data sets were the basis for training and testing the MLP model: most of the data points were used for training and the rest for testing. The backpropagation algorithm was used for training the network and updating the weights, and epoch-based training was applied.
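
    The workflow described (generate samples from the forward kinematics, then train an MLP on the inverse mapping) is easy to reproduce. Below is a minimal sketch with scikit-learn, restricted to elbow-down configurations so the inverse map is single-valued; the link lengths and network size are illustrative:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        L1, L2 = 10.0, 7.0                      # link lengths in inches (illustrative)
        rng = np.random.default_rng(0)

        # Sample joint angles (elbow-down only, so the inverse map is single-valued).
        theta = np.column_stack([rng.uniform(0, np.pi, 4000),
                                 rng.uniform(0.1, np.pi - 0.1, 4000)])

        # Forward kinematics of the 2-link planar arm gives the training data.
        x = L1 * np.cos(theta[:, 0]) + L2 * np.cos(theta[:, 0] + theta[:, 1])
        y = L1 * np.sin(theta[:, 0]) + L2 * np.sin(theta[:, 0] + theta[:, 1])
        XY = np.column_stack([x, y])

        mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        mlp.fit(XY[:3600], theta[:3600])        # train on most points, hold out rest

        pred = mlp.predict(XY[3600:])
        print("mean joint-angle error (rad):", np.abs(pred - theta[3600:]).mean())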

  4. Boltzmann learning of parameters in cellular neural networks

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    1992-01-01

    The use of Bayesian methods to design cellular neural networks for signal processing tasks and the Boltzmann machine learning rule for parameter estimation is discussed. The learning rule can be used for models with hidden units, or for completely unsupervised learning. The latter is exemplified by unsupervised adaptation of an image segmentation cellular network. The learning rule is applied to adaptive segmentation of satellite imagery.

  5. Application of Bayesian regularized BP neural network model for analysis of aquatic ecological data--A case study of chlorophyll-a prediction in Nanzui water area of Dongting Lake

    Institute of Scientific and Technical Information of China (English)

    XU Min; ZENG Guang-ming; XU Xin-yi; HUANG Guo-he; SUN Wei; JIANG Xiao-yun

    2005-01-01

    The Bayesian regularized BP neural network (BRBPNN) technique was applied to chlorophyll-a prediction for the Nanzui water area of Dongting Lake. Through a BP network interpolation method, the input and output samples of the network were obtained. After selecting input variables using the stepwise/multiple linear regression method in SPSS 11.0, the BRBPNN model was established between chlorophyll-a and environmental and biological parameters. The optimal network structure was 3-11-1, with correlation coefficients of 0.999 (training set) and 0.981 (test set) and mean square errors of 0.00078426 and 0.0216, respectively. The sum of squared weights between each input neuron and the hidden layer of the optimal BRBPNN models of different structures indicated that the effect of individual input parameters on chlorophyll-a declined in the order alga amount > secchi disc depth (SD) > electrical conductivity (EC). It also demonstrated that the contributions of these three factors to changes in chlorophyll-a concentration were the largest, while those of total phosphorus (TP) and total nitrogen (TN) were the smallest. All the results showed that the BRBPNN model is capable of automated regularization parameter selection and thus ensures excellent generalization ability and robustness. This study lays the foundation for applying the BRBPNN model to the analysis of aquatic ecological data (chlorophyll-a prediction) and to explaining effective eutrophication treatment measures for the Nanzui water area of Dongting Lake.

  6. Novel MRI-derived quantitative biomarker for cardiac function applied to classifying ischemic cardiomyopathy within a Bayesian rule learning framework

    Science.gov (United States)

    Menon, Prahlad G.; Morris, Lailonny; Staines, Mara; Lima, Joao; Lee, Daniel C.; Gopalakrishnan, Vanathi

    2014-03-01

    Characterization of regional left ventricular (LV) function may have application in prognosticating timely response and informing choice of therapy in patients with ischemic cardiomyopathy. The purpose of this study is to characterize LV function through a systematic analysis of 4D (3D + time) endocardial motion over the cardiac cycle, in an effort to define objective, clinically useful metrics of pathological remodeling and declining cardiac performance, using standard cardiac MRI data for two distinct patient cohorts accessed from CardiacAtlas.org: a) MESA - a cohort of asymptomatic patients; and b) DETERMINE - a cohort of symptomatic patients with a history of ischemic heart disease (IHD) or myocardial infarction. The LV endocardium was segmented and a signed phase-to-phase Hausdorff distance (HD) was computed at 3D uniformly spaced points tracked on segmented endocardial surface contours over the cardiac cycle. An LV-averaged index of phase-to-phase endocardial displacement (P2PD) time-histories was computed at each tracked point, using the HD computed between consecutive cardiac phases. The average and standard deviation of P2PD over the cardiac cycle were used to prepare characteristic curves for the asymptomatic and IHD cohorts. A novel biomarker, the RMS error between each patient's characteristic P2PD over the cardiac cycle and the cumulative P2PD characteristic of a cohort of asymptomatic patients, was established as the RMS-P2PD marker. The novel RMS-P2PD marker was tested as a cardiac-function-based feature for automatic patient classification using a Bayesian Rule Learning (BRL) framework. The RMS-P2PD biomarker indices were significantly different between the symptomatic patient and asymptomatic control cohorts (p < ...) ... cardiac performance.

  7. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Science.gov (United States)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
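
    Since the exact layout of the moment matrix is not given here, the sketch below shows one plausible TFM-SVD-style construction: stack a few statistical moments of the time series and of its Fourier magnitude spectrum into a fixed-shape matrix and use its singular values as features (the structure and moment choice are assumptions):

        import numpy as np
        from scipy import stats

        def tfm_svd_features(signal):
            """Sketch of a TFM-SVD-style feature vector (matrix layout assumed).

            Builds a small fixed-shape matrix from moments of the time series
            and of its Fourier magnitude spectrum, then returns the singular
            values of that matrix as the feature vector.
            """
            spec = np.abs(np.fft.rfft(signal))
            rows = [[np.mean(s), np.std(s), stats.skew(s), stats.kurtosis(s)]
                    for s in (signal, spec)]
            M = np.array(rows)                           # 2 x 4 moment matrix
            return np.linalg.svd(M, compute_uv=False)    # its singular values

        bcg = np.sin(np.linspace(0, 40 * np.pi, 2000))   # stand-in for a BCG trace
        print(tfm_svd_features(bcg))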

  8. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Koivistoinen Teemu

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.

  9. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Alpo Värri

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.

  10. A Modular Neural Network Scheme Applied to Fault Diagnosis in Electric Power Systems

    Directory of Open Access Journals (Sweden)

    Agustín Flores

    2014-01-01

    Full Text Available This work proposes a new method for fault diagnosis in electric power systems based on neural modules. With this method the diagnosis is performed by assigning a neural module to each type of component comprising the electric power system, whether it is a transmission line, bus or transformer. The neural modules for buses and transformers comprise two diagnostic levels which take into consideration the logic states of switches and relays, both internal and back-up, while the neural module for transmission lines also has a third diagnostic level which takes into account the oscillograms of fault voltages and currents, as well as the frequency spectra of these oscillograms, in order to verify whether the transmission line has in fact been subjected to a fault. One important advantage of the proposed diagnostic system is that its implementation does not require the use of a network configurator; it does not depend on the size of the power network, nor does it require retraining of the neural modules if the power network grows, making it applicable to a single component, a specific area, or the power system as a whole.

  11. Development of an intelligent system for tool wear monitoring applying neural networks

    Directory of Open Access Journals (Sweden)

    A. Antić

    2005-12-01

    Full Text Available Purpose: The objective of the research presented in the paper is to investigate, in laboratory conditions, the application possibilities of the proposed system for tool wear monitoring in hard turning, using modern tools and artificial intelligence (AI) methods. Design/methodology/approach: Basic theoretical principles, computational methods of simulation and neural network training, as well as the conducted experiments, were directed at investigating the adequacy of the proposed setting. Findings: The paper presents tool wear monitoring for hard turning with certain types of neural network configurations, where there are preconditions for extension with dynamic neural networks. Research limitations/implications: Future research should include the integration of the proposed system into a CNC machine, instead of the current separate system, which would provide synchronisation between the system and the machine, i.e. an appropriate reaction by the machine after excessive tool wear is detected. Practical implications: Practical application of the conducted research is possible with certain restrictions, supplemented by an adequate number of experiments directed at the particular combinations of machined materials and tools for which the neural networks are trained. Originality/value: The contribution of the conducted research lies in one possible view of the tool monitoring system model, its design on a modular principle, and the principles of building the neural network.

  12. Applying Neural Network to Dynamic Modeling of Biosurfactant Production Using Soybean Oil Refinery Wastes

    Directory of Open Access Journals (Sweden)

    Shokoufe Tayyebi

    2013-01-01

    Full Text Available Biosurfactants are surface-active compounds produced by various microorganisms. Production of biosurfactants via fermentation of immiscible wastes has the dual benefit of creating economic opportunities for manufacturers while improving environmental health. A predictor system is recommended in such processes if they are to be scaled up. Hence, four neural networks were developed for dynamic modeling of biosurfactant production kinetics in the presence of soybean oil or refinery wastes, including acid oil, deodorizer distillate and soap stock. Each proposed feedforward neural network consists of three layers which are not fully connected. The input and output data for training and validation of the neural network models were gathered from batch fermentation experiments. The proposed neural network models were evaluated by three statistical criteria (R2, RMSE and SE). Typical regression analysis showed high correlation coefficients, greater than 0.971, demonstrating that the neural network is an excellent estimator for the prediction of biosurfactant production kinetic data in a two-phase liquid-liquid batch fermentation system. In addition, sensitivity analysis indicates that residual oil has a significant effect (i.e. 49%) on the biosurfactant in the process.

  13. Sensitivity analysis by neural networks applied to power systems transient stability

    Energy Technology Data Exchange (ETDEWEB)

    Lotufo, Anna Diva P.; Lopes, Mara Lucia M.; Minussi, Carlos R. [Departamento de Engenharia Eletrica, UNESP, Campus de Ilha Solteira, Av. Brasil, 56, 15385-000 Ilha Solteira, SP (Brazil)

    2007-05-15

    This work presents a procedure for transient stability analysis and preventive control of electric power systems, formulated by a multilayer feedforward neural network. The neural network training is realized using the back-propagation algorithm with a fuzzy controller and adaptation of the inclination and translation parameters of the nonlinear function. These procedures provide faster convergence and more precise results compared to the traditional back-propagation algorithm. The training rate is adapted using information on the global error and the global error variation. After training, the neural network is capable of estimating the security margin and performing sensitivity analysis. Considering this information, it is possible to develop a method for security correction (preventive control) to levels considered appropriate for the system, based on generation reallocation and load shedding. An application to a multimachine power system is presented to illustrate the proposed methodology. (author)

  14. Neural-Dynamic-Method-Based Dual-Arm CMG Scheme With Time-Varying Constraints Applied to Humanoid Robots.

    Science.gov (United States)

    Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing

    2015-12-01

    We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver. PMID:26340789

  15. Prediction of fracture toughness transition from tensile test data applying neural network

    Czech Academy of Sciences Publication Activity Database

    Dlouhý, Ivo; Hadraba, Hynek; Chlup, Zdeněk; Válka, Libor; Žák, L.

    Baltimore, Maryland : ASME, 2011. s. 1-6 R&D Projects: GA ČR(CZ) GAP108/10/0466 Institutional research plan: CEZ:AV0Z20410507 Keywords : Fracture toughness * Low alloy steel * Tensile test * Artificial neural network Subject RIV: JL - Materials Fatigue, Friction Mechanics

  16. Applying of neural networks in determination of replacement cycle of spare parts

    International Nuclear Information System (INIS)

    The article shows the applicability of neural networks for determining the expected working time of equipment components before failure. Results based on measured and simulated values of the suggested model are presented. The advantages of the suggested model are analyzed in comparison with the traditional approach to replacing spare parts and components. The possibility of implementing the suggested model in a maintenance management information system is described. (author)

  17. RBF-Type Artificial Neural Network Model Applied in Alloy Design of Steels

    Institute of Scientific and Technical Information of China (English)

    YOU Wei; LIU Ya-xiu; BAI Bing-zhe; FANG Hong-sheng

    2008-01-01

    An RBF model, a new type of artificial neural network model, was developed to design the carbon content of low-alloy engineering steels. The errors of the ANN model are: MSE 0.0521, MSRE 17.85%, and VOF 1.9329. The results obtained are satisfactory. The method is a powerful aid for designing new steels.
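
    As a reminder of how such a model is fitted, below is a minimal Gaussian RBF network in NumPy: choose centers, build the activation matrix, and solve for the output weights by least squares (the toy two-input target stands in for alloy-composition data):

        import numpy as np

        def fit_rbf(X, y, centers, gamma):
            """Fit output weights of a Gaussian RBF network by least squares."""
            # Activation matrix: phi_ij = exp(-gamma * ||x_i - c_j||^2)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            w, *_ = np.linalg.lstsq(np.exp(-gamma * d2), y, rcond=None)
            return w

        def rbf_predict(X, centers, gamma, w):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2) @ w

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, (200, 2))
        y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2            # stand-in target
        centers = X[rng.choice(200, 20, replace=False)]   # centers from the data
        w = fit_rbf(X, y, centers, gamma=8.0)
        print(np.abs(rbf_predict(X, centers, 8.0, w) - y).mean())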

  18. Multivariate Cross-Classification: Applying machine learning techniques to characterize abstraction in neural representations

    Directory of Open Access Journals (Sweden)

    Jonas eKaplan

    2015-03-01

    Full Text Available Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
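
    Operationally, MVCC is just train-on-context-A, test-on-context-B. A minimal sketch with scikit-learn on simulated patterns that share class structure across two contexts (all data here are synthetic stand-ins for neural patterns):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        signal = rng.normal(size=(2, 100))       # one 100-voxel pattern per class

        def make_context(noise):
            """Simulate trials for one cognitive context sharing the class patterns."""
            labels = rng.integers(0, 2, 200)
            return signal[labels] + noise * rng.normal(size=(200, 100)), labels

        X_a, y_a = make_context(noise=2.0)       # e.g., perception trials
        X_b, y_b = make_context(noise=2.0)       # e.g., imagery trials

        clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)   # train on context A
        print("cross-context accuracy:", clf.score(X_b, y_b))   # test on context B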

  19. Bayesian Exploratory Factor Analysis

    DEFF Research Database (Denmark)

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.;

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...

  20. A NOVEL METHODOLOGY FOR CONSTRUCTING RULE-BASED NAÏVE BAYESIAN CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    Abdallah Alashqur

    2015-02-01

    Full Text Available Classification is an important data mining technique used by many applications. Several types of classifiers have been described in the research literature, for example decision tree classifiers, rule-based classifiers, and neural network classifiers. Another popular classification technique is naïve Bayesian classification. Naïve Bayesian classification is a probabilistic approach that uses Bayes' theorem to predict the classes of unclassified records. A drawback of naïve Bayesian classification is that every time a new data record is to be classified, the entire dataset needs to be scanned in order to apply the set of equations that perform the classification. Scanning the dataset is normally a very costly step, especially if the dataset is very large. To alleviate this problem, a new approach to naïve Bayesian classification is introduced in this study. In this approach, a set of classification rules is constructed on top of the naïve Bayesian classifier; hence we call this approach the Rule-based Naïve Bayesian Classifier (RNBC). In RNBC, the dataset is scanned only once, off-line, at the time of building the classification rule set. Subsequent scanning of the dataset is avoided. Furthermore, this study introduces a simple three-step methodology for constructing the classification rule set.
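
    The core saving is that a single off-line pass compiles the dataset into count tables, which then act like a rule set at classification time. Below is a minimal count-based naïve Bayesian classifier with Laplace smoothing (RNBC's actual rule construction is not reproduced here; the names and toy data are ours):

        import numpy as np
        from collections import defaultdict

        def train_nb(records, labels):
            """One off-line pass: count class and attribute-value frequencies."""
            classes = sorted(set(labels))
            prior = {c: np.log(labels.count(c) / len(labels)) for c in classes}
            counts = defaultdict(lambda: defaultdict(int))
            for rec, c in zip(records, labels):
                for j, v in enumerate(rec):
                    counts[c][(j, v)] += 1
            return classes, prior, counts, {c: labels.count(c) for c in classes}

        def classify(rec, classes, prior, counts, n_c, n_values=2):
            """Score each class from the tables; the dataset is never rescanned."""
            best, best_score = None, -np.inf
            for c in classes:
                score = prior[c]
                for j, v in enumerate(rec):
                    # Laplace-smoothed log P(attr_j = v | class c)
                    score += np.log((counts[c][(j, v)] + 1) / (n_c[c] + n_values))
                if score > best_score:
                    best, best_score = c, score
            return best

        data = [("sunny", "hot"), ("rainy", "cool"),
                ("sunny", "cool"), ("rainy", "hot")]
        labels = ["no", "yes", "yes", "no"]
        model = train_nb(data, labels)
        print(classify(("sunny", "cool"), *model))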

  1. Willingness to purchase Genetically Modified food: an analysis applying artificial Neural Networks

    OpenAIRE

    Salazar-Ordóñez, M.; Rodríguez-Entrena, M.; Becerra-Alonso, D.

    2014-01-01

    Findings about the consumer decision-making process regarding GM food purchase remain mixed and inconclusive. This paper offers a model which classifies willingness to purchase GM food, using data from 399 surveys in Southern Spain. Willingness to purchase has been measured using three dichotomous questions, and classification, based on attitudinal, cognitive and socio-demographic factors, has been made by an artificial neural network model. The results show 74% accuracy in forecasting the willin...

  2. Applying long short-term memory recurrent neural networks to intrusion detection

    OpenAIRE

    Ralf C. Staudemeyer

    2015-01-01

    We claim that modelling network traffic as a time series with a supervised learning approach, using known genuine and malicious behaviour, improves intrusion detection. To substantiate this, we trained long short-term memory (LSTM) recurrent neural networks with the training data provided by the DARPA / KDD Cup ’99 challenge. To identify suitable LSTM-RNN network parameters and structure we experimented with various network topologies. We found networks with four memory blocks containing ...

  3. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations

    OpenAIRE

    Jonas Kaplan; Steve Grant Greening

    2015-01-01

    Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it...

  4. New Statistical Technologies applied to the estimation of the free Housing Prices: Artificial Neural Networks

    OpenAIRE

    J.Maria Mont Lorenzo

    2001-01-01

    The aim of this research is the use of artificial neural network models, specifically multilayer perceptrons trained by the algorithm known as backpropagation, to estimate free housing prices. This methodology makes it possible, through the training of the backpropagation nets, to estimate house prices on the basis of some variables, related to the houses, which are considered relevant (location, age, surface, quality, ...), overcoming the linear restrictions characteristic of the traditiona...
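
    A compressed modern equivalent of this setup, using scikit-learn's multilayer perceptron in place of a hand-rolled backpropagation net; the feature columns and the synthetic price function are hypothetical stand-ins for the hedonic variables mentioned above:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        # columns: location index, age (years), surface (m2), quality score
        X = rng.uniform([0, 0, 30, 1], [10, 80, 300, 5], size=(500, 4))
        # a deliberately nonlinear price function a linear model would miss
        y = 1000 * X[:, 2] - 300 * X[:, 1] + 5000 * X[:, 3] ** 2 \
            + rng.normal(0, 5000, 500)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,),
                                           max_iter=2000, random_state=0))
        model.fit(X, y)
        print("R^2 of the fitted net:", model.score(X, y))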

  5. A neural fuzzy model applied to hydrogen peroxide bleaching of non-wood soda pulps

    OpenAIRE

    Rosal, Antonio; Valls Vidal, Cristina; Ferrer, Ana; Rodríguez, Alejandro

    2012-01-01

    A neural fuzzy model was used to examine the influence of pulp bleaching variables of empty fruit bunches from oil palm (EFB) and Hesperaloe funifera, such as soda concentration (0.5-3%), hydrogen peroxide concentration (1-10%) and processing time (1-3 h), on Kappa number, brightness and viscosity. The experimental results are reproduced with errors below 10% and 15% for EFB and H. funifera, respectively. Bleaching pulp simulation makes it possible to obtain optimal values of the operating variables, s...

  6. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
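
    For orientation, the linear-Gaussian structure such a method exploits can be written down with standard Gaussian-process regression identities. This is a generic sketch, not necessarily the authors' exact notation: writing the multiline observations as v = MZ + ε, with Z the LSD profile, M the line-pattern (weight) matrix, K the GP prior covariance of Z, and Σ the noise covariance,

        \bar{Z} = K M^{T} \left( M K M^{T} + \Sigma \right)^{-1} v,
        \qquad
        \mathrm{Cov}(Z \mid v) = K - K M^{T} \left( M K M^{T} + \Sigma \right)^{-1} M K,

    which yields both the profile estimate and the per-velocity-bin uncertainty the abstract refers to.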

  7. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  8. Bayesian biostatistics

    CERN Document Server

    Lesaffre, Emmanuel

    2012-01-01

    The growth of biostatistics has been phenomenal in recent years and has been marked by considerable technical innovation in both methodology and computational practicality. One area that has experienced significant growth is Bayesian methods. The growing use of Bayesian methodology has taken place partly due to an increasing number of practitioners valuing the Bayesian paradigm as matching that of scientific discovery. In addition, computational advances have allowed for more complex models to be fitted routinely to realistic data sets. Through examples, exercises and a combination of introd

  9. Bayesian methods for proteomic biomarker development

    Directory of Open Access Journals (Sweden)

    Belinda Hernández

    2015-12-01

    In this review we provide an introduction to Bayesian inference and demonstrate some of the advantages of using a Bayesian framework. We summarize how Bayesian methods have been used previously in proteomics and other areas of bioinformatics. Finally, we describe some popular and emerging Bayesian models from the statistical literature and provide a worked tutorial including code snippets to show how these methods may be applied for the evaluation of proteomic biomarkers.

  10. Pattern recognition and data mining software based on artificial neural networks applied to proton transfer in aqueous environments

    International Nuclear Information System (INIS)

    In computational physics, proton transfer phenomena can be viewed as pattern classification problems based on a set of input features allowing classification of the proton motion into two categories: transfer ‘occurred’ and transfer ‘not occurred’. The goal of this paper is to evaluate the use of artificial neural networks in the classification of proton transfer events, based on the feed-forward back propagation neural network, used as a classifier to distinguish between the two transfer cases. In this paper, we use a newly developed data mining and pattern recognition tool for automating, controlling, and drawing charts of the output data of an existing Empirical Valence Bond code. The study analyzes the need for pattern recognition in aqueous proton transfer processes and how the learning approach in error back propagation (multilayer perceptron algorithms) could be satisfactorily employed in the present case. We present a tool for pattern recognition and validate the code, including a real physical case study. The results of applying the artificial neural network methodology to classify patterns based upon selected physical properties (e.g., temperature, density) show the ability of the network to learn proton transfer patterns corresponding to properties of the aqueous environments, which in turn proves to be fully compatible with previous proton transfer studies. (condensed matter: structural, mechanical, and thermal properties)
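
    The classification task itself reduces to a standard supervised pattern: physical features in, a binary 'occurred'/'not occurred' label out. A schematic stand-in (synthetic features and labels, not EVB trajectory data) using scikit-learn's feed-forward network trained by backpropagation:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)
        # hypothetical per-frame features: temperature, density, donor-acceptor distance
        X = rng.normal(size=(300, 3))
        y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.3, size=300) > 0).astype(int)

        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
        clf.fit(X, y)                             # backpropagation training
        print("training accuracy:", clf.score(X, y))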

  11. Bayesian statistics

    OpenAIRE

    Draper, D.

    2001-01-01

    Article outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography.

  12. Mindfulness training applied to addiction therapy: insights into the neural mechanisms of positive behavioral change

    Directory of Open Access Journals (Sweden)

    Eric L Garland

    2016-07-01

    Eric L Garland,1,2 Matthew O Howard,3 Sarah E Priddy,1 Patrick A McConnell,4 Michael R Riquino,1 Brett Froeliger4 1College of Social Work, 2Huntsman Cancer Institute, University of Utah, Salt Lake City, UT, USA; 3School of Social Work, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; 4Department of Neuroscience, Medical University of South Carolina, Charleston, SC, USA Abstract: Dual-process models from neuroscience suggest that addiction is driven by dysregulated interactions between bottom-up neural processes underpinning reward learning and top-down neural functions subserving executive function. Over time, drug use causes atrophy in prefrontally mediated cognitive control networks and hijacks striatal circuits devoted to processing natural rewards in service of compulsive seeking of drug-related reward. In essence, mindfulness-based interventions (MBIs) can be conceptualized as mental training programs for exercising, strengthening, and remediating these functional brain networks. This review describes how MBIs may remediate addiction by regulating frontostriatal circuits, thereby restoring an adaptive balance between these top-down and bottom-up processes. Empirical evidence is presented suggesting that MBIs facilitate cognitive control over drug-related automaticity, attentional bias, and drug cue reactivity, while enhancing responsiveness to natural rewards. Findings from the literature are incorporated into an integrative account of the neural mechanisms of mindfulness-based therapies for effecting positive behavior change in the context of addiction recovery. Implications of our theoretical framework are presented with respect to how these insights can inform the addiction therapy process. Keywords: mindfulness, frontostriatal, savoring, cue reactivity, hedonic dysregulation, reward, addiction

  13. Imaging regenerating bone tissue based on neural networks applied to micro-diffraction measurements

    Energy Technology Data Exchange (ETDEWEB)

    Campi, G.; Pezzotti, G. [Institute of Crystallography, CNR, via Salaria Km 29.300, I-00015, Monterotondo Roma (Italy); Fratini, M. [Centro Fermi -Museo Storico della Fisica e Centro Studi e Ricerche ' Enrico Fermi' , Roma (Italy); Ricci, A. [Deutsches Elektronen-Synchrotron DESY, Notkestraße 85, D-22607 Hamburg (Germany); Burghammer, M. [European Synchrotron Radiation Facility, B. P. 220, F-38043 Grenoble Cedex (France); Cancedda, R.; Mastrogiacomo, M. [Istituto Nazionale per la Ricerca sul Cancro, and Dipartimento di Medicina Sperimentale dell' Università di Genova and AUO San Martino Istituto Nazionale per la Ricerca sul Cancro, Largo R. Benzi 10, 16132, Genova (Italy); Bukreeva, I.; Cedola, A. [Institute for Chemical and Physical Process, CNR, c/o Physics Dep. at Sapienza University, P-le A. Moro 5, 00185, Roma (Italy)

    2013-12-16

    We monitored bone regeneration in a tissue engineering approach. To visualize and understand the structural evolution, the samples were measured by X-ray micro-diffraction. We find that bone tissue regeneration proceeds through a multi-step mechanism, each step providing a specific diffraction signal. The large amount of data was classified according to structure and associated with the process it came from by combining neural network algorithms with least squares pattern analysis. In this way, we obtain spatial maps of the different components of the tissues, visualizing the complex kinetics at the basis of bone regeneration.

  14. Subjective Bayesian Beliefs

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten I.;

    2015-01-01

    A large literature suggests that many individuals do not apply Bayes’ Rule when making decisions that depend on them correctly pooling prior information and sample data. We replicate and extend a classic experimental study of Bayesian updating from psychology, employing the methods of experimenta...

  15. Artificial neural networks applied in the spectrometry of a 239Pu-Be source

    International Nuclear Information System (INIS)

    To explore the potential use of a neutron source and to define the procedures for handling it under safe conditions, features like the neutron spectrum and the ambient dose equivalent of the source must be known. The aim of this work was to determine the spectrum, the total fluence rate and the ambient dose equivalent of a 185 GBq 239Pu-Be neutron source. Using Monte Carlo methods, the spectrum, the total fluence rate, and the ambient dose equivalent of the 239Pu-Be source were calculated. The spectrum was calculated at 50, 100, 200 and 300 cm from the source in air using the MCNPX and MCNP 4C codes. The neutron spectrum was also measured at 100 cm using a Bonner sphere spectrometer, whose count rates were used to unfold the neutron spectrum; the unfolding was carried out using an artificial neural network for neutron spectrometry. With the spectrum, the total neutron fluence and the ambient dose equivalent were determined. Calculated results were compared with measured values; the Monte Carlo results were smaller than those measured. These differences were attributed to the presence of 241Pu introduced during source manufacturing. In order to match calculated and measured quantities, a 241Pu content of 0.102 w/o was estimated. After corrections, the differences between calculated and experimental results were 1%. This result shows the advantages of using artificial neural network technology in the unfolding of neutron spectra using the count rates of a Bonner sphere spectrometer as the single piece of information. (author)
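
    The unfolding step has a simple supervised structure: count rates in, binned spectrum out. A toy version, with a synthetic response matrix and Dirichlet-sampled spectra standing in for the IAEA compilation and the real Bonner-sphere responses:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        n_spheres, n_bins = 7, 31
        R = rng.uniform(0.1, 1.0, size=(n_spheres, n_bins))  # mock response matrix
        spectra = rng.dirichlet(np.ones(n_bins), size=200)   # mock training spectra
        counts = spectra @ R.T                               # simulated count rates

        net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
        net.fit(counts, spectra)                  # learn the counts -> spectrum map
        unfolded = net.predict(counts[:1])        # unfold one measured set of counts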

  16. Bayesian modeling using WinBUGS

    CERN Document Server

    Ntzoufras, Ioannis

    2009-01-01

    A hands-on introduction to the principles of Bayesian modeling using WinBUGS Bayesian Modeling Using WinBUGS provides an easily accessible introduction to the use of WinBUGS programming techniques in a variety of Bayesian modeling settings. The author provides an accessible treatment of the topic, offering readers a smooth introduction to the principles of Bayesian modeling with detailed guidance on the practical implementation of key principles. The book begins with a basic introduction to Bayesian inference and the WinBUGS software and goes on to cover key topics, including: Markov Chain Monte Carlo algorithms in Bayesian inference Generalized linear models Bayesian hierarchical models Predictive distribution and model checking Bayesian model and variable evaluation Computational notes and screen captures illustrate the use of both WinBUGS and R software to apply the discussed techniques. Exercises at the end of each chapter allow readers to test their understanding of the presented concepts and all ...

  17. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Directory of Open Access Journals (Sweden)

    Saro Lee

    2016-02-01

    The aim of this study is to predict landslide susceptibility using spatial analysis and a GIS-based statistical methodology. Logistic regression and artificial neural network models were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study area were identified based on interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database considering forest, geophysical, soil and topographic data was built for the study area using a Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility map by comparing it with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to produce the landslide susceptibility map with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, ‘slope’ yielded the highest weight value (1.330), and ‘aspect’ yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.
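
    The comparison reported above follows a standard train/validate pattern that is easy to reproduce in outline. The sketch below uses random placeholder arrays, not the Inje spatial database, and the factor names in the comment are illustrative:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(4)
        X = rng.normal(size=(400, 5))  # e.g., slope, aspect, soil, forest, curvature
        y = (X[:, 0] + 0.3 * X[:, 1] ** 2
             + rng.normal(scale=0.5, size=400) > 0.5).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
        ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                            random_state=0).fit(X_tr, y_tr)
        logit = LogisticRegression().fit(X_tr, y_tr)
        print("ANN:", ann.score(X_te, y_te), "logistic:", logit.score(X_te, y_te))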

  18. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    Science.gov (United States)

    Saro, Lee; Woo, Jeon Seong; Kwan-Young, Oh; Moung-Jin, Lee

    2016-02-01

    The aim of this study is to predict landslide susceptibility using spatial analysis and a GIS-based statistical methodology. Logistic regression and artificial neural network models were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study area were identified based on interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database considering forest, geophysical, soil and topographic data was built for the study area using the Geographical Information System (GIS). These factors were analysed using artificial neural network (ANN) and logistic regression models to generate a landslide susceptibility map. The study validates the landslide susceptibility map by comparing it with landslide occurrence areas. The locations of landslide occurrence were divided randomly into a training set (50%) and a test set (50%). The training set was used to produce the landslide susceptibility map with the artificial neural network and logistic regression models, and the test set was retained to validate the prediction map. The validation results revealed that the artificial neural network model (with an accuracy of 80.10%) was better at predicting landslides than the logistic regression model (with an accuracy of 77.05%). Of the weights used in the artificial neural network model, `slope' yielded the highest weight value (1.330), and `aspect' yielded the lowest value (1.000). This research applied two statistical analysis methods in a GIS and compared their results. Based on the findings, we were able to derive a more effective method for analyzing landslide susceptibility.

  19. Experience of a Neural Network Imitator Applied to Diagnosis of Pre-pathological Conditions in Humans

    International Nuclear Information System (INIS)

    The Governmental Resolution of the RK 'Program of Medical Rehabilitation for People Influenced by Nuclear Tests at STS in 1949-1990' was published in March 1997. Implementation of the program requires, first of all, the creation of effective methods for the rapid diagnosis of the population of arid zones. In our view, systems analysis with elements of neural network classification is the most effective approach for this purpose. We demonstrate such an approach using the example of a modern diagnostic system created to detect pre-pathological states in a population through express analysis and personal particulars. The following considerations formed the basis of the training set: 1) any formalism must be based upon a wealth of phenomenology (experience, intuition, the presence of symptoms); 2) typical attributes of disease can be divided into two groups, subjective and objective. The first group characterizes the general state of the patient and may have no connection with the disease. The second is obtained by laboratory inspection and is not connected with the patient's sensations. Each of the objective attributes can be an attribute of several illnesses at once. In this case both the subjective and objective features must be used together; 3) the acceptability of any scheme can be substantiated only statistically. The question of the justifiability and sufficiency of the training set always demands separate discussion. Personal particulars are the most readily available material for creating a training set. The set must be professionally oriented in order to reduce selection effects. For our experiment the fully-connected neural network (computer software imitating the work of a neural computer) 'MultiNeuron' was chosen. The feature space used for the network was created from 206 personal particulars. The research aimed to determine pre-pathological states of the urinary system organs among industrial, office and professional workers in the mining industry connected with phosphorus

  20. Bayesian Adaptive Exploration

    CERN Document Server

    Loredo, T J

    2004-01-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation--Inference--Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data--measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object--show the approach can significantly improve observational eff...

  1. Neural networks applied to determine the thermophysical properties of amino acid based ionic liquids.

    Science.gov (United States)

    Cancilla, John C; Perez, Ana; Wierzchoś, Kacper; Torrecilla, José S

    2016-03-01

    A series of models based on artificial neural networks (ANNs) have been designed to estimate the thermophysical properties of different amino acid-based ionic liquids (AAILs). Three different databases of AAILs were modeled using these algorithms with the goal of estimating the density, viscosity, refractive index, ionic conductivity, and thermal expansion coefficient, requiring only data on the temperature and electronic polarizability of the chemicals. Additionally, a global model was designed combining all of the databases to determine the robustness of the method. In general, the results were successful, reaching mean prediction errors below 1% in many cases, as well as a statistically reliable and accurate global model. These successful models are relevant because AAILs are novel biodegradable and biocompatible compounds which may soon make their way into the health sector as part of useful biomedical applications. Therefore, understanding their behavior and being able to estimate their thermophysical properties becomes crucial. PMID:26899458

  2. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    International Nuclear Information System (INIS)

    An artificial neural network has been designed, trained and tested to unfold neutron spectra and simultaneously calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network design, training and testing. The robust design of artificial neural networks methodology was used to design the network; this methodology ensures that the quality of the neural network is taken into account from the design stage. Unlike previous works, here for the first time a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  3. Neutron spectrometry and dosimetry by means of Bonner spheres system and artificial neural networks applying robust design of artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Martinez B, M.R.; Ortiz R, J.M.; Vega C, H.R. [UAZ, Av. Ramon Lopez Velarde No. 801, 98000 Zacatecas (Mexico)

    2006-07-01

    An artificial neural network has been designed, trained and tested to unfold neutron spectra and simultaneously calculate equivalent doses. A set of 187 neutron spectra compiled by the International Atomic Energy Agency and 13 equivalent doses were used in the artificial neural network design, training and testing. The robust design of artificial neural networks methodology was used to design the network; this methodology ensures that the quality of the neural network is taken into account from the design stage. Unlike previous works, here for the first time a group of neural networks was designed and trained to unfold 187 neutron spectra and at the same time calculate 13 equivalent doses, starting from the count rates coming from the Bonner spheres system, by using a systematic and experimental strategy. (Author)

  4. Towards Distributed Bayesian Estimation A Short Note on Selected Aspects

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    Prague: Institute of Information Theory and Automation, 2011, s. 67-72. ISBN 978-80-903834-6-3. [The 2nd International Workshop on Decision Making with Multiple Imperfect Decision Makers. Held in Conjunction with the 25th Annual Conference on Neural Information Processing Systems (NIPS 2011). Sierra Nevada (ES), 16.12.2011-16.12.2011] R&D Projects: GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords: efficient estimation * a linear or nonlinear model * distributed estimation * Bayesian decision making Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/dedecius-towards distributed bayesian estimation a short note on selected aspects.pdf

  5. Artificial neural network analysis applied to simplifying bioeffect radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Full text: A bioeffect planning system has been developed by Wigg and Nicholls in the Departments of Clinical Radiobiology and Medical Physics at the Royal Adelaide Hospital. The system has been developed as an experimental tool by means of which bioeffect plans may be compared with conventional isodose plans in radiotherapy. Limitations of isodose planning, in many common clinical circumstances, have been apparent for some time (Wigg and Wilson, Australasian Radiology, 1981, 25: 205-212). There are many reasons why bioeffect planning has been slow to develop. These include concerns about the clinical application of theoretical radiobiology models, the uncertainty of normal tissue and tumour parameter values, and the non-availability of suitable computer systems capable of performing bioeffect planning. These concerns are fully justified, and isodose planning must remain, for the foreseeable future, the gold standard for clinical treatment. However, these concerns must be judged against the certainty that isodose planning, in which the only variable usually considered is the total dose, can be substantially misleading. Unfortunately, a typical Tumour Control Probability (TCP) equation for bioeffect planning is complex, with 12 parameters. Consequently, the equation is difficult to implement in practice. Can the equation be simplified by ignoring the variability of some of the parameters? To test this possibility, we have attempted a neural network analysis of the problem. The capability of artificial neural network (ANN) analysis to solve classification problems was explored, in which a weight space analysis was conducted; it led to a reduction in the number of parameters. The training data for the ANN analysis were generated using the above equation and practical data from many publications. The performance of the optimized ANN and the reduced-parameter ANN was tested using other treatment data. The optimized ANN results closely matched those of the

  6. Artificial neural networks applied for soil class prediction in mountainous landscape of the Serra do Mar¹

    Directory of Open Access Journals (Sweden)

    Braz Calderano Filho

    2014-12-01

    Soil information is needed for managing the agricultural environment. The aim of this study was to apply artificial neural networks (ANNs) to the prediction of soil classes using orbital remote sensing products, terrain attributes derived from a digital elevation model and local geology information as data sources. This approach to digital soil mapping was evaluated in an area with a high degree of lithologic diversity in the Serra do Mar. The neural network simulator used in this study was JavaNNS with the backpropagation learning algorithm. For soil class prediction, different combinations of the selected discriminant variables were tested: elevation, declivity, aspect, curvature, curvature plan, curvature profile, topographic index, solar radiation, LS topographic factor, local geology information, and clay mineral indices, iron oxides and the normalized difference vegetation index (NDVI) derived from an image of a Landsat-7 Enhanced Thematic Mapper Plus (ETM+) sensor. Among the tested sets, the best results were obtained when all discriminant variables were associated with geological information (overall accuracy 93.2-95.6 %, Kappa index 0.924-0.951) for set 13. Excluding the variable profile curvature (set 12), overall accuracy ranged from 93.9 to 95.4 % and the Kappa index from 0.932 to 0.948. The maps based on the neural network classifier were consistent and similar to conventional soil maps drawn for the study area, although with more spatial detail. The results show the potential of ANNs for soil class prediction in mountainous areas with lithological diversity.

  7. Artificial neural networks applied to DNBR calculation in digital core protection systems

    International Nuclear Information System (INIS)

    The nuclear power plant has to be operated with a sufficient margin from the specified DNBR limit to assure its safety. The digital core protection system calculates on-line real-time DNBR by using a complex subchannel analysis program, and triggers a reliable reactor shutdown if the calculated DNBR approaches the specified limit. However, it takes a relatively long calculation time even for a steady state condition, which may have an adverse effect on operational flexibility. To overcome this drawback, a method using artificial neural networks is studied in this paper. A nonparametric training approach is utilized, which shows a dramatic reduction of the training time, no tedious heuristic process for optimizing parameters, and no local minima problem during training. The test results show that the predicted DNBR is within about ±2% deviation from the target DNBR for the fixed axial flux shape case. For the variable axial flux case, including severely skewed shapes appearing during accidents, the deviation is about ±10∼15%. The suggested method could be an alternative that calculates DNBR very quickly while increasing plant availability.

  8. Connectivity strategies for higher-order neural networks applied to pattern recognition

    Science.gov (United States)

    Spirkovska, Lilly; Reid, Max B.

    1990-01-01

    Different strategies for non-fully connected HONNs (higher-order neural networks) are discussed, showing that by using such strategies an input field of 128 x 128 pixels can be attained while still achieving in-plane rotation and translation-invariant recognition. These techniques allow HONNs to be used with the larger input scenes required for practical pattern-recognition applications. The number of interconnections that must be stored has been reduced by a factor of approximately 200,000 in a T/C case and about 2000 in a Space Shuttle/F-18 case by using regional connectivity. Third-order networks have been simulated using several connection strategies. The method found to work best is regional connectivity. The main advantages of this strategy are the following: (1) it considers features of various scales within the image and thus gets a better sample of what the image looks like; (2) it is invariant to shape-preserving geometric transformations, such as translation and rotation; (3) the connections are predetermined so that no extra computations are necessary during run time; and (4) it does not require any extra storage for recording which connections were formed.

  9. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    Science.gov (United States)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.

  10. Artificial neural networks applied to the prediction of spot prices in the market of electric energy

    International Nuclear Information System (INIS)

    The commercialization of electricity in Brazil, as in the rest of the world, has undergone several changes over the past 20 years. In order to achieve an economic balance between supply and demand of the good called electricity, stakeholders in this market follow both rules set by society (government, companies and consumers) and rules set by the laws of nature (hydrology). To deal with such complex issues, various studies have been conducted in the area of computational heuristics. This work aims to develop software to forecast spot market prices using artificial neural networks (ANN). ANNs are widely used in various applications, especially in computational heuristics, where non-linear systems pose computational challenges that are difficult to overcome because of the effect named the 'curse of dimensionality'. This effect is due to the fact that current computational power is not enough to handle problems with such a high combination of variables. The challenge of forecasting prices depends on factors such as: (a) foreseeing the evolution of demand (electric load); (b) forecasting supply (reservoirs, hydrology and climate) and the capacity factor; and (c) the balance of the economy (pricing, auctions, foreign market influence, economic policy, government budget and government policy). These factors are considered for use in the forecasting model for spot market prices, and the results of its effectiveness are tested and presented. (author)

  11. Classification by a neural network approach applied to non destructive testing

    International Nuclear Information System (INIS)

    Radiography is used by EDF for pipe inspection in nuclear power plants in order to detect defects. The radiographs obtained are then digitized in a well-defined protocol. The aim of EDF is to develop a non destructive testing system for recognizing defects. In this paper, we describe the recognition procedure for areas with defects. We first present the digitization protocol, characterize the poor quality of the images under study and propose a procedure to enhance defects. We then examine the problem raised by the choice of good features for classification. After having shown that statistical or standard textural features such as homogeneity, entropy or contrast are not relevant, we develop a geometrical-statistical approach based on the cooperation between a study of signal correlations and an analysis of regional extrema. The principle consists of analysing and comparing, for areas with defects and without any defect, the evolution of conditional probability matrices for increasing neighborhood sizes, the shape of variograms and the location of regional minima. We demonstrate that the anisotropy and surface of series of 'comet tails' associated with probability matrices, variogram slopes and statistical indices, and the location of regional extrema are features able to discriminate areas with defects from areas without any. The classification is then realized by a neural network, whose structure, properties and learning mechanisms are detailed. Finally we discuss the results. (authors). 21 refs., 5 figs

  12. Applying Tabu search-based Bayesian network in appraising aroma types of tobacco

    Institute of Scientific and Technical Information of China (English)

    李丽华; 丁香乾; 贺英; 王伟

    2012-01-01

    The appraisal of the aroma type of tobacco usually depends on human olfaction, and the accuracy of the result is sometimes hard to guarantee. In view of this, sensory evaluation models have been constructed at home and abroad using BP neural networks and other methods, but their recognition efficiency is low. Based on the relationship between the chemical composition and the aroma type of tobacco, a recognition model of tobacco aroma types was constructed using a Tabu search-based Bayesian network. Experimental results show that this method attains a better Bayesian network structure, higher training efficiency, and more accurate classification than BP neural networks and other methods.
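
    The search component is generic and worth seeing in skeleton form. Below is a minimal tabu-search loop, not the authors' implementation: for Bayesian network structure learning, the states would be candidate DAGs scored against the data (e.g., by BIC or BDeu) and the neighbours would add, delete or reverse a single edge; the toy usage just climbs a quadratic.

        def tabu_search(initial, neighbours, score, n_iter=200, tabu_len=10):
            current = best = initial
            tabu = [initial]
            for _ in range(n_iter):
                moves = [m for m in neighbours(current) if m not in tabu]
                if not moves:
                    break
                current = max(moves, key=score)        # best admissible neighbour,
                tabu = (tabu + [current])[-tabu_len:]  # even if it worsens the score
                if score(current) > score(best):
                    best = current
            return best

        # toy usage: walk the integers toward the maximum of a quadratic score
        print(tabu_search(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2))  # 7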

  13. Plug & Play object oriented Bayesian networks

    DEFF Research Database (Denmark)

    Bangsø, Olav; Flores, J.; Jensen, Finn Verner

    2003-01-01

    Object oriented Bayesian networks have proven themselves useful in recent years. The idea of applying an object oriented approach to Bayesian networks has extended their scope to larger domains that can be divided into autonomous but interrelated entities. Object oriented Bayesian networks have been shown to be quite suitable for dynamic domains as well. However, processing object oriented Bayesian networks in practice does not take advantage of their modular structure. Normally the object oriented Bayesian network is transformed into a Bayesian network and inference is performed by constructing a junction tree from this network. In this paper we propose a method for translating directly from object oriented Bayesian networks to junction trees, avoiding the intermediate translation. We pursue two main purposes: firstly, to maintain the original structure organized in an instance tree...

  14. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Probabilistic networks, also known as Bayesian networks and influence diagrams, have become one of the most promising technologies in the area of applied artificial intelligence, offering intuitive, efficient, and reliable methods for diagnosis, prediction, decision making, classification, troubleshooting, and data mining under uncertainty. Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. Intended...

  15. Bayesian Adaptive Exploration

    Science.gov (United States)

    Loredo, Thomas J.

    2004-04-01

    I describe a framework for adaptive scientific exploration based on iterating an Observation-Inference-Design cycle that allows adjustment of hypotheses and observing protocols in response to the results of observation on-the-fly, as data are gathered. The framework uses a unified Bayesian methodology for the inference and design stages: Bayesian inference to quantify what we have learned from the available data and predict future data, and Bayesian decision theory to identify which new observations would teach us the most. When the goal of the experiment is simply to make inferences, the framework identifies a computationally efficient iterative ``maximum entropy sampling'' strategy as the optimal strategy in settings where the noise statistics are independent of signal properties. Results of applying the method to two ``toy'' problems with simulated data--measuring the orbit of an extrasolar planet, and locating a hidden one-dimensional object--show the approach can significantly improve observational efficiency in settings that have well-defined nonlinear models. I conclude with a list of open issues that must be addressed to make Bayesian adaptive exploration a practical and reliable tool for optimizing scientific exploration.
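
    The "maximum entropy sampling" rule has a one-line core in the Gaussian-noise case: observe where the predictive entropy, i.e. the predictive variance, is largest. A toy illustration with made-up numbers (a linear model y = θx + noise, with posterior draws of θ):

        import numpy as np

        rng = np.random.default_rng(5)
        thetas = rng.normal(1.0, 0.3, size=1000)   # posterior draws of the slope
        candidate_x = np.linspace(0.0, 10.0, 21)   # possible next design points

        # predictive variance of y = theta * x + eps at each candidate x
        pred_var = np.var(thetas) * candidate_x ** 2 + 0.5 ** 2
        next_x = candidate_x[np.argmax(pred_var)]  # sample where entropy is largest
        print("next design point:", next_x)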

  16. Approximate Bayesian recursive estimation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2014-01-01

    Roč. 285, č. 1 (2014), s. 100-111. ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf

  17. Bayesian Generalized Rating Curves

    OpenAIRE

    Helgi Sigurðarson 1985

    2014-01-01

    A rating curve is a curve or a model that describes the relationship between water elevation, or stage, and discharge at an observation site in a river. The rating curve is fitted from paired observations of stage and discharge. The rating curve then predicts discharge given observations of stage; this methodology is applied because stage is substantially easier to observe directly than discharge. In this thesis a statistical rating curve model is proposed, working within the framework of Bayesian...
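
    For context, the classical form that such models generalize is the power-law rating curve (the thesis's exact parametrization may differ):

        Q = a\,(h - c)^{b}

    where Q is discharge, h is stage, c is the stage of zero discharge, and a, b are fitted parameters; a Bayesian treatment places priors on (a, b, c) and propagates the posterior uncertainty into the predicted discharge.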

  18. Bayesian Monitoring.

    OpenAIRE

    Kirstein, Roland

    2005-01-01

    This paper presents a modification of the inspection game: the 'Bayesian Monitoring' model rests on the assumption that judges are interested in enforcing compliant behavior and making correct decisions. They may base their judgements on an informative but imperfect signal which can be generated costlessly. In the original inspection game, monitoring is costly and generates a perfectly informative signal. While the inspection game has only one mixed strategy equilibrium, three Perfect Bayesia...

  19. Tissue heterogeneity as a mechanism for localized neural stimulation by applied electric fields

    International Nuclear Information System (INIS)

    We investigate the heterogeneity of electrical conductivity as a new mechanism to stimulate excitable tissues via applied electric fields. In particular, we show that stimulation of axons crossing internal boundaries can occur at boundaries where the electric conductivity of the volume conductor changes abruptly. The effectiveness of this and other stimulation mechanisms was compared by means of models and computer simulations in the context of transcranial magnetic stimulation. While, for a given stimulation intensity, the largest membrane depolarization occurred where an axon terminates or bends sharply in a high electric field region, a slightly smaller membrane depolarization, still sufficient to generate action potentials, also occurred at an internal boundary where the conductivity jumped from 0.143 S m-1 to 0.333 S m-1, simulating a white-matter-grey-matter interface. Tissue heterogeneity can also give rise to local electric field gradients that are considerably stronger and more focal than those impressed by the stimulation coil and that can affect the membrane potential, albeit to a lesser extent than the two mechanisms mentioned above. Tissue heterogeneity may play an important role in electric and magnetic 'far-field' stimulation.

  20. Direct and inverse neural networks modelling applied to study the influence of the gas diffusion layer properties on PBI-based PEM fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Lobato, Justo; Canizares, Pablo; Rodrigo, Manuel A.; Linares, Jose J. [Chemical Engineering Department, University of Castilla-La Mancha, Campus Universitario s/n, 13004 Ciudad Real (Spain); Piuleac, Ciprian-George; Curteanu, Silvia [Faculty of Chemical Engineering and Environmental Protection, Department of Chemical Engineering, ' ' Gh. Asachi' ' Technical University Iasi Bd. D. Mangeron, No. 71A, 700050 IASI (Romania)

    2010-08-15

    This article shows the application of a very useful mathematical tool, artificial neural networks, to predict fuel cell results (the value of the tortuosity and the cell voltage at a given current density, and therefore the power) on the basis of several properties that define a gas diffusion layer: Teflon content, air permeability, porosity, mean pore size, and hydrophobicity level. Four neural network types (multilayer perceptron, generalized feedforward network, modular neural network, and Jordan-Elman neural network) have been applied, with a good fit between the predicted and the experimental values in the polarization curves. A simple feedforward neural network with one hidden layer proved to be an accurate model with good generalization capability (error about 1% in the validation phase). A procedure based on inverse neural network modelling was able to determine, with small errors, the initial conditions leading to imposed values for characteristics of the fuel cell. In addition, the use of this tool has proved very attractive for predicting cell performance and, more interestingly, the influence of the properties of the gas diffusion layer on the cell performance, allowing possible enhancements of this material by changing some of its properties. (author)
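
    The direct/inverse pairing can be sketched compactly: fit a forward net from layer properties to cell voltage, then search the input space for properties that hit a target voltage. Everything below is synthetic (made-up property ranges and response), illustrating the pattern rather than the paper's four network types:

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        X = rng.uniform(0, 1, size=(300, 3))  # e.g., Teflon content, porosity, pore size
        y = 0.9 - 0.3 * X[:, 0] + 0.2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.01, 300)

        forward = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                               random_state=0).fit(X, y)      # direct model

        target = 0.85                          # desired cell voltage
        obj = lambda p: (forward.predict(p.reshape(1, -1))[0] - target) ** 2
        res = minimize(obj, x0=np.full(3, 0.5), bounds=[(0, 1)] * 3)
        print("properties achieving the target:", res.x)      # inverse answer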

  1. Bayesian meta-analysis of test accuracy in the absence of a perfect reference test applied to bone scintigraphy for the diagnosis of complex regional pain syndrome.

    Science.gov (United States)

    Held, Ulrike; Brunner, Florian; Steurer, Johann; Wertli, Maria M

    2015-11-01

    There is conflicting evidence about the accuracy of bone scintigraphy (BS) for the diagnosis of complex regional pain syndrome 1 (CRPS 1). In a meta-analysis of diagnostic studies, the evaluation of test accuracy is impeded by the use of different imperfect reference tests. The aim of our study is to summarize sensitivity and specificity of BS for CRPS 1 and to identify factors to explain heterogeneity. We use a hierarchical Bayesian approach to model test accuracy and threshold, and we present different models accounting for the imperfect nature of the reference tests, and assuming conditional dependence between BS and the reference test results. Further, we include disease duration as explanatory variable in the model. The models are compared using summary ROC curves and the deviance information criterion (DIC). Our results show that those models which account for different imperfect reference tests with conditional dependence and inclusion of the covariate are the ones with the smallest DIC. The sensitivity of BS was 0.87 (95% credible interval 0.73-0.97) and the overall specificity was 0.87 (0.73-0.95) in the model with the smallest DIC, in which missing values of the covariate are imputed within the Bayesian framework. The estimated effect of duration of symptoms on the threshold parameter was 0.17 (-0.25 to 0.57). We demonstrate that the Bayesian models presented in this paper are useful to address typical problems occurring in meta-analysis of diagnostic studies, including conditional dependence between index test and reference test, as well as missing values in the study-specific covariates. PMID:26479506
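
    For context, meta-analyses of diagnostic accuracy typically start from a hierarchical logit structure of the following generic form (a sketch, not the authors' exact specification, which adds imperfect-reference and conditional-dependence terms):

        \mathrm{logit}(Se_i) \sim N(\mu_{Se}, \tau_{Se}^{2}), \qquad
        \mathrm{logit}(Sp_i) \sim N(\mu_{Sp}, \tau_{Sp}^{2}),

    with study-specific sensitivities Se_i and specificities Sp_i shrunk toward common means, and study-level covariates such as symptom duration entering as regression terms.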

  2. Bayesian programming

    CERN Document Server

    Bessiere, Pierre; Ahuactzin, Juan Manuel; Mekhnacha, Kamel

    2013-01-01

    Probability as an Alternative to Boolean Logic: While logic is the mathematical foundation of rational reasoning and the fundamental principle of computing, it is restricted to problems where information is both complete and certain. However, many real-world problems, from financial investments to email filtering, are incomplete or uncertain in nature. Probability theory and Bayesian computing together provide an alternative framework to deal with incomplete and uncertain data. Decision-Making Tools and Methods for Incomplete and Uncertain Data: Emphasizing probability as an alternative to Boolean

  3. Neural network design with combined backpropagation and creeping random search learning algorithms applied to the determination of retained austenite in TRIP steels

    International Nuclear Information System (INIS)

    At the beginning of the nineties, industrial interest in TRIP steels led to a significant increase in research and application in this field. In this work, the flexibility of neural networks for modelling complex properties is used to tackle the problem of determining the retained austenite content in TRIP steel. Applying a combination of two learning algorithms (backpropagation and creeping random search) for the neural network, a model has been created that enables the prediction of retained austenite in low-Si / low-Al multiphase steels as a function of processing parameters. (Author). 34 refs.

  4. 12th Brazilian Meeting on Bayesian Statistics

    CERN Document Server

    Louzada, Francisco; Rifo, Laura; Stern, Julio; Lauretto, Marcelo

    2015-01-01

    Through refereed papers, this volume focuses on the foundations of the Bayesian paradigm; their comparison to objectivistic or frequentist Statistics counterparts; and the appropriate application of Bayesian foundations. This research in Bayesian Statistics is applicable to data analysis in biostatistics, clinical trials, law, engineering, and the social sciences. EBEB, the Brazilian Meeting on Bayesian Statistics, is held every two years by the ISBrA, the International Society for Bayesian Analysis, one of the most active chapters of the ISBA. The 12th meeting took place March 10-14, 2014 in Atibaia. Interest in foundations of inductive Statistics has grown recently in accordance with the increasing availability of Bayesian methodological alternatives. Scientists need to deal with the ever more difficult choice of the optimal method to apply to their problem. This volume shows how Bayes can be the answer. The examination and discussion on the foundations work towards the goal of proper application of Bayesia...

  5. Bayesian Statistics for Biological Data: Pedigree Analysis

    Science.gov (United States)

    Stanfield, William D.; Carlton, Matthew A.

    2004-01-01

    Bayes' formula is applied to the biological problem of pedigree analysis to show that Bayesian and non-Bayesian or "classical" methods of probability calculation give different answers. First-year college biology students can thus be introduced to Bayesian statistics.
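
    The formula in question, in the discrete form used for pedigree calculations, is

        P(H_i \mid D) = \frac{P(D \mid H_i)\, P(H_i)}{\sum_{j} P(D \mid H_j)\, P(H_j)}.

    A standard pedigree illustration (not necessarily the article's own example): for an unaffected child of two carriers of a recessive allele, the Punnett-square prior of 1/4 : 1/2 : 1/4 over AA : Aa : aa, combined with the datum 'unaffected' (which excludes aa), gives P(carrier | unaffected) = (1/2)/(3/4) = 2/3, whereas a naive count that ignores the conditioning gives 1/2.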

  6. BAMBI: blind accelerated multimodal Bayesian inference

    CERN Document Server

    Graff, Philip; Hobson, Michael P; Lasenby, Anthony

    2011-01-01

    In this paper we present an algorithm for rapid Bayesian analysis that combines the benefits of nested sampling and artificial neural networks. The blind accelerated multimodal Bayesian inference (BAMBI) algorithm implements the MultiNest package for nested sampling as well as the training of an artificial neural network (NN) to learn the likelihood function. In the case of computationally expensive likelihoods, this allows the substitution of a much more rapid approximation in order to increase significantly the speed of the analysis. We begin by demonstrating, with a few toy examples, the ability of a NN to learn complicated likelihood surfaces. BAMBI's ability to decrease running time for Bayesian inference is then demonstrated in the context of estimating cosmological parameters from WMAP and other observations. We show that valuable speed increases are achieved in addition to obtaining NNs trained on the likelihood functions for the different model and data combinations. These NNs can then be used for an...
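
    The acceleration idea is separable from nested sampling itself and can be caricatured in a few lines: train a cheap regressor on (parameter, log-likelihood) pairs, then query the surrogate instead of the expensive function. The sketch below uses a trivial Gaussian stand-in for the expensive likelihood and an off-the-shelf network rather than BAMBI's machinery:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def expensive_loglike(theta):          # stand-in for a costly computation
            return -0.5 * np.sum((theta - 1.0) ** 2, axis=-1)

        rng = np.random.default_rng(7)
        train_theta = rng.uniform(-3, 3, size=(500, 2))
        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                                 random_state=0)
        surrogate.fit(train_theta, expensive_loglike(train_theta))

        test = rng.uniform(-3, 3, size=(5, 2))
        print(surrogate.predict(test))         # fast approximation
        print(expensive_loglike(test))         # exact values, for comparison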

  7. Bayesian theory and applications

    CERN Document Server

    Dellaportas, Petros; Polson, Nicholas G; Stephens, David A

    2013-01-01

    The development of hierarchical models and Markov chain Monte Carlo (MCMC) techniques forms one of the most profound advances in Bayesian analysis since the 1970s and provides the basis for advances in virtually all areas of applied and theoretical Bayesian statistics. This volume guides the reader along a statistical journey that begins with the basic structure of Bayesian theory, and then provides details on most of the past and present advances in this field. The book has a unique format. There is an explanatory chapter devoted to each conceptual advance followed by journal-style chapters that provide applications or further advances on the concept. Thus, the volume is both a textbook and a compendium of papers covering a vast range of topics. It is appropriate for a well-informed novice interested in understanding the basic approach, methods and recent applications. Because of its advanced chapters and recent work, it is also appropriate for a more mature reader interested in recent applications and devel...

  8. A Gentle Introduction to Bayesian Analysis : Applications to Developmental Research

    NARCIS (Netherlands)

    Van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B.; Neyer, Franz J.; van Aken, Marcel A G

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, t

  9. A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri

    2013-01-01

    representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error...

  10. Naive Bayes applied to automatic test case generation

    Institute of Scientific and Technical Information of China (English)

    李欣; 张聪; 罗宪

    2012-01-01

    提出一种使用朴素贝叶斯作为核心算法来产生自动化测试用例的方法。该方法以实现自动化测试为目标,引入了朴素贝叶斯对产生的随机测试用例分类的思想。实验结果表明,这是一种可行的生成测试用例的方法。%Test cases generation was the key of automatic testing. Test cases generated great significance in software testing process. Automatic testing cases generated by as the core algorithm were presented in this paper. And the thoughts of classificatio in test case generation. The results showed the method presented in this paper was to generate test cases. effectively had Bayesian methods n were introduced a feasible method

  11. Applying Bayesian parameter estimation to relativistic heavy-ion collisions: Simultaneous characterization of the initial state and quark-gluon plasma medium

    Science.gov (United States)

    Bernhard, Jonah E.; Moreland, J. Scott; Bass, Steffen A.; Liu, Jia; Heinz, Ulrich

    2016-08-01

    We quantitatively estimate properties of the quark-gluon plasma created in ultrarelativistic heavy-ion collisions utilizing Bayesian statistics and a multiparameter model-to-data comparison. The study is performed using a recently developed parametric initial condition model, TRENTo, which interpolates among a general class of particle production schemes, and a modern hybrid model which couples viscous hydrodynamics to a hadronic cascade. We calibrate the model to multiplicity, transverse momentum, and flow data and report constraints on the parametrized initial conditions and the temperature-dependent transport coefficients of the quark-gluon plasma. We show that initial entropy deposition is consistent with a saturation-based picture, extract a relation between the minimum value and slope of the temperature-dependent specific shear viscosity, and find a clear signal for a nonzero bulk viscosity.

  12. Applying Bayesian parameter estimation to relativistic heavy-ion collisions: simultaneous characterization of the initial state and quark-gluon plasma medium

    CERN Document Server

    Bernhard, Jonah E; Bass, Steffen A; Liu, Jia; Heinz, Ulrich

    2016-01-01

    We quantitatively estimate properties of the quark-gluon plasma created in ultra-relativistic heavy-ion collisions utilizing Bayesian statistics and a multi-parameter model-to-data comparison. The study is performed using a recently developed parametric initial condition model, TRENTO, which interpolates among a general class of particle production schemes, and a modern hybrid model which couples viscous hydrodynamics to a hadronic cascade. We calibrate the model to multiplicity, transverse momentum, and flow data and report constraints on the parametrized initial conditions and the temperature-dependent transport coefficients of the quark-gluon plasma. We show that initial entropy deposition is consistent with a saturation-based picture, extract a relation between the minimum value and slope of the temperature-dependent specific shear viscosity, and find a clear signal for a nonzero bulk viscosity.

  13. A Galvanotaxis Assay for Analysis of Neural Precursor Cell Migration Kinetics in an Externally Applied Direct Current Electric Field

    OpenAIRE

    Babona-Pilipos, Robart; Popovic, Milos R.; Morshead, Cindi M.

    2012-01-01

    The discovery of neural stem and progenitor cells (collectively termed neural precursor cells) (NPCs) in the adult mammalian brain has led to a body of research aimed at utilizing the multipotent and proliferative properties of these cells for the development of neuroregenerative strategies. A critical step for the success of such strategies is the mobilization of NPCs toward a lesion site following exogenous transplantation or to enhance the response of the endogenous precursors that are fou...

  14. Pattern recognition and data mining software based on artificial neural networks applied to proton transfer in aqueous environments

    OpenAIRE

    Tahat, Amani; Martí Rabassa, Jordi; Khwaldeh, Ali; Tahat, Kaher

    2014-01-01

    In computational physics, proton transfer phenomena can be viewed as pattern classification problems based on a set of input features allowing the proton motion to be classified into two categories: transfer 'occurred' and transfer 'not occurred'. The goal of this paper is to evaluate the use of artificial neural networks in the classification of proton transfer events, based on the feed-forward back propagation neural network, used as a classifier to distinguish between the two transfer cases. In t...

  15. Decentralized Distributed Bayesian Estimation

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    Praha: ÚTIA AVČR, v.v.i, 2011 - (Janžura, M.; Ivánek, J.). s. 16-16 [7th International Workshop on Data–Algorithms–Decision Making. 27.11.2011-29.11.2011, Mariánská] R&D Projects: GA ČR 102/08/0567; GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : estimation * distributed estimation * model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/dedecius-decentralized distributed bayesian estimation.pdf

  16. Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy and Artificial Neural Networks Applied to Differentiate Escherichia coli papG+/papG- Strains

    OpenAIRE

    Łukasz Lechowicz; Wioletta Adamus-Białek; Wiesław Kaca

    2013-01-01

    Fimbriae are an important pathogenic factor of Escherichia coli during the development of urinary tract infections. Here, we describe a new method for distinguishing Escherichia coli papG+ from papG- strains using attenuated total reflectance Fourier transform infrared spectroscopy (ATR FT-IR). We applied artificial neural networks to the analysis of the ATR FT-IR results. These methods allowed us to discriminate E. coli papG+ from papG- strains with an accuracy of 99%.

  17. Bayesian Analysis of Experimental Data

    Directory of Open Access Journals (Sweden)

    Lalmohan Bhar

    2013-10-01

    The analysis of experimental data from a Bayesian point of view is considered, and an appropriate methodology is developed for application to designed experiments. A Normal-Gamma distribution is used as the prior distribution. The developed methodology is applied to real experimental data taken from long-term fertilizer experiments.

  18. Fuzzy Naive Bayesian for constructing regulated network with weights.

    Science.gov (United States)

    Zhou, Xi Y; Tian, Xue W; Lim, Joon S

    2015-01-01

    In the data mining field, classification is a crucial technology, and the Bayesian classifier has been one of the hotspots of classification research. However, the independence assumptions of Naive Bayesian and Tree Augmented Naive Bayesian (TAN) classifiers treat attribute relations unfairly. This paper therefore proposes a new algorithm named Fuzzy Naive Bayesian (FNB), which uses a neural network with weighted membership functions (NEWFM) to extract regulated relations and weights; these regulated relations and weights are then used to construct a regulated network. Finally, the heart and Haberman datasets are classified by the FNB network and compared against Naive Bayesian and TAN. The experimental results show that FNB achieves a higher classification rate than Naive Bayesian and TAN. PMID:26405944

  19. Control of Complex Systems Using Bayesian Networks and Genetic Algorithm

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    A method based on Bayesian neural networks and a genetic algorithm is proposed to control the fermentation process. The relationship between input and output variables is modelled using a Bayesian neural network trained with the hybrid Monte Carlo method. A feedback loop based on a genetic algorithm is used to change the input variables so that the output variables are as close to the desired target as possible, without loss of confidence in the neural network's predictions. The proposed procedure is found to significantly reduce the distance between the desired target and the measured outputs.
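
    A minimal sketch of the feedback loop described above, assuming a fixed stand-in function in place of the trained Bayesian neural network: a simple genetic algorithm searches the input space for settings that drive the surrogate's output toward a target. The surrogate, population settings and bounds are hypothetical, not the authors' fermentation model.

        import numpy as np

        rng = np.random.default_rng(0)

        def surrogate(x):
            """Stand-in for the trained model: maps 3 process inputs to 1 output."""
            return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 - 0.3 * x[:, 2]

        TARGET = 1.2
        POP, GENS, BOUNDS = 40, 60, (-2.0, 2.0)

        pop = rng.uniform(*BOUNDS, size=(POP, 3))
        for _ in range(GENS):
            fitness = -np.abs(surrogate(pop) - TARGET)      # closer to target = fitter
            order = np.argsort(fitness)[::-1]
            parents = pop[order[:POP // 2]]                 # truncation selection
            # uniform crossover between random parent pairs, then Gaussian mutation
            idx = rng.integers(0, len(parents), size=(POP, 2))
            mask = rng.random((POP, 3)) < 0.5
            children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
            children += rng.normal(0.0, 0.05, children.shape)
            pop = np.clip(children, *BOUNDS)

        best = pop[np.argmax(-np.abs(surrogate(pop) - TARGET))]
        print("best inputs:", best, "model output:", surrogate(best[None])[0])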

  20. Tactile length contraction as Bayesian inference.

    Science.gov (United States)

    Tong, Jonathan; Ngo, Vy; Goldreich, Daniel

    2016-08-01

    To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process. PMID:27121574
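
    The following is a minimal numerical sketch of a Bayesian observer of this kind, not the authors' fitted model: a Gaussian likelihood for the measured tap separation is combined with a zero-centred low-velocity prior on a grid, and the posterior mean length shrinks as the temporal separation t decreases. All parameter values are invented for illustration.

        import numpy as np

        sigma_x = 1.0        # cm, sensory noise on the measured separation (illustrative)
        sigma_v = 10.0       # cm/s, width of the low-velocity prior (illustrative)
        measured = 6.0       # cm, noisy measured distance between the two taps

        L = np.linspace(0.0, 12.0, 1201)        # grid of candidate true lengths
        for t in (1.0, 0.4, 0.1):               # temporal separation of the taps
            # speed prior v ~ N(0, sigma_v) induces a prior on length L = v * t
            prior = np.exp(-0.5 * (L / (sigma_v * t)) ** 2)
            like = np.exp(-0.5 * ((measured - L) / sigma_x) ** 2)
            post = prior * like
            post /= post.sum()
            print(f"t = {t:.1f} s -> perceived length {np.sum(L * post):.2f} cm")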

  1. Kernel density compression for real-time Bayesian encoding/decoding of unsorted hippocampal spikes

    OpenAIRE

    Sodkomkham, Danaipat; Ciliberti, Davide; Wilson, Matthew A.; Fukui, Ken-ichi; Moriyama, Koichi; Numao, Masayuki; Kloosterman, Fabian

    2015-01-01

    To gain a better understanding of how neural ensembles communicate and process information, neural decoding algorithms are used to extract information encoded in their spiking activity. Bayesian decoding is one of the most used neural population decoding approaches to extract information from the ensemble spiking activity of rat hippocampal neurons. Recently it has been shown how Bayesian decoding can be implemented without the intermediate step of sorting spike waveforms into groups of singl...
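
    A hedged sketch of the standard Bayesian population decoding idea mentioned above, with synthetic Gaussian place fields and Poisson spike counts; it is not the clusterless/unsorted variant this paper develops, and all tuning-curve parameters are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        pos = np.linspace(0.0, 100.0, 201)        # candidate positions (cm)
        centers = np.linspace(0.0, 100.0, 20)     # place-field centers, 20 cells

        def rates(x):
            """Gaussian place-field tuning curves (Hz); synthetic, for illustration."""
            return 1.0 + 15.0 * np.exp(-0.5 * ((x[:, None] - centers) / 8.0) ** 2)

        dt = 0.25                                 # decoding window (s)
        true_x = 42.0
        counts = rng.poisson(rates(np.array([true_x]))[0] * dt)

        lam = rates(pos) * dt                     # expected counts at each position
        # Poisson log-likelihood with a flat prior: log P(x|n) ~ sum n*log(lam) - lam
        log_post = (counts * np.log(lam) - lam).sum(axis=1)
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        print("decoded position:", pos[np.argmax(post)], "cm (true:", true_x, "cm)")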

  2. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical model is the Bayesian network. The structural part of a Bayesian graphical model is a graph consisting of nodes and...... largely due to the availability of efficient inference algorithms for answering probabilistic queries about the states of the variables in the network. Furthermore, to support the construction of Bayesian network models, learning algorithms are also available. We give an overview of the Bayesian network...

  3. Fuzzy ARTMAP neural network for seafloor classification from multibeam sonar data

    Institute of Scientific and Technical Information of China (English)

    Zhou Xinghua; Chen Yongqi; Nick Emerson; Du Dewen

    2006-01-01

    This paper presents a seafloor classification method for multibeam sonar data based on the use of Adaptive Resonance Theory (ART) neural networks. A general ART-based neural network, Fuzzy ARTMAP, has been proposed for seafloor classification of multibeam sonar data. An evolutionary strategy was used to generate new training samples near the cluster boundaries of the neural network, so that the weights could be revised and refined by supervised learning. The proposed method resolves the training problem for Fuzzy ARTMAP neural networks, which are applied to seafloor classification of multibeam sonar data when adequate ground-truth samples are unavailable. The results were analyzed in comparison with the standard Fuzzy ARTMAP network and a conventional Bayesian classifier. The conclusion can be drawn that Fuzzy ARTMAP neural networks combined with genetic algorithms can be a powerful alternative tool for seafloor classification of multibeam sonar data.

  4. Using consensus bayesian network to model the reactive oxygen species regulatory pathway.

    Directory of Open Access Journals (Sweden)

    Liangdong Hu

    A Bayesian network is one of the most successful graph models for representing the reactive oxygen species (ROS) regulatory pathway. With the increasing number of microarray measurements, it is possible to construct the Bayesian network from microarray data directly. Although a large number of Bayesian network learning algorithms have been developed, when they are applied to learn Bayesian networks from microarray data the accuracy is low, because the databases used contain too few microarray measurements. In this paper, we propose a consensus Bayesian network constructed by combining Bayesian networks from the relevant literature with Bayesian networks learned from microarray data; it achieves a higher accuracy than a Bayesian network learned from a single database. In the experiments, we validated the Bayesian network combination algorithm on several classic machine learning databases and used the consensus Bayesian network to model the Escherichia coli ROS pathway.

  5. Modeling operational risks of the nuclear industry with Bayesian networks

    International Nuclear Information System (INIS)

    Basically, planning a new industrial plant requires information on industrial management, regulations, site selection, the definition of initial and planned capacity, and the estimation of potential demand. However, this is far from enough to assure the success of an industrial enterprise: unexpected and extremely damaging events that deviate from the original plan may occur. The so-called operational risks lie not only in system, equipment, process or human (technical or managerial) failures; they also lie in intentional events such as fraud and sabotage, in extreme events like terrorist attacks or radiological accidents, and even in public reaction to perceived impacts on the environment or on future generations. For the nuclear industry, identifying and assessing operational risks and their various sources is a challenge. Early identification of operational risks can help in preparing contingency plans and in deciding whether to delay investment in, or approval of, a project that could, at an extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of the internal data that are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example fuzzy logic, neural networks, system dynamics or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility of including expert opinions and variables of interest, structuring the model via causal dependencies among these variables, and specifying subjective prior and conditional probability distributions at each step or network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian network approach to modeling those risks. (author)
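
    As a hedged illustration of the modelling style discussed here, the sketch below hand-rolls a two-parent Bayesian network with expert-elicited priors and a conditional probability table, and answers a diagnostic query by enumeration. The node names and probabilities are hypothetical, not taken from the paper.

        # Two-parent Bayesian network: expert-elicited priors on two risk sources
        # and a conditional probability table (CPT) for an operational loss.
        # All probabilities are invented for illustration.
        p_fraud = 0.02
        p_sabotage = 0.01
        # P(loss | fraud, sabotage), indexed by (fraud, sabotage)
        p_loss = {(True, True): 0.95, (True, False): 0.60,
                  (False, True): 0.70, (False, False): 0.05}

        def p(fraud, sabotage, loss):
            """Joint probability of one full assignment."""
            pf = p_fraud if fraud else 1 - p_fraud
            ps = p_sabotage if sabotage else 1 - p_sabotage
            pl = p_loss[(fraud, sabotage)] if loss else 1 - p_loss[(fraud, sabotage)]
            return pf * ps * pl

        # Diagnostic query by enumeration: P(fraud | loss observed)
        num = sum(p(True, s, True) for s in (True, False))
        den = sum(p(f, s, True) for f in (True, False) for s in (True, False))
        print("P(fraud | loss) =", num / den)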

  6. A Bayesian Nonparametric IRT Model

    OpenAIRE

    Karabatsos, George

    2015-01-01

    This paper introduces a flexible Bayesian nonparametric Item Response Theory (IRT) model, which applies to dichotomous or polytomous item responses, and which can apply to either unidimensional or multidimensional scaling. This is an infinite-mixture IRT model, with person ability and item difficulty parameters, and with a random intercept parameter that is assigned a mixing distribution, with mixing weights a probit function of other person and item parameters. As a result of its flexibility...

  7. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which...... represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid...... nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing...

  8. Bayesian data analysis

    CERN Document Server

    Gelman, Andrew; Stern, Hal S; Dunson, David B; Vehtari, Aki; Rubin, Donald B

    2013-01-01

    FUNDAMENTALS OF BAYESIAN INFERENCE: Probability and Inference; Single-Parameter Models; Introduction to Multiparameter Models; Asymptotics and Connections to Non-Bayesian Approaches; Hierarchical Models. FUNDAMENTALS OF BAYESIAN DATA ANALYSIS: Model Checking; Evaluating, Comparing, and Expanding Models; Modeling Accounting for Data Collection; Decision Analysis. ADVANCED COMPUTATION: Introduction to Bayesian Computation; Basics of Markov Chain Simulation; Computationally Efficient Markov Chain Simulation; Modal and Distributional Approximations. REGRESSION MODELS: Introduction to Regression Models; Hierarchical Linear

  9. Bayesian Mediation Analysis

    OpenAIRE

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    This article proposes Bayesian analysis of mediation effects. Compared to conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian mediation analysis, inference is straightforward and exact, which makes it appealing for studies with small samples. Third, the Bayesian approach is conceptua...

  10. Bayesian Games with Intentions

    OpenAIRE

    Bjorndahl, Adam; Halpern, Joseph Y.; Pass, Rafael

    2016-01-01

    We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games, and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

  11. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    Science.gov (United States)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied to the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture: Partial Least Squares (PLS) as the traditional chemometric model and Artificial Neural Networks (ANN) as the advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA); the chemometric methods applied are thus PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form by handling the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
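
    A minimal sketch of a PCA-ANN calibration pipeline of the kind compared in this record, assuming scikit-learn and synthetic stand-in spectra; the component spectra, concentrations and network size are invented and only illustrate the 15/10 calibration/validation split.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)
        # Synthetic stand-in for UV spectra: 25 mixtures x 200 wavelengths,
        # absorbance = linear mix of 3 component spectra plus noise (illustrative).
        components = rng.random((3, 200))
        conc = rng.uniform(0.2, 1.0, (25, 3))          # "concentrations" of 3 drugs
        spectra = conc @ components + rng.normal(0, 0.01, (25, 200))

        model = make_pipeline(PCA(n_components=5),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                           random_state=0))
        model.fit(spectra[:15], conc[:15, 0])          # calibrate on 15 mixtures
        pred = model.predict(spectra[15:])             # validate on the other 10
        print("RMSE:", np.sqrt(np.mean((pred - conc[15:, 0]) ** 2)))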

  12. Bayesian Soft Sensing in Cold Sheet Rolling

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Jirsa, Ladislav

    Praha: ÚTIA AV ČR, v.v.i, 2010. s. 45-45. [6th International Workshop on Data–Algorithms–Decision Making. 2.12.2010-4.12.2010, Jindřichův Hradec] R&D Projects: GA MŠk(CZ) 7D09008 Institutional research plan: CEZ:AV0Z10750506 Keywords : soft sensor * bayesian statistics * bayesian model averaging Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2010/AS/dedecius-bayesian soft sensing in cold sheet rolling.pdf

  13. Bayesian analysis of zero inflated spatiotemporal HIV/TB child mortality data through the INLA and SPDE approaches: Applied to data observed between 1992 and 2010 in rural North East South Africa

    Science.gov (United States)

    Musenge, Eustasius; Chirwa, Tobias Freeman; Kahn, Kathleen; Vounatsou, Penelope

    2013-06-01

    Longitudinal mortality data with few deaths usually have problems of zero-inflation. This paper presents and applies two Bayesian models which cater for zero-inflation and for spatial and temporal random effects. To reduce the computational burden experienced when a large number of geo-locations are treated as a Gaussian field (GF), we transformed the field to a Gaussian Markov random field (GMRF) by triangulation. We then modelled the spatial random effects using Stochastic Partial Differential Equations (SPDEs). Inference was done using a computationally efficient alternative to Markov chain Monte Carlo (MCMC) called Integrated Nested Laplace Approximation (INLA), suited for GMRFs. The models were applied to data from 71,057 children aged 0 to under 10 years from rural north-east South Africa, living in 15,703 households over the years 1992-2010. We found protective effects on HIV/TB mortality due to greater birth weight, older age and more antenatal clinic visits during pregnancy (adjusted RR (95% CI)): 0.73(0.53;0.99), 0.18(0.14;0.22) and 0.96(0.94;0.97) respectively. Therefore childhood HIV/TB mortality could be reduced if mothers are better catered for during pregnancy, as this can reduce mother-to-child transmission and contribute to improved birth weights. The INLA and SPDE approaches are computationally good alternatives for modelling large multilevel spatiotemporal GMRF data structures.

  14. Flexible Bayesian Nonparametric Priors and Bayesian Computational Methods

    OpenAIRE

    Zhu, Weixuan

    2016-01-01

    The definition of vectors of dependent random probability measures is a topic of interest in Bayesian nonparametrics. They represent dependent nonparametric prior distributions that are useful for modelling observables for which specific covariate values are known. Our first contribution is the introduction of novel multivariate vectors of the two-parameter Poisson-Dirichlet process. The dependence is induced by applying a Lévy copula to the marginal Lévy intensities. Our attenti...

  15. Bayesian Analysis of Multivariate Probit Models

    OpenAIRE

    Siddhartha Chib; Edward Greenberg

    1996-01-01

    This paper provides a unified simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods, and maximum likelihood estimates are obtained by a Markov chain Monte Carlo version of the E-M algorithm. Computation of Bayes factors from the simulation output is also considered. The methods are applied to a bivariate data set, to a 534-subject, four-year longitudinal dat...

  16. Bayesian Classification in Medicine: The Transferability Question

    OpenAIRE

    Zagoria, Ronald J.; Reggia, James A.; Price, Thomas R.; Banko, Maryann

    1981-01-01

    Using probabilities derived from a geographically distant patient population, we applied Bayesian classification to categorize stroke patients by etiology. Performance was assessed both by error rate and with a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by two neurologists and to classification by the Bayesian method using “low cost” local and subjective probabilities. We conclude that for some...

  17. Bayesian target tracking based on particle filter

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    To be able to deal with nonlinear or non-Gaussian problems, particle filters have been studied by many researchers. Building on the particle filter, the extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking. Novel techniques such as the Markov chain Monte Carlo (MCMC) move step and resampling are also introduced into Bayesian target tracking, and the simulation results confirm that the improved particle filter with these techniques outperforms the basic one.
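
    For illustration, here is a basic bootstrap particle filter on a toy one-dimensional tracking model, using the transition prior as the proposal rather than the EKF proposal discussed in the record; the model coefficients and noise levels are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        N, T = 500, 50
        q, r = 0.5, 1.0               # process / measurement noise std (toy values)

        # Toy 1D target: x_t = 0.9 x_{t-1} + q*noise, y_t = x_t + r*noise
        x_true = np.zeros(T); y = np.zeros(T)
        for t in range(1, T):
            x_true[t] = 0.9 * x_true[t - 1] + q * rng.normal()
            y[t] = x_true[t] + r * rng.normal()

        particles = rng.normal(0.0, 1.0, N)
        estimates = []
        for t in range(1, T):
            particles = 0.9 * particles + q * rng.normal(size=N)  # propagate (prior proposal)
            w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)      # Gaussian likelihood
            w /= w.sum()
            estimates.append(np.sum(w * particles))               # posterior mean
            idx = rng.choice(N, size=N, p=w)                      # multinomial resampling
            particles = particles[idx]
        print("final estimate:", estimates[-1], "true:", x_true[-1])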

  18. ANALYSIS OF BAYESIAN CLASSIFIER ACCURACY

    Directory of Open Access Journals (Sweden)

    Felipe Schneider Costa

    2013-01-01

    The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable selection process based on the chi-squared test to verify the existence of dependence between variables in the data model, in order to identify the reasons that prevent a Bayesian network from providing good performance. A detailed analysis of the data is also proposed, unlike in other existing work, as well as adjustments for the case of limit values between two adjacent classes. Furthermore, variable weights, calculated with a mutual information function, are used in the computation of the posterior probabilities. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in error rate was observed: the naïve Bayesian network's error rate dropped from twenty-five percent to five percent relative to the initial classification results, and in the hierarchical network the error rate dropped from fifteen percent to zero.
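
    A hedged sketch of a weighted naïve Bayes classifier in the spirit described above: per-feature weights are taken from a mutual information function and used to scale the per-feature log-likelihoods (the chi-squared dependence screening step is omitted for brevity). The synthetic binary dataset is invented.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        rng = np.random.default_rng(4)
        # Synthetic binary data: 200 samples, 4 binary features, binary class.
        y = rng.integers(0, 2, 200)
        X = np.column_stack([(y + rng.integers(0, 2, 200)) % 2,  # informative-ish
                             rng.integers(0, 2, 200),            # noise
                             y,                                  # fully informative
                             rng.integers(0, 2, 200)])           # noise

        w = mutual_info_classif(X, y, discrete_features=True)    # per-feature weights
        w = w / w.max() if w.max() > 0 else np.ones_like(w)

        def predict(x):
            """Weighted naive Bayes: per-feature log-likelihoods scaled by w[j]."""
            scores = []
            for c in (0, 1):
                Xc = X[y == c]
                logp = np.log(len(Xc) / len(X))
                for j, v in enumerate(x):
                    pj = (np.sum(Xc[:, j] == v) + 1) / (len(Xc) + 2)  # Laplace smoothing
                    logp += w[j] * np.log(pj)
                scores.append(logp)
            return int(np.argmax(scores))

        print("prediction for [1, 0, 1, 0]:", predict([1, 0, 1, 0]))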

  19. Bayesian inference on proportional elections.

    Science.gov (United States)

    Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio

    2015-01-01

    Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional voting systems do not necessarily guarantee that the candidate with the largest percentage of votes will be elected, so traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper is to perform Bayesian inference on proportional elections under the Brazilian system of seat distribution. More specifically, a methodology was developed to answer the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian framework using the Monte Carlo simulation technique, and the developed methodology was applied to data from the 2010 Brazilian elections for Members of the Legislative Assembly and the Federal Chamber of Deputies. A performance rate is also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
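
    A minimal sketch of the Monte Carlo idea, assuming a Dirichlet posterior over vote shares and, purely for illustration, a d'Hondt largest-averages allocation in place of the actual Brazilian seat distribution rules; the poll counts and seat total are hypothetical.

        import numpy as np

        rng = np.random.default_rng(5)
        poll = np.array([420, 310, 180, 90])   # hypothetical poll counts, 4 parties
        seats_total = 10
        hits = np.zeros(len(poll))

        for _ in range(10_000):
            shares = rng.dirichlet(poll + 1)   # posterior draw (uniform Dirichlet prior)
            # largest-averages (d'Hondt) seat allocation for this draw
            seats = np.zeros(len(poll), dtype=int)
            for _ in range(seats_total):
                quotients = shares / (seats + 1)
                seats[np.argmax(quotients)] += 1
            hits += seats > 0                  # did the party get at least one seat?

        print("P(at least one seat):", hits / 10_000)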

  20. Elements of Bayesian experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Sivia, D.S. [Rutherford Appleton Lab., Oxon (United Kingdom)

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  1. Topics in Nonparametric Bayesian Statistics

    OpenAIRE

    2003-01-01

    The intersection set of Bayesian and nonparametric statistics was almost empty until about 1973, but now seems to be growing at a healthy rate. This chapter gives an overview of various theoretical and applied research themes inside this field, partly complementing and extending recent reviews of Dey, Müller and Sinha (1998) and Walker, Damien, Laud and Smith (1999). The intention is not to be complete or exhaustive, but rather to touch on research areas of interest, partly by example.

  2. Bayesian Optimization for Adaptive MCMC

    OpenAIRE

    Mahendran, Nimalan; Wang, Ziyu; Hamze, Firas; De Freitas, Nando

    2011-01-01

    This paper proposes a new randomized strategy for adaptive MCMC using Bayesian optimization. This approach applies to non-differentiable objective functions and trades off exploration and exploitation to reduce the number of potentially costly objective function evaluations. We demonstrate the strategy in the complex setting of sampling from constrained, discrete and densely connected probabilistic graphical models where, for each variation of the problem, one needs to adjust the parameters o...

  3. Unsupervised Bayesian decomposition of multiunit EMG recordings using Tabu search.

    Science.gov (United States)

    Ge, Di; Le Carpentier, Eric; Farina, Dario

    2010-03-01

    Intramuscular electromyography (EMG) signals are usually decomposed with semiautomatic procedures that involve the interaction with an expert operator. In this paper, a Bayesian statistical model and a maximum a posteriori (MAP) estimator are used to solve the problem of multiunit EMG decomposition in a fully automatic way. The MAP estimation exploits both the likelihood of the reconstructed EMG signal and some physiological constraints, such as the discharge pattern regularity and the refractory period of muscle fibers, as prior information integrated in a Bayesian framework. A Tabu search is proposed to efficiently tackle the nondeterministic polynomial-time-hard problem of optimization w.r.t the motor unit discharge patterns. The method is fully automatic and was tested on simulated and experimental EMG signals. Compared with the semiautomatic decomposition performed by an expert operator, the proposed method resulted in an accuracy of 90.0% +/- 3.8% when decomposing single-channel intramuscular EMG signals recorded from the abductor digiti minimi muscle at contraction forces of 5% and 10% of the maximal force. The method can also be applied to the automatic identification and classification of spikes from other neural recordings. PMID:19457743
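
    To illustrate the Tabu search component in isolation, the sketch below applies it to a toy binary objective standing in for the MAP score over discharge patterns: the best non-tabu single-bit flip is taken at each step, even if it worsens the cost, with a short tabu tenure preventing immediate reversals. All settings are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 20
        target = rng.integers(0, 2, n)           # hidden optimum (toy stand-in)

        def cost(x):
            """Toy objective; in the paper this role is played by the MAP score."""
            return int(np.sum(x != target))

        x = rng.integers(0, 2, n)
        best, best_cost = x.copy(), cost(x)
        tabu = []                                # recently flipped bit positions
        TENURE, ITERS = 5, 200

        for _ in range(ITERS):
            # evaluate all single-bit-flip neighbours that are not tabu
            moves = [j for j in range(n) if j not in tabu]
            costs = []
            for j in moves:
                x[j] ^= 1
                costs.append(cost(x))
                x[j] ^= 1
            j = moves[int(np.argmin(costs))]     # best admissible move (even if worse)
            x[j] ^= 1
            tabu.append(j)
            if len(tabu) > TENURE:
                tabu.pop(0)
            if cost(x) < best_cost:
                best, best_cost = x.copy(), cost(x)

        print("best cost:", best_cost)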

  4. Chaos theory applied to input space representation of autonomous neural network-based short-term load forecasting models; Teoria do caos aplicada à definição do conjunto de entradas de modelos neurais autônomos para previsão de carga em curto prazo

    Directory of Open Access Journals (Sweden)

    Vitor Hugo Ferreira

    2011-12-01

    After 1991, the literature on load forecasting has been dominated by neural network based proposals. However, one major risk in using neural models is the possibility of excessive training, i.e., data overfitting. The extent of the nonlinearity provided by neural network based load forecasters, which depends on the input space representation, has been adjusted using heuristic procedures, and the empirical nature of these procedures makes their application cumbersome and time consuming. Autonomous modeling, including automatic input selection and model complexity control, has been proposed recently for short-term load forecasting; however, these techniques require the specification of an initial input set from which the model selects the most relevant variables. This paper explores chaos theory, as a tool from nonlinear time series analysis, to automatically select the lags of the load series that will be used by the neural models. Bayesian inference applied to multilayer perceptrons and relevance vector machines is used in the development of the autonomous neural models.
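
    A hedged sketch of one standard tool from nonlinear time-series analysis for this task: choosing the embedding lag at the first local minimum of the average mutual information (AMI). The synthetic daily-cycle "load" series and bin count are invented; this is not the authors' full procedure.

        import numpy as np

        rng = np.random.default_rng(7)
        t = np.arange(2000)
        # Synthetic "load" series: daily-style cycle plus noise (illustrative)
        x = np.sin(2 * np.pi * t / 24) + 0.3 * rng.normal(size=t.size)

        def ami(series, lag, bins=16):
            """Average mutual information between the series and its lagged copy."""
            a, b = series[:-lag], series[lag:]
            pxy, _, _ = np.histogram2d(a, b, bins=bins)
            pxy /= pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

        scores = [ami(x, lag) for lag in range(1, 30)]   # scores[k] is lag k+1
        # heuristic: the first local minimum of AMI suggests the embedding delay
        for k in range(1, len(scores) - 1):
            if scores[k] < scores[k - 1] and scores[k] < scores[k + 1]:
                print("suggested embedding lag:", k + 1)
                break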

  5. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  6. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic

  7. The Bayesian Modelling Of Inflation Rate In Romania

    OpenAIRE

    Mihaela Simionescu

    2014-01-01

    Bayesian econometrics has seen a considerable increase in popularity in recent years, attracting various groups of researchers in the economic sciences as well as specialists in econometrics, commerce, industry, marketing, finance, microeconomics, macroeconomics and other domains. The purpose of this research is to provide an introduction to the Bayesian approach as applied in economics, starting with Bayes' theorem. For Bayesian linear regression models the methodology of estim...

  8. Algorithms and Complexity Results for Exact Bayesian Structure Learning

    OpenAIRE

    Sebastian Ordyniak; Stefan Szeider

    2012-01-01

    Bayesian structure learning is the NP-hard problem of discovering a Bayesian network that optimally represents a given set of training data. In this paper we study the computational worst-case complexity of exact Bayesian structure learning under graph theoretic restrictions on the super-structure. The super-structure (a concept introduced by Perrier, Imoto, and Miyano, JMLR 2008) is an undirected graph that contains as subgraphs the skeletons of solution networks. Our results apply to severa...

  9. Classification by a neural network approach applied to non destructive testing; Classification par approches connexionnistes appliquee au controle non destructif

    Energy Technology Data Exchange (ETDEWEB)

    Lefevre, M.; Preteux, F.; Lavayssiere, B.

    1995-12-31

    Radiography is used by EDF for pipe inspection in nuclear power plants in order to detect defects. The radiographs obtained are then digitized in a well-defined protocol. EDF's aim is to develop a non destructive testing system for recognizing defects. In this paper, we describe the procedure for recognizing areas with defects. We first present the digitization protocol, characterize the poor quality of the images under study, and propose a procedure to enhance defects. We then examine the problem raised by the choice of good features for classification. After showing that statistical or standard textural features such as homogeneity, entropy or contrast are not relevant, we develop a geometrical-statistical approach based on the cooperation between a study of signal correlations and an analysis of regional extrema. The principle consists of analysing and comparing, for areas with and without defects, the evolution of conditional probability matrices for increasing neighborhood sizes, the shape of variograms and the location of regional minima. We demonstrate that the anisotropy and surface of the series of 'comet tails' associated with the probability matrices, the variogram slopes and statistical indices, and the location of regional extrema are features able to discriminate areas with defects from areas without any. The classification is then realized by a neural network, whose structure, properties and learning mechanisms are detailed. Finally we discuss the results. (authors). 21 refs., 5 figs.

  10. Classifier performance estimation under the constraint of a finite sample size: resampling schemes applied to neural network classifiers.

    Science.gov (United States)

    Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir

    2008-01-01

    In a practical classifier design problem the sample size is limited, and the available finite sample needs to be used both to design a classifier and to predict the classifier's performance for the true population. Since a larger sample is more representative of the population, it is advantageous to design the classifier with all the available cases, and to use a resampling technique for performance prediction. We conducted a Monte Carlo simulation study to compare the ability of different resampling techniques in predicting the performance of a neural network (NN) classifier designed with the available sample. We used the area under the receiver operating characteristic curve as the performance index for the NN classifier. We investigated resampling techniques based on the cross-validation, the leave-one-out method, and three different types of bootstrapping, namely, the ordinary, .632, and .632+ bootstrap. Our results indicated that, under the study conditions, there can be a large difference in the accuracy of the prediction obtained from different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited data set. PMID:18234468
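
    A minimal sketch of the .632 bootstrap performance estimate discussed here, assuming scikit-learn, a small synthetic dataset and AUC as the performance index; the network size and number of bootstrap replications are illustrative, and the .632+ refinement is omitted.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(8)
        n, d = 120, 5
        X = rng.normal(size=(n, d))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.0, n) > 0).astype(int)

        def fit_auc(Xtr, ytr, Xte, yte):
            clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=1000, random_state=0)
            clf.fit(Xtr, ytr)
            return roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])

        apparent = fit_auc(X, y, X, y)             # resubstitution (optimistic)
        oob_aucs = []
        for _ in range(20):                        # bootstrap replications
            idx = rng.integers(0, n, n)            # sample n cases with replacement
            oob = np.setdiff1d(np.arange(n), idx)  # out-of-bag cases
            if len(np.unique(y[oob])) < 2:
                continue                           # AUC needs both classes present
            oob_aucs.append(fit_auc(X[idx], y[idx], X[oob], y[oob]))

        auc_632 = 0.368 * apparent + 0.632 * np.mean(oob_aucs)
        print("apparent AUC:", round(apparent, 3), " .632 estimate:", round(auc_632, 3))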

  11. Bayesian calibration of car-following models

    NARCIS (Netherlands)

    Van Hinsbergen, C.P.IJ.; Van Lint, H.W.C.; Hoogendoorn, S.P.; Van Zuylen, H.J.

    2010-01-01

    Recent research has revealed that there exist large inter-driver differences in car-following behavior such that different car-following models may apply to different drivers. This study applies Bayesian techniques to the calibration of car-following models, where prior distributions on each model p

  12. Temporal Difference Learning for the Game Tic-Tac-Toe 3D: Applying Structure to Neural Networks

    NARCIS (Netherlands)

    van de Steeg, Michiel; Drugan, Madalina; Wiering, Marco

    2015-01-01

    When reinforcement learning is applied to large state spaces, such as those occurring in playing board games, the use of a good function approximator to learn to approximate the value function is very important. In previous research, multilayer perceptrons have often been quite successfully used as

  13. A Novel Approach for Image Recognition to Enhance the Quality of Decision Making by Applying Degree of Correlation Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Raju Dara

    2014-10-01

    Many diverse applications in science and technology rely on recognition as a core component of their solutions. A recognition scenario involves a set of decisions, and in most applications the action taken for each decision relies entirely on the quality of the extracted information; quality decision making therefore depends on processing speed and precision, both of which are tied to the recognition methodology. In this article, a new rule based on the degree of correlation is formulated to characterize the generalized recognition constraint, and its application to image-based information extraction is explored. A machine learning approach, the feed-forward artificial neural network, is applied to attain the expected quality of interpretation. The proposed method offers advantages such as low memory requirements, a high level of security for stored data, high speed and a simple implementation approach.

  14. Bayesian Mediation Analysis

    Science.gov (United States)

    Yuan, Ying; MacKinnon, David P.

    2009-01-01

    In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…

  15. Neural Networks and Photometric Redshifts

    OpenAIRE

    Tagliaferri, Roberto; Longo, Giuseppe; Andreon, Stefano; Capozziello, Salvatore; Donalek, Ciro; Giordano, Gerardo

    2002-01-01

    We present a neural network based approach to the determination of photometric redshifts. The method was tested on the Sloan Digital Sky Survey Early Data Release (SDSS-EDR), reaching an accuracy comparable to and, in some cases, better than SED template fitting techniques. Different neural network architectures have been tested, and the combination of a Multi Layer Perceptron with 1 hidden layer (22 neurons) operated in a Bayesian framework, with a Self Organizing Map used to estimate the accuracy...

  16. An introduction to Gaussian Bayesian networks.

    Science.gov (United States)

    Grzegorczyk, Marco

    2010-01-01

    The extraction of regulatory networks and pathways from postgenomic data is important for drug discovery and development, as the extracted pathways reveal how genes or proteins regulate each other. Following up on the seminal paper of Friedman et al. (J Comput Biol 7:601-620, 2000), Bayesian networks have been widely applied as a popular tool to this end in systems biology research. Their popularity stems from the tractability of the marginal likelihood of the network structure, which is a consistent scoring scheme in the Bayesian context. This score is based on an integration over the entire parameter space, for which highly expensive computational procedures have to be applied when using more complex models based on differential equations; for example, see (Bioinformatics 24:833-839, 2008). This chapter gives an introduction to reverse engineering regulatory networks and pathways with Gaussian Bayesian networks, that is, Bayesian networks with the probabilistic BGe scoring metric [see (Geiger and Heckerman 235-243, 1995)]. In the BGe model, the data are assumed to stem from a Gaussian distribution and a normal-Wishart prior is assigned to the unknown parameters. Gaussian Bayesian network methodology for analysing static observational, static interventional as well as dynamic (observational) time series data will be described in detail in this chapter. Finally, we apply these Bayesian network inference methods (1) to observational and interventional flow cytometry (protein) data from the well-known RAF pathway to evaluate the global network reconstruction accuracy of Bayesian network inference and (2) to dynamic gene expression time series data of nine circadian genes in Arabidopsis thaliana to reverse engineer the unknown regulatory network topology for this domain. PMID:20824469

  17. The spatial prediction of landslide susceptibility applying artificial neural network and logistic regression models: A case study of Inje, Korea

    OpenAIRE

    Saro Lee; Woo Jeon Seong; Kwan-Young Oh; Moung-Jin Lee

    2016-01-01

    The aim of this study is to predict landslide susceptibility through spatial analysis, by applying a GIS-based statistical methodology. Logistic regression models along with an artificial neural network were applied and validated to analyze landslide susceptibility in Inje, Korea. Landslide occurrence areas in the study region were identified based on interpretations of optical remote sensing data (aerial photographs) followed by field surveys. A spatial database considering fo...

  18. Evaluation of multilayer perceptron and self-organizing map neural network topologies applied on microstructure segmentation from metallographic images

    OpenAIRE

    de Albuquerque, Victor Hugo C.; Auzuir Ripardo de Alexandria; Paulo César Cortez; João Manuel R. S. Tavares

    2009-01-01

    Artificial neural networks have been used intensively in many domains to accomplish different computational tasks. One of these tasks is the segmentation of objects in images, such as segmenting microstructures in metallographic images, and several network topologies have been proposed for that goal. This paper presents a comparative analysis between multilayer perceptron and self-organizing map topologies applied to the segmentation of microstructures in metallographic images. The multilayer perceptron neu...

  19. FUZZY CONTROLLER AND NEURAL ESTIMATOR APPLIED TO CONTROL A SYSTEM POWERED BY THREE-PHASE INDUCTION MOTOR

    OpenAIRE

    Élida Fernanda Xavier Júlio; Simplício Arnaud da Silva; Cícero da Rocha Souto

    2015-01-01

    In this study, a control strategy is presented to control the position and feed rate of a milling machine table powered by a three-phase induction motor when machining pieces made of different types of material: steel, brass and nylon. For the development of the control strategy, the vector control technique was applied to drive the three-phase induction machines. The estimation of the electromagnetic torque of the motor was used to determine the machining feed rate fo...

  20. Holographic neural networks

    OpenAIRE

    Manger, R

    1998-01-01

    Holographic neural networks are a new and promising type of artificial neural networks. This article gives an overview of the holographic neural technology and its possibilities. The theoretical principles of holographic networks are first reviewed. Then, some other papers are presented, where holographic networks have been applied or experimentally evaluated. A case study dealing with currency exchange rate prediction is described in more detail.

  1. Bayesian Design Space applied to Pharmaceutical Development

    OpenAIRE

    Lebrun, Pierre

    2012-01-01

    Given guidelines such as the Q8 document published by the International Conference on Harmonization (ICH), which describe the "Quality by Design" paradigm for pharmaceutical development, the aim of this work is to provide a complete methodology addressing this problem. As a result, various Design Spaces were obtained for different analytical methods and a manufacturing process. In Q8, the Design Space has been defined as "the multidimensional combination and interaction of input...

  2. Bayesian item selection in constrained adaptive testing using shadow tests

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specificati

  3. Bayesian Item Selection in Constrained Adaptive Testing Using Shadow Tests

    Science.gov (United States)

    Veldkamp, Bernard P.

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item…

  4. Bayesian regularisation methods in a hybrid MLP-HMM system.

    OpenAIRE

    Renals, Steve; MacKay, David

    1993-01-01

    We have applied Bayesian regularisation methods to multi-layer perceptron (MLP) training in the context of a hybrid MLP-HMM (hidden Markov model) continuous speech recognition system. The Bayesian framework adopted here allows an objective setting of the regularisation parameters according to the training data. Experiments have been carried out on the ARPA Resource Management database.

  5. Bayesian Modelling of fMRI Time Series

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward

    2000-01-01

    We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte...

  6. Bayesian Modelling of fMRI Time Series

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward

    We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte...

  7. Phase Transitions of Neural Networks

    OpenAIRE

    Kinzel, Wolfgang

    1997-01-01

    The cooperative behaviour of interacting neurons and synapses is studied using models and methods from statistical physics. The competition between training error and entropy may lead to discontinuous properties of the neural network. This is demonstrated for a few examples: Perceptron, associative memory, learning from examples, generalization, multilayer networks, structure recognition, Bayesian estimate, on-line training, noise estimation and time series generation.

  8. SOMBI: Bayesian identification of parameter relations in unstructured cosmological data

    CERN Document Server

    Frank, Philipp; Enßlin, Torsten A

    2016-01-01

    This work describes the implementation and application of a correlation determination method based on Self Organizing Maps and Bayesian Inference (SOMBI). SOMBI aims to automatically identify relations between different observed parameters in unstructured cosmological or astrophysical surveys by identifying data clusters in high-dimensional datasets via the Self Organizing Map neural network algorithm. Parameter relations are then revealed by means of Bayesian inference within the respective identified data clusters. Specifically, such relations are assumed to be parametrized as a polynomial of unknown order. The Bayesian approach results in a posterior probability distribution function for the respective polynomial coefficients. To decide which polynomial order suffices to describe the correlation structures in the data, we include a model selection method, the Bayesian Information Criterion, in the analysis. The performance of the SOMBI algorithm is tested with mock data. As illustration we also provide ...
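
    As a hedged illustration of the model-selection step, the sketch below fits polynomials of increasing order to synthetic data and scores each with a simple Gaussian-likelihood BIC (up to an additive constant); the data-generating polynomial and noise level are invented.

        import numpy as np

        rng = np.random.default_rng(9)
        x = np.linspace(-1, 1, 80)
        y = 1.0 + 2.0 * x - 1.5 * x ** 2 + rng.normal(0, 0.2, x.size)  # true order: 2

        n = x.size
        for order in range(5):
            coeffs = np.polyfit(x, y, order)             # least-squares fit
            resid = y - np.polyval(coeffs, x)
            sigma2 = np.mean(resid ** 2)
            k = order + 2                                # coefficients + noise variance
            bic = n * np.log(sigma2) + k * np.log(n)     # up to an additive constant
            print(f"order {order}: BIC = {bic:.1f}")
        # The minimum-BIC order is selected; here it should recover order 2.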

  9. Relating functional connectivity in V1 neural circuits and 3D natural scenes using Boltzmann machines.

    Science.gov (United States)

    Zhang, Yimeng; Li, Xiong; Samonds, Jason M; Lee, Tai Sing

    2016-03-01

    Bayesian theory has provided a compelling conceptualization for perceptual inference in the brain. Central to Bayesian inference is the notion of statistical priors. To understand the neural mechanisms of Bayesian inference, we need to understand the neural representation of statistical regularities in the natural environment. In this paper, we investigated empirically how statistical regularities in natural 3D scenes are represented in the functional connectivity of disparity-tuned neurons in the primary visual cortex of primates. We applied a Boltzmann machine model to learn from 3D natural scenes, and found that the units in the model exhibited cooperative and competitive interactions, forming a "disparity association field", analogous to the contour association field. The cooperative and competitive interactions in the disparity association field are consistent with constraints of computational models for stereo matching. In addition, we simulated neurophysiological experiments on the model, and found the results to be consistent with neurophysiological data in terms of the functional connectivity measurements between disparity-tuned neurons in the macaque primary visual cortex. These findings demonstrate that there is a relationship between the functional connectivity observed in the visual cortex and the statistics of natural scenes. They also suggest that the Boltzmann machine can be a viable model for conceptualizing computations in the visual cortex and, as such, can be used to predict neural circuits in the visual cortex from natural scene statistics. PMID:26712581

  10. Bayesian Image Reconstruction Based on Voronoi Diagrams

    CERN Document Server

    Cabrera, G F; Hitschfeld, N

    2007-01-01

    We present a Bayesian Voronoi image reconstruction technique (VIR) for interferometric data. Bayesian analysis applied to the inverse problem allows us to derive the a-posteriori probability of a novel parameterization of interferometric images. We use a variable Voronoi diagram as our model in place of the usual fixed pixel grid. A quantization of the intensity field allows us to calculate the likelihood function and a-priori probabilities. The Voronoi image is optimized including the number of polygons as free parameters. We apply our algorithm to deconvolve simulated interferometric data. Residuals, restored images and chi^2 values are used to compare our reconstructions with fixed grid models. VIR has the advantage of modeling the image with few parameters, obtaining a better image from a Bayesian point of view.

  11. Collaborative Kalman Filtration: Bayesian Perspective

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil

    Lisabon, Portugalsko: Institute for Systems and Technologies of Information, Control and Communication (INSTICC), 2014, s. 468-474. ISBN 978-989-758-039-0. [11th International Conference on Informatics in Control, Automation and Robotics - ICINCO 2014. Vien (AT), 01.09.2014-03.09.2014] R&D Projects: GA ČR(CZ) GP14-06678P Institutional support: RVO:67985556 Keywords : Bayesian analysis * Kalman filter * distributed estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/AS/dedecius-0431324.pdf

  12. Practical Bayesian Tomography

    CERN Document Server

    Granade, Christopher; Cory, D G

    2015-01-01

    In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.

  13. Sediment classification using neural networks: An example from the site-U1344A of IODP Expedition 323 in the Bering Sea

    Science.gov (United States)

    Ojha, Maheswar; Maiti, Saumen

    2016-03-01

    A novel approach based on the Bayesian neural network (BNN) concept has been implemented for classifying sediment boundaries using downhole log data obtained during Integrated Ocean Drilling Program (IODP) Expedition 323 in the Bering Sea slope region. The Bayesian framework, in conjunction with the Markov chain Monte Carlo (MCMC)/hybrid Monte Carlo (HMC) learning paradigm, has been applied to constrain the lithology boundaries using density, density porosity, gamma ray, sonic P-wave velocity and electrical resistivity at Hole U1344A. We demonstrate the effectiveness of our supervised classification methodology by comparing our findings with those of a conventional neural network and of a Bayesian neural network optimized by the scaled conjugate gradient method (SCG), and we test the robustness of the algorithm in the presence of red noise in the data. The Bayesian results based on the HMC algorithm (BNN.HMC) resolve detailed finer structures at certain depths in addition to the main lithologies, such as silty clay, diatom clayey silt and sandy silt. Our method also recovers lithology information from a zone of no core recovery at depths between 615 and 655 m wireline-log matched depth below sea floor. Our analyses demonstrate that the BNN based approach is a robust means for classifying the complex lithology successions at Hole U1344A, which could be very useful for other studies and for understanding oceanic crustal inhomogeneity and structural discontinuities.

  14. Nonparametric Bayesian Logic

    OpenAIRE

    Carbonetto, Peter; Kisynski, Jacek; De Freitas, Nando; Poole, David L

    2012-01-01

    The Bayesian Logic (BLOG) language was recently developed for defining first-order probability models over worlds with unknown numbers of objects. It handles important problems in AI, including data association and population estimation. This paper extends BLOG by adopting generative processes over function spaces - known as nonparametrics in the Bayesian literature. We introduce syntax for reasoning about arbitrary collections of objects, and their properties, in an intuitive manner. By expl...

  15. Bayesian default probability models

    OpenAIRE

    Andrlíková, Petra

    2014-01-01

    This paper proposes a methodology for default probability estimation for low default portfolios, where the statistical inference may become troublesome. The author suggests using logistic regression models with the Bayesian estimation of parameters. The piecewise logistic regression model and Box-Cox transformation of credit risk score is used to derive the estimates of probability of default, which extends the work by Neagu et al. (2009). The paper shows that the Bayesian models are more acc...

  16. Inverse problems in the Bayesian framework

    International Nuclear Information System (INIS)

    The history of Bayesian methods dates back to the original works of Reverend Thomas Bayes and Pierre-Simon Laplace: the former laid down some of the basic principles on inverse probability in his classic article ‘An essay towards solving a problem in the doctrine of chances’ that was read posthumously in the Royal Society in 1763. Laplace, on the other hand, in his ‘Memoirs on inverse probability’ of 1774 developed the idea of updating beliefs and wrote down the celebrated Bayes’ formula in the form we know today. Although not identified yet as a framework for investigating inverse problems, Laplace used the formalism very much in the spirit it is used today in the context of inverse problems, e.g., in his study of the distribution of comets. With the evolution of computational tools, Bayesian methods have become increasingly popular in all fields of human knowledge in which conclusions need to be drawn based on incomplete and noisy data. Needless to say, inverse problems, almost by definition, fall into this category. Systematic work for developing a Bayesian inverse problem framework can arguably be traced back to the 1980s (the original first edition being published by Elsevier in 1987), although articles on Bayesian methodology applied to inverse problems, in particular in geophysics, had appeared much earlier. Today, as testified by the articles in this special issue, the Bayesian methodology as a framework for considering inverse problems has gained a lot of popularity, and it has integrated very successfully with many traditional inverse problems ideas and techniques, providing novel ways to interpret and implement traditional procedures in numerical analysis, computational statistics, signal analysis and data assimilation. The range of applications where the Bayesian framework has been fundamental goes from geophysics, engineering and imaging to astronomy, life sciences and economy, and continues to grow. There is no question that Bayesian

  17. Fuzzy Neural Networks for water level and discharge forecasting

    Science.gov (United States)

    Alvisi, Stefano; Franchini, Marco

    2010-05-01

    A new procedure for water level (or discharge) forecasting under uncertainty using artificial neural networks is proposed: uncertainty is expressed in the form of a fuzzy number. For this purpose, the parameters of the neural network, namely, the weights and biases, are represented by fuzzy numbers rather than crisp numbers. Through the application of the extension principle, the fuzzy number representative of the output variable (water level or discharge) is then calculated at each time step on the basis of a set of crisp inputs and fuzzy parameters of the neural network. The proposed neural network thus allows uncertainty to be taken into account at the forecasting stage: rather than providing only deterministic or crisp predictions, it yields predictions in terms of 'the discharge (or level) will fall between two values, indicated according to the level of credibility considered, whereas it will take on a certain value when the level of credibility is maximum'. The fuzzy parameters of the neural network are estimated using a calibration procedure that imposes a constraint whereby for an assigned h-level the envelope of the corresponding intervals representing the outputs (forecasted levels or discharges, calculated at different points in time) must include a prefixed percentage of observed values. The proposed model is applied to two different case studies. Specifically, the data related to the first case study are used to develop and test a flood event-based water level forecasting model, whereas the data related to the latter are used for continuous discharge forecasting. The results obtained are compared with those provided by other data-driven models - Bayesian neural networks (Neal, R.M. 1992, Bayesian training of backpropagation networks by the hybrid Monte Carlo method. Tech. Rep. CRG-TR-92-1, Dep. of Comput. Sci., Univ. of Toronto, Toronto, Ont., Canada.) and the Local Uncertainty Estimation Model (Shrestha D.L. and Solomatine D.P. 2006, Machine learning
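
    At a fixed h-level each fuzzy weight collapses to an interval, so a crisp input can be propagated through the network with interval arithmetic. A minimal Python sketch of one such forward pass for a single-hidden-layer net (the layer shapes and interval bookkeeping are illustrative assumptions, not the authors' implementation):

    import numpy as np

    def interval_dot(w_lo, w_hi, h_lo, h_hi):
        # Bound sum_i w_i * h_i when both factors are intervals: each term
        # is bounded by the min/max of its four corner products.
        prods = np.stack([w_lo * h_lo, w_lo * h_hi, w_hi * h_lo, w_hi * h_hi])
        return prods.min(axis=0).sum(), prods.max(axis=0).sum()

    def fuzzy_forecast(x, W1_lo, W1_hi, b1_lo, b1_hi, w2_lo, w2_hi, b2_lo, b2_hi):
        # Hidden layer: the input x is crisp, so the sign of each x_j decides
        # which weight bound yields the lower/upper pre-activation.
        x_pos, x_neg = np.maximum(x, 0.0), np.minimum(x, 0.0)
        a_lo = W1_lo @ x_pos + W1_hi @ x_neg + b1_lo
        a_hi = W1_hi @ x_pos + W1_lo @ x_neg + b1_hi
        h_lo, h_hi = np.tanh(a_lo), np.tanh(a_hi)   # tanh is monotone
        # Output layer: weights and hidden activations are now both intervals.
        y_lo, y_hi = interval_dot(w2_lo, w2_hi, h_lo, h_hi)
        return y_lo + b2_lo, y_hi + b2_hi

    The calibration constraint described in the abstract then amounts to choosing the weight intervals so that, at a given h-level, a prefixed percentage of the observed levels or discharges falls inside the forecast intervals.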

  18. A Bayesian Approach to Interactive Retrieval

    Science.gov (United States)

    Tague, Jean M.

    1973-01-01

    A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…

  19. Perfect Bayesian equilibrium. Part II: epistemic foundations

    OpenAIRE

    Bonanno, Giacomo

    2011-01-01

    In a companion paper we introduced a general notion of perfect Bayesian equilibrium which can be applied to arbitrary extensive-form games. The essential ingredient of the proposed definition is the qualitative notion of AGM-consistency. In this paper we provide an epistemic foundation for AGM-consistency based on the AGM theory of belief revision.

  20. Bayesian Methods for Radiation Detection and Dosimetry

    CERN Document Server

    Groer, Peter G

    2002-01-01

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high and low activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities, whose graphs show the uncertainty in pictorial form. We applied stochastic processes to develop a method for obtaining Bayesian estimates of 222Rn daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed comp...

  1. Dynamic Bayesian Combination of Multiple Imperfect Classifiers

    CERN Document Server

    Simpson, Edwin; Psorakis, Ioannis; Smith, Arfon

    2012-01-01

    Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision making strategies, and show that natural groupings are formed. Finally we present ...
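
    The full method infers each decision maker's confusion matrix with variational Bayes; the combination step itself is easy to sketch. Assuming the confusion matrices were already known (the numbers below are hypothetical), the posterior over the true label is:

    import numpy as np

    def combine(decisions, confusions, prior):
        """Posterior over the true label given independent classifiers.

        decisions:  observed label from each classifier
        confusions: per-classifier row-stochastic matrices with
                    confusions[k][j, c] = P(classifier k says c | true label j)
        prior:      prior probability of each true label
        """
        log_post = np.log(prior)
        for d, pi in zip(decisions, confusions):
            log_post += np.log(pi[:, d])
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

    # Toy example: one reliable and one near-random binary classifier.
    reliable = np.array([[0.9, 0.1], [0.2, 0.8]])
    noisy = np.array([[0.55, 0.45], [0.5, 0.5]])
    print(combine([1, 0], [reliable, noisy], np.array([0.5, 0.5])))

    The reliable classifier dominates the combined posterior, which is the behaviour the abstract describes when base decisions vary enormously in reliability.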

  2. Dimensionality reduction in Bayesian estimation algorithms

    Directory of Open Access Journals (Sweden)

    G. W. Petty

    2013-03-01

    An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database and the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument noise) component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals – whether Bayesian or not – lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.
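
    A rough sketch of the two ingredients, under simplifying assumptions (Gaussian likelihood, known background/noise covariance; all variable and function names here are invented for illustration):

    import numpy as np

    def build_pseudochannels(TB_db, noise_cov, M=1):
        # Stage 1: whiten the channels with the background/noise covariance.
        L = np.linalg.cholesky(np.linalg.inv(noise_cov))
        Z = TB_db @ L
        # Stage 2: principal components of the whitened database.
        _, _, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
        return L @ Vt[:M].T          # N-channel -> M-pseudochannel projection

    def bayesian_retrieve(tb_obs, TB_db, R_db, proj):
        # Bayesian (Monte Carlo) retrieval: weight the database rain rates
        # by a Gaussian likelihood in the unit-noise pseudochannel space.
        z_obs, Z_db = tb_obs @ proj, TB_db @ proj
        w = np.exp(-0.5 * np.sum((Z_db - z_obs) ** 2, axis=1))
        return np.sum(w * R_db) / np.sum(w)

    Because the whitening step makes the noise approximately diagonal with unit magnitude in the pseudochannel space, the diagonal-covariance assumption criticized in the abstract becomes harmless there.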

  3. Bayesian Estimation of Thermonuclear Reaction Rates

    CERN Document Server

    Iliadis, Christian; Coc, Alain; Timmes, Frank; Starrfield, Sumner

    2016-01-01

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied in the past to this problem, all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extra-solar planets, gravitational waves, and type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present the first astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the d(p,$\\gamma$)$^3$He, $^3$He($^3$He,2p)$^4$He, and $^3$He($\\alpha$,$\\gamma$)$^7$Be reactions,...

  4. Bayesian and frequentist inequality tests

    OpenAIRE

    David M. Kaplan; Zhuo, Longhao

    2016-01-01

    Bayesian and frequentist criteria are fundamentally different, but often posterior and sampling distributions are asymptotically equivalent (and normal). We compare Bayesian and frequentist hypothesis tests of inequality restrictions in such cases. For finite-dimensional parameters, if the null hypothesis is that the parameter vector lies in a certain half-space, then the Bayesian test has (frequentist) size $\\alpha$; if the null hypothesis is any other convex subspace, then the Bayesian test...

  5. Comparison of Lauritzen-Spiegelhalter and successive restrictions algorithms for computing probability distributions in Bayesian networks

    Science.gov (United States)

    Smail, Linda

    2016-06-01

    The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks, the Lauritzen-Spiegelhalter and the successive restrictions algorithms, from the perspective of computational efficiency. The two methods were applied for comparison to a Chest Clinic Bayesian Network. Results indicate that the successive restrictions algorithm shows more computational efficiency than the Lauritzen-Spiegelhalter algorithm.
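
    Both algorithms answer the same query: a posterior over some variables given evidence on others, with the remaining variables summed out. Brute-force enumeration on a toy two-edge network (the CPTs are invented) makes that target concrete; the algorithms compared in the paper differ only in how cleverly they organize these sums:

    # Toy network A -> B, A -> C with invented CPTs.
    p_A = {True: 0.3, False: 0.7}
    p_B_given_A = {True: 0.8, False: 0.1}   # P(B = True | A)
    p_C_given_A = {True: 0.6, False: 0.2}   # P(C = True | A)

    def joint(a, b, c):
        pb, pc = p_B_given_A[a], p_C_given_A[a]
        return p_A[a] * (pb if b else 1 - pb) * (pc if c else 1 - pc)

    def posterior_A(c_obs):
        # Sum out B, keep the evidence C = c_obs, normalize over A.
        scores = {a: sum(joint(a, b, c_obs) for b in (True, False))
                  for a in (True, False)}
        z = sum(scores.values())
        return {a: s / z for a, s in scores.items()}

    print(posterior_A(True))   # P(A | C = True)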

  6. Development of general purpose neural network software as applied to fracture mechanics, earthquake engineering, miniature samples test data and other reactor applications

    International Nuclear Information System (INIS)

    Currently, interpolation techniques are used to predict the mechanics of cracks in pipes of various sizes. To obviate the need for these interpolation techniques, which are not very accurate, artificial neural networks can be used as an alternative.

  7. Bayesian multiple target tracking

    CERN Document Server

    Streit, Roy L

    2013-01-01

    This second edition has undergone substantial revision from the 1999 first edition, recognizing that a lot has changed in the multiple target tracking field. One of the most dramatic changes is in the widespread use of particle filters to implement nonlinear, non-Gaussian Bayesian trackers. This book views multiple target tracking as a Bayesian inference problem. Within this framework it develops the theory of single target tracking, multiple target tracking, and likelihood ratio detection and tracking. In addition to providing a detailed description of a basic particle filter that implements

  8. BAYESIAN BICLUSTERING FOR PATIENT STRATIFICATION.

    Science.gov (United States)

    Khakabimamaghani, Sahand; Ester, Martin

    2016-01-01

    The move from Empirical Medicine towards Personalized Medicine has attracted attention to Stratified Medicine (SM). Some methods are provided in the literature for patient stratification, which is the central task of SM, however, there are still significant open issues. First, it is still unclear if integrating different datatypes will help in detecting disease subtypes more accurately, and, if not, which datatype(s) are most useful for this task. Second, it is not clear how we can compare different methods of patient stratification. Third, as most of the proposed stratification methods are deterministic, there is a need for investigating the potential benefits of applying probabilistic methods. To address these issues, we introduce a novel integrative Bayesian biclustering method, called B2PS, for patient stratification and propose methods for evaluating the results. Our experimental results demonstrate the superiority of B2PS over a popular state-of-the-art method and the benefits of Bayesian approaches. Our results agree with the intuition that transcriptomic data forms a better basis for patient stratification than genomic data. PMID:26776199

  9. Bayesian Geostatistical Design

    DEFF Research Database (Denmark)

    Diggle, Peter; Lophaven, Søren Nymand

    2006-01-01

    locations to, or deletion of locations from, an existing design, and prospective design, which consists of choosing positions for a new set of sampling locations. We propose a Bayesian design criterion which focuses on the goal of efficient spatial prediction whilst allowing for the fact that model...

  10. Bayesian Filters in Practice

    Czech Academy of Sciences Publication Activity Database

    Krejsa, Jiří; Věchet, S.

    Bratislava: Slovak University of Technology in Bratislava, 2010, s. 217-222. ISBN 978-80-227-3353-3. [Robotics in Education. Bratislava (SK), 16.09.2010-17.09.2010] Institutional research plan: CEZ:AV0Z20760514 Keywords : mobile robot localization * bearing only beacons * Bayesian filters Subject RIV: JD - Computer Applications, Robotics

  11. Bayesian Independent Component Analysis

    DEFF Research Database (Denmark)

    Winther, Ole; Petersen, Kaare Brandt

    2007-01-01

    In this paper we present an empirical Bayesian framework for independent component analysis. The framework provides estimates of the sources, the mixing matrix and the noise parameters, and is flexible with respect to choice of source prior and the number of sources and sensors. Inside the engine...

  12. Noncausal Bayesian Vector Autoregression

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

    We propose a Bayesian inferential procedure for the noncausal vector autoregressive (VAR) model that is capable of capturing nonlinearities and incorporating effects of missing variables. In particular, we devise a fast and reliable posterior simulator that yields the predictive distribution as a...

  13. Bayesian logistic regression analysis

    NARCIS (Netherlands)

    Van Erp, H.R.N.; Van Gelder, P.H.A.J.M.

    2012-01-01

    In this paper we present a Bayesian logistic regression analysis. It is found that if one wishes to derive the posterior distribution of the probability of some event, then, together with the traditional Bayes Theorem and the integrating out of nuisance parameters, the Jacobian transformation is an

  14. Comment on 'Artificial neural network based modeling of heated catalytic converter performance' by M. Ali Akcayol and Can Cinar [Applied Thermal Engineering 25 (2005) 2341

    Energy Technology Data Exchange (ETDEWEB)

    Sha, W. [Metals Research Group, School of Planning, Architecture and Civil Engineering, The Queen' s University of Belfast, Belfast BT7 1NN (United Kingdom)

    2007-02-15

    A paper has been published in Applied Thermal Engineering that uses a feed-forward artificial neural network (ANN) to model heated catalytic converter performance. The present paper discusses and comments on that work. The amount of data used in the paper is not enough to determine the number of fitting parameters in the network. Therefore, the model is not mathematically sound or justified. The conclusion is that ANN modeling should be used with care and with enough data. (author)

  15. On the use of back propagation and radial basis function neural networks in surface roughness prediction

    Science.gov (United States)

    Markopoulos, Angelos P.; Georgiopoulos, Sotirios; Manolakos, Dimitrios E.

    2016-03-01

    Various artificial neural network types are examined and compared for the prediction of surface roughness in manufacturing technology. The aim of the study is to evaluate different kinds of neural networks and observe their performance and applicability on the same problem. More specifically, feed-forward artificial neural networks are trained with three different back propagation algorithms, namely the adaptive back propagation algorithm of the steepest descent with the use of a momentum term, the back propagation Levenberg-Marquardt algorithm and the back propagation Bayesian algorithm. Moreover, radial basis function neural networks are examined. All the aforementioned algorithms are used for the prediction of surface roughness in milling, trained with the same input parameters and output data so that they can be compared. The advantages and disadvantages, in terms of the quality of the results, computational cost and time are identified. An algorithm for the selection of the spread constant is applied and tests are performed for the determination of the neural network with the best performance. The finally selected neural networks can satisfactorily predict the quality of the manufacturing process performed, through simulation and input-output surfaces for combinations of the input data, which correspond to milling cutting conditions.
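
    Of the training schemes compared, the simplest (steepest descent with a momentum term) can be sketched compactly; the layer sizes, learning rate, and momentum coefficient below are illustrative, and the Levenberg-Marquardt and Bayesian variants require substantially more machinery:

    import numpy as np

    rng = np.random.default_rng(1)

    def train_momentum(X, y, n_hidden=8, lr=0.01, mu=0.9, epochs=5000):
        """Steepest descent with momentum for a one-hidden-layer regressor."""
        n_in = X.shape[1]
        W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
        w2 = rng.normal(0, 0.5, n_hidden);         b2 = 0.0
        vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
        vw2 = np.zeros_like(w2); vb2 = 0.0
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)
            err = h @ w2 + b2 - y                     # dE/dyhat for 0.5*MSE
            g_w2 = h.T @ err / len(y); g_b2 = err.mean()
            dh = np.outer(err, w2) * (1 - h ** 2)     # backprop through tanh
            g_W1 = X.T @ dh / len(y); g_b1 = dh.mean(axis=0)
            # Momentum update: blend the previous step into the new one.
            vW1 = mu * vW1 - lr * g_W1; vb1 = mu * vb1 - lr * g_b1
            vw2 = mu * vw2 - lr * g_w2; vb2 = mu * vb2 - lr * g_b2
            W1 += vW1; b1 += vb1; w2 += vw2; b2 += vb2
        return W1, b1, w2, b2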

  16. Echo State Network with Bayesian Regularization for Forecasting Short-Term Power Production of Small Hydropower Plants

    Directory of Open Access Journals (Sweden)

    Gang Li

    2015-10-01

    As a novel recurrent neural network (RNN), an echo state network (ESN), which utilizes a reservoir with many randomly connected internal units and trains only the readout, avoids the increased complexity of training procedures faced by traditional RNNs. The ESN can cope with complex nonlinear systems because of its dynamical properties and has been applied in hydrological forecasting and load forecasting. Due to the linear regression algorithm usually adopted by the generic ESN to train the output weights, an ill-conditioned solution might occur, degrading the generalization ability of the ESN. In this study, the ESN with Bayesian regularization (BESN) is proposed for short-term power production forecasting of small hydropower (SHP) plants. According to Bayesian theory, the distribution of the weights in space is considered and the optimal output weights are obtained by maximizing the posterior probabilistic distribution. The evidence procedure is employed to gain optimal hyperparameters for the BESN model. The recorded data obtained from the SHP plants in two different counties, located in Yunnan Province, China, are utilized to validate the proposed model. For comparison, feed-forward neural networks with the Levenberg-Marquardt algorithm (LM-FNN) and the generic ESN are also employed. The results indicate that BESN outperforms both LM-FNN and ESN.
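
    A minimal ESN sketch conveys the division of labor (the dimensions and constants are illustrative; plain ridge regression stands in here for the evidence-maximizing Bayesian regularization of the BESN):

    import numpy as np

    rng = np.random.default_rng(2)

    def make_reservoir(n_in, n_res=200, rho=0.9, density=0.05):
        W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.normal(0.0, 1.0, (n_res, n_res))
        W *= rng.random((n_res, n_res)) < density        # sparsify
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
        return W_in, W

    def run_reservoir(U, W_in, W):
        x = np.zeros(W.shape[0])
        states = []
        for u in U:                        # drive the fixed random reservoir
            x = np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    def train_readout(S, y, lam=1e-2):
        # Only the readout is trained. Ridge regression is used here; in the
        # BESN, lam would instead be set by maximizing the Bayesian evidence.
        return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ y)

    The regularization term is exactly what guards against the ill-conditioned least-squares solutions mentioned in the abstract.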

  17. A tutorial on Bayesian Normal linear regression

    Science.gov (United States)

    Klauenberg, Katy; Wübbeler, Gerd; Mickan, Bodo; Harris, Peter; Elster, Clemens

    2015-12-01

    Regression is a common task in metrology and is often applied to calibrate instruments, evaluate inter-laboratory comparisons or determine fundamental constants, for example. Yet, a regression model cannot be uniquely formulated as a measurement function, and consequently the Guide to the Expression of Uncertainty in Measurement (GUM) and its supplements are not applicable directly. Bayesian inference, however, is well suited to regression tasks, and has the advantage of accounting for additional a priori information, which typically robustifies analyses. Furthermore, it is anticipated that future revisions of the GUM shall also embrace the Bayesian view. Guidance on Bayesian inference for regression tasks is largely lacking in metrology. For linear regression models with Gaussian measurement errors this tutorial gives explicit guidance. Divided into three steps, the tutorial first illustrates how a priori knowledge, which is available from previous experiments, can be translated into prior distributions from a specific class. These prior distributions have the advantage of yielding analytical, closed form results, thus avoiding the need to apply numerical methods such as Markov Chain Monte Carlo. Secondly, formulas for the posterior results are given, explained and illustrated, and software implementations are provided. In the third step, Bayesian tools are used to assess the assumptions behind the suggested approach. These three steps (prior elicitation, posterior calculation, and robustness to prior uncertainty and model adequacy) are critical to Bayesian inference. The general guidance given here for Normal linear regression tasks is accompanied by a simple, but real-world, metrological example. The calibration of a flow device serves as a running example and illustrates the three steps. It is shown that prior knowledge from previous calibrations of the same sonic nozzle enables robust predictions even for extrapolations.
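
    The closed-form flavour of the approach can be sketched with the standard Normal-inverse-Gamma conjugate update (the hyperparameter names follow the common textbook convention and are not necessarily the tutorial's notation):

    import numpy as np

    def nig_posterior(X, y, m0, V0, a0, b0):
        """Conjugate Normal-inverse-Gamma update for y = X theta + noise.

        Prior: theta | s2 ~ N(m0, s2 * V0), s2 ~ InvGamma(a0, b0).
        Returns the posterior hyperparameters (mN, VN, aN, bN).
        """
        V0_inv = np.linalg.inv(V0)
        VN = np.linalg.inv(V0_inv + X.T @ X)
        mN = VN @ (V0_inv @ m0 + X.T @ y)
        aN = a0 + len(y) / 2.0
        bN = b0 + 0.5 * (y @ y + m0 @ V0_inv @ m0 - mN @ np.linalg.inv(VN) @ mN)
        return mN, VN, aN, bN

    The posterior mean mN serves as the parameter estimate, and predictions at a new input follow a Student-t distribution with the updated hyperparameters, so no Markov Chain Monte Carlo is needed.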

  18. Smartphones Get Emotional: Mind Reading Images and Reconstructing the Neural Sources

    DEFF Research Database (Denmark)

    Petersen, Michael Kai; Stahlhut, Carsten; Stopczynski, Arkadiusz;

    2011-01-01

    Combining a 14 channel neuroheadset with a smartphone to capture and process brain imaging data, we demonstrate the ability to distinguish among emotional responses reflected in different scalp potentials when viewing pleasant and unpleasant pictures compared to neutral content. Clustering independent components across subjects, we are able to remove artifacts and identify common sources of synchronous brain activity, consistent with earlier findings based on conventional EEG equipment. Applying a Bayesian approach to reconstruct the neural sources not only facilitates differentiation of emotional responses...

  19. Bayesian item selection in constrained adaptive testing using shadow tests

    OpenAIRE

    Bernard P. Veldkamp

    2010-01-01

    Application of Bayesian item selection criteria in computerized adaptive testing might result in improvement of bias and MSE of the ability estimates. The question remains how to apply Bayesian item selection criteria in the context of constrained adaptive testing, where large numbers of specifications have to be taken into account in the item selection process. The Shadow Test Approach is a general purpose algorithm for administering constrained CAT. In this paper it is shown how the approac...

  20. Bayesian estimation of parameters in a regional hydrological model

    OpenAIRE

    Engeland, K.; Gottschalk, L.

    2002-01-01

    This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood funct...

  2. Bayesian learning and the psychology of rule induction

    OpenAIRE

    Endress, A

    2013-01-01

    In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum’s (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with ...

  3. Proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology

    CERN Document Server

    Hortúa, Héctor J

    2014-01-01

    These are the proceedings of the First Astrostatistics School: Bayesian Methods in Cosmology, held in Bogotá D.C., Colombia, June 9-13, 2014. The school was the first event in Colombia where statisticians and cosmologists from several universities in Bogotá met to discuss the statistical methods applied to cosmology, especially the use of Bayesian statistics in the study of the Cosmic Microwave Background (CMB), Baryonic Acoustic Oscillations (BAO), Large Scale Structure (LSS) and weak lensing.

  4. Bayesian Optimization in a Billion Dimensions via Random Embeddings

    OpenAIRE

    Wang, Ziyu; Hutter, Frank; Zoghi, Masrour; Matheson, David; De Freitas, Nando

    2013-01-01

    Bayesian optimization techniques have been successfully applied to robotics, planning, sensor placement, recommendation, advertising, intelligent user interfaces and automatic algorithm configuration. Despite these successes, the approach is restricted to problems of moderate dimension, and several workshops on Bayesian optimization have identified its scaling to high-dimensions as one of the holy grails of the field. In this paper, we introduce a novel random embedding idea to attack this pr...

  5. Kernel Approximate Bayesian Computation for Population Genetic Inferences

    OpenAIRE

    Nakagome, Shigeki; Fukumizu, Kenji; Mano, Shuhei

    2012-01-01

    Approximate Bayesian computation (ABC) is a likelihood-free approach for Bayesian inference based on a rejection algorithm that applies a tolerance of dissimilarity between summary statistics from observed and simulated data. Although several improvements to the algorithm have been proposed, none of these improvements avoid the following two sources of approximation: 1) lack of sufficient statistics: sampling is not from the true posterior density given data but from an approximate po...
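
    The baseline rejection scheme that kernel ABC refines takes only a few lines; in this toy sketch the model, summary statistic, and tolerance are all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(3)

    def abc_rejection(obs_stats, simulate, summarize, prior_sample,
                      eps=0.2, n_draws=20000):
        """Plain rejection ABC: keep parameter draws whose simulated summary
        statistics land within eps of the observed ones."""
        accepted = []
        for _ in range(n_draws):
            theta = prior_sample()
            s = summarize(simulate(theta))
            if np.linalg.norm(s - obs_stats) < eps:
                accepted.append(theta)
        return np.array(accepted)

    # Toy example: infer the mean of a Gaussian from its sample mean.
    obs = rng.normal(1.7, 1.0, size=50)
    post = abc_rejection(
        obs_stats=np.array([obs.mean()]),
        simulate=lambda th: rng.normal(th, 1.0, size=50),
        summarize=lambda x: np.array([x.mean()]),
        prior_sample=lambda: rng.normal(0.0, 5.0))
    print(post.mean(), post.std())

    Kernel ABC replaces the hard accept/reject threshold with kernel-based weighting of the simulations, which is what mitigates the approximation errors listed in the abstract.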

  6. A Formal Definition of Perfect Bayesian Equilibrium for Extensive Games

    OpenAIRE

    Julio Gonzalez-Diaz; Miguel Melendez-Jimenez

    2007-01-01

    Often, perfect Bayesian equilibrium is loosely defined by stating that players should be sequentially rational given some beliefs in which Bayes' rule is applied “whenever possible”. We show that there are games in which it is not clear what “whenever possible” means. Then, we provide a simple definition of perfect Bayesian equilibrium for general extensive games that refines both weak perfect Bayesian equilibrium and subgame perfect equilibrium.

  7. Prediction of HPLC retention times of tebipenem pivoxyl and its degradation products in solid state by applying adaptive artificial neural network with recursive features elimination.

    Science.gov (United States)

    Mizera, Mikołaj; Talaczyńska, Alicja; Zalewski, Przemysław; Skibiński, Robert; Cielecka-Piontek, Judyta

    2015-05-01

    A sensitive and fast HPLC method using an ultraviolet diode-array detector (DAD)/electrospray ionization tandem mass spectrometry (Q-TOF-MS/MS) was developed for the determination of tebipenem pivoxyl in the presence of degradation products formed during thermolysis. The chromatographic separations were performed on stationary phases produced in core-shell technology with a particle diameter of 5.0 µm. The mobile phases consisted of formic acid (0.1%) and acetonitrile at different ratios. The flow rate was 0.8 mL/min while the wavelength was set at 331 nm. The stability characteristics of tebipenem pivoxyl were studied by performing stress tests in the solid state in dry air (RH=0%) and at an increased relative air humidity (RH=90%). The validation parameters such as selectivity, accuracy, precision and sensitivity were found to be satisfactory. Satisfactory selectivity and precision of determination were obtained for the separation of tebipenem pivoxyl from its degradation products using a stationary phase with 5.0 µm particles. The evaluation of the chemical structure of the 9 degradation products of tebipenem pivoxyl was conducted following separation based on the stationary phase with a 5.0 µm particle size by applying a Q-TOF-MS/MS detector. The main degradation products of tebipenem pivoxyl were identified: a product resulting from the condensation of the substituents of 1-(4,5-dihydro-1,3-thiazol-2-yl)-3-azetidinyl]sulfanyl and acid and ester forms of tebipenem with an open β-lactam ring in dry air at an increased temperature (RH=0%, T=393 K), as well as acid and ester forms of tebipenem with an open β-lactam ring at an increased relative air humidity and an elevated temperature (RH=90%, T=333 K). Retention times of tebipenem pivoxyl and its degradation products were used as the training data set for a predictive model of the quantitative structure-retention relationship. An artificial neural network with adaptation protocol and extensive feature selection process

  8. Bayesian networks in educational assessment

    CERN Document Server

    Almond, Russell G; Steinberg, Linda S; Yan, Duanli; Williamson, David M

    2015-01-01

    Bayesian inference networks, a synthesis of statistics and expert systems, have advanced reasoning under uncertainty in medicine, business, and social sciences. This innovative volume is the first comprehensive treatment exploring how they can be applied to design and analyze innovative educational assessments. Part I develops Bayes nets’ foundations in assessment, statistics, and graph theory, and works through the real-time updating algorithm. Part II addresses parametric forms for use with assessment, model-checking techniques, and estimation with the EM algorithm and Markov chain Monte Carlo (MCMC). A unique feature is the volume’s grounding in Evidence-Centered Design (ECD) framework for assessment design. This “design forward” approach enables designers to take full advantage of Bayes nets’ modularity and ability to model complex evidentiary relationships that arise from performance in interactive, technology-rich assessments such as simulations. Part III describes ECD, situates Bayes nets as ...

  9. Nonparametric Bayesian inference in biostatistics

    CERN Document Server

    Müller, Peter

    2015-01-01

    As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...

  10. Bayesian decoding using unsorted spikes in the rat hippocampus

    OpenAIRE

    Kloosterman, Fabian; Layton, Stuart P.; Chen, Zhe; Wilson, Matthew A

    2013-01-01

    A fundamental task in neuroscience is to understand how neural ensembles represent information. Population decoding is a useful tool to extract information from neuronal populations based on the ensemble spiking activity. We propose a novel Bayesian decoding paradigm to decode unsorted spikes in the rat hippocampus. Our approach uses a direct mapping between spike waveform features and covariates of interest and avoids accumulation of spike sorting errors. Our decoding paradigm is nonparametr...

  11. Feed-forward neural networks for shower recognition: construction and generalization

    International Nuclear Information System (INIS)

    Strictly layered feed-forward neural networks are explored as recognition tools for energy deposition patterns in a calorimeter. This study is motivated by possible applications for on-line event selection. Networks consisting of linear threshold units are generated by a constructive learning algorithm, the Patch algorithm. As a non-constructive counterpart the back-propagation algorithm is applied. This algorithm makes use of analogue neurons. The generalization capabilities of the neural networks resulting from both methods are compared to those of nearest-neighbour classifiers and of Probabilistic Neural Networks implementing Parzen-windows. The latter non-parametric statistical method is applied to estimate the optimal Bayesian classifier. For all methods the generalization capabilities are determined for different ways of pre-processing of the input data. The complexity of the feed-forward neural networks studied does not grow with the training set size. This favours a hardwired implementation of these neural networks as any implementation of the other two methods grows linearly with the training set size.
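
    The Parzen-window estimate referred to above has a particularly compact form; a minimal sketch (Gaussian kernel, bandwidth h chosen by hand) for classifying one energy-deposition pattern x:

    import numpy as np

    def parzen_classify(x, X_train, y_train, h=0.5):
        """Parzen-window (Gaussian kernel) estimate of class posteriors; with
        enough data this approaches the optimal Bayesian classifier."""
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * h ** 2))                 # kernel weight per sample
        classes = np.unique(y_train)
        scores = np.array([k[y_train == c].sum() for c in classes])
        return classes[np.argmax(scores)], scores / scores.sum()

    Note the property the abstract highlights: every stored training pattern enters the sum, so the cost of this method (like the nearest-neighbour classifier) grows linearly with the training set size, whereas the trained feed-forward network does not.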

  12. The NIFTY way of Bayesian signal inference

    International Nuclear Information System (INIS)

    We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy

  13. Bayesians in Space: Using Bayesian Methods to Inform Choice of Spatial Weights Matrix in Hedonic Property Analyses

    OpenAIRE

    Mueller, Julie M.; Loomis, John B.

    2010-01-01

    The choice of weights is a non-nested problem in most applied spatial econometric models. Despite numerous recent advances in spatial econometrics, the choice of spatial weights remains exogenously determined by the researcher in empirical applications. Bayesian techniques provide statistical evidence regarding the simultaneous choice of model specification and spatial weights matrices by using posterior probabilities. This paper demonstrates the Bayesian estimation approach in a spatial hedo...

  14. Evaluation of a Partial Genome Screening of Two Asthma Susceptibility Regions Using Bayesian Network Based Bayesian Multilevel Analysis of Relevance

    OpenAIRE

    Ildikó Ungvári; Gábor Hullám; Péter Antal; Petra Sz Kiszel; András Gézsi; Éva Hadadi; Viktor Virág; Gergely Hajós; András Millinghoffer; Adrienne Nagy; András Kiss; Semsei, Ágnes F.; Gergely Temesi; Béla Melegh; Péter Kisfali

    2012-01-01

    Genetic studies indicate a high number of potential factors related to asthma. Based on earlier linkage analyses, we selected the 11q13 and 14q22 asthma susceptibility regions, for which we designed a partial genome screening study using 145 SNPs in 1201 individuals (436 asthmatic children and 765 controls). The results were evaluated with traditional frequentist methods and we applied a new statistical method, called Bayesian network based Bayesian multilevel analysis of relevance (BN-BMLA). Th...

  15. Bayesian data analysis in population ecology: motivations, methods, and benefits

    Science.gov (United States)

    Dorazio, Robert

    2016-01-01

    During the 20th century ecologists largely relied on the frequentist system of inference for the analysis of their data. However, in the past few decades ecologists have become increasingly interested in the use of Bayesian methods of data analysis. In this article I provide guidance to ecologists who would like to decide whether Bayesian methods can be used to improve their conclusions and predictions. I begin by providing a concise summary of Bayesian methods of analysis, including a comparison of differences between Bayesian and frequentist approaches to inference when using hierarchical models. Next I provide a list of problems where Bayesian methods of analysis may arguably be preferred over frequentist methods. These problems are usually encountered in analyses based on hierarchical models of data. I describe the essentials required for applying modern methods of Bayesian computation, and I use real-world examples to illustrate these methods. I conclude by summarizing what I perceive to be the main strengths and weaknesses of using Bayesian methods to solve ecological inference problems.

  16. Market Segmentation Using Bayesian Model Based Clustering

    OpenAIRE

    Van Hattum, P.

    2009-01-01

    This dissertation deals with two basic problems in marketing, that are market segmentation, which is the grouping of persons who share common aspects, and market targeting, which is focusing your marketing efforts on one or more attractive market segments. For the grouping of persons who share common aspects a Bayesian model based clustering approach is proposed such that it can be applied to data sets that are specifically used for market segmentation. The cluster algorithm can handle very l...

  17. Improving Environmental Scanning Systems Using Bayesian Networks

    OpenAIRE

    Simon Welter; Jörg H. Mayer; Reiner Quick

    2013-01-01

    As companies’ environment is becoming increasingly volatile, scanning systems gain in importance. We propose a hybrid process model for such systems' information gathering and interpretation tasks that combines quantitative information derived from regression analyses and qualitative knowledge from expert interviews. For the latter, we apply Bayesian networks. We derive the need for such a hybrid process model from a literature review. We lay out our model to find a suitable set of business e...

  18. Neural networks: genuine artifical intelligence. Neurale netwerken: echte kunstmatige intelligentie

    Energy Technology Data Exchange (ETDEWEB)

    Jongepier, A.G. (KEMA NV, Arnhem (Netherlands))

    Artificial neural networks are a new form of artificial intelligence. KEMA NV is currently examining the possibilities of applying artificial neural networks to processes that are related to power systems. A number of applications already give promising results. Artificial neural networks are suited to pattern recognition. If a problem can be formulated in terms of pattern recognition, an artificial neural network may make a valuable contribution to the solution of this problem. 8 figs., 15 refs.

  19. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...

  20. Bayesian Magic in Asteroseismology

    Science.gov (United States)

    Kallinger, T.

    2015-09-01

    Only a few years ago asteroseismic observations were so rare that scientists had plenty of time to work on individual data sets. They could tune their algorithms in any possible way to squeeze out the last bit of information. Nowadays this is impossible. With missions like MOST, CoRoT, and Kepler we basically drown in new data every day. To handle this in a sufficient way statistical methods become more and more important. This is why Bayesian techniques started their triumph march across asteroseismology. I will go with you on a journey through Bayesian Magic Land, that brings us to the sea of granulation background, the forest of peakbagging, and the stony alley of model comparison.

  1. Bayesian Nonparametric Graph Clustering

    OpenAIRE

    Banerjee, Sayantan; Akbani, Rehan; Baladandayuthapani, Veerabhadran

    2015-01-01

    We present clustering methods for multivariate data exploiting the underlying geometry of the graphical structure between variables. As opposed to standard approaches that assume known graph structures, we first estimate the edge structure of the unknown graph using Bayesian neighborhood selection approaches, wherein we account for the uncertainty of graphical structure learning through model-averaged estimates of the suitable parameters. Subsequently, we develop a nonparametric graph cluster...

  2. Bayesian Benchmark Dose Analysis

    OpenAIRE

    Fang, Qijun; Piegorsch, Walter W.; Barnes, Katherine Y.

    2014-01-01

    An important objective in environmental risk assessment is estimation of minimum exposure levels, called Benchmark Doses (BMDs) that induce a pre-specified Benchmark Response (BMR) in a target population. Established inferential approaches for BMD analysis typically involve one-sided, frequentist confidence limits, leading in practice to what are called Benchmark Dose Lower Limits (BMDLs). Appeal to Bayesian modeling and credible limits for building BMDLs is far less developed, however. Indee...

  3. Heteroscedastic Treed Bayesian Optimisation

    OpenAIRE

    Assael, John-Alexander M.; Wang, Ziyu; Shahriari, Bobak; De Freitas, Nando

    2014-01-01

    Optimising black-box functions is important in many disciplines, such as tuning machine learning models, robotics, finance and mining exploration. Bayesian optimisation is a state-of-the-art technique for the global optimisation of black-box functions which are expensive to evaluate. At the core of this approach is a Gaussian process prior that captures our belief about the distribution over functions. However, in many cases a single Gaussian process is not flexible enough to capture non-stat...

  4. Efficient Bayesian Phase Estimation

    Science.gov (United States)

    Wiebe, Nathan; Granade, Chris

    2016-07-01

    We introduce a new method called rejection filtering that we use to perform adaptive Bayesian phase estimation. Our approach has several advantages: it is classically efficient, easy to implement, achieves Heisenberg limited scaling, resists depolarizing noise, tracks time-dependent eigenstates, recovers from failures, and can be run on a field programmable gate array. It also outperforms existing iterative phase estimation algorithms such as Kitaev's method.

  5. Bayesian Word Sense Induction

    OpenAIRE

    Brody, Samuel; Lapata, Mirella

    2009-01-01

    Sense induction seeks to automatically identify word senses directly from a corpus. A key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. Sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word’s contexts into different classes, each representing a word sense. Our work places sense induction in a Bayesian context by modeling the contexts of the ambiguous word as samp...

  6. Adaptive learning via selectionism and Bayesianism, Part II: the sequential case.

    Science.gov (United States)

    Zhang, Jun

    2009-04-01

    Animals increase or decrease their future tendency of emitting an action based on whether performing such action has, in the past, resulted in positive or negative reinforcement. An analysis in the companion paper [Zhang, J. (2009). Adaptive learning via selectionism and Bayesianism. Part I: Connection between the two. Neural Networks, 22(3), 220-228] of such selectionist style of learning reveals a resemblance between its ensemble-level dynamics governing the change of action probability and Bayesian learning where evidence (in this case, reward) is distributively applied to all action alternatives. Here, this equivalence is further explored in solving the temporal credit-assignment problem during the learning of an action sequence ("operant chain"). Naturally emerging are the notion of secondary (conditioned) reinforcement predicting the average reward associated with a stimulus, and the notion of actor-critic architecture involving concurrent learning of both action probability and reward prediction. While both are consistent with solutions provided by contemporary reinforcement learning theory (Sutton & Barto, 1998) for optimizing sequential decision-making under stationary Markov environments, we investigate the effect of action learning on reward prediction when both are carried out concurrently in any on-line scheme. PMID:19395235

  7. Bayesian Attractor Learning

    Science.gov (United States)

    Wiegerinck, Wim; Schoenaker, Christiaan; Duane, Gregory

    2016-04-01

    Recently, methods for model fusion by dynamically combining model components in an interactive ensemble have been proposed. In these proposals, fusion parameters have to be learned from data. One can view these systems as parametrized dynamical systems. We address the question of learnability of dynamical systems with respect to both short term (vector field) and long term (attractor) behavior. In particular we are interested in learning in the imperfect model class setting, in which the ground truth has a higher complexity than the models, e.g. due to unresolved scales. We take a Bayesian point of view and we define a joint log-likelihood that consists of two terms, one is the vector field error and the other is the attractor error, for which we take the L1 distance between the stationary distributions of the model and the assumed ground truth. In the context of linear models (like so-called weighted supermodels), and assuming a Gaussian error model in the vector fields, vector field learning leads to a tractable Gaussian solution. This solution can then be used as a prior for the next step, Bayesian attractor learning, in which the attractor error is used as a log-likelihood term. Bayesian attractor learning is implemented by elliptical slice sampling, a sampling method for systems with a Gaussian prior and a non Gaussian likelihood. Simulations with a partially observed driven Lorenz 63 system illustrate the approach.
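
    Elliptical slice sampling, the sampler named above, has a compact standard form (after Murray, Adams & MacKay, 2010). This generic single-update sketch assumes a zero-mean Gaussian prior, with log_lik standing for the attractor-error log-likelihood term:

    import numpy as np

    rng = np.random.default_rng(4)

    def elliptical_slice(x, chol_prior, log_lik):
        """One elliptical slice sampling update for a zero-mean Gaussian
        prior (Cholesky factor chol_prior) and arbitrary log-likelihood."""
        nu = chol_prior @ rng.normal(size=len(x))      # auxiliary prior draw
        log_y = log_lik(x) + np.log(rng.uniform())     # slice height
        theta = rng.uniform(0.0, 2.0 * np.pi)
        lo, hi = theta - 2.0 * np.pi, theta
        while True:
            x_new = x * np.cos(theta) + nu * np.sin(theta)
            if log_lik(x_new) > log_y:
                return x_new
            # Shrink the bracket towards the current state and retry.
            if theta < 0.0:
                lo = theta
            else:
                hi = theta
            theta = rng.uniform(lo, hi)

    Every proposal lies on an ellipse through the current state and the prior draw, so the update needs no step-size tuning, which is what makes it attractive when the Gaussian vector-field posterior serves as the prior.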

  8. Unbounded Bayesian Optimization via Regularization

    OpenAIRE

    Shahriari, Bobak; Bouchard-Côté, Alexandre; De Freitas, Nando

    2015-01-01

    Bayesian optimization has recently emerged as a popular and efficient tool for global optimization and hyperparameter tuning. Currently, the established Bayesian optimization practice requires a user-defined bounding box which is assumed to contain the optimizer. However, when little is known about the probed objective function, it can be difficult to prescribe such bounds. In this work we modify the standard Bayesian optimization framework in a principled way to allow automatic resizing of t...

  9. Bayesian optimization for materials design

    OpenAIRE

    Frazier, Peter I.; Wang, Jialei

    2015-01-01

    We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets. Bayesian optimization guides the choice of experiments during materials design and discovery to find good material designs in as few experiments as possible. We focus on the case when materials designs are parameterized by a low-dimensional vector. Bayesian optimization is built on a statistical technique called Gaussian pro...

  10. A Bayesian Optimisation Algorithm for the Nurse Scheduling Problem

    CERN Document Server

    Jingpeng, Li

    2008-01-01

    A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work, which used GAs (genetic algorithms) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e., building blocks can be identified and mixed directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, a new instance of each variable is generated, i.e., in our case, a new rule string is obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again usin...
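
    BOA proper learns a full Bayesian network over the solution variables; the univariate special case already shows the estimate-sample-select loop. A toy Python sketch on a bit-string problem (all constants are invented, and a real nurse-scheduling fitness would score rule strings against the roster):

    import numpy as np

    rng = np.random.default_rng(5)

    def umda(fitness, n_vars, pop=100, elite=30, gens=50):
        """Univariate EDA: the simplest case of the estimate-and-sample loop
        that BOA generalizes with a full Bayesian network over variables."""
        p = np.full(n_vars, 0.5)                    # P(bit i = 1)
        for _ in range(gens):
            X = (rng.random((pop, n_vars)) < p).astype(int)
            best = X[np.argsort([fitness(x) for x in X])[-elite:]]
            p = 0.5 * p + 0.5 * best.mean(axis=0)   # smooth the new marginals
            p = p.clip(0.05, 0.95)
        return (p > 0.5).astype(int)

    print(umda(lambda x: x.sum(), n_vars=20))       # onemax toy problem

    BOA replaces the independent per-bit marginals with conditional probability tables learned from the elite set, which is the explicit building-block mixing the abstract refers to.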

  11. Bayesian classification and regression trees for predicting incidence of cryptosporidiosis.

    Directory of Open Access Journals (Sweden)

    Wenbiao Hu

    BACKGROUND: Classification and regression tree (CART) models are tree-based exploratory data analysis methods which have been shown to be very useful in identifying and estimating complex hierarchical relationships in ecological and medical contexts. In this paper, a Bayesian CART model is described and applied to the problem of modelling the cryptosporidiosis infection in Queensland, Australia. METHODOLOGY/PRINCIPAL FINDINGS: We compared the results of a Bayesian CART model with those obtained using a Bayesian spatial conditional autoregressive (CAR) model. Overall, the analyses indicated that the nature and magnitude of the effect estimates were similar for the two methods in this study, but the CART model more easily accommodated higher order interaction effects. CONCLUSIONS/SIGNIFICANCE: A Bayesian CART model for identification and estimation of the spatial distribution of disease risk is useful in the monitoring and assessment of infectious disease prevention and control.

  12. When the world becomes 'too real': A Bayesian explanation of autistic perception

    OpenAIRE

    Pellicano, L.; Burr, D.

    2012-01-01

    Perceptual experience is influenced both by incoming sensory information and prior knowledge about the world, a concept recently formalised within Bayesian decision theory. We propose that Bayesian models can be applied to autism – a neurodevelopmental condition with atypicalities in sensation and perception – to pinpoint fundamental differences in perceptual mechanisms. We suggest specifically that attenuated Bayesian priors – ‘hypo-priors’ – may be responsible for the unique perceptual expe...

  13. Bayesian Estimation for Generalized Exponential Distribution Based on Progressive Type-I Interval Censoring

    Institute of Scientific and Technical Information of China (English)

    Xiu-yun PENG; Zai-zai YAN

    2013-01-01

    In this study, we consider the Bayesian estimation of the unknown parameters and reliability function of the generalized exponential distribution based on progressive Type-I interval censoring. The Bayesian estimates of the parameters and reliability function cannot be obtained in explicit form under the squared error and Linex loss functions, respectively; thus, we present Lindley's approximation to compute these estimates. The Bayesian estimates are then compared with the maximum likelihood estimates using Monte Carlo simulations.

  14. Common before-after accident study on a road site: a low-informative Bayesian method

    OpenAIRE

    Brenac, Thierry

    2009-01-01

    This note aims at providing a Bayesian methodological basis for routine before-after accident studies, often applied to a single road site, and in conditions of limited resources in terms of time and expertise. Methods: A low-informative Bayesian method is proposed for before-after accident studies using a comparison site or group of sites. As compared to conventional statistics, the Bayesian approach is less subject to misuse and misinterpretation by practitioners. The low-informative framew...

  15. Learning ground CP-logic theories by means of Bayesian network techniques

    OpenAIRE

    Meert, Wannes; Struyf, Jan; Blockeel, Hendrik

    2007-01-01

    Causal relationships are present in many application domains. CP-logic is a probabilistic modeling language that is especially designed to express such relationships. This paper investigates the learning of CP-theories from examples, and focusses on structure learning. The proposed approach is based on a transformation between CP-logic theories and Bayesian networks, that is, the method applies Bayesian network learning techniques to learn a CP-theory in the form of an equivalent Bayesian net...

  16. A neural network combined with a three-dimensional finite element method applied to optimize eddy current and temperature distributions of traveling wave induction heating system

    Science.gov (United States)

    Wang, Youhua; Wang, Junhua; Ho, S. L.; Pang, Lingling; Fu, W. N.

    2011-04-01

    In this paper, neural networks combined with a finite element method (FEM) are introduced to predict eddy current distributions on continuously moving thin conducting strips in traveling wave induction heating (TWIH) equipment. A method that combines a neural network with FEM is proposed to optimize the eddy current distributions of a TWIH heater. The trained network shows quite good prediction accuracy on test examples. The results have then been used with reference to a double-sided TWIH in order to analyze the distributions of the magnetic field and eddy current intensity, which accelerates the iterative solution process for the nonlinear coupled electromagnetic problem. The FEM computation of temperature converged conspicuously faster using the prediction results as initial values than using zero values, and the number of iterations is reduced dramatically. Simulation results demonstrate the effectiveness and characteristics of the proposed method.

  17. Learning Bayesian networks using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Chen Fei; Wang Xiufeng; Rao Yimei

    2007-01-01

    A new method to evaluate the fitness of a Bayesian network according to observed data is provided. The main advantage of this criterion is that it is suitable for both complete and incomplete cases, while the others are not. Moreover, it greatly facilitates the computation. In order to reduce the search space, the notion of equivalence class proposed by David Chickering is adopted. Instead of using the method directly, the novel criterion, variable ordering, and equivalence classes are combined; moreover, the proposed method avoids some problems caused by the previous one. Then, a genetic algorithm, which allows the global convergence lacking in most methods for searching Bayesian networks, is applied to search for a good model in this space. To speed up the convergence, the genetic algorithm is combined with a greedy algorithm. Finally, simulation shows the validity of the proposed approach.

  18. Machine learning a Bayesian and optimization perspective

    CERN Document Server

    Theodoridis, Sergios

    2015-01-01

    This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches, which rely on optimization techniques, as well as Bayesian inference, which is based on a hierarchy of probabilistic models. The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing and computer science. Focusing on the physical reasoning behind the mathematics, all the various methods and techniques are explained in depth, supported by examples and problems, giving an invaluable resource to the student and researcher for understanding and applying machine learning concepts. The book builds carefully from the basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, statistical/Bayesian learning, as well as shor...

  19. Bayesian image reconstruction: Application to emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Nunez, J.; Llacer, J.

    1989-02-01

    In this paper we propose a Maximum a Posteriori (MAP) method of image reconstruction in the Bayesian framework for the Poisson noise case. We use entropy to define the prior probability and the likelihood to define the conditional probability. The method uses sharpness parameters which can be theoretically computed or adjusted, allowing us to obtain MAP reconstructions without the problem of the "grey" reconstructions associated with pre-Bayesian reconstructions. We have developed several ways to solve the reconstruction problem and propose a new iterative algorithm which is stable, maintains positivity and converges to feasible images faster than the Maximum Likelihood Estimate method. We have successfully applied the new method to the case of Emission Tomography, both with simulated and real data. 41 refs., 4 figs., 1 tab.
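
    In a standard formulation of this kind of reconstruction (notation assumed here, not taken from the paper), one maximizes the log-posterior combining the Poisson log-likelihood of the observed counts $g$ with an entropy prior on the image $f$, where $A$ is the system matrix and $\alpha$ a sharpness parameter:

    $$\hat f = \arg\max_{f \ge 0} \; \sum_j \Big( g_j \ln (Af)_j - (Af)_j \Big) \;-\; \alpha \sum_i f_i \ln f_i$$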

  20. Bayesian nonparametric data analysis

    CERN Document Server

    Müller, Peter; Jara, Alejandro; Hanson, Tim

    2015-01-01

    This book reviews nonparametric Bayesian methods and models that have proven useful in the context of data analysis. Rather than providing an encyclopedic review of probability models, the book’s structure follows a data analysis perspective. As such, the chapters are organized by traditional data analysis problems. In selecting specific nonparametric models, simpler and more traditional models are favored over specialized ones. The discussed methods are illustrated with a wealth of examples, including applications ranging from stylized examples to case studies from recent literature. The book also includes an extensive discussion of computational methods and details on their implementation. R code for many examples is included in on-line software pages.

  1. Computationally efficient Bayesian tracking

    Science.gov (United States)

    Aughenbaugh, Jason; La Cour, Brian

    2012-06-01

    In this paper, we describe the progress we have achieved in developing a computationally efficient, grid-based Bayesian fusion tracking system. In our approach, the probability surface is represented by a collection of multidimensional polynomials, each computed adaptively on a grid of cells representing state space. Time evolution is performed using a hybrid particle/grid approach and knowledge of the grid structure, while sensor updates use a measurement-based sampling method with a Delaunay triangulation. We present an application of this system to the problem of tracking a submarine target using a field of active and passive sonar buoys.

  2. Improved iterative Bayesian unfolding

    CERN Document Server

    D'Agostini, G

    2010-01-01

    This paper reviews the basic ideas behind a Bayesian unfolding published some years ago and improves their implementation. In particular, uncertainties are now treated at all levels by probability density functions and their propagation is performed by Monte Carlo integration. Thus, small numbers are better handled and the final uncertainty does not rely on the assumption of normality. Theoretical and practical issues concerning the iterative use of the algorithm are also discussed. The new program, implemented in the R language, is freely available, together with sample scripts to play with toy models.
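
    The core of the iterative Bayesian unfolding reviewed here can be summarized as follows, in the standard cause-and-effect notation (assumed here, not quoted from the paper): Bayes' theorem converts the smearing probabilities and the current prior into "unfolding" probabilities, which redistribute the observed effects $n(E_j)$ onto the causes; the normalized result becomes the prior of the next iteration:

    $$P(C_i \mid E_j) = \frac{P(E_j \mid C_i)\, P_0(C_i)}{\sum_k P(E_j \mid C_k)\, P_0(C_k)}, \qquad \hat n(C_i) = \frac{1}{\epsilon_i} \sum_j P(C_i \mid E_j)\, n(E_j), \quad \epsilon_i = \sum_j P(E_j \mid C_i)$$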

  3. Top-of-the-atmosphere shortwave flux estimation from satellite observations: an empirical neural network approach applied with data from the A-train constellation

    Science.gov (United States)

    Gupta, Pawan; Joiner, Joanna; Vasilkov, Alexander; Bhartia, Pawan K.

    2016-07-01

    Estimates of top-of-the-atmosphere (TOA) radiative flux are essential for the understanding of Earth's energy budget and climate system. Clouds, aerosols, water vapor, and ozone (O3) are among the most important atmospheric agents impacting the Earth's shortwave (SW) radiation budget. There are several sensors in orbit that provide independent information related to these parameters. Having coincident information from these sensors is important for understanding their potential contributions. The A-train constellation of satellites provides a unique opportunity to analyze data from several of these sensors. In this paper, retrievals of cloud/aerosol parameters and total column ozone (TCO) from the Aura Ozone Monitoring Instrument (OMI) have been collocated with the Aqua Clouds and Earth's Radiant Energy System (CERES) estimates of total reflected TOA outgoing SW flux (SWF). We use these data to develop a variety of neural networks that estimate TOA SWF globally over ocean and land using only OMI data and other ancillary information as inputs and CERES TOA SWF as the output for training purposes. OMI-estimated TOA SWF from the trained neural networks reproduces independent CERES data with high fidelity. The global mean daily TOA SWF calculated from OMI is consistently within ±1 % of CERES throughout the year 2007. Application of our neural network method to other sensors that provide similar retrieved parameters, both past and future, can produce similar estimates of TOA SWF. For example, the well-calibrated Total Ozone Mapping Spectrometer (TOMS) series could provide estimates of TOA SWF dating back to late 1978.
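
    A minimal sketch of this kind of supervised set-up is given below, with scikit-learn's MLPRegressor standing in for the paper's (unspecified) network implementation; the feature set and synthetic data are placeholder assumptions, not the actual OMI/CERES variables.

```python
# Train a small feed-forward regressor on synthetic stand-ins for the
# collocated satellite inputs (X) and the CERES TOA SWF target (y).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((5000, 5))          # e.g. AOD 470/550/660 nm, water vapor, AAOD (assumed)
y = X @ np.array([0.8, 1.1, 0.6, 0.3, 0.5]) + 0.05 * rng.standard_normal(5000)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X[:4000], y[:4000])                  # train against the "truth" flux
print("held-out R^2:", model.score(X[4000:], y[4000:]))
```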

  4. Bayesian Inference on Gravitational Waves

    Directory of Open Access Journals (Sweden)

    Asad Ali

    2015-12-01

    Full Text Available The Bayesian approach is becoming increasingly popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration by a couple of simplified examples.

  5. Adaptive Dynamic Bayesian Networks

    Energy Technology Data Exchange (ETDEWEB)

    Ng, B M

    2007-10-26

    A discrete-time Markov process can be compactly modeled as a dynamic Bayesian network (DBN)--a graphical model with nodes representing random variables and directed edges indicating causality between variables. Each node has a probability distribution, conditional on the variables represented by the parent nodes. A DBN's graphical structure encodes fixed conditional dependencies between variables. But in real-world systems, conditional dependencies between variables may be unknown a priori or may vary over time. Model errors can result if the DBN fails to capture all possible interactions between variables. Thus, we explore the representational framework of adaptive DBNs, whose structure and parameters can change from one time step to the next: a distribution's parameters and its set of conditional variables are dynamic. This work builds on recent work in nonparametric Bayesian modeling, such as hierarchical Dirichlet processes, infinite-state hidden Markov networks and structured priors for Bayes net learning. In this paper, we will explain the motivation for our interest in adaptive DBNs, show how popular nonparametric methods are combined to formulate the foundations for adaptive DBNs, and present preliminary results.

  6. Bayesian analysis toolkit - BAT

    International Nuclear Information System (INIS)

    Statistical treatment of data is an essential part of any data analysis and interpretation. Different statistical methods and approaches can be used; however, the implementation of these approaches is complicated and at times inefficient. The Bayesian analysis toolkit (BAT) is a software package developed in a C++ framework that facilitates the statistical analysis of data using Bayes' theorem. The tool evaluates the posterior probability distributions for models and their parameters using Markov Chain Monte Carlo, which in turn provides straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as simulated annealing, allow extraction of the global mode of the posterior. BAT provides a well-tested environment for flexible model definition and also includes a set of predefined models for standard statistical problems. The package is interfaced to other software packages commonly used in high energy physics, such as ROOT, Minuit, RooStats and CUBA. We present a general overview of BAT and its algorithms. A few physics examples are shown to introduce the spectrum of its applications. In addition, new developments and features are summarized.

  7. Bayesian network learning for natural hazard assessments

    Science.gov (United States)

    Vogel, Kristin

    2016-04-01

    Even though quite different in occurrence and consequences, from a modelling perspective many natural hazards share similar properties and challenges. Their complex nature as well as lacking knowledge about their driving forces and potential effects make their analysis demanding. On top of the uncertainty about the modelling framework, inaccurate or incomplete event observations and the intrinsic randomness of the natural phenomenon add up to different interacting layers of uncertainty, which require a careful handling. Thus, for reliable natural hazard assessments it is crucial not only to capture and quantify involved uncertainties, but also to express and communicate uncertainties in an intuitive way. Decision-makers, who often find it difficult to deal with uncertainties, might otherwise return to familiar (mostly deterministic) proceedings. In the scope of the DFG research training group „NatRiskChange" we apply the probabilistic framework of Bayesian networks for diverse natural hazard and vulnerability studies. The great potential of Bayesian networks was already shown in previous natural hazard assessments. Treating each model component as random variable, Bayesian networks aim at capturing the joint distribution of all considered variables. Hence, each conditional distribution of interest (e.g. the effect of precautionary measures on damage reduction) can be inferred. The (in-)dependencies between the considered variables can be learned purely data driven or be given by experts. Even a combination of both is possible. By translating the (in-)dependences into a graph structure, Bayesian networks provide direct insights into the workings of the system and allow to learn about the underlying processes. Besides numerous studies on the topic, learning Bayesian networks from real-world data remains challenging. In previous studies, e.g. on earthquake induced ground motion and flood damage assessments, we tackled the problems arising with continuous variables

  8. Neural networks and statistical learning

    CERN Document Server

    Du, Ke-Lin

    2014-01-01

    Providing a broad but in-depth introduction to neural network and machine learning in a statistical framework, this book provides a single, comprehensive resource for study and further research. All the major popular neural network models and statistical learning approaches are covered with examples and exercises in every chapter to develop a practical working understanding of the content. Each of the twenty-five chapters includes state-of-the-art descriptions and important research results on the respective topics. The broad coverage includes the multilayer perceptron, the Hopfield network, associative memory models, clustering models and algorithms, the radial basis function network, recurrent neural networks, principal component analysis, nonnegative matrix factorization, independent component analysis, discriminant analysis, support vector machines, kernel methods, reinforcement learning, probabilistic and Bayesian networks, data fusion and ensemble learning, fuzzy sets and logic, neurofuzzy models, hardw...

  9. Bayesian feature weighting for unsupervised learning, with application to object recognition

    OpenAIRE

    Carbonetto, Peter; De Freitas, Nando; Gustafson, Paul; Thompson, Natalie

    2003-01-01

    We present a method for variable selection/weighting in an unsupervised learning context using Bayesian shrinkage. The model parameters and cluster assignments can be computed simultaneously using an efficient EM algorithm. Applying our Bayesian shrinkage model to a complex problem in object recognition (Duygulu, Barnard, de Freitas and Forsyth 2002), our experiments yielded good results.

  10. Paraconsistents artificial neural networks applied to the study of mutational patterns of the F subtype of the viral strains of HIV-1 to antiretroviral therapy.

    Science.gov (United States)

    Santos, Paulo C C Dos; Lopes, Helder F S; Alcalde, Rosana; Gonsalez, Cláudio R; Abe, Jair M; Lopez, Luis F

    2016-03-01

    The high variability of HIV-1, as well as the lack of efficient repair mechanisms during the stages of viral replication, contributes to the rapid emergence of HIV-1 strains resistant to antiretroviral drugs. The selective pressure exerted by the drug leads to fixation of mutations capable of imparting varying degrees of resistance. The presence of these mutations is one of the most important factors in the failure of therapeutic response to medications. Thus, it is critical to understand the resistance patterns and the mechanisms associated with them, allowing the choice of an appropriate therapeutic scheme that considers the frequency and other characteristics of mutations. Utilizing Paraconsistent Artificial Neural Networks, based on Paraconsistent Annotated Logic Et, which has the capability of measuring uncertainties and inconsistencies, we achieved levels of agreement above 90% when comparing the proposed methodology with the current methodology used to classify HIV-1 subtypes. The results demonstrate that Paraconsistent Artificial Neural Networks can serve as a promising analysis tool. PMID:26959313

  11. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least squares has been used with the back-propagation algorithm for training the network, while a Bayesian regularization technique has been successfully applied for minimizing the risk of inexpedient over-training. Finally, a predictive closed-loop control strategy based on a so-called single-neuron self...
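
    The Bayesian regularization mentioned here is commonly implemented in the MacKay sense (a standard form assumed here, not quoted from the paper) by minimizing a weighted sum of the data error and the squared network weights, with the weighting re-estimated from the Bayesian evidence during training:

    $$F = \beta E_D + \alpha E_W, \qquad E_D = \sum_k (t_k - y_k)^2, \quad E_W = \sum_i w_i^2$$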

  12. Designing neural networks that process mean values of random variables

    International Nuclear Information System (INIS)

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence

  13. Bayesian Methods for Radiation Detection and Dosimetry

    International Nuclear Information System (INIS)

    We performed work in three areas: radiation detection, and external and internal radiation dosimetry. In radiation detection we developed Bayesian techniques to estimate the net activity of high- and low-activity radioactive samples. These techniques have the advantage that the remaining uncertainty about the net activity is described by probability densities, and graphs of the densities show the uncertainty in pictorial form. We applied stochastic-process methods to obtain Bayesian estimates of 222Rn-daughter products from observed counting rates. In external radiation dosimetry we studied and developed Bayesian methods to estimate radiation doses to an individual with radiation-induced chromosome aberrations. We analyzed chromosome aberrations after exposure to gammas and neutrons and developed a method for dose estimation after criticality accidents. The research in internal radiation dosimetry focused on parameter estimation for compartmental models from observed compartmental activities. From the estimated probability densities of the model parameters we were able to derive the densities for compartmental activities for a two-compartment catenary model at different times. We also calculated the average activities and their standard deviation for a simple two-compartment model.

  14. Neural network of Gaussian radial basis functions applied to the problem of identification of nuclear accidents in a PWR nuclear power plant

    International Nuclear Information System (INIS)

    Highlights: • A new method based on Artificial Neural Networks (ANN), developed to deal with accident identification in PWR nuclear power plants, is presented. • The results obtained show the efficiency of the referred technique. • Results obtained with this method are as good as or better than those of similar optimization tools available in the literature. - Abstract: The task of monitoring a nuclear power plant consists of determining, continuously and in real time, the state of the plant's systems in such a way as to give indications of abnormalities to the operators and enable them to recognize anomalies in system behavior. The monitoring is based on readings of a large number of meters and alarm indicators located in the main control room of the facility. On the occurrence of a transient or an accident in the nuclear power plant, even the most experienced operators can be confronted with conflicting indications due to the interactions between the various components of the plant systems; since a disturbance in one system can cause disturbances in another, the operator may not be able to distinguish cause from effect. This cognitive overload, to which operators are submitted, makes it difficult to clearly understand the indication of an abnormality in its initial phase of development and to take the appropriate and immediate corrective actions to face the system failure. With this in mind, computerized monitoring systems based on artificial intelligence that could help the operators to detect and diagnose these failures have been devised and have been the subject of research. Among the techniques that can be used in such development, radial basis function (RBF) neural networks play an important role due to the fact that they are able to provide good approximations to functions of a finite number of real variables. This paper aims to present an application of a neural network of Gaussian radial basis
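
    A minimal sketch of a Gaussian RBF network of the kind described, with output weights fitted by linear least squares on the hidden-layer activations; the centers, width and toy data below are illustrative assumptions.

```python
# Gaussian radial basis function network: weighted sum of Gaussian bumps.
import numpy as np

def rbf_network(x, centers, width, weights):
    """Evaluate sum_i w_i * exp(-||x - c_i||^2 / (2*width^2))."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * width ** 2))
    return phi @ weights

rng = np.random.default_rng(1)
centers = rng.random((20, 3))          # 20 Gaussian centers in a 3-D input space
X = rng.random((200, 3))
y = np.sin(X).sum(axis=1)              # toy target function to approximate
# fit the output weights by linear least squares on the activations
Phi = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * 0.3 ** 2))
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(rbf_network(X[0], centers, 0.3, weights), y[0])   # prediction vs. truth
```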

  15. Satellite retrieval of aerosol microphysical and optical parameters using neural networks: a new methodology applied to the Sahara desert dust peak

    Science.gov (United States)

    Taylor, M.; Kazadzis, S.; Tsekeri, A.; Gkikas, A.; Amiridis, V.

    2014-09-01

    In order to exploit the full-earth viewing potential of satellite instruments to globally characterise aerosols, new algorithms are required to deduce key microphysical parameters like the particle size distribution and optical parameters associated with scattering and absorption from space remote sensing data. Here, a methodology based on neural networks is developed to retrieve such parameters from satellite inputs and to validate them with ground-based remote sensing data. For key combinations of input variables available from the MODerate resolution Imaging Spectro-radiometer (MODIS) and the Ozone Measuring Instrument (OMI) Level 3 data sets, a grid of 100 feed-forward neural network architectures is produced, each having a different number of neurons and training proportion. The networks are trained with principal components accounting for 98% of the variance of the inputs together with principal components formed from 38 AErosol RObotic NETwork (AERONET) Level 2.0 (Version 2) retrieved parameters as outputs. Daily averaged, co-located and synchronous data drawn from a cluster of AERONET sites centred on the peak of dust extinction in Northern Africa is used for network training and validation, and the optimal network architecture for each input parameter combination is identified with reference to the lowest mean squared error. The trained networks are then fed with unseen data at the coastal dust site Dakar to test their simulation performance. A neural network (NN), trained with co-located and synchronous satellite inputs comprising three aerosol optical depth measurements at 470, 550 and 660 nm, plus the columnar water vapour (from MODIS) and the modelled absorption aerosol optical depth at 500 nm (from OMI), was able to simultaneously retrieve the daily averaged size distribution, the coarse mode volume, the imaginary part of the complex refractive index, and the spectral single scattering albedo - with moderate precision: correlation coefficients in the
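
    A minimal sketch of the architecture-grid idea described above: retain principal components explaining 98% of the input variance, then search over hidden-layer sizes for the lowest mean squared error. scikit-learn is used as a stand-in, and the data and dimensions are synthetic placeholders.

```python
# PCA (98% of variance) feeding a grid search over network architectures.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.random((600, 12))                    # placeholder satellite inputs
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(600)

pipe = Pipeline([("pca", PCA(n_components=0.98)),   # keep 98% of variance
                 ("net", MLPRegressor(max_iter=3000, random_state=0))])
grid = GridSearchCV(pipe, {"net__hidden_layer_sizes": [(4,), (8,), (16,)]},
                    cv=3, scoring="neg_mean_squared_error")
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)  # architecture with lowest MSE
```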

  16. Book review: Bayesian analysis for population ecology

    Science.gov (United States)

    Link, William A.

    2011-01-01

    Brian Dennis described the field of ecology as “fertile, uncolonized ground for Bayesian ideas.” He continued: “The Bayesian propagule has arrived at the shore. Ecologists need to think long and hard about the consequences of a Bayesian ecology. The Bayesian outlook is a successful competitor, but is it a weed? I think so.” (Dennis 2004)

  17. Analysis of Gumbel Model for Software Reliability Using Bayesian Paradigm

    Directory of Open Access Journals (Sweden)

    Raj Kumar

    2012-12-01

    Full Text Available In this paper, we illustrate the suitability of the Gumbel model for software reliability data. The model parameters are estimated using likelihood-based inferential procedures: classical as well as Bayesian. The quasi Newton-Raphson algorithm is applied to obtain the maximum likelihood estimates and associated probability intervals. The Bayesian estimates of the parameters of the Gumbel model are obtained using the Markov Chain Monte Carlo (MCMC) simulation method in OpenBUGS (established software for Bayesian analysis using Markov Chain Monte Carlo methods). R functions are developed to study the statistical properties, model validation and comparison tools of the model, and to analyse the output of the MCMC samples generated from OpenBUGS. Details of applying MCMC to parameter estimation for the Gumbel model are elaborated, and a real software reliability data set is considered to illustrate the methods of inference discussed in this paper.
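
    For reference, the Gumbel model referred to here has the standard distribution and density functions below, with location $\mu$ and scale $\sigma > 0$ (this parameterization is assumed; the paper's exact form is not reproduced in the record):

    $$F(x) = \exp\!\left[-\exp\!\left(-\tfrac{x-\mu}{\sigma}\right)\right], \qquad f(x) = \tfrac{1}{\sigma}\,\exp\!\left(-\tfrac{x-\mu}{\sigma}\right)\exp\!\left[-\exp\!\left(-\tfrac{x-\mu}{\sigma}\right)\right]$$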

  18. Recurrent Neural Network Regularization

    OpenAIRE

    Zaremba, Wojciech; Sutskever, Ilya; Vinyals, Oriol

    2014-01-01

    We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.
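
    The paper's prescription is that dropout be applied only to non-recurrent connections. A minimal PyTorch sketch follows; the stacked-LSTM `dropout` argument, which applies dropout between layers but not on the recurrent state, is assumed here as a reasonable stand-in for the authors' implementation, and all sizes are illustrative.

```python
# Dropout on non-recurrent (between-layer) connections of a stacked LSTM.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=128, hidden_size=256, num_layers=2,
               dropout=0.5)            # applied between layers only
x = torch.randn(35, 20, 128)           # (seq_len, batch, features)
output, (h, c) = lstm(x)
print(output.shape)                    # torch.Size([35, 20, 256])
```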

  19. Ocean wave forecasting using recurrent neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    …, merchant vessel routing, nearshore construction, etc. more efficiently and safely. This paper describes an artificial neural network, namely a recurrent neural network with the rprop update algorithm, applied to wave forecasting. Measured ocean waves off...

  1. BAYESIAN APPROACH OF DECISION PROBLEMS

    Directory of Open Access Journals (Sweden)

    DRAGOŞ STUPARU

    2010-01-01

    Full Text Available Management is nowadays a basic vector of economic development, a concept frequently used in our country as well as all over the world. Regardless of the hierarchical level at which the managerial process takes place, decision represents its essential moment, the supreme act of managerial activity. Decisions can be found in all fields of activity, practically having an unlimited degree of coverage, and in all the functions of management. It is common knowledge that the activity of any manager, no matter the hierarchical level he occupies, represents a chain of interdependent decisions, their aim being the elimination or limitation of the influence of disturbing factors that may endanger the achievement of predetermined objectives; the quality of managerial decisions conditions the progress and viability of any enterprise. Therefore, one of the principal characteristics of a successful manager is the ability to adopt high-quality decisions. The quality of managerial decisions is conditioned by the manager's general level of education and specialization, the manner in which he assimilates the latest information and innovations in the theory and practice of management, and the application of modern managerial methods and techniques in the activity of management. We present below an analysis of decision problems under hazardous conditions in terms of Bayesian theory – a theory that uses probabilistic calculus.

  2. PAC-Bayesian Analysis of Martingales and Multiarmed Bandits

    CERN Document Server

    Seldin, Yevgeny; Shawe-Taylor, John; Peters, Jan; Auer, Peter

    2011-01-01

    We present two alternative ways to apply PAC-Bayesian analysis to sequences of dependent random variables. The first is based on a new lemma that enables bounding expectations of convex functions of certain dependent random variables by expectations of the same functions of independent Bernoulli random variables. This lemma provides an alternative tool to the Hoeffding-Azuma inequality for bounding the concentration of martingale values. Our second approach is based on integrating the Hoeffding-Azuma inequality with PAC-Bayesian analysis. We also introduce a way to apply PAC-Bayesian analysis in situations of limited feedback. We combine the new tools to derive PAC-Bayesian generalization and regret bounds for the multiarmed bandit problem. Although our regret bound is not yet as tight as state-of-the-art regret bounds based on other well-established techniques, our results significantly expand the range of potential applications of PAC-Bayesian analysis and introduce a new analysis tool to reinforcement learning and many ...

  3. Diffusion filtration with approximate Bayesian computation

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Djurić, P. M.

    Piscataway: IEEE Computer Society, 2015, s. 3207-3211. ISBN 978-1-4673-6997-8. ISSN 1520-6149. [2015 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2015). Brisbane (AU), 19.05.2015-24.05.2015] R&D Projects: GA ČR(CZ) GP14-06678P Institutional support: RVO:67985556 Keywords : Bayesian filtration * diffusion * distributed filtration Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2015/AS/dedecius-0443931.pdf

  4. Using imsets for learning Bayesian networks

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří; Studený, Milan

    Praha : UTIA AV ČR, 2007 - (Kroupa, T.; Vejnarová, J.), s. 178-189 [Czech-Japan Seminar on Data Analysis and Decision Making under Uncertainty /10./. Liblice (CZ), 15.09.2007-18.09.2007] R&D Projects: GA MŠk(CZ) 1M0572 Grant ostatní: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : Bayesian networks * artificial intelligence * probabilistic graphical models * machine learning Subject RIV: BB - Applied Statistics, Operational Research

  5. Bayesian logistic betting strategy against probability forecasting

    CERN Document Server

    Kumon, Masayuki; Takemura, Akimichi; Takeuchi, Kei

    2012-01-01

    We propose a betting strategy based on Bayesian logistic regression modeling for the probability forecasting game in the framework of game-theoretic probability by Shafer and Vovk (2001). We prove some results concerning the strong law of large numbers in the probability forecasting game with side information based on our strategy. We also apply our strategy for assessing the quality of probability forecasting by the Japan Meteorological Agency. We find that our strategy beats the agency by exploiting its tendency of avoiding clear-cut forecasts.

  6. Bayesian Methods and Universal Darwinism

    OpenAIRE

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of...

  7. Portfolio Allocation for Bayesian Optimization

    OpenAIRE

    Brochu, Eric; Hoffman, Matthew W.; De Freitas, Nando

    2010-01-01

    Bayesian optimization with Gaussian processes has become an increasingly popular tool in the machine learning community. It is efficient and can be used when very little is known about the objective function, making it popular in expensive black-box optimization scenarios. It uses Bayesian methods to sample the objective efficiently using an acquisition function which incorporates the model's estimate of the objective and the uncertainty at any given point. However, there are several differen...

  8. Neuronanatomy, neurology and Bayesian networks

    OpenAIRE

    Bielza Lozoya, Maria Concepcion

    2014-01-01

    Bayesian networks are data mining models with clear semantics and a sound theoretical foundation. In this keynote talk we will pinpoint a number of neuroscience problems that can be addressed using Bayesian networks. In neuroanatomy, we will show computer simulation models of dendritic trees and classification of neuron types, both based on morphological features. In neurology, we will present the search for genetic biomarkers in Alzheimer's disease and the prediction of health-related qualit...

  9. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Roč. 50, č. 6 (2014), s. 849-868. ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  10. Bayesian multi-QTL mapping for growth curve parameters

    DEFF Research Database (Denmark)

    Heuven, Henri C M; Janss, Luc L G

    2010-01-01

    …segregating QTL using a Bayesian algorithm. Results: For each individual a logistic growth curve was fitted and three latent variables: asymptote (ASYM), inflection point (XMID) and scaling factor (SCAL) were estimated per individual. Applying an 'animal' model showed heritabilities of approximately 48% for...
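
    For reference, a common three-parameter logistic growth parameterization consistent with these latent-variable names (assumed here; the paper's exact formula is not shown in the record) is:

    $$y(t) = \frac{\mathrm{ASYM}}{1 + \exp\!\left(\frac{\mathrm{XMID} - t}{\mathrm{SCAL}}\right)}$$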

  11. Exploiting sensitivity analysis in Bayesian networks for consumer satisfaction study

    NARCIS (Netherlands)

    Jaronski, W.; Bloemer, J.M.M.; Vanhoof, K.; Wets, G.

    2004-01-01

    The paper presents an application of Bayesian network technology in an empirical customer satisfaction study. The findings of the study should provide insight into the importance of product/service dimensions in terms of the strength of their influence on overall satisfaction. To this end we apply a

  12. Handbook of neural computing applications

    Energy Technology Data Exchange (ETDEWEB)

    Parten, C.; Hartson, C.; Maren, A. (Tennessee Univ., Chattanooga, TN (USA)); Pap, R. (Accurate Automation Corp., Chattanooga, TN (US))

    1990-01-01

    Here is a comprehensive guide to architectures, processes, implementation methods, and applications of neural computing systems. Unlike purely theoretical books, this handbook shows how to apply neural processing systems to problems in neurophysiology, control theories, learning theory, pattern recognition, and similar areas. This book discusses neural network theories, and shows where they came from, how they can be used, and how they can be developed for future applications.

  13. Bayesian Interpretations of Heteroskedastic Consistent Covariance Estimators Using the Informed Bayesian Bootstrap

    OpenAIRE

    Dale Poirier

    2008-01-01

    This paper provides Bayesian rationalizations for White’s heteroskedastic consistent (HC) covariance estimator and various modifications of it. An informed Bayesian bootstrap provides the statistical framework.

  14. A bootstrapped neural network model applied to prediction of the biodegradation rate of reactive Black 5 dye - doi: 10.4025/actascitechnol.v35i3.16210

    Directory of Open Access Journals (Sweden)

    Kleber Rogério Moreira Prado

    2013-06-01

    Full Text Available This paper presents a biodegradation model of a dye used in the textile industry, based on a neural network supported by bootstrap resampling. The bootstrapped neural network is set up to generate estimates close to the results obtained experimentally when the chemical process is applied. Pseudomonas oleovorans was used in the biodegradation of reactive Black 5. Results show a brief comparison between the information estimated by the proposed approach and the experimental data, with a correlation coefficient between observed and predicted values above 0.99 for the biodegradation rate. Dye concentration and the solution's pH did not interfere with the biodegradation rate. A dye biodegradation value above 90% was achieved between 1.000 and 1.841 mL 10 mL-1 of microorganism concentration and between 1.000 and 2.000 g 100 mL-1 of glucose concentration within the experimental conditions under analysis.
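
    A minimal sketch of a bootstrapped neural-network ensemble of the kind described: each network is trained on a bootstrap resample of the data, and the ensemble mean and spread are reported. The data, input variables and network sizes are synthetic placeholder assumptions.

```python
# Bootstrap ensemble of small regression networks.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.random((150, 3))               # e.g. dye conc., pH, glucose conc. (assumed)
y = 0.9 - 0.2 * X[:, 0] + 0.1 * rng.standard_normal(150)

ensemble = []
for b in range(25):                    # 25 bootstrap replicates
    idx = rng.integers(0, len(X), len(X))   # resample rows with replacement
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=b)
    ensemble.append(net.fit(X[idx], y[idx]))

preds = np.stack([net.predict(X[:5]) for net in ensemble])
print(preds.mean(axis=0))              # ensemble estimate
print(preds.std(axis=0))               # bootstrap spread (uncertainty proxy)
```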

  15. Baltic sea algae analysis using Bayesian spatial statistics methods

    Directory of Open Access Journals (Sweden)

    Eglė Baltmiškytė

    2013-03-01

    Full Text Available Spatial statistics is the field of statistics dealing with the analysis of spatially distributed data. Recently, Bayesian methods have often been applied for statistical data analysis. A spatial data model for predicting algae quantity in the Baltic Sea is built and described in this article. Black Carrageen is the dependent variable, and depth, sand, pebble and boulders are the independent variables in the described model. Two models with different covariance functions (Gaussian and exponential) are built to determine the best model fit for algae quantity prediction. Unknown model parameters are estimated, and the Bayesian kriging predictive posterior distribution is computed in the OpenBUGS modeling environment using Bayesian spatial statistics methods.
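
    For reference, the two covariance functions compared here are commonly written as follows, with distance $h$, sill $\sigma^2$ and range parameter $\phi$ (standard forms assumed, not quoted from the article):

    $$C_{\mathrm{Gauss}}(h) = \sigma^2 \exp\!\left(-(h/\phi)^2\right), \qquad C_{\mathrm{exp}}(h) = \sigma^2 \exp\!\left(-h/\phi\right)$$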

  16. Bayesian Inference in the Modern Design of Experiments

    Science.gov (United States)

    DeLoach, Richard

    2008-01-01

    This paper provides an elementary tutorial overview of Bayesian inference and its potential for application in aerospace experimentation in general and wind tunnel testing in particular. Bayes Theorem is reviewed and examples are provided to illustrate how it can be applied to objectively revise prior knowledge by incorporating insights subsequently obtained from additional observations, resulting in new (posterior) knowledge that combines information from both sources. A logical merger of Bayesian methods and certain aspects of Response Surface Modeling is explored. Specific applications to wind tunnel testing, computational code validation, and instrumentation calibration are discussed.
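
    The starting point of the tutorial, Bayes' theorem for a parameter $\theta$ given data $y$, can be written as:

    $$p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{\int p(y \mid \theta)\, p(\theta)\, d\theta} \;\propto\; p(y \mid \theta)\, p(\theta)$$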

  17. FRUITS for Fish: intake estimates of aquatic foods using a novel Bayesian model

    Czech Academy of Sciences Publication Activity Database

    Fernandes, R.; Brabec, Marek; Millard, A.; Nadeau, J.M.; Grootes, P.M.

    University of Ghent, 2013. s. 110-111. [International Symposium 14C & Archaeology /14./. 08.04.2013-12.04.2013, Ghent] Institutional support: RVO:67985807 Keywords : Bayesian estimation * food consumption * archeology Subject RIV: BB - Applied Statistics, Operational Research

  18. Nonparametric Bayesian Classification

    CERN Document Server

    Coram, M A

    2002-01-01

    A Bayesian approach to the classification problem is proposed in which random partitions play a central role. It is argued that the partitioning approach has the capacity to take advantage of a variety of large-scale spatial structures, if they are present in the unknown regression function $f_0$. An idealized one-dimensional problem is considered in detail. The proposed nonparametric prior uses random split points to partition the unit interval into a random number of pieces. This prior is found to provide a consistent estimate of the regression function in the $L^p$ topology, for any $1 \leq p < \infty$, and for arbitrary measurable $f_0:[0,1] \rightarrow [0,1]$. A Markov chain Monte Carlo (MCMC) implementation is outlined and analyzed. Simulation experiments are conducted to show that the proposed estimate compares favorably with a variety of conventional estimators. A striking resemblance between the posterior mean estimate and the bagged CART estimate is noted and discussed. For higher dimensions, a ...

  19. BAT - Bayesian Analysis Toolkit

    International Nuclear Information System (INIS)

    One of the most vital steps in any data analysis is the statistical analysis and comparison with the prediction of a theoretical model. The many uncertainties associated with the theoretical model and the observed data require a robust statistical analysis tool. The Bayesian Analysis Toolkit (BAT) is a powerful statistical analysis software package based on Bayes' Theorem, developed to evaluate the posterior probability distribution for models and their parameters. It implements Markov Chain Monte Carlo to get the full posterior probability distribution that in turn provides a straightforward parameter estimation, limit setting and uncertainty propagation. Additional algorithms, such as Simulated Annealing, allow to evaluate the global mode of the posterior. BAT is developed in C++ and allows for a flexible definition of models. A set of predefined models covering standard statistical cases are also included in BAT. It has been interfaced to other commonly used software packages such as ROOT, Minuit, RooStats and CUBA. An overview of the software and its algorithms is provided along with several physics examples to cover a range of applications of this statistical tool. Future plans, new features and recent developments are briefly discussed.

  20. A comparison of back propagation and generalized regression neural networks performance in neutron spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico); Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    The process of unfolding the neutron energy spectrum has been the subject of research for many years. Monte Carlo, iterative methods, Bayesian theory and the principle of maximum entropy are some of the methods used. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Back Propagation Neural Networks (BPNN) have been applied with success in the neutron spectrometry and dosimetry domains; however, the structure and the learning parameters are factors that contribute in a significant way to the network's performance. In the artificial neural network domain, the Generalized Regression Neural Network (GRNN) is one of the simplest neural networks in terms of network architecture and learning algorithm. The learning is instantaneous, meaning it requires no time for training; in contrast to a BPNN, a GRNN is formed instantly with just a one-pass training on the development data. In the network development phase, the only hurdle is to tune the hyperparameter known as sigma, which governs the smoothness of the network. The aim of this work was to compare the performance of BPNN and GRNN in the solution of the neutron spectrometry problem. From the results obtained it can be observed that, despite very similar results, GRNN performs better than BPNN. (Author)
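
    A GRNN prediction is essentially a kernel-weighted average of the training targets with the single smoothing parameter sigma, which is why it needs no iterative training. A minimal sketch follows; the data shapes are placeholder assumptions, not the actual spectrometry inputs.

```python
# GRNN prediction as a Nadaraya-Watson kernel-weighted average.
import numpy as np

def grnn_predict(x, X_train, y_train, sigma):
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian kernel weights
    return np.sum(w * y_train) / np.sum(w)      # weighted average of targets

rng = np.random.default_rng(4)
X_train = rng.random((100, 7))     # e.g. detector count rates (assumed)
y_train = X_train.sum(axis=1)      # toy stand-in for spectrum amplitudes
x_new = rng.random(7)
print(grnn_predict(x_new, X_train, y_train, sigma=0.5))
```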

  1. Learning Functions and Approximate Bayesian Computation Design: ABCD

    Directory of Open Access Journals (Sweden)

    Markus Hainy

    2014-08-01

    Full Text Available A general approach to Bayesian learning revisits some classical results, which study which functionals on a prior distribution are expected to increase in a preposterior sense. The results are applied to information functionals of the Shannon type and to a class of functionals based on expected distance. A close connection is made between the latter and a metric embedding theory due to Schoenberg and others. For the Shannon type, there is a connection to majorization theory for distributions. A computational method is described to solve generalized optimal experimental design problems arising from the learning framework, based on a version of the well-known approximate Bayesian computation (ABC) method for carrying out the Bayesian analysis based on Monte Carlo simulation. Some simple examples are given.

  2. The subjectivity of scientists and the Bayesian approach

    CERN Document Server

    Press, James S

    2016-01-01

    "Press and Tanur argue that subjectivity has not only played a significant role in the advancement of science but that science will advance more rapidly if the modern methods of Bayesian statistical analysis replace some of the more classical twentieth-century methods." — SciTech Book News. "An insightful work." ― Choice. "Compilation of interesting popular problems … this book is fascinating." — Short Book Reviews, International Statistical Institute. Subjectivity ― including intuition, hunches, and personal beliefs ― has played a key role in scientific discovery. This intriguing book illustrates subjective influences on scientific progress with historical accounts and biographical sketches of more than a dozen luminaries, including Aristotle, Galileo, Newton, Darwin, Pasteur, Freud, Einstein, Margaret Mead, and others. The treatment also offers a detailed examination of the modern Bayesian approach to data analysis, with references to the Bayesian theoretical and applied literature. Suitable for...

  3. A Bayesian variable selection procedure for ranking overlapping gene sets

    DEFF Research Database (Denmark)

    Skarman, Axel; Mahdi Shariati, Mohammad; Janss, Luc;

    2012-01-01

    …described. In many cases, these methods test one gene set at a time, and therefore do not consider overlaps among the pathways. Here, we present a Bayesian variable selection method to prioritize gene sets that overcomes this limitation by considering all gene sets simultaneously. We applied Bayesian variable selection to differential expression to prioritize the molecular and genetic pathways involved in the responses to Escherichia coli infection in Danish Holstein cows. Results: We used a Bayesian variable selection method to prioritize Kyoto Encyclopedia of Genes and Genomes pathways. We used our data to study how the variable selection method was affected by overlaps among the pathways. In addition, we compared our approach to another that ignores the overlaps, and studied the differences in the prioritization. The variable selection method was robust to a change in prior probability...

  4. Prediction of road accidents: A Bayesian hierarchical approach

    DEFF Research Database (Denmark)

    Deublein, Markus; Schubert, Matthias; Adey, Bryan T.;

    2013-01-01

    …-lognormal regression analysis taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data, e.g. over-dispersion, and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian Probabilistic Networks in order to represent non-linearity between risk indicating and model response variables, as well as different types of uncertainties which might be present in the development of the specific models. Prior Bayesian Probabilistic Networks are first established by means of multivariate regression analysis of the observed frequencies of the model response variables, e.g. the occurrence of an accident, and observed values of the risk indicating variables, e.g. degree of road curvature. Subsequently, parameter learning is done using updating algorithms, to determine the posterior predictive probability distributions...

  5. Introducing two hyperparameters in Bayesian estimation of wave spectra

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2008-01-01

    An estimate of the on-site wave spectrum can be obtained from measured ship responses by use of Bayesian modelling, which means that the wave spectrum is found as the optimum solution from a probabilistic viewpoint. The paper describes the introduction of two hyperparameters into Bayesian modelling so that the prior information included in the modelling is based on two constraints: the wave spectrum must be smooth directional-wise as well as frequency-wise. Traditionally, only one hyperparameter has been used to control the amount of smoothing applied in both the frequency and directional ranges. From numerical simulations of stochastic response measurements, it is shown that the optimal hyperparameters, determined by use of ABIC (a Bayesian Information Criterion), correspond to the best estimate of the wave spectrum, which is not always the case when only one hyperparameter is included...

  6. Nonlinear and non-Gaussian Bayesian based handwriting beautification

    Science.gov (United States)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2013-03-01

    A framework is proposed in this paper to effectively and efficiently beautify handwriting by means of a novel nonlinear and non-Gaussian Bayesian algorithm. In the proposed framework, the format and size of the handwriting image are first normalized, and a typeface from the computer system is then applied to optimize the visual effect of the handwriting. Bayesian statistics is exploited to characterize the handwriting beautification process as a Bayesian dynamic model. The model parameters to translate, rotate and scale the typeface are controlled by the state equation, and the matching optimization between handwriting and transformed typeface is enforced by the measurement equation. Finally, the new typeface, which is transformed from the original one and attains the best nonlinear and non-Gaussian optimization, is the beautification result of the handwriting. Experimental results demonstrate that the proposed framework provides a creative handwriting beautification methodology with improved visual acceptance.
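
    The state/measurement structure described here is that of a generic nonlinear, non-Gaussian state-space model. In the notation assumed below (not taken from the paper), $x_t$ collects the translation, rotation and scale parameters of the typeface, $y_t$ measures the match between handwriting and transformed typeface, and $w_t$, $v_t$ are non-Gaussian noise terms:

    $$x_t = f(x_{t-1}) + w_t, \qquad y_t = h(x_t) + v_t$$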

  7. The Bayesian Modelling Of Inflation Rate In Romania

    Directory of Open Access Journals (Sweden)

    Mihaela Simionescu (Bratu

    2014-06-01

    Full Text Available Bayesian econometrics has seen a considerable increase in popularity in recent years, attracting the interest of various groups of researchers in the economic sciences and beyond, such as specialists in econometrics, commerce, industry, marketing, finance, micro-economics and macro-economics. The purpose of this research is to provide an introduction to the Bayesian approach applied in economics, starting with Bayes' theorem. For Bayesian linear regression models the estimation methodology is presented, and two empirical studies are carried out on data taken from the Romanian economy. Thus, an autoregressive model of order 2 and a multiple regression model were built for the index of consumer prices. The Gibbs sampling algorithm was used for estimation in the R software, computing the posterior means and standard deviations. The parameters' stability proved to be greater than in the case of estimates based on classical econometric methods.
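
    The first empirical model mentioned, an autoregression of order 2 for the consumer price index series $y_t$, has the standard form (assumed here; the paper's exact specification is not reproduced in the record), with Gibbs sampling drawing each parameter in turn from its full conditional distribution:

    $$y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma^2)$$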

  9. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert provided information, (2) it allows uncertainty and imprecision to be modelled distinguishably, and (3) it presents a framework for fusing expert provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle in employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes excessively large and computationally infeasible. In this paper, a novel approach for accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert…
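
    The paper's accelerator is specific to fuzzy Bayesian inference, but the core idea of screening candidate parameters with a cheap approximate posterior before paying for a full model run can be sketched with a generic delayed-acceptance Metropolis step; everything below (the "expensive" model, the surrogate, the tuning) is a made-up stand-in, not the authors' algorithm.

    ```python
    # Generic two-stage "screening" Metropolis sketch: a cheap surrogate
    # posterior filters proposals before the expensive model is run.
    import numpy as np

    rng = np.random.default_rng(1)

    def expensive_log_post(theta):
        # stand-in for a costly groundwater simulation; here just a Gaussian
        return -0.5 * (theta - 2.0) ** 2

    def surrogate_log_post(theta):
        # cheap approximation (slightly biased on purpose)
        return -0.5 * (theta - 1.9) ** 2

    theta, lp = 0.0, expensive_log_post(0.0)
    slp = surrogate_log_post(0.0)
    samples, full_runs = [], 0
    for _ in range(20000):
        prop = theta + rng.normal(scale=0.8)
        s_prop = surrogate_log_post(prop)
        # stage 1: screen with the surrogate
        if np.log(rng.uniform()) < s_prop - slp:
            # stage 2: correct with the expensive model (delayed acceptance)
            lp_prop = expensive_log_post(prop)
            full_runs += 1
            if np.log(rng.uniform()) < (lp_prop - lp) - (s_prop - slp):
                theta, lp, slp = prop, lp_prop, s_prop
        samples.append(theta)

    print("posterior mean ~", np.mean(samples[2000:]), "| expensive runs:", full_runs)
    ```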

  10. Algorithms and Complexity Results for Exact Bayesian Structure Learning

    CERN Document Server

    Ordyniak, Sebastian

    2012-01-01

    Bayesian structure learning is the NP-hard problem of discovering a Bayesian network that optimally represents a given set of training data. In this paper we study the computational worst-case complexity of exact Bayesian structure learning under graph theoretic restrictions on the super-structure. The super-structure (a concept introduced by Perrier, Imoto, and Miyano, JMLR 2008) is an undirected graph that contains as subgraphs the skeletons of solution networks. Our results apply to several variants of score-based Bayesian structure learning where the score of a network decomposes into local scores of its nodes. Results: We show that exact Bayesian structure learning can be carried out in non-uniform polynomial time if the super-structure has bounded treewidth and in linear time if in addition the super-structure has bounded maximum degree. We complement this with a number of hardness results. We show that both restrictions (treewidth and degree) are essential and cannot be dropped without losing uniform …
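
    To make the phrase "the score of a network decomposes into local scores of its nodes" concrete, here is a hedged toy: a BIC-style score for binary data computed family-by-family, so that comparing candidate DAGs reduces to summing per-node terms. The data and structures are synthetic.

    ```python
    # Toy illustration of a decomposable network score: the BIC of a discrete
    # Bayesian network is a sum of local scores, one per (node, parent-set) family.
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(2)
    # synthetic binary data for variables A, B, C generated from A -> B -> C
    n = 1000
    A = rng.integers(0, 2, n)
    B = (A ^ (rng.random(n) < 0.2)).astype(int)
    C = (B ^ (rng.random(n) < 0.2)).astype(int)
    data = {"A": A, "B": B, "C": C}

    def local_bic(child, parents):
        """log-likelihood of child's CPT minus (log n / 2) * #free parameters."""
        ll, x = 0.0, data[child]
        for combo in product([0, 1], repeat=len(parents)):
            mask = np.ones(n, dtype=bool)
            for p, v in zip(parents, combo):
                mask &= data[p] == v
            n_pa = mask.sum()
            if n_pa == 0:
                continue
            n1 = x[mask].sum()
            for cnt in (n1, n_pa - n1):
                if cnt > 0:
                    ll += cnt * np.log(cnt / n_pa)
        n_params = 2 ** len(parents)   # one free probability per parent combo
        return ll - 0.5 * np.log(n) * n_params

    def dag_score(dag):
        return sum(local_bic(child, parents) for child, parents in dag.items())

    chain = {"A": [], "B": ["A"], "C": ["B"]}   # true structure
    empty = {"A": [], "B": [], "C": []}
    print("BIC(chain):", dag_score(chain))
    print("BIC(empty):", dag_score(empty))      # should score worse
    ```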

  11. Historical Developments in Bayesian Econometrics after Cowles Foundation Monographs 10, 14

    OpenAIRE

    Basturk, Nalan; Cakmakli, Cem; Ceyhan, S. Pinar; Herman K. van Dijk

    2013-01-01

    After a brief description of the first Bayesian steps into econometrics in the 1960s and early 70s, publication and citation patterns are analyzed in ten major econometric journals until 2012. The results indicate that journals which contain both theoretical and applied papers, such as Journal of Econometrics, Journal of Business and Economic Statistics and Journal of Applied Econometrics, publish the large majority of high quality Bayesian econometric papers in contrast to theoretical journa...

  12. Adversarial life testing: A Bayesian negotiation model

    International Nuclear Information System (INIS)

    Life testing is a procedure intended for facilitating the process of making decisions in the context of industrial reliability. On the other hand, negotiation is a process of making joint decisions that has one of its main foundations in decision theory. A Bayesian sequential model of negotiation in the context of adversarial life testing is proposed. This model considers a general setting for which a manufacturer offers a product batch to a consumer. It is assumed that the reliability of the product is measured in terms of its lifetime. Furthermore, both the manufacturer and the consumer have to use their own information with respect to the quality of the product. Under these assumptions, two situations can be analyzed. For both of them, the main aim is to accept or reject the product batch based on the product reliability. This topic is related to a reliability demonstration problem. The procedure is applied to a class of distributions that belong to the exponential family. Thus, a unified framework addressing the main topics in the considered Bayesian model is presented. An illustrative example shows that the proposed technique can be easily applied in practice
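
    A minimal sketch of the kind of exponential-family computation such a model rests on, assuming exponential lifetimes with a Gamma prior on the failure rate; the observed lifetimes, prior settings and acceptance threshold are invented, and the negotiation layer of the paper is not modelled.

    ```python
    # Illustrative conjugate analysis for a life test with exponential lifetimes:
    # rate lambda ~ Gamma(a, b) prior, posterior after observing lifetimes, and a
    # simple accept/reject rule on predictive reliability. Numbers are made up.
    import numpy as np

    lifetimes = np.array([1.2, 0.8, 2.5, 1.9, 3.1])   # observed unit lifetimes (years)
    a0, b0 = 2.0, 2.0                                  # Gamma(a, b) prior on the failure rate

    # Conjugate update: lambda | data ~ Gamma(a0 + n, b0 + sum(t_i))
    a_n = a0 + len(lifetimes)
    b_n = b0 + lifetimes.sum()

    # Reliability at mission time t: R(t) = E[exp(-lambda * t)] under the
    # posterior, which for a Gamma posterior is (b_n / (b_n + t))^a_n.
    t_mission = 1.0
    reliability = (b_n / (b_n + t_mission)) ** a_n
    print(f"posterior mean failure rate: {a_n / b_n:.3f}")
    print(f"predictive reliability at t={t_mission}: {reliability:.3f}")

    # Accept the batch if predictive reliability exceeds an agreed threshold.
    print("decision:", "accept" if reliability > 0.5 else "reject")
    ```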

  13. A Fuzzy Quantum Neural Network and Its Application in Pattern Recognition

    Institute of Scientific and Technical Information of China (English)

    MIAO Fuyou; XIONG Yan; CHEN Huanhuan; WANG Xingfu

    2005-01-01

    This paper proposes a fuzzy quantum neural network model that combines a quantum neural network with fuzzy logic: fuzzy logic is applied to design the collapse rules of the quantum neural network, and the model is used to solve the character recognition problem. Theoretical analysis and experimental results show that the fuzzy quantum neural network achieves higher recognition accuracy than both the traditional neural network and the quantum neural network.

  14. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
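
    The analytical posterior the abstract refers to is an instance of the general linear-Gaussian result: for d = Gm + e with Gaussian prior and noise, the posterior mean and covariance have closed forms. The sketch below demonstrates that structure with a toy convolutional forward operator rather than the Zoeppritz-based one used in the paper.

    ```python
    # Minimal linear-Gaussian posterior with explicit mean and covariance for
    # d = G m + e: the same structure that gives the AVO inversion its
    # analytical posterior. G here is a toy convolution, not Zoeppritz-based.
    import numpy as np

    rng = np.random.default_rng(3)
    n_m, wavelet = 50, np.array([0.25, 0.5, 1.0, 0.5, 0.25])

    # Forward operator: convolution with a short wavelet
    G = np.zeros((n_m, n_m))
    for i in range(n_m):
        for j, w in enumerate(wavelet):
            k = i + j - len(wavelet) // 2
            if 0 <= k < n_m:
                G[i, k] = w

    m_true = np.zeros(n_m); m_true[[15, 30]] = [1.0, -0.7]   # two reflectors
    sigma_e, sigma_m = 0.1, 1.0
    d = G @ m_true + rng.normal(scale=sigma_e, size=n_m)

    # Prior m ~ N(0, sigma_m^2 I), noise e ~ N(0, sigma_e^2 I):
    # posterior covariance and mean in closed form.
    C_post = np.linalg.inv(G.T @ G / sigma_e**2 + np.eye(n_m) / sigma_m**2)
    m_post = C_post @ (G.T @ d / sigma_e**2)
    band = 1.96 * np.sqrt(np.diag(C_post))   # exact 95% prediction interval

    print("posterior mean at reflectors:", m_post[[15, 30]].round(2))
    print("interval half-widths there:", band[[15, 30]].round(2))
    ```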

  15. Bayesian networks inference algorithm to implement Dempster Shafer theory in reliability analysis

    International Nuclear Information System (INIS)

    This paper deals with the use of Bayesian networks to compute system reliability. The reliability analysis problem is described and the usual methods for quantitative reliability analysis are presented within a case study. Some drawbacks that justify the use of Bayesian networks are identified. The basic concepts of the Bayesian networks application to reliability analysis are introduced and a model to compute the reliability for the case study is presented. Dempster Shafer theory to treat epistemic uncertainty in reliability analysis is then discussed and its basic concepts that can be applied thanks to the Bayesian network inference algorithm are introduced. Finally, it is shown, with a numerical example, how Bayesian networks' inference algorithms compute complex system reliability and what the Dempster Shafer theory can provide to reliability analysis
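
    For intuition, the probability a Bayesian network inference engine computes for a small reliability model can be reproduced by brute-force enumeration over component states; the series-parallel system below is invented for illustration, and the Dempster Shafer layer discussed in the record is omitted.

    ```python
    # Tiny sketch of exact inference for system reliability: enumerate component
    # states and sum the probabilities of working configurations, as a Bayesian
    # network engine would do symbolically. The system is a made-up series
    # arrangement of pump P with a parallel pair of valves (V1, V2).
    from itertools import product

    p_work = {"P": 0.95, "V1": 0.90, "V2": 0.90}   # marginal component reliabilities

    def system_works(state):
        # series: pump AND (valve 1 OR valve 2)
        return state["P"] and (state["V1"] or state["V2"])

    reliability = 0.0
    for bits in product([True, False], repeat=3):
        state = dict(zip(p_work, bits))
        prob = 1.0
        for comp, up in state.items():
            prob *= p_work[comp] if up else 1 - p_work[comp]
        if system_works(state):
            reliability += prob

    print(f"system reliability: {reliability:.4f}")   # 0.95 * (1 - 0.1**2) = 0.9405
    ```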

  16. Multi-source Fuzzy Information Fusion Method Based on Bayesian Optimal Classifier%基于贝叶斯最优分类器的多源模糊信息融合方法

    Institute of Scientific and Technical Information of China (English)

    苏宏升

    2008-01-01

    To give the conventional Bayesian optimal classifier the ability to handle fuzzy information and to automate the reasoning process, a new Bayesian optimal classifier with embedded fuzzy information is proposed. It can not only process fuzzy information effectively, but also retain the learning properties of the Bayesian optimal classifier. In addition, following the evolution of fuzzy set theory, the vague set is also embedded into it to generate a vague Bayesian optimal classifier, which can simultaneously capture the twofold characteristics of fuzzy information from the positive and reverse directions. Further, a set pair Bayesian optimal classifier is proposed to account for the threefold characteristics of fuzzy information from the positive, reverse, and indeterminate sides. Finally, a knowledge-based artificial neural network (KBANN) is presented to realize automatic reasoning for the Bayesian optimal classifier; it not only reduces the computational cost of the Bayesian optimal classifier but also improves its classification learning quality.

  17. Flood quantile estimation at ungauged sites by Bayesian networks

    Science.gov (United States)

    Mediero, L.; Santillán, D.; Garrote, L.

    2012-04-01

    Estimating flood quantiles at a site for which no observed measurements are available is essential for water resources planning and management. Ungauged sites have no observations about the magnitude of floods, but some site and basin characteristics are known. The most common technique used is multiple regression analysis, which relates physical and climatic basin characteristics to flood quantiles. Regression equations are fitted from flood frequency data and basin characteristics at gauged sites. Regression equations are a rigid technique that assumes linear relationships between variables and cannot take measurement errors into account. In addition, the prediction intervals are estimated in a very simplistic way from the variance of the residuals in the estimated model. Bayesian networks are a probabilistic computational structure taken from the field of Artificial Intelligence, which have been widely and successfully applied to many scientific fields like medicine and informatics, but their application to the field of hydrology is recent. Bayesian networks infer the joint probability distribution of several related variables from observations through nodes, which represent random variables, and links, which represent causal dependencies between them. A Bayesian network is more flexible than regression equations, as it captures non-linear relationships between variables. In addition, the probabilistic nature of Bayesian networks allows the different sources of estimation uncertainty to be taken into account, as they give a probability distribution as the result. A homogeneous region in the Tagus Basin was selected as case study. A regression equation was fitted taking the basin area, the annual maximum 24-hour rainfall for a given recurrence interval and the mean height as explanatory variables. Flood quantiles at ungauged sites were estimated by Bayesian networks. Bayesian networks need to be learnt from a sufficiently large data set. As observational data are reduced, a…

  18. Neural Networks and Photometric Redshifts

    CERN Document Server

    Tagliaferri, Roberto; Longo, Giuseppe; Andreon, Stefano; Capozziello, Salvatore; Donalek, Ciro; Giordano, Gerardo

    2002-01-01

    We present a neural network based approach to the determination of photometric redshifts. The method was tested on the Sloan Digital Sky Survey Early Data Release (SDSS-EDR), reaching an accuracy comparable to and, in some cases, better than SED template fitting techniques. Different neural network architectures have been tested, and the combination of a Multi Layer Perceptron with 1 hidden layer (22 neurons) operated in a Bayesian framework, with a Self Organizing Map used to estimate the accuracy of the results, turned out to be the most effective. In the best experiment, the implemented network reached an accuracy of 0.020 (interquartile error) in the range 0…

  19. Bayesian inference tools for inverse problems

    Science.gov (United States)

    Mohammad-Djafari, Ali

    2013-08-01

    In this paper, first the basics of Bayesian inference with a parametric model of the data are presented. Then, the extensions needed when dealing with inverse problems are given, in particular for linear models such as deconvolution or image reconstruction in Computed Tomography (CT). The main point discussed then is the prior modeling of signals and images. A classification of these priors is presented, first into separable and Markovian models, and then into simple or hierarchical models with hidden variables. For practical applications, we also need to consider the estimation of the hyperparameters. Finally, we see that we have to infer simultaneously the unknowns, the hidden variables and the hyperparameters. Very often, the expression of this joint posterior law is too complex to be handled directly. Indeed, we can rarely obtain analytical solutions for point estimators such as the maximum a posteriori (MAP) or posterior mean (PM). Three main tools can then be used: Laplace approximation (LAP), Markov chain Monte Carlo (MCMC) and Bayesian variational approximations (BVA). To illustrate all these aspects, we consider a deconvolution problem where we know that the input signal is sparse and propose to use a Student-t prior for it. To handle the Bayesian computations with this model, we use the property that a Student-t distribution can be modelled via an infinite mixture of Gaussians, introducing hidden variables which are the variances. Then, the expression of the joint posterior of the input signal samples, the hidden variables (which are here the inverse variances of those samples) and the hyperparameters of the problem (for example the variance of the noise) is given. From this point, we present the joint maximization by alternate optimization and the three possible approximation methods. Finally, the proposed methodology is applied in different applications such as mass spectrometry, spectrum estimation of quasi-periodic biological signals and…
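
    The Student-t-as-Gaussian-mixture device mentioned above is easy to verify numerically: drawing a variance from the appropriate inverse-gamma and then a Gaussian reproduces Student-t samples. A quick check, with the degrees of freedom chosen arbitrarily:

    ```python
    # The Student-t prior can be written as an infinite mixture of Gaussians:
    # x | v ~ N(0, v) with v ~ Inverse-Gamma(nu/2, nu/2) gives x ~ t_nu.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    nu, n = 4.0, 200_000

    # hierarchical draw: variance from inverse-gamma, then Gaussian
    v = 1.0 / rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)   # v ~ IG(nu/2, nu/2)
    x_mix = rng.normal(scale=np.sqrt(v))

    # direct Student-t draws for comparison
    x_t = stats.t.rvs(df=nu, size=n, random_state=5)

    # matching quantiles indicate the two constructions agree
    for q in (0.5, 0.9, 0.99):
        print(q, np.quantile(x_mix, q).round(3), np.quantile(x_t, q).round(3))
    ```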

  20. Probability biases as Bayesian inference

    Directory of Open Access Journals (Sweden)

    Andre; C. R. Martins

    2006-11-01

    Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as it has been observed.

  1. Bayesian Methods and Universal Darwinism

    CERN Document Server

    Campbell, John

    2010-01-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: the Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of Science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of Natural Selection. Arguments are presented for an isomorphism between Bayesian Methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of Maximum Entropy states that system...

  2. Bayesian Inference for Functional Dynamics Exploring in fMRI Data

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2016-01-01

    Full Text Available This paper aims to review state-of-the-art Bayesian-inference-based methods applied to functional magnetic resonance imaging (fMRI) data. Particularly, we focus on one specific long-standing challenge in the computational modeling of fMRI datasets: how to effectively explore typical functional interactions from fMRI time series and the corresponding boundaries of temporal segments. Bayesian inference is a method of statistical inference which has been shown to be a powerful tool to encode dependence relationships among the variables with uncertainty. Here we provide an introduction to a group of Bayesian-inference-based methods for fMRI data analysis, which were designed to detect magnitude or functional connectivity change points and to infer their functional interaction patterns based on corresponding temporal boundaries. We also provide a comparison of three popular Bayesian models, that is, the Bayesian Magnitude Change Point Model (BMCPM), the Bayesian Connectivity Change Point Model (BCCPM), and the Dynamic Bayesian Variable Partition Model (DBVPM), and give a summary of their applications. We envision that more delicate Bayesian inference models will be emerging and play increasingly important roles in modeling brain functions in the years to come.
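
    In the spirit of the magnitude change-point models surveyed here, a single Gaussian mean-shift change point can be located by direct enumeration of the posterior over candidate boundaries; the sketch below uses synthetic data and a plug-in segment mean as a simplification of a full marginal likelihood.

    ```python
    # Minimal Bayesian change-point illustration: enumerate the posterior over
    # the location of a single mean shift in a Gaussian series (known variance).
    import numpy as np

    rng = np.random.default_rng(6)
    T, tau_true = 120, 70
    y = np.concatenate([rng.normal(0.0, 1.0, tau_true),
                        rng.normal(1.5, 1.0, T - tau_true)])

    def seg_loglik(seg):
        # plug-in score: Gaussian log-likelihood at the segment's ML mean
        return -0.5 * np.sum((seg - seg.mean()) ** 2)

    log_post = np.full(T, -np.inf)
    for tau in range(5, T - 5):              # uniform prior over interior points
        log_post[tau] = seg_loglik(y[:tau]) + seg_loglik(y[tau:])

    log_post -= log_post.max()
    post = np.exp(log_post); post /= post.sum()
    print("MAP change point:", post.argmax(), "| P within +-3:",
          post[post.argmax()-3:post.argmax()+4].sum().round(3))
    ```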

  3. Bayesian test and Kuhn's paradigm

    Institute of Scientific and Technical Information of China (English)

    Chen Xiaoping

    2006-01-01

    Kuhn's theory of paradigm reveals a pattern of scientific progress, in which normal science alternates with scientific revolution. But Kuhn considerably underrated the role of scientific testing in this pattern, because he focused all his attention on the hypothetico-deductive schema instead of the Bayesian schema. This paper employs the Bayesian schema to re-examine Kuhn's theory of paradigm, to uncover its logical and rational components, and to illustrate the tensional structure of logic and belief, rationality and irrationality, in the process of scientific revolution.

  4. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  5. Sequential estimation of neural models by Bayesian filtering

    OpenAIRE

    Closas Gómez, Pau

    2014-01-01

    One of the most difficult challenges in neuroscience is understanding the connectivity of the brain. This problem can be approached from several perspectives; here we focus on the local phenomena occurring in a single neuron. The ultimate goal is thus to understand the dynamics of neurons and how their interconnection with other neurons affects their state. Observations of membrane potential traces constitute the main source of information for deriving mathematical models of a neuron…

  6. Bayesian Model Averaging for Propensity Score Analysis

    Science.gov (United States)

    Kaplan, David; Chen, Jianshen

    2013-01-01

    The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…

  7. Bayesian networks and food security - An introduction

    NARCIS (Netherlands)

    Stein, A.

    2004-01-01

    This paper gives an introduction to Bayesian networks. Networks are defined and put into a Bayesian context. Directed acyclical graphs play a crucial role here. Two simple examples from food security are addressed. Possible uses of Bayesian networks for implementation and further use in decision support…

  8. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    C. Dimitrakakis

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more st…

  9. What are artificial neural networks?

    DEFF Research Database (Denmark)

    Krogh, Anders

    2008-01-01

    Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction. How do they work and what might they be good for?

  10. Macroscopic hotspots identification: A Bayesian spatio-temporal interaction approach.

    Science.gov (United States)

    Dong, Ni; Huang, Helai; Lee, Jaeyoung; Gao, Mingyun; Abdel-Aty, Mohamed

    2016-07-01

    This study proposes a Bayesian spatio-temporal interaction approach for hotspot identification by applying the full Bayesian (FB) technique in the context of macroscopic safety analysis. Compared with the emerging Bayesian spatial and temporal approach, the Bayesian spatio-temporal interaction model contributes to a detailed understanding of differential trends through analyzing and mapping probabilities of area-specific crash trends as differing from the mean trend and highlights specific locations where crash occurrence is deteriorating or improving over time. With traffic analysis zones (TAZs) crash data collected in Florida, an empirical analysis was conducted to evaluate the following three approaches for hotspot identification: FB ranking using a Poisson-lognormal (PLN) model, FB ranking using a Bayesian spatial and temporal (B-ST) model and FB ranking using a Bayesian spatio-temporal interaction (B-ST-I) model. The results show that (a) the models accounting for space-time effects perform better in safety ranking than does the PLN model, and (b) the FB approach using the B-ST-I model significantly outperforms the B-ST approach in correctly identifying hotspots by explicitly accounting for the space-time variation in addition to the stable spatial/temporal patterns of crash occurrence. In practice, the B-ST-I approach plays key roles in addressing two issues: (a) how the identified hotspots have evolved over time and (b) the identification of areas that, whilst not yet hotspots, show a tendency to become hotspots. Finally, it can provide guidance to policy decision makers to efficiently improve zonal-level safety. PMID:27110645
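
    A stripped-down version of full-Bayes hotspot ranking, assuming a simple Poisson-Gamma model per zone instead of the PLN or spatio-temporal models evaluated in the paper: zones are ranked by the posterior probability that their rate exceeds the regional mean. Counts and exposures are invented.

    ```python
    # Simplified full-Bayes hotspot ranking with a Poisson-Gamma model per zone:
    # rank by posterior P(zone rate > regional mean rate).
    import numpy as np

    rng = np.random.default_rng(7)
    crashes = np.array([12, 30, 7, 22, 45, 15])     # observed crashes per zone
    exposure = np.array([10, 12, 9, 11, 14, 10])    # e.g. exposure units per zone

    a0, b0 = 1.0, 0.5                                # Gamma prior on zone rates
    regional_rate = crashes.sum() / exposure.sum()

    # Posterior per zone: lambda_i ~ Gamma(a0 + y_i, b0 + e_i); simulate and rank
    draws = rng.gamma(a0 + crashes[:, None], 1.0 / (b0 + exposure[:, None]),
                      size=(len(crashes), 20_000))
    p_exceed = (draws > regional_rate).mean(axis=1)

    for i in np.argsort(-p_exceed):
        print(f"zone {i}: P(rate > regional mean) = {p_exceed[i]:.3f}")
    ```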

  11. A Bayesian approach to earthquake source studies

    Science.gov (United States)

    Minson, Sarah

    Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also

  12. Bayesian biclustering of gene expression data

    Directory of Open Access Journals (Sweden)

    Liu Jun S

    2008-03-01

    Full Text Available Abstract Background Biclustering of gene expression data searches for local patterns of gene expression. A bicluster (or a two-way cluster) is defined as a set of genes whose expression profiles are mutually similar within a subset of experimental conditions/samples. Although several biclustering algorithms have been studied, few are based on rigorous statistical models. Results We developed a Bayesian biclustering model (BBC) and implemented a Gibbs sampling procedure for its statistical inference. We showed that the Bayesian biclustering model can correctly identify multiple clusters of gene expression data. Using simulated data both from the model and with realistic characteristics, we demonstrated that the BBC algorithm outperforms other methods in both robustness and accuracy. We also showed that the model is stable for two normalization methods, the interquartile range normalization and the smallest quartile range normalization. Applying the BBC algorithm to the yeast expression data, we observed that the majority of the biclusters we found are supported by significant biological evidence, such as enrichment of gene functions and transcription factor binding sites in the corresponding promoter sequences. Conclusions The BBC algorithm is shown to be a robust model-based biclustering method that can discover biologically significant gene-condition clusters in microarray data. The BBC model can easily handle missing data via Monte Carlo imputation and has the potential to be extended to integrated study of gene transcription networks.

  13. Refining gene signatures: a Bayesian approach

    Directory of Open Access Journals (Sweden)

    Labbe Aurélie

    2009-12-01

    Full Text Available Abstract Background In high density arrays, the identification of relevant genes for disease classification is complicated by not only the curse of dimensionality but also the highly correlated nature of the array data. In this paper, we are interested in the question of how many and which genes should be selected for a disease class prediction. Our work consists of a Bayesian supervised statistical learning approach to refine gene signatures with a regularization which penalizes the correlation between the variables selected. Results Our simulation results show that we can most often recover the correct subset of genes that predict the class as compared to other methods, even when accuracy and subset size remain the same. On real microarray datasets, we show that our approach can refine gene signatures to obtain either the same or better predictive performance than other existing methods with a smaller number of genes. Conclusions Our novel Bayesian approach includes a prior which penalizes highly correlated features in model selection and is able to extract key genes in the highly correlated context of microarray data. The methodology in the paper is described in the context of microarray data, but can be applied to any array data (such as microRNA, for example) as a first step towards predictive modeling of cancer pathways. A user-friendly software implementation of the method is available.

  14. Water Turbidity Modelling During Water Treatment Processes Using Artificial Neural Networks

    OpenAIRE

    Rak, Adam

    2013-01-01

    Artificial neural networks are increasingly being used in the research and analysis of unit and technical processes related to water treatment. An artificial neural network model was created to predict the turbidity of treated water in a newly operating water treatment system for surface and retention water at the Sosnówka reservoir, Poland. To model water turbidity during the water treatment process for a selected system, a flexible Bayesian model of neural networks, Gaussian processes a...

  15. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis Linda

    2006-01-01

    …configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for the salt and pepper noise. The inference in the model is discussed…

  16. Bayesian image restoration, using configurations

    DEFF Research Database (Denmark)

    Thorarinsdottir, Thordis

    …configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt and pepper noise. The inference in the model is discussed in…

  17. Bayesian Agglomerative Clustering with Coalescents

    OpenAIRE

    Teh, Yee Whye; Daumé III, Hal; Roy, Daniel

    2009-01-01

    We introduce a new Bayesian model for hierarchical clustering based on a prior over trees called Kingman's coalescent. We develop novel greedy and sequential Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We show experimentally the superiority of our algorithms over others, and demonstrate our approach in document clustering and phylolinguistics.

  18. Bayesian Networks and Influence Diagrams

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders Læsø

    Bayesian Networks and Influence Diagrams: A Guide to Construction and Analysis, Second Edition, provides a comprehensive guide for practitioners who wish to understand, construct, and analyze intelligent systems for decision support based on probabilistic networks. This new edition contains six new…

  19. Bayesian Analysis Made Simple An Excel GUI for WinBUGS

    CERN Document Server

    Woodward, Philip

    2011-01-01

    From simple NLMs to complex GLMMs, this book describes how to use the GUI for WinBUGS - BugsXLA - an Excel add-in written by the author that allows a range of Bayesian models to be easily specified. With case studies throughout, the text shows how to routinely apply even the more complex aspects of model specification, such as GLMMs, outlier robust models, random effects Emax models, auto-regressive errors, and Bayesian variable selection. It provides brief, up-to-date discussions of current issues in the practical application of Bayesian methods. The author also explains how to obtain free software…

  20. ESTIMATE OF THE HYPSOMETRIC RELATIONSHIP WITH NONLINEAR MODELS FITTED BY EMPIRICAL BAYESIAN METHODS

    Directory of Open Access Journals (Sweden)

    Monica Fabiana Bento Moreira

    2015-09-01

    Full Text Available In this paper we propose a Bayesian approach to solve the inference problem with restrictions on parameters, regarding nonlinear models used to represent the hypsometric relationship in clones of Eucalyptus sp. The Bayesian estimates are calculated using the Markov chain Monte Carlo (MCMC) method. The proposed method was applied to different groups of actual data, from which two were selected to show the results. These results were compared to those achieved by the least squares method, highlighting the superiority of the Bayesian approach, since this approach always generates biologically consistent results for the hypsometric relationship.

  1. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult to obtain many measurements or it would be very costly, thus the problem of the lack of data can be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and improving some characteristics such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.
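
    At the level of a single grid cell, a Bayesian merge of this kind reduces to precision-weighted fusion of Gaussian height estimates, with the smoothness-based prior entering as one more source; the numbers below are invented and the full spatial model is not reproduced.

    ```python
    # Core of a Bayesian DSM merge for one cell: treat the prior (e.g. a smooth
    # roof assumption) and each sensor-derived height as Gaussian sources and
    # fuse them by precision weighting. Values are invented for illustration.
    import numpy as np

    # (height estimate [m], standard deviation [m]) per source for one grid cell
    prior = (42.0, 2.0)        # a priori height, e.g. from a smoothness model
    worldview = (43.1, 0.8)
    pleiades = (42.5, 1.1)

    def fuse(estimates):
        mu = np.array([m for m, _ in estimates])
        prec = np.array([1.0 / s**2 for _, s in estimates])  # precision = 1/var
        post_var = 1.0 / prec.sum()
        post_mean = post_var * (prec * mu).sum()
        return post_mean, np.sqrt(post_var)

    mean, sd = fuse([prior, worldview, pleiades])
    print(f"fused height: {mean:.2f} m +- {sd:.2f} m")
    ```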

  2. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    …nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing grid nodes and grid-node artifacts, and it accommodates a wide range of grid distortions including large-scale warping, varying row/column spacing, as well as nonrigid random fluctuations of the grid nodes. The methodology is demonstrated in two case studies concerning (1) localization of DNA…

  3. Bayesian analysis of rare events

    Science.gov (United States)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
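
    The "reinterpretation of classical rejection sampling" at the heart of BUS can be shown in its naive form: accept prior draws with probability L(x)/c and the survivors are posterior samples. The toy below uses a one-dimensional Gaussian problem; BUS itself replaces this loop with FORM, importance sampling or Subset Simulation.

    ```python
    # The rejection-sampling view behind BUS, in its plainest form: draw from
    # the prior, accept with probability L(x)/c, and the accepted draws follow
    # the posterior.
    import numpy as np

    rng = np.random.default_rng(8)

    def likelihood(x):
        # toy likelihood: one noisy observation y_obs = 1.0 of x, sigma = 0.5
        return np.exp(-0.5 * ((1.0 - x) / 0.5) ** 2)

    c = 1.0                       # constant with c >= max L(x)
    posterior = []
    for _ in range(200_000):
        x = rng.normal()          # prior draw, x ~ N(0, 1)
        if rng.uniform() < likelihood(x) / c:
            posterior.append(x)

    post = np.array(posterior)
    print("acceptance rate:", len(post) / 200_000)
    print("posterior mean ~", post.mean().round(3))   # analytic value: 0.8
    ```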

  4. Bayesian methods for measures of agreement

    CERN Document Server

    Broemeling, Lyle D

    2009-01-01

    Using WinBUGS to implement Bayesian inferences of estimation and testing hypotheses, Bayesian Methods for Measures of Agreement presents useful methods for the design and analysis of agreement studies. It focuses on agreement among the various players in the diagnostic process.The author employs a Bayesian approach to provide statistical inferences based on various models of intra- and interrater agreement. He presents many examples that illustrate the Bayesian mode of reasoning and explains elements of a Bayesian application, including prior information, experimental information, the likelihood function, posterior distribution, and predictive distribution. The appendices provide the necessary theoretical foundation to understand Bayesian methods as well as introduce the fundamentals of programming and executing the WinBUGS software.Taking a Bayesian approach to inference, this hands-on book explores numerous measures of agreement, including the Kappa coefficient, the G coefficient, and intraclass correlation...
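
    One inference of the kind the book treats, sketched here without WinBUGS: a posterior for Cohen's kappa between two raters obtained by placing a Dirichlet posterior on the 2x2 table of joint ratings. The counts and the uniform prior are illustrative choices, not an example from the book.

    ```python
    # Posterior for Cohen's kappa via a Dirichlet posterior on the 2x2 table
    # of two raters' joint ratings (uniform Dirichlet(1,1,1,1) prior).
    import numpy as np

    rng = np.random.default_rng(9)
    counts = np.array([40, 5, 8, 47])   # cells: (yes,yes), (yes,no), (no,yes), (no,no)

    draws = rng.dirichlet(counts + 1, size=50_000)   # posterior cell probabilities
    p11, p10, p01, p00 = draws.T
    po = p11 + p00                                   # observed agreement
    r1, c1 = p11 + p10, p11 + p01                    # marginal "yes" rates
    pe = r1 * c1 + (1 - r1) * (1 - c1)               # chance agreement
    kappa = (po - pe) / (1 - pe)

    lo, hi = np.percentile(kappa, [2.5, 97.5])
    print(f"posterior median kappa: {np.median(kappa):.3f}, "
          f"95% CrI: ({lo:.3f}, {hi:.3f})")
    ```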

  5. Stochastic model updating utilizing Bayesian approach and Gaussian process model

    Science.gov (United States)

    Wan, Hua-Ping; Ren, Wei-Xin

    2016-03-01

    Stochastic model updating (SMU) has been increasingly applied in quantifying structural parameter uncertainty from responses variability. SMU for parameter uncertainty quantification refers to the problem of inverse uncertainty quantification (IUQ), which is a nontrivial task. Inverse problem solved with optimization usually brings about the issues of gradient computation, ill-conditionedness, and non-uniqueness. Moreover, the uncertainty present in response makes the inverse problem more complicated. In this study, Bayesian approach is adopted in SMU for parameter uncertainty quantification. The prominent strength of Bayesian approach for IUQ problem is that it solves IUQ problem in a straightforward manner, which enables it to avoid the previous issues. However, when applied to engineering structures that are modeled with a high-resolution finite element model (FEM), Bayesian approach is still computationally expensive since the commonly used Markov chain Monte Carlo (MCMC) method for Bayesian inference requires a large number of model runs to guarantee the convergence. Herein we reduce computational cost in two aspects. On the one hand, the fast-running Gaussian process model (GPM) is utilized to approximate the time-consuming high-resolution FEM. On the other hand, the advanced MCMC method using delayed rejection adaptive Metropolis (DRAM) algorithm that incorporates local adaptive strategy with global adaptive strategy is employed for Bayesian inference. In addition, we propose the use of the powerful variance-based global sensitivity analysis (GSA) in parameter selection to exclude non-influential parameters from calibration parameters, which yields a reduced-order model and thus further alleviates the computational burden. A simulated aluminum plate and a real-world complex cable-stayed pedestrian bridge are presented to illustrate the proposed framework and verify its feasibility.

  6. Bayesian inference of BWR model parameters by Markov chain Monte Carlo

    International Nuclear Information System (INIS)

    In this paper, the Markov chain Monte Carlo approach to Bayesian inference is applied for estimating the parameters of a reduced-order model of the dynamics of a boiling water reactor system. A Bayesian updating strategy is devised to progressively refine the estimates, as newly measured data become available. Finally, the technique is used for detecting parameter changes during the system lifetime, e.g. due to component degradation
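
    The progressive-refinement strategy can be miniaturized with a conjugate normal model, where each batch's posterior becomes the next prior and a crude discrepancy check flags a possible parameter change; this schematic stands in for, and is much simpler than, the paper's MCMC treatment of the reactor model.

    ```python
    # Sequential Bayesian updating in miniature: conjugate normal model whose
    # posterior after each batch becomes the prior for the next, with a crude
    # drift check to flag a possible parameter change (simulated degradation).
    import numpy as np

    rng = np.random.default_rng(10)

    mu, var = 0.0, 10.0            # prior on the (scalar) model parameter
    obs_var = 0.5                  # known measurement variance
    true_param = 1.0

    for batch in range(6):
        if batch == 4:
            true_param = 1.6       # simulated component degradation
        y = rng.normal(true_param, np.sqrt(obs_var), size=20)

        # conjugate update: combine prior precision with batch precision
        prec = 1.0 / var + len(y) / obs_var
        mu_new = (mu / var + y.sum() / obs_var) / prec
        var_new = 1.0 / prec

        # crude change detection: is the batch mean far from current belief?
        z = abs(y.mean() - mu) / np.sqrt(var + obs_var / len(y))
        flag = "  <-- possible parameter change" if z > 3 else ""
        mu, var = mu_new, var_new
        print(f"batch {batch}: posterior mean {mu:.3f} (sd {np.sqrt(var):.3f}){flag}")
    ```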

  7. AutoClass@IJM: a powerful tool for Bayesian classification of heterogeneous data in biology

    OpenAIRE

    Achcar, F.; Camadro, J.-M.; Mestivier, D.

    2009-01-01

    Recently, several theoretical and applied studies have shown that unsupervised Bayesian classification systems are of particular relevance for biological studies. However, these systems have not yet fully reached the biological community mainly because there are few freely available dedicated computer programs, and Bayesian clustering algorithms are known to be time consuming, which limits their usefulness when using personal computers. To overcome these limitations, we developed AutoClass@IJ...

  8. Involving Stakeholders in Building Integrated Fisheries Models Using Bayesian Methods

    Science.gov (United States)

    Haapasaari, Päivi; Mäntyniemi, Samu; Kuikka, Sakari

    2013-06-01

    A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem of the herring fishery and elucidate what kind of causalities the different views involve. The paper combines these two tasks to assess the suitability of the methodological choices to participatory modeling in terms of both a modeling tool and participation mode. The paper also assesses the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology provides a flexible tool that can be adapted to different kinds of needs and challenges of participatory modeling. The ability of the approach to deal with small data sets makes it cost-effective in participatory contexts. However, the BMA methodology used in modeling the biological uncertainties is so complex that it needs further development before it can be introduced to wider use in participatory contexts.

  9. Bayesian mixture models for Poisson astronomical images

    CERN Document Server

    Guglielmetti, Fabrizia; Dose, Volker

    2012-01-01

    Astronomical images in the Poisson regime are typically characterized by a spatially varying cosmic background, large variety of source morphologies and intensities, data incompleteness, steep gradients in the data, and few photon counts per pixel. The Background-Source separation technique is developed with the aim to detect faint and extended sources in astronomical images characterized by Poisson statistics. The technique employs Bayesian mixture models to reliably detect the background as well as the sources with their respective uncertainties. Background estimation and source detection is achieved in a single algorithm. A large variety of source morphologies is revealed. The technique is applied in the X-ray part of the electromagnetic spectrum on ROSAT and Chandra data sets and it is under a feasibility study for the forthcoming eROSITA mission.

  10. Applying dynamic Bayesian networks in transliteration detection and generation

    NARCIS (Netherlands)

    Nabende, Peter

    2011-01-01

    Peter Nabende's doctoral research concerns methods that can improve automatic translation programs. He investigated two systems for generating and comparing transcriptions: a DBN model (Dynamic Bayesian Networks) in which Pair Hidden Markov models are implemented, and a DBN model…

  11. Applied Bayesian statistical studies in biology and medicine

    CERN Document Server

    D’Amore, G; Scalfari, F

    2004-01-01

    It was written on another occasion that "It is apparent that the scientific culture, if one means production of scientific papers, is growing exponentially, and chaotically, in almost every field of investigation". The biomedical sciences sensu lato and mathematical statistics are no exceptions. One might say then, and with good reason, that another collection of biostatistical papers would only add to the overflow and cause even more confusion. Nevertheless, this book may be greeted with some interest if we state that most of the papers in it are the result of a collaboration between biologists and statisticians, and partly the product of the Summer School "Statistical Inference in Human Biology", which reaches its 10th edition in 2003 (information about the School can be obtained at the Web site http://www2.stat.unibo.it/eventi/Sito%20scuola/index.htm). This is rather important. Indeed, it is common experience - and not only in Italy - that encounters between statisticians and researchers are spora…

  12. A hierarchical Bayesian framework for nonlinearities identification in gravitational wave detector outputs

    International Nuclear Information System (INIS)

    In this paper, a hierarchical Bayesian learning scheme for autoregressive neural network models is shown which overcomes the problem of identifying the separate linear and nonlinear parts modelled by the network. We show how the identification can be carried out by defining suitable priors on the parameter space which help the learning algorithms to avoid undesired parameter configurations. Some applications to synthetic and real world experimental data are shown to validate the proposed methodology

  13. A hierarchical Bayesian framework for nonlinearities identification in gravitational wave detector outputs

    Energy Technology Data Exchange (ETDEWEB)

    Acernese, F [Dipartimento di Scienze Fisiche, Università di Napoli 'Federico II', Naples (Italy); INFN, sez. Napoli, Naples (Italy); Barone, F [Dipartimento di Scienze Farmaceutiche, Università di Salerno, Fisciano, SA (Italy); De Rosa, R [Dipartimento di Scienze Fisiche, Università di Napoli 'Federico II', Naples (Italy); INFN, sez. Napoli, Naples (Italy); Eleuteri, A [Dipartimento di Scienze Fisiche, Università di Napoli 'Federico II', Naples (Italy); INFN, sez. Napoli, Naples (Italy); Milano, L [Dipartimento di Scienze Fisiche, Università di Napoli 'Federico II', Naples (Italy); INFN, sez. Napoli, Naples (Italy); Tagliaferri, R [Dipartimento di Matematica ed Informatica, Università di Salerno, Baronissi, SA (Italy)

    2005-09-21

    In this paper, a hierarchical Bayesian learning scheme for autoregressive neural network models is shown which overcomes the problem of identifying the separate linear and nonlinear parts modelled by the network. We show how the identification can be carried out by defining suitable priors on the parameter space which help the learning algorithms to avoid undesired parameter configurations. Some applications to synthetic and real world experimental data are shown to validate the proposed methodology.

  14. Neural Networks

    International Nuclear Information System (INIS)

    Physicists use large detectors to measure particles created in high-energy collisions at particle accelerators. These detectors typically produce signals indicating either where ionization occurs along the path of the particle, or where energy is deposited by the particle. The data produced by these signals is fed into pattern recognition programs to try to identify what particles were produced, and to measure the energy and direction of these particles. Many techniques are used in this pattern recognition software. One technique, neural networks, is particularly suitable for identifying what type of particle caused a set of energy deposits. Neural networks can derive meaning from complicated or imprecise data, extract patterns, and detect trends that are too complex to be noticed by either humans or other computer-related processes. To assist in the advancement of this technology, physicists use a tool kit to experiment with several neural network techniques. The goal of this research is to interface a neural network tool kit into Java Analysis Studio (JAS3), an application that allows data to be analyzed from any experiment. As the final result, a physicist will have the ability to train, test, and implement a neural network with the desired output while using JAS3 to analyze the results. Before an implementation of a neural network can take place, a firm understanding of what a neural network is and how it works is beneficial. A neural network is an artificial representation of the human brain that tries to simulate the learning process [5]. The word artificial in that definition refers to computer programs that use calculations during the learning process. In short, a neural network learns by representative examples. Perhaps the easiest way to describe the way neural networks learn is to explain how the human brain functions. The human brain contains billions of neural cells that are responsible for processing…

  15. Neural fields theory and applications

    CERN Document Server

    Graben, Peter; Potthast, Roland; Wright, James

    2014-01-01

    With this book, the editors present the first comprehensive collection in neural field studies, authored by leading scientists in the field - among them are two of the founding-fathers of neural field theory. Up to now, research results in the field have been disseminated across a number of distinct journals from mathematics, computational neuroscience, biophysics, cognitive science and others. Starting with a tutorial for novices in neural field studies, the book comprises chapters on emergent patterns, their phase transitions and evolution, on stochastic approaches, cortical development, cognition, robotics and computation, large-scale numerical simulations, the coupling of neural fields to the electroencephalogram and phase transitions in anesthesia. The intended readership are students and scientists in applied mathematics, theoretical physics, theoretical biology, and computational neuroscience. Neural field theory and its applications have a long-standing tradition in the mathematical and computational ...

  16. Selection of input parameters to model direct solar irradiance by using artificial neural networks

    International Nuclear Information System (INIS)

    A very important factor in the assessment of solar energy resources is the availability of direct irradiance data of high quality. However, this component of solar radiation is seldom measured and thus must be estimated from data of global solar irradiance, which is registered in most radiometric stations. In recent years, artificial neural networks (ANN) have been shown to be a powerful tool for mapping complex and non-linear relationships. In this work, the Bayesian framework for ANNs, known as the automatic relevance determination (ARD) method, was employed to obtain the relative relevance of a large set of atmospheric and radiometric variables used for estimating hourly direct solar irradiance. In addition, we analysed the viability of this novel technique applied to select the optimum input parameters to the neural network. For that, a multi-layer feedforward perceptron was trained on these data. The results reflect the relative importance of the selected inputs. Clearness index and relative air mass were found to be the most relevant input variables to the neural network, as expected, proving the reliability of the ARD method. Moreover, we show that this novel methodology can be used in unfavourable conditions, in terms of a limited amount of available data, producing successful results.
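
    A hedged, minimal way to reproduce the flavour of this input-selection study is scikit-learn's ARDRegression, which learns a per-feature precision so that irrelevant inputs are driven toward zero weight; the three synthetic inputs below merely play the roles of the radiometric variables.

    ```python
    # ARD in a few lines via scikit-learn's ARDRegression: irrelevant inputs
    # get large precision (small weights) and can be pruned.
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(11)
    n = 500
    clearness = rng.uniform(0.1, 0.9, n)       # relevant by construction
    air_mass = rng.uniform(1.0, 3.0, n)        # relevant by construction
    humidity = rng.uniform(20, 90, n)          # irrelevant by construction
    X = np.column_stack([clearness, air_mass, humidity])

    # toy "direct irradiance" depending only on the first two inputs
    y = 900 * clearness - 120 * (air_mass - 1) + rng.normal(0, 15, n)

    model = ARDRegression().fit(X, y)
    for name, w, lam in zip(["clearness", "air_mass", "humidity"],
                            model.coef_, model.lambda_):
        print(f"{name:10s} weight={w:8.2f}  ARD precision={lam:10.2e}")
    ```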

  17. Diffusion Estimation Of State-Space Models: Bayesian Formulation

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil

    Reims: IEEE, 2014. ISBN 978-1-4799-3693-9. [The 24th IEEE International Workshop on Machine Learning for Signal Processing (MLSP2014). Reims (FR), 21.09.2014-24.09.2014] R&D Projects: GA ČR(CZ) GP14-06678P Keywords : distributed estimation * state-space models * Bayesian estimation Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/AS/dedecius-0431804.pdf

  18. Performance and prediction: Bayesian modelling of fallible choice in chess

    OpenAIRE

    Haworth, Guy McCrossan; Regan, Ken; Di Fatta, Giuseppe

    2010-01-01

    Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration ...

  19. Bayesian modeling and prediction of solar particles flux

    International Nuclear Information System (INIS)

    An autoregression model was developed based on the Bayesian approach. Considering the non-homogeneity of the solar wind, the idea of combining the pure autoregressive properties of the model with expert knowledge, based on the similar behaviour of various phenomena related to the flux properties, was applied. Examples of such situations include the hardening of the X-ray spectrum, which is often followed by coronal mass ejection and a significant increase in particle flux intensity.

  20. Bayesian modeling and prediction of solar particles flux

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Kalová, J.

    18/56/, 7/8 (2010), s. 228-230. ISSN 1210-7085 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : mathematical models * solar activity * solar flares * solar flux * solar particles Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2010/AS/dedecius-bayesian modeling and prediction of solar particles flux.pdf

  1. Learning genetic epistasis using Bayesian network scoring criteria

    OpenAIRE

    Barmada M Michael; Neapolitan Richard E; Jiang Xia; Visweswaran Shyam

    2011-01-01

    Abstract Background Gene-gene epistatic interactions likely play an important role in the genetic basis of many common diseases. Recently, machine-learning and data mining methods have been developed for learning epistatic relationships from data. A well-known combinatorial method that has been successfully applied for detecting epistasis is Multifactor Dimensionality Reduction (MDR). Jiang et al. created a combinatorial epistasis learning method called BNMBL to learn Bayesian network (BN) ep...

  2. Bayesian Models of Learning and Reasoning with Relations

    OpenAIRE

    Chen, Dawn

    2014-01-01

    How do humans acquire relational concepts such as larger, which are essential for analogical inference and other forms of high-level reasoning? Are they necessarily innate, or can they be learned from non-relational inputs? Using comparative relations as a model domain, we show that structured relations can be learned from unstructured inputs of realistic complexity, applying bottom-up Bayesian learning mechanisms that make minimal assumptions about innate representations. First, we introduce...

  3. A Bayesian framework for knowledge attribution: Evidence from semantic integration

    OpenAIRE

    Powell, D; Horne, Z; Pinillos, NÁ; Holyoak, KJ

    2015-01-01

    We propose a Bayesian framework for the attribution of knowledge, and apply this framework to generate novel predictions about knowledge attribution for different types of "Gettier cases", in which an agent is led to a justified true belief yet has made erroneous assumptions. We tested these predictions using a paradigm based on semantic integration. We coded the frequencies with which participants falsely recalled the word "thought" as "knew" (or a near synonym), yieldin...

  4. Neural network design with combined backpropagation and creeping random search learning algorithms applied to the determination of retained austenite in TRIP steels

    Energy Technology Data Exchange (ETDEWEB)

    Toda-Caraballo, I.; Garcia-Mateo, C.; Capdevila, C.

    2010-07-01

    At the beginning of the 1990s, industrial interest in TRIP steels led to a significant increase in research and application in this field. In this work, the flexibility of neural networks for modelling complex properties is used to tackle the problem of determining the retained austenite content in TRIP steel. By applying a combination of two learning algorithms (backpropagation and creeping random search) to the neural network, a model has been created that enables the prediction of retained austenite in low-Si/low-Al multiphase steels as a function of processing parameters. (Author). 34 refs.

  5. Perceptual decision making: Drift-diffusion model is equivalent to a Bayesian model

    Directory of Open Access Journals (Sweden)

    Sebastian Bitzer

    2014-02-01

    Full Text Available Behavioural data obtained with perceptual decision making experiments are typically analysed with the drift-diffusion model. This parsimonious model accumulates noisy pieces of evidence towards a decision bound to explain the accuracy and reaction times of subjects. Recently, Bayesian models have been proposed to explain how the brain extracts information from noisy input as typically presented in perceptual decision making tasks. It has long been known that the drift-diffusion model is tightly linked with such functional Bayesian models but the precise relationship of the two mechanisms was never made explicit. Using a Bayesian model, we derived the equations which relate parameter values between these models. In practice we show that this equivalence is useful when fitting multi-subject data. We further show that the Bayesian model suggests different decision variables which all predict equal responses and discuss how these may be discriminated based on neural correlates of accumulated evidence. In addition, we discuss extensions to the Bayesian model which would be difficult to derive for the drift-diffusion model. We suggest that these and other extensions may be highly useful for deriving new experiments which test novel hypotheses.
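    The equivalence has a simple computational core: for two equal-variance Gaussian hypotheses, the accumulated log posterior odds is a random walk with drift, i.e. exactly a drift-diffusion process. The sketch below illustrates this under assumed means, noise level and decision bound; it is not the authors' full parameter mapping.

```python
# Sketch: accumulating the log posterior odds for two Gaussian hypotheses
# yields a random walk with drift -- the core of the DDM/Bayes equivalence.
# Means, noise level and bound are illustrative choices, not fitted values.
import numpy as np

rng = np.random.default_rng(1)
mu_a, mu_b, sigma = 0.1, -0.1, 1.0   # hypotheses H_a, H_b; sensory noise
bound = 3.0                          # decision bound on the log odds

x = rng.normal(mu_a, sigma, size=10_000)          # evidence stream (H_a true)
# Log likelihood ratio of each sample; for equal-variance Gaussians this is
# linear in x, so the summed log odds is exactly a drifting random walk.
llr = (x * (mu_a - mu_b) + 0.5 * (mu_b**2 - mu_a**2)) / sigma**2
log_odds = np.cumsum(llr)

t_hit = np.argmax(np.abs(log_odds) >= bound)      # first bound crossing
choice = "H_a" if log_odds[t_hit] > 0 else "H_b"
print(f"decision {choice} after {t_hit + 1} samples")
```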

  6. Bayesian approach to rough set

    CERN Document Server

    Marwala, Tshilidzi

    2007-01-01

    This paper proposes an approach to training rough set models using a Bayesian framework trained with the Markov chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. Markov chain Monte Carlo sampling is conducted by sampling in the rough set granule space, and the Metropolis algorithm is used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results obtained show that the proposed approach is able to achieve an average accuracy of 58%, with the accuracy varying up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as the linguistic rules describing how the demographic parameters drive the risk of HIV.
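    The Metropolis acceptance step mentioned above is generic; the sketch below shows it on a toy one-dimensional target (the rough-set granule space and the rule-count prior are beyond this illustration).

```python
# Sketch of the Metropolis acceptance criterion on a toy 1-D target.
import math
import random

def log_target(x):
    return -0.5 * x * x          # unnormalised log density (standard normal)

x, samples = 0.0, []
for _ in range(10_000):
    proposal = x + random.gauss(0.0, 1.0)          # symmetric random-walk proposal
    log_alpha = log_target(proposal) - log_target(x)
    if math.log(random.random()) < log_alpha:      # Metropolis criterion
        x = proposal                               # accept
    samples.append(x)                              # else keep the current state

print(sum(samples) / len(samples))                 # should be near 0
```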

  7. Attention in a bayesian framework

    DEFF Research Database (Denmark)

    Whiteley, Louise Emma; Sahani, Maneesh

    2012-01-01

    The behavioral phenomena of sensory attention are thought to reflect the allocation of a limited processing resource, but there is little consensus on the nature of the resource or why it should be limited. Here we argue that a fundamental bottleneck emerges naturally within Bayesian models of... ...include both selective phenomena, where attention is invoked by cues that point to particular stimuli, and integrative phenomena, where attention is invoked dynamically by endogenous processing. However, most previous Bayesian accounts of attention have focused on describing relatively simple experimental settings, where cues shape expectations about a small number of upcoming stimuli and thus convey "prior" information about clearly defined objects. While operationally consistent with the experiments it seeks to describe, this view of attention as prior seems to miss many essential elements of both its...

  8. Bayesian Sampling using Condition Indicators

    DEFF Research Database (Denmark)

    Faber, Michael H.; Sørensen, John Dalsgaard

    2002-01-01

    The problem of quality control of components is considered for the special case where the acceptable failure rate is low, the test costs are high and where it may be difficult or impossible to test the condition of interest directly. Based on the classical control theory and the concept of condition indicators introduced by Benjamin and Cornell (1970), a Bayesian approach to quality control is formulated. The formulation is then extended to the case where the quality control is based on sampling of indirect information about the condition of the components, i.e. condition indicators. This allows for a Bayesian formulation of the indicators whereby the experience and expertise of the inspection personnel may be fully utilized and consistently updated as frequentistic information is collected. The approach is illustrated on an example considering a concrete structure subject to corrosion.

  9. BAYESIAN IMAGE RESTORATION, USING CONFIGURATIONS

    Directory of Open Access Journals (Sweden)

    Thordis Linda Thorarinsdottir

    2011-05-01

    Full Text Available In this paper, we develop a Bayesian procedure for removing noise from images that can be viewed as noisy realisations of random sets in the plane. The procedure utilises recent advances in configuration theory for noise-free random sets, where the probabilities of observing the different boundary configurations are expressed in terms of the mean normal measure of the random set. These probabilities are used as prior probabilities in a Bayesian image restoration approach. Estimation of the remaining parameters in the model is outlined for salt-and-pepper noise. The inference in the model is discussed in detail for 3 × 3 and 5 × 5 configurations, and examples of the performance of the procedure are given.

  10. Bayesian Seismology of the Sun

    CERN Document Server

    Gruberbauer, Michael

    2013-01-01

    We perform a Bayesian grid-based analysis of the solar l=0,1,2 and 3 p modes obtained via BiSON in order to deliver the first Bayesian asteroseismic analysis of the solar composition problem. We do not find decisive evidence to prefer either of the contending chemical compositions, although the revised solar abundances (AGSS09) are more probable in general. We do find indications for systematic problems in standard stellar evolution models, unrelated to the consequences of inadequate modelling of the outer layers on the higher-order modes. The seismic observables are best fit by solar models that are several hundred million years older than the meteoritic age of the Sun. Similarly, meteoritic age calibrated models do not adequately reproduce the observed seismic observables. Our results suggest that these problems will affect any asteroseismic inference that relies on a calibration to the Sun.

  11. Bayesian priors for transiting planets

    CERN Document Server

    Kipping, David M

    2016-01-01

    As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...

  12. Bayesian Inference for Radio Observations

    CERN Document Server

    Lochner, Michelle; Zwart, Jonathan T L; Smirnov, Oleg; Bassett, Bruce A; Oozeer, Nadeem; Kunz, Martin

    2015-01-01

    (Abridged) New telescopes like the Square Kilometre Array (SKA) will push into a new sensitivity regime and expose systematics, such as direction-dependent effects, that could previously be ignored. Current methods for handling such systematics rely on alternating best estimates of instrumental calibration and models of the underlying sky, which can lead to inaccurate uncertainty estimates and biased results because such methods ignore any correlations between parameters. These deconvolution algorithms produce a single image that is assumed to be a true representation of the sky, when in fact it is just one realisation of an infinite ensemble of images compatible with the noise in the data. In contrast, here we report a Bayesian formalism that simultaneously infers both systematics and science. Our technique, Bayesian Inference for Radio Observations (BIRO), determines all parameters directly from the raw data, bypassing image-making entirely, by sampling from the joint posterior probability distribution. Thi...

  13. Gas metal arc welding of butt joint with varying gap width based on neural networks

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2005-01-01

    This paper describes the application of the neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network. The Levenberg-Marquardt algorithm for non-linear least squares has been used with the back-propagation algorithm for training the network, while a Bayesian regularization technique has been successfully applied for minimizing the risk of inexpedient over-training. Finally, a predictive closed-loop control strategy based on a so-called single-neuron self...

  14. A Novel Method for Nonlinear Time Series Forecasting of Time-Delay Neural Network

    Institute of Scientific and Technical Information of China (English)

    JIANG Weijin; XU Yuhui

    2006-01-01

    Based on the idea of nonlinear prediction via phase space reconstruction, this paper presents a time-delay BP neural network model whose generalization capability is improved by Bayesian regularization. Furthermore, the model is applied to forecast the import and export trade of one industry. The results show that the improved model has excellent generalization capabilities: it not only learned the historical curve, but efficiently predicted the trend of the business. Comparing with common evaluations of forecasts, we conclude that nonlinear forecasting can not only focus on data combination and precision improvement, it can also vividly reflect the nonlinear characteristic of the forecasting system. While analyzing the forecasting precision of the model, we give a model judgment by calculating the nonlinear characteristic values of the combined series and the original series, proving that the forecasting model reasonably captures the dynamic characteristic of the nonlinear system that produced the original series.
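    A minimal sketch of the two ingredients named above, under simplifying assumptions: a time-delay embedding of the series, and a regularized (MAP, Gaussian-prior) linear readout standing in for the Bayesian-regularized BP network; the synthetic series is invented.

```python
# Time-delay embedding plus a Bayesian-regularised (Gaussian-prior / L2, i.e.
# MAP) readout. The record trains a BP network; a linear readout keeps the
# sketch short, and the series below is synthetic.
import numpy as np

def delay_embed(series, d):
    """Rows [x_t, ..., x_{t+d-1}] paired with the next-step target x_{t+d}."""
    n = len(series)
    X = np.column_stack([series[i:n - d + i] for i in range(d)])
    return X, series[d:]

rng = np.random.default_rng(2)
t = np.arange(400)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)

X, y = delay_embed(series, d=5)
alpha = 1e-2                      # prior precision acts as the regularisation weight
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
print("one-step training MSE:", np.mean((X @ w - y) ** 2))
```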

  15. Bayesian segmentation of hyperspectral images

    CERN Document Server

    Mohammadpour, Adel; Mohammad-Djafari, Ali

    2007-01-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.
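    The central MCMC move in such a model is a Gibbs update of the common labels under a Potts prior and a per-class likelihood. The sketch below runs one sweep on a single synthetic band with fixed Gaussian class parameters, a strong simplification of the hyperspectral HMM setting.

```python
# One Gibbs sweep over Potts-prior labels with a Gaussian likelihood per
# class. A single image band and fixed class means/variance are simplifying
# assumptions relative to the record's hyperspectral model.
import numpy as np

rng = np.random.default_rng(3)
H, W, K, beta = 32, 32, 3, 1.0            # grid size, classes, Potts coupling
means, sigma = np.array([-1.0, 0.0, 1.0]), 0.5
labels = rng.integers(K, size=(H, W))
image = means[labels] + sigma * rng.normal(size=(H, W))

for i in range(H):
    for j in range(W):
        # count neighbour agreement for each candidate class
        neigh = [labels[i + di, j + dj]
                 for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= i + di < H and 0 <= j + dj < W]
        agree = np.array([sum(n == k for n in neigh) for k in range(K)])
        loglik = -0.5 * ((image[i, j] - means) / sigma) ** 2
        logp = loglik + beta * agree       # likelihood x Potts prior (log scale)
        p = np.exp(logp - logp.max())
        labels[i, j] = rng.choice(K, p=p / p.sum())

print("label counts after one sweep:", np.bincount(labels.ravel(), minlength=K))
```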

  16. Bayesian segmentation of hyperspectral images

    Science.gov (United States)

    Mohammadpour, Adel; Féron, Olivier; Mohammad-Djafari, Ali

    2004-11-01

    In this paper we consider the problem of joint segmentation of hyperspectral images in the Bayesian framework. The proposed approach is based on a Hidden Markov Modeling (HMM) of the images with common segmentation, or equivalently with common hidden classification label variables which is modeled by a Potts Markov Random Field. We introduce an appropriate Markov Chain Monte Carlo (MCMC) algorithm to implement the method and show some simulation results.

  17. Bayesian Stable Isotope Mixing Models

    OpenAIRE

    Parnell, Andrew C.; Phillips, Donald L.; Bearhop, Stuart; Semmens, Brice X.; Ward, Eric J.; Moore, Jonathan W.; Andrew L Jackson; Inger, Richard

    2012-01-01

    In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixture. The most widely used application is quantifying the diet of organisms based on the food sources they have been observed to consume. At the centre of the multivariate statistical model we propose is a compositional m...

  18. Bayesian Network--Response Regression

    OpenAIRE

    WANG, LU; Durante, Daniele; Dunson, David B.

    2016-01-01

    There is an increasing interest in learning how human brain networks vary with continuous traits (e.g., personality, cognitive abilities, neurological disorders), but flexible procedures to accomplish this goal are limited. We develop a Bayesian semiparametric model, which combines low-rank factorizations and Gaussian process priors to allow flexible shifts of the conditional expectation for a network-valued random variable across the feature space, while including subject-specific random eff...

  19. Bayesian estimation of turbulent motion

    OpenAIRE

    Héas, P.; Herzet, C.; Mémin, E.; Heitz, D.; P. D. Mininni

    2013-01-01

    International audience Based on physical laws describing the multi-scale structure of turbulent flows, this article proposes a regularizer for fluid motion estimation from an image sequence. Regularization is achieved by imposing some scale invariance property between histograms of motion increments computed at different scales. By reformulating this problem from a Bayesian perspective, an algorithm is proposed to jointly estimate motion, regularization hyper-parameters, and to select the ...

  20. Skill Rating by Bayesian Inference

    OpenAIRE

    Di Fatta, Giuseppe; Haworth, Guy McCrossan; Regan, Kenneth W.

    2009-01-01

    Systems Engineering often involves computer modelling the behaviour of proposed systems and their components. Where a component is human, fallibility must be modelled by a stochastic agent. The identification of a model of decision-making over quantifiable options is investigated using the game-domain of Chess. Bayesian methods are used to infer the distribution of players’ skill levels from the moves they play rather than from their competitive results. The approach is used on large sets of ...

  1. Cover Tree Bayesian Reinforcement Learning

    OpenAIRE

    Tziortziotis, Nikolaos; Dimitrakakis, Christos; Blekas, Konstantinos

    2013-01-01

    This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration po...

  2. Bayesian kinematic earthquake source models

    Science.gov (United States)

    Minson, S. E.; Simons, M.; Beck, J. L.; Genrich, J. F.; Galetzka, J. E.; Chowdhury, F.; Owen, S. E.; Webb, F.; Comte, D.; Glass, B.; Leiva, C.; Ortega, F. H.

    2009-12-01

    Most coseismic, postseismic, and interseismic slip models are based on highly regularized optimizations which yield one solution which satisfies the data given a particular set of regularizing constraints. This regularization hampers our ability to answer basic questions such as whether seismic and aseismic slip overlap or instead rupture separate portions of the fault zone. We present a Bayesian methodology for generating kinematic earthquake source models with a focus on large subduction zone earthquakes. Unlike classical optimization approaches, Bayesian techniques sample the ensemble of all acceptable models presented as an a posteriori probability density function (PDF), and thus we can explore the entire solution space to determine, for example, which model parameters are well determined and which are not, or what is the likelihood that two slip distributions overlap in space. Bayesian sampling also has the advantage that all a priori knowledge of the source process can be used to mold the a posteriori ensemble of models. Although very powerful, Bayesian methods have up to now been of limited use in geophysical modeling because they are only computationally feasible for problems with a small number of free parameters due to what is called the "curse of dimensionality." However, our methodology can successfully sample solution spaces of many hundreds of parameters, which is sufficient to produce finite fault kinematic earthquake models. Our algorithm is a modification of the tempered Markov chain Monte Carlo (tempered MCMC or TMCMC) method. In our algorithm, we sample a "tempered" a posteriori PDF using many MCMC simulations running in parallel and evolutionary computation in which models which fit the data poorly are preferentially eliminated in favor of models which better predict the data. We present results for both synthetic test problems as well as for the 2007 Mw 7.8 Tocopilla, Chile earthquake, the latter of which is constrained by InSAR, local high

  3. Bayesian Kernel Mixtures for Counts

    OpenAIRE

    Canale, Antonio; David B Dunson

    2011-01-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviatio...

  4. Inference in hybrid Bayesian networks

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2009-01-01

    ...and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability....

  5. Quantile pyramids for Bayesian nonparametrics

    OpenAIRE

    2009-01-01

    Pólya trees fix partitions and use random probabilities in order to construct random probability measures. With quantile pyramids we instead fix probabilities and use random partitions. For nonparametric Bayesian inference we use a prior which supports piecewise linear quantile functions, based on the need to work with a finite set of partitions, yet we show that the limiting version of the prior exists. We also discuss and investigate an alternative model based on the so-called substitut...

  6. Space Shuttle RTOS Bayesian Network

    Science.gov (United States)

    Morris, A. Terry; Beling, Peter A.

    2001-01-01

    With shrinking budgets and the requirements to increase reliability and operational life of the existing orbiter fleet, NASA has proposed various upgrades for the Space Shuttle that are consistent with national space policy. The cockpit avionics upgrade (CAU), a high priority item, has been selected as the next major upgrade. The primary functions of cockpit avionics include flight control, guidance and navigation, communication, and orbiter landing support. Secondary functions include the provision of operational services for non-avionics systems such as data handling for the payloads and caution and warning alerts to the crew. Recently, a process to select the optimal commercial-off-the-shelf (COTS) real-time operating system (RTOS) for the CAU was conducted by United Space Alliance (USA) Corporation, which is a joint venture between Boeing and Lockheed Martin, the prime contractor for space shuttle operations. In order to independently assess the RTOS selection, NASA has used the Bayesian network-based scoring methodology described in this paper. Our two-stage methodology addresses the issue of RTOS acceptability by incorporating functional, performance and non-functional software measures related to reliability, interoperability, certifiability, efficiency, correctness, business, legal, product history, cost and life cycle. The first stage of the methodology involves obtaining scores for the various measures using a Bayesian network. The Bayesian network incorporates the causal relationships between the various and often competing measures of interest while also assisting the inherently complex decision analysis process with its ability to reason under uncertainty. The structure and selection of prior probabilities for the network is extracted from experts in the field of real-time operating systems. Scores for the various measures are computed using Bayesian probability. In the second stage, multi-criteria trade-off analyses are performed between the scores
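    A toy example of scoring with a discrete Bayesian network, using inference by enumeration; the three-node structure and all probabilities below are invented, since the NASA network and its expert-elicited priors are not reproduced here.

```python
# Toy discrete Bayesian network scored by inference by enumeration. The
# structure and every probability are invented for illustration only:
# P(acceptable RTOS) depends on reliability and certifiability.
p_rel = {True: 0.7, False: 0.3}                      # P(reliable)
p_cert = {True: 0.6, False: 0.4}                     # P(certifiable)
p_acc = {(True, True): 0.95, (True, False): 0.5,     # P(acceptable | rel, cert)
         (False, True): 0.4, (False, False): 0.05}

# P(acceptable | certifiable=True): sum out the unobserved reliability node.
num = sum(p_rel[r] * p_cert[True] * p_acc[(r, True)] for r in (True, False))
den = p_cert[True]
print("score P(acceptable | certifiable) =", num / den)   # 0.785 here
```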

  7. Bayesian analysis of contingency tables

    OpenAIRE

    Gómez Villegas, Miguel A.; González Pérez, Beatriz

    2005-01-01

    The display of the data by means of contingency tables is used in different approaches to statistical inference, for example, to broach the test of homogeneity of independent multinomial distributions. We develop a Bayesian procedure to test simple null hypotheses versus bilateral alternatives in contingency tables. Given independent samples of two binomial distributions and taking a mixed prior distribution, we calculate the posterior probability that the proportion of successes in the first...

  8. Bayesian Credit Ratings (new version)

    OpenAIRE

    Paola Cerchiello; Paolo Giudici

    2013-01-01

    In this contribution we aim at improving ordinal variable selection in the context of causal models. In this regard, we propose an approach that provides a formal inferential tool to compare the explanatory power of each covariate, and, therefore, to select an effective model for classification purposes. Our proposed model is Bayesian nonparametric, and, thus, keeps the amount of model specification to a minimum. We consider the case in which information from the covariates is at the ordinal ...

  9. Bayesian second law of thermodynamics

    Science.gov (United States)

    Bartolotta, Anthony; Carroll, Sean M.; Leichenauer, Stefan; Pollack, Jason

    2016-08-01

    We derive a generalization of the second law of thermodynamics that uses Bayesian updates to explicitly incorporate the effects of a measurement of a system at some point in its evolution. By allowing an experimenter's knowledge to be updated by the measurement process, this formulation resolves a tension between the fact that the entropy of a statistical system can sometimes fluctuate downward and the information-theoretic idea that knowledge of a stochastically evolving system degrades over time. The Bayesian second law can be written as ΔH(ρ_m, ρ) + ⟨Q⟩_{F|m} ≥ 0, where ΔH(ρ_m, ρ) is the change in the cross entropy between the original phase-space probability distribution ρ and the measurement-updated distribution ρ_m, and ⟨Q⟩_{F|m} is the expectation value of a generalized heat flow out of the system. We also derive refined versions of the second law that bound the entropy increase from below by a non-negative number, as well as Bayesian versions of integral fluctuation theorems. We demonstrate the formalism using simple analytical and numerical examples.

  10. Quantum Inference on Bayesian Networks

    Science.gov (United States)

    Yoder, Theodore; Low, Guang Hao; Chuang, Isaac

    2014-03-01

    Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speedup sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time O(nmP(e)^{-1}), depending critically on P(e), the probability the evidence might occur in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking O(n2^m P(e)^{-1/2}) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
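    The classical baseline is easy to make concrete: rejection sampling draws from the network and discards samples whose evidence does not match, so the expected cost per accepted sample scales as 1/P(e). The sketch below shows this on an invented two-variable network; the quantum speedup itself is not simulated.

```python
# Classical rejection sampling from a toy Bayesian network, illustrating the
# 1/P(e) cost: samples are discarded until the evidence matches. The
# two-variable network is an invented example.
import random

def sample_network():
    a = random.random() < 0.1                    # P(A=1) = 0.1
    b = random.random() < (0.8 if a else 0.05)   # P(B=1 | A)
    return a, b

def posterior_sample(evidence_b=True):
    trials = 0
    while True:
        trials += 1
        a, b = sample_network()
        if b == evidence_b:              # keep only samples matching the evidence
            return a, trials

draws = [posterior_sample() for _ in range(2000)]
print("P(A=1 | B=1) ~", sum(a for a, _ in draws) / len(draws))
print("mean trials per sample ~", sum(t for _, t in draws) / len(draws))
# P(B=1) = 0.1*0.8 + 0.9*0.05 = 0.125, so roughly 8 trials per accepted sample.
```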

  11. Neural Networks in Control Applications

    DEFF Research Database (Denmark)

    Sørensen, O.

    The intention of this report is to make a systematic examination of the possibilities of applying neural networks in those technical areas which are familiar to a control engineer. In other words, the potential of neural networks in control applications is given higher priority than a detailed study of the networks themselves. With this end in view the following restrictions have been made: - Amongst numerous neural network structures, only the Multi Layer Perceptron (a feed-forward network) is applied. - Amongst numerous training algorithms, only four algorithms are examined, all in a recursive form (sample updating). The simplest is the Back-Propagation Error Algorithm, and the most complex is the recursive Prediction Error Method using a Gauss-Newton search direction. - Over-fitting is often considered to be a serious problem when training neural networks. This problem is specifically...

  12. Bayesian detection of causal rare variants under posterior consistency.

    KAUST Repository

    Liang, Faming

    2013-07-26

    Identification of causal rare variants that are associated with complex traits poses a central challenge in genome-wide association studies. However, most current research focuses only on testing the global association of whether the rare variants in a given genomic region are collectively associated with the trait. Although some recent work, e.g., the Bayesian risk index method, has tried to address this problem, it is unclear whether the causal rare variants can be consistently identified by such methods in the small-n-large-P situation. We develop a new Bayesian method, the so-called Bayesian Rare Variant Detector (BRVD), to tackle this problem. The new method simultaneously addresses two issues: (i) (Global association test) Are there any of the variants associated with the disease, and (ii) (Causal variant detection) Which variants, if any, are driving the association. The BRVD ensures that the causal rare variants are consistently identified in the small-n-large-P situation by imposing appropriate prior distributions on the model and model-specific parameters. The numerical results indicate that the BRVD is more powerful for testing the global association than existing methods, such as the combined multivariate and collapsing test, weighted sum statistic test, RARECOVER, sequence kernel association test, and Bayesian risk index, and also more powerful for identification of causal rare variants than the Bayesian risk index method. The BRVD has also been successfully applied to the Early-Onset Myocardial Infarction (EOMI) Exome Sequence Data. It identified a few causal rare variants that have been verified in the literature.

  13. Bayesian detection of causal rare variants under posterior consistency.

    Directory of Open Access Journals (Sweden)

    Faming Liang

    Full Text Available Identification of causal rare variants that are associated with complex traits poses a central challenge in genome-wide association studies. However, most current research focuses only on testing the global association of whether the rare variants in a given genomic region are collectively associated with the trait. Although some recent work, e.g., the Bayesian risk index method, has tried to address this problem, it is unclear whether the causal rare variants can be consistently identified by such methods in the small-n-large-P situation. We develop a new Bayesian method, the so-called Bayesian Rare Variant Detector (BRVD), to tackle this problem. The new method simultaneously addresses two issues: (i) (Global association test) Are there any of the variants associated with the disease, and (ii) (Causal variant detection) Which variants, if any, are driving the association. The BRVD ensures that the causal rare variants are consistently identified in the small-n-large-P situation by imposing appropriate prior distributions on the model and model-specific parameters. The numerical results indicate that the BRVD is more powerful for testing the global association than existing methods, such as the combined multivariate and collapsing test, weighted sum statistic test, RARECOVER, sequence kernel association test, and Bayesian risk index, and also more powerful for identification of causal rare variants than the Bayesian risk index method. The BRVD has also been successfully applied to the Early-Onset Myocardial Infarction (EOMI) Exome Sequence Data. It identified a few causal rare variants that have been verified in the literature.

  14. Bayesian modeling of ChIP-chip data using latent variables.

    KAUST Repository

    Wu, Mingqi

    2009-10-26

    BACKGROUND: The ChIP-chip technology has been used in a wide range of biomedical studies, such as identification of human transcription factor binding sites, investigation of DNA methylation, and investigation of histone modifications in animals and plants. Various methods have been proposed in the literature for analyzing ChIP-chip data, such as the sliding window methods, the hidden Markov model-based methods, and Bayesian methods. Although, due to their integrated consideration of uncertainty in the models and model parameters, Bayesian methods can potentially work better than the other two classes of methods, the existing Bayesian methods do not perform satisfactorily. They usually require multiple replicates or some extra experimental information to parametrize the model, and long CPU time due to the involvement of MCMC simulations. RESULTS: In this paper, we propose a Bayesian latent model for the ChIP-chip data. The new model mainly differs from the existing Bayesian models, such as the joint deconvolution model, the hierarchical gamma mixture model, and the Bayesian hierarchical model, in two respects. Firstly, it works on the difference between the averaged treatment and control samples. This enables the use of a simple model for the data, which avoids the probe-specific effect and the sample (control/treatment) effect. As a consequence, this enables an efficient MCMC simulation of the posterior distribution of the model, and also makes the model more robust to outliers. Secondly, it models the neighboring dependence of probes by introducing a latent indicator vector. A truncated Poisson prior distribution is assumed for the latent indicator variable, with the rationale being justified at length. CONCLUSION: The Bayesian latent method is successfully applied to real and ten simulated datasets, with comparisons with some of the existing Bayesian methods, hidden Markov model methods, and sliding window methods. The numerical results indicate that the

  15. Bayesian modeling of ChIP-chip data using latent variables

    Directory of Open Access Journals (Sweden)

    Tian Yanan

    2009-10-01

    Full Text Available Abstract Background The ChIP-chip technology has been used in a wide range of biomedical studies, such as identification of human transcription factor binding sites, investigation of DNA methylation, and investigation of histone modifications in animals and plants. Various methods have been proposed in the literature for analyzing ChIP-chip data, such as the sliding window methods, the hidden Markov model-based methods, and Bayesian methods. Although, due to their integrated consideration of uncertainty in the models and model parameters, Bayesian methods can potentially work better than the other two classes of methods, the existing Bayesian methods do not perform satisfactorily. They usually require multiple replicates or some extra experimental information to parametrize the model, and long CPU time due to the involvement of MCMC simulations. Results In this paper, we propose a Bayesian latent model for the ChIP-chip data. The new model mainly differs from the existing Bayesian models, such as the joint deconvolution model, the hierarchical gamma mixture model, and the Bayesian hierarchical model, in two respects. Firstly, it works on the difference between the averaged treatment and control samples. This enables the use of a simple model for the data, which avoids the probe-specific effect and the sample (control/treatment) effect. As a consequence, this enables an efficient MCMC simulation of the posterior distribution of the model, and also makes the model more robust to outliers. Secondly, it models the neighboring dependence of probes by introducing a latent indicator vector. A truncated Poisson prior distribution is assumed for the latent indicator variable, with the rationale being justified at length. Conclusion The Bayesian latent method is successfully applied to real and ten simulated datasets, with comparisons with some of the existing Bayesian methods, hidden Markov model methods, and sliding window methods. The numerical results

  16. Bayesian Posterior Distributions Without Markov Chains

    OpenAIRE

    Cole, Stephen R.; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B.

    2012-01-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976–1983) assessing the relation between residential ex...
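    A minimal transparent rejection sampler in the same spirit: draw the parameter from its prior and accept with probability proportional to the likelihood. The uniform prior and the binomial summary below are illustrative stand-ins, not the study's actual case-control model.

```python
# Transparent rejection sampler for a posterior: draw theta from the prior,
# accept with probability L(theta)/L_max. Uniform prior and binomial data
# are illustrative assumptions, not the cited study's model.
import random
from math import comb

k, n = 36, 234                               # successes, trials (illustrative)

def likelihood(p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

l_max = likelihood(k / n)                    # the likelihood peaks at the MLE
posterior = []
while len(posterior) < 5000:
    p = random.random()                      # draw from the Uniform(0,1) prior
    if random.random() < likelihood(p) / l_max:
        posterior.append(p)                  # accepted draw ~ posterior

posterior.sort()
print("posterior median:", posterior[len(posterior) // 2])
print("95% interval:", posterior[125], posterior[-126])
```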

  17. Global Robust Stability of Switched Interval Neural Networks with Discrete and Distributed Time-Varying Delays of Neural Type

    OpenAIRE

    Qiangqiang Guo; Guohua Xu; Kewang Wang; Ning Li; Huaiqin Wu

    2012-01-01

    By combining the theories of switched systems and interval neural networks, the mathematical model of switched interval neural networks with discrete and distributed time-varying delays of neural type is presented. A set of interval parameter uncertainty neural networks with discrete and distributed time-varying delays of neural type are used as the individual subsystems, and an arbitrary switching rule is assumed to coordinate the switching between these networks. By applying the...

  18. Application of Bayesian methods to Dark Matter searches with XENON100

    International Nuclear Information System (INIS)

    The XENON100 experiment, located in the LNGS underground laboratory in Italy, aims at the direct detection of WIMP dark matter (DM). It is currently the most sensitive detector for spin-independent WIMP-nucleus interactions. The DM analysis of XENON100 data is currently performed with a profile likelihood method after several cuts and data selection methods have been applied. A different model for the statistical analysis of the data is the Bayesian interpretation. In the Bayesian approach to probability, a prior probability (state of knowledge) is defined and updated for new sets of data to reject or accept a hypothesis. As an alternative approach, a framework is being developed to implement Bayesian reasoning in the analysis. For this task the "Bayesian Analysis Toolkit" (BAT) will be used. Different models have to be implemented to identify background and (if there is a discovery) signal. We report on the current status of this work.

  19. Updating reliability data using feedback analysis: feasibility of a Bayesian subjective method

    International Nuclear Information System (INIS)

    For years, EDF has used Probabilistic Safety Assessment (PSA) to evaluate a global indicator of the safety of its nuclear power plants and to optimize performance while ensuring a certain safety level. The robustness and relevancy of PSA are therefore very important, which is why EDF wants to improve the relevancy of the reliability parameters used in these models. This article proposes a Bayesian approach to building PSA parameters when the feedback data are not large enough to use the frequentist method. Our method is called subjective because its purpose is to give engineers pragmatic criteria for applying Bayesian methods in a controlled and consistent way. Using Bayesian methods is quite common, for example, in the United States, because the nuclear power plants there are less standardized. Bayesian inference is often used with generic data as the prior, so we have to adapt the general methodology to the EDF context. (authors)

  20. Bayesian networks with applications in reliability analysis

    OpenAIRE

    Langseth, Helge

    2002-01-01

    A common goal of the papers in this thesis is to propose, formalize and exemplify the use of Bayesian networks as a modelling tool in reliability analysis. The papers span work in which Bayesian networks are merely used as a modelling tool (Paper I), work where models are specially designed to utilize the inference algorithms of Bayesian networks (Paper II and Paper III), and work where the focus has been on extending the applicability of Bayesian networks to very large domains (Paper IV and ...

  1. Learning a Flexible K-Dependence Bayesian Classifier from the Chain Rule of Joint Probability Distribution

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available As one of the most common types of graphical models, the Bayesian classifier has become an extremely popular approach to dealing with uncertainty and complexity. The scoring functions once proposed and widely used for a Bayesian network are not appropriate for a Bayesian classifier, in which the class variable C is considered as a distinguished one. In this paper, we aim to clarify the working mechanism of Bayesian classifiers from the perspective of the chain rule of the joint probability distribution. By establishing the mapping relationship between conditional probability distribution and mutual information, a new scoring function, Sum_MI, is derived and applied to evaluate the rationality of the Bayesian classifiers. To achieve global optimization and high dependence representation, the proposed learning algorithm, the flexible K-dependence Bayesian (FKDB) classifier, applies greedy search to extract more information from the K-dependence network structure. Meanwhile, during the learning procedure, the optimal attribute order is determined dynamically, rather than rigidly. In the experimental study, functional dependency analysis is used to improve model interpretability when the structure complexity is restricted.

  2. Stability prediction of berm breakwater using neural network

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Rao, S.; Manjunath, Y.R.

    In the present study, an artificial neural network method has been applied to predict the stability of berm breakwaters. Four neural network models are constructed based on the parameters which influence the stability of breakwater. Training...

  3. Bayesian probabilities of earthquake occurrences in Longmenshan fault system (China)

    Science.gov (United States)

    Wang, Ying; Zhang, Keyin; Gan, Qigang; Zhou, Wen; Xiong, Liang; Zhang, Shihua; Liu, Chao

    2015-01-01

    China has a long history of earthquake records, and the Longmenshan fault system (LFS) is a famous earthquake zone. We propose that the LFS can be divided into three seismogenic zones (north, central, and south zones) based on the geological structures and the earthquake catalog. We applied the Bayesian probability method using the extreme-value distribution of earthquake occurrences to estimate the seismic hazard in the LFS. The seismic moment, slip rate, earthquake recurrence rate, and magnitude were considered as the basic parameters for computing the Bayesian prior estimates of the seismicity. These estimates were then updated in terms of Bayes' theorem and historical estimates of seismicity in the LFS. Generally speaking, the north zone appears quite quiet compared with the central and south zones. The central zone is the most dangerous; however, the periodicity of earthquake occurrences for Ms = 8.0 is quite long (1,250 to 5,000 years). The selection of the upper bound probable magnitude influences the result, and the upper bound magnitude of the south zone may be 7.5. We obtained the empirical relationship of magnitude conversion between Ms and ML, the value of the magnitude of completeness Mc (3.5), and the Gutenberg-Richter b value before applying the Bayesian extreme-value distribution of earthquake occurrences method.
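    One simple, concrete instance of Bayesian updating of seismicity is the conjugate Gamma-Poisson update of an occurrence rate, combining a prior rate estimate with a historical event count. This is a deliberately simplified analogue of the record's extreme-value method, and all numbers below are invented, not the Longmenshan estimates.

```python
# Conjugate Gamma-Poisson sketch: a Gamma prior on the rate of M >= 6 events,
# updated by a historical Poisson count. All numbers are invented.
from math import exp

a0, b0 = 2.0, 400.0        # Gamma prior: mean a0/b0 = 0.005 events/yr
n_events, t_years = 4, 500 # historical catalog: 4 events in 500 years

a, b = a0 + n_events, b0 + t_years          # conjugate posterior is Gamma(a, b)
rate = a / b                                # posterior-mean rate
print(f"posterior mean rate: {rate:.4f} events/yr")
# Plugging the posterior-mean rate into a Poisson process gives a rough
# P(at least one event in the next 50 years):
print(f"P(>=1 event in 50 yr) ~ {1 - exp(-rate * 50):.3f}")
```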

  4. Bayesian Concordance Correlation Coefficient with Application to Repeatedly Measured Data

    Directory of Open Access Journals (Sweden)

    Atanu BHATTACHARJEE

    2015-10-01

    Full Text Available Objective: In medical research, Lin's classical concordance correlation coefficient (CCC) is frequently applied to evaluate the similarity of the measurements produced by different raters or methods on the same subjects. It is particularly useful for continuous data. The objective of this paper is to propose the Bayesian counterpart to compute the CCC for continuous data. Material and Methods: A total of 33 patients with brain astrocytoma treated in the Department of Radiation Oncology at Malabar Cancer Centre were enrolled in this work. The data are continuous measurements of tumor volume and tumor size, repeatedly recorded during the baseline pretreatment workup and post-surgery follow-ups for all patients. The tumor volume and tumor size are measured separately by MRI and CT scan. The agreement of measurement between MRI and CT scan is calculated through the CCC. The statistical inference is performed through the Markov chain Monte Carlo (MCMC) technique. Results: The Bayesian CCC is found suitable for obtaining prominent evidence for the test statistics to explore the relation between concordance measurements. The posterior mean estimates and 95% credible intervals of the CCC on tumor size and tumor volume are 0.96 (0.87, 0.99) and 0.98 (0.95, 0.99), respectively. Conclusion: Bayesian inference is adopted for the development of the computational algorithm. The approach illustrated in this work provides researchers an opportunity to find the most appropriate model for specific data and apply the CCC to test the desired hypothesis.

  5. Adaptive Non-Linear Bayesian Filter for ECG Denoising

    Directory of Open Access Journals (Sweden)

    Mitesh Kumar Sao

    2014-06-01

    Full Text Available The cycles of an electrocardiogram (ECG) signal contain three components: the P-wave, the QRS complex and the T-wave. Noise is present in the recorded cardiograph signals: biological sources (muscle contraction, baseline drift, motion noise) and environmental sources (power line interference, electrode contact noise, instrumentation noise) normally pollute the ECG signal detected at the electrode. Visu-Shrink thresholding and Bayesian thresholding are two wavelet-based filtering techniques for denoising an ECG signal corrupted by power line interference (PLI). These thresholding techniques are applied over the effective ECG interval, and the results are compared with the wavelet soft and hard thresholding methods. The outputs are evaluated by calculating the root mean square (RMS) error, signal to noise ratio (SNR), correlation coefficient (CC) and power spectral density (PSD) using MATLAB software. The denoised ECG signal shows that Bayesian thresholding is the more powerful denoising algorithm.
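    For reference, a minimal wavelet-denoising sketch in the VisuShrink spirit: soft-threshold the detail coefficients at the universal threshold estimated from the finest scale. The synthetic ECG-like signal and all constants are assumptions, and PyWavelets is used for the transform.

```python
# VisuShrink-style wavelet denoising sketch: soft thresholding of detail
# coefficients at the universal threshold. The ECG-like test signal with
# 50 Hz interference is synthetic and purely illustrative.
import numpy as np
import pywt

fs = 500                                            # 500 Hz sampling
t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) ** 63           # crude periodic "QRS" spikes
noisy = clean + 0.2 * np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate, finest scale
thresh = sigma * np.sqrt(2 * np.log(noisy.size))    # universal (VisuShrink) threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

snr = 10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean) ** 2))
print(f"output SNR ~ {snr:.1f} dB")
```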

  6. Photometric Redshift with Bayesian Priors on Physical Properties of Galaxies

    CERN Document Server

    Tanaka, Masayuki

    2015-01-01

    We present a proof-of-concept analysis of photometric redshifts with Bayesian priors on physical properties of galaxies. This concept is particularly suited for upcoming/on-going large imaging surveys, in which only several broad-band filters are available and it is hard to break some of the degeneracies in the multi-color space. We construct model templates of galaxies using a stellar population synthesis code and apply Bayesian priors on physical properties such as stellar mass and star formation rate. These priors are a function of redshift and they effectively evolve the templates with time in an observationally motivated way. We demonstrate that the priors help reduce the degeneracy and deliver significantly improved photometric redshifts. Furthermore, we show that a template error function, which corrects for systematic flux errors in the model templates as a function of rest-frame wavelength, delivers further improvements. One great advantage of our technique is that we simultaneously measure redshifts...

  7. Bayesian Framework in Repeated-Play Decision Making

    Directory of Open Access Journals (Sweden)

    Yohei Kobayashi

    2012-01-01

    Full Text Available Problem statement: Much has been reported on decisions from experience, also referred to as decisions made in a complete-ignorance fashion. Approach: This note lays out a Bayesian decision-theoretic framework that provides a computable account of decisions from experience. Results: To make the framework more tractable, this note sets up and examines decisions made in an incomplete-ignorance fashion. The discussion asserts that well-known behavioural effects, such as the hot stove effect, and the Bayesian framework may lead to different predictions. Conclusion/Recommendations: The framework is applied in its continuous form to predict possibilities from experience. We conclude that reasonable predictions sometimes lead to unreasonable conditions.

  8. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time.

  9. Bayesian modeling growth curves for quail assuming skewness in errors

    Directory of Open Access Journals (Sweden)

    Robson Marcelo Rossi

    2014-06-01

    Full Text Available Bayesian modeling of growth curves for quail assuming skewness in errors - Assuming normal distributions in data analysis is common in many areas of knowledge. However, we can make use of other distributions capable of modeling skewness in situations where the data have tails heavier than the normal. This article presents alternatives to the assumption of normality in the errors by adding asymmetric distributions. A Bayesian approach is proposed to fit nonlinear models when the errors are not normal; thus, the t, skew-normal and skew-t distributions are adopted. The methodology is applied to different growth curves for quail body weights. It was found that the Gompertz models assuming skew-normal errors and skew-t errors, respectively for males and females, fit the data best.

  10. A localization model to localize multiple sources using Bayesian inference

    Science.gov (United States)

    Dunham, Joshua Rolv

    Accurate localization of a sound source in a room setting is important in both psychoacoustics and architectural acoustics. Binaural models have been proposed to explain how the brain processes and utilizes the interaural time differences (ITDs) and interaural level differences (ILDs) of sound waves arriving at the ears of a listener in determining source location. Recent work shows that applying Bayesian methods to this problem is proving fruitful. In this thesis, pink noise samples are convolved with head-related transfer functions (HRTFs) and compared to combinations of one and two anechoic speech signals convolved with different HRTFs or binaural room impulse responses (BRIRs) to simulate room positions. Through exhaustive calculation of Bayesian posterior probabilities and using a maximal likelihood approach, model selection will determine the number of sources present, and parameter estimation will result in azimuthal direction of the source(s).
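    The grid-based flavour of this approach can be sketched compactly: score candidate azimuths by how well their template ITD explains an observed ITD under Gaussian noise, then normalize to a posterior. The Woodworth spherical-head ITD formula and all constants below are illustrative assumptions, far simpler than the HRTF/BRIR templates used in the thesis.

```python
# Grid-based Bayesian azimuth estimation sketch: compare an observed ITD to
# template ITDs under Gaussian noise. Woodworth's spherical-head formula and
# all constants are illustrative assumptions.
import numpy as np

c, r = 343.0, 0.09                       # speed of sound (m/s), head radius (m)
az = np.deg2rad(np.arange(-90, 91, 5))   # candidate source azimuths
itd_template = (r / c) * (az + np.sin(az))    # Woodworth ITD model

true_itd = (r / c) * (np.deg2rad(30) + np.sin(np.deg2rad(30)))
observed = true_itd + 20e-6 * np.random.randn()   # measured ITD, 20 us noise

loglik = -0.5 * ((observed - itd_template) / 20e-6) ** 2
posterior = np.exp(loglik - loglik.max())
posterior /= posterior.sum()                      # flat prior over azimuth
print("MAP azimuth:", np.rad2deg(az[np.argmax(posterior)]), "deg")
```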

  11. Bayesian Calibration of Generalized Pools of Predictive Distributions

    Directory of Open Access Journals (Sweden)

    Roberto Casarin

    2016-03-01

    Full Text Available Decision-makers often consult different experts to build reliable forecasts on variables of interest. Combining multiple opinions and calibrating them to maximize forecast accuracy is consequently a crucial issue in several economic problems. This paper applies a Bayesian beta mixture model to derive a combined and calibrated density function, using random calibration functionals and random combination weights. In particular, it compares the application of linear, harmonic and logarithmic pooling in the Bayesian combination approach. The three combination schemes, i.e., linear, harmonic and logarithmic, are studied in simulation examples with multimodal densities and in an empirical application with a large database of stock data. All of the experiments show that, in a beta mixture calibration framework, the three combination schemes are substantially equivalent, achieving calibration, and no clear preference for one of them appears. The financial application shows that linear pooling together with beta mixture calibration achieves the best results in terms of calibrated forecasts.
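    The two simplest pooling rules compared above are easy to state in code: the linear pool is a weighted mixture of the expert densities, and the logarithmic pool is a renormalized weighted geometric mean. The two Gaussian "experts" below are invented for illustration; the beta mixture calibration layer is not included.

```python
# Linear vs logarithmic pooling of two expert densities on a grid. The two
# Gaussian experts and the weight are illustrative assumptions.
import numpy as np

x = np.linspace(-6, 6, 1201)

def gauss(m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p1, p2, w = gauss(-1.0, 1.0), gauss(1.5, 0.7), 0.6

linear = w * p1 + (1 - w) * p2                     # mixture: can stay multimodal
log_pool = p1**w * p2**(1 - w)                     # weighted geometric mean
log_pool /= np.trapz(log_pool, x)                  # renormalise: typically unimodal

print("linear pool highest mode:", x[np.argmax(linear)])
print("log pool mode:", x[np.argmax(log_pool)])
```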

  12. Decision Support System for Maintenance Management Using Bayesian Networks

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The maintenance process has undergone several major developments that have led to proactive considerations and the transformation of the traditional "fail and fix" practice into the "predict and prevent" proactive maintenance methodology. The anticipation that characterizes this proactive maintenance strategy is mainly based on monitoring, diagnosis, prognosis and decision-making modules. Oil monitoring is a key component of a successful condition monitoring program. It can be used as a proactive tool to identify the wear modes of rubbing parts and diagnose faults in machinery. But diagnosis relying on oil analysis technology must deal with uncertain knowledge and fuzzy input data. Among other methods, Bayesian networks have been extensively applied to fault diagnosis, with the advantage of inference under uncertainty; in the area of oil monitoring, however, they are a new tool. This paper presents an integrated Bayesian network based decision support system for the maintenance of diesel engines.

  13. A Bayesian Analysis of the Radioactive Releases of Fukushima

    DEFF Research Database (Denmark)

    Tomioka, Ryota; Mørup, Morten

    2012-01-01

    The Fukushima Daiichi disaster of 11 March, 2011 is considered the largest nuclear accident since the 1986 Chernobyl disaster and has been rated at level 7 on the International Nuclear Event Scale. As different radioactive materials have different effects on the human body, it is important to know the types of nuclides and their levels of concentration from the recorded mixture of radiations in order to take necessary measures. We presently formulate a Bayesian generative model for the data available on radioactive releases from the Fukushima Daiichi disaster across Japan. From the sparsely sampled...... Fukushima Daiichi plant we establish that the model is able to account for the data. We further demonstrate how the model extends to include all the available measurements recorded throughout Japan. The model can be considered a first attempt to apply Bayesian learning unsupervised in order to give a more......

  14. A Bayesian nonlinear mixed-effects disease progression model

    Science.gov (United States)

    Kim, Seongho; Jang, Hyejeong; Wu, Dongfeng; Abrams, Judith

    2016-01-01

    A nonlinear mixed-effects approach is developed for disease progression models that incorporate variation in age in a Bayesian framework. We further generalize the probability model for sensitivity to depend on age at diagnosis, time spent in the preclinical state and sojourn time. The developed models are then applied to the Johns Hopkins Lung Project data and the Health Insurance Plan for Greater New York data using Bayesian Markov chain Monte Carlo and are compared with the estimation method that does not consider random effects from age. Using the developed models, we obtain not only age-specific individual-level distributions, but also population-level distributions of sensitivity, sojourn time and transition probability. PMID:26798562

  15. NEURAL CRYPTOGRAPHY

    OpenAIRE

    PROTIC DANIJELA D.

    2016-01-01

    Neural cryptography based on the tree parity machine (TPM) is presented in this paper. A mutual-learning-based synchronization of two networks is studied. The training of the TPM with the Hebbian, anti-Hebbian and random-walk learning rules, as well as the secure key generation protocol, is described. The most important attacks on the key generation process are also shown.
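
    A minimal sketch of TPM synchronization under the Hebbian rule is shown below; the network sizes (K, N, L) are illustrative choices, and no attacker is modelled.

        import numpy as np

        rng = np.random.default_rng(0)
        K, N, L = 3, 100, 3          # hidden units, inputs per unit, weight bound

        def tpm(w, x):
            sigma = np.sign(np.sum(w * x, axis=1))
            sigma[sigma == 0] = -1   # break ties so outputs stay in {-1, +1}
            return sigma, int(np.prod(sigma))

        def hebbian(w, x, sigma, tau):
            # update only the hidden units that agree with the machine's own output
            for k in range(K):
                if sigma[k] == tau:
                    w[k] = np.clip(w[k] + tau * x[k], -L, L)

        wA = rng.integers(-L, L + 1, size=(K, N))
        wB = rng.integers(-L, L + 1, size=(K, N))
        steps = 0
        while not np.array_equal(wA, wB):
            x = rng.choice([-1, 1], size=(K, N))   # public random input
            sA, tA = tpm(wA, x)
            sB, tB = tpm(wB, x)
            if tA == tB:                           # learn only when the outputs agree
                hebbian(wA, x, sA, tA)
                hebbian(wB, x, sB, tB)
            steps += 1
        print("synchronized after", steps, "exchanges; the weights are the shared key")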

  16. Bayesian phylogeography finds its roots.

    Directory of Open Access Journals (Sweden)

    Philippe Lemey

    2009-09-01

    Full Text Available As a key factor in endemic and epidemic dynamics, the geographical distribution of viruses has been frequently interpreted in the light of their genetic histories. Unfortunately, inference of historical dispersal or migration patterns of viruses has mainly been restricted to model-free heuristic approaches that provide little insight into the temporal setting of the spatial dynamics. The introduction of probabilistic models of evolution, however, offers unique opportunities to engage in this statistical endeavor. Here we introduce a Bayesian framework for inference, visualization and hypothesis testing of phylogeographic history. By implementing character mapping in Bayesian software that samples time-scaled phylogenies, we enable the reconstruction of timed viral dispersal patterns while accommodating phylogenetic uncertainty. Standard Markov model inference is extended with a stochastic search variable selection procedure that identifies the most parsimonious descriptions of the diffusion process. In addition, we propose priors that can incorporate geographical sampling distributions or characterize alternative hypotheses about the spatial dynamics. To visualize the spatial and temporal information, we summarize inferences using virtual globe software. We describe how Bayesian phylogeography compares with previous parsimony analysis in the investigation of the influenza A H5N1 origin and of H5N1 epidemiological linkage among sampling localities. Analysis of rabies in West African dog populations reveals how virus diffusion may enable endemic maintenance through continuous epidemic cycles. From these analyses, we conclude that our phylogeographic framework will be an important asset in molecular epidemiology that can easily be generalized to infer biogeography from genetic data for many organisms.

  17. Bayesian Methods and Universal Darwinism

    Science.gov (United States)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science including Karl Popper and Donald Campbell have interpreted the evolution of science as a Darwinian process consisting of a `copy with selective retention' algorithm abstracted from Darwin's theory of natural selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of maximum entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints in the form of adaptations imposed on maximum entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.

  18. Experimental validation of a Bayesian model of visual acuity.

    LENUS (Irish Health Repository)

    Dalimier, Eugénie

    2009-01-01

    Based on standard procedures used in optometry clinics, we compare measurements of visual acuity for 10 subjects (11 eyes tested) in the presence of natural ocular aberrations and different degrees of induced defocus, with the predictions given by a Bayesian model customized with aberrometric data of the eye. The absolute predictions of the model, without any adjustment, show good agreement with the experimental data, in terms of correlation and absolute error. The efficiency of the model is discussed in comparison with image quality metrics and other customized visual process models. An analysis of the importance and customization of each stage of the model is also given; it stresses the potential high predictive power from precise modeling of ocular and neural transfer functions.

  19. Bayesian Query-Focused Summarization

    CERN Document Server

    Daumé, Hal

    2009-01-01

    We present BayeSum (for "Bayesian summarization"), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.

  20. Numeracy, frequency, and Bayesian reasoning

    Directory of Open Access Journals (Sweden)

    Gretchen B. Chapman

    2009-02-01

    Full Text Available Previous research has demonstrated that Bayesian reasoning performance is improved if uncertainty information is presented as natural frequencies rather than single-event probabilities. A questionnaire study of 342 college students replicated this effect but also found that the performance-boosting benefits of the natural frequency presentation occurred primarily for participants who scored high in numeracy. This finding suggests that even the comprehension and manipulation of natural frequencies require a certain threshold of numeracy ability, and that the beneficial effects of natural frequency presentation may not be as general as previously believed.
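
    The two presentation formats are easy to contrast concretely: the same Bayesian update can be computed from single-event probabilities or from natural frequencies in a reference population, as in the invented screening example below (the numbers are ours, not the study's).

        # Invented screening example: prevalence 1%, sensitivity 80%, false-positive rate 9.6%.
        prevalence, sensitivity, false_pos = 0.01, 0.80, 0.096

        # Single-event probability format: Bayes' rule applied directly.
        p_pos = prevalence * sensitivity + (1 - prevalence) * false_pos
        prob_format = prevalence * sensitivity / p_pos

        # Natural-frequency format: the same arithmetic on 1000 concrete people.
        sick_pos = 1000 * prevalence * sensitivity          # 8 of the 10 sick people
        healthy_pos = 1000 * (1 - prevalence) * false_pos   # about 95 of the 990 healthy
        freq_format = sick_pos / (sick_pos + healthy_pos)

        print(prob_format, freq_format)   # identical answers, ~0.078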

  2. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
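
    A minimal sketch of the first (conditional-intensity) approach is given below for an exponential kernel, with a toy random-walk Metropolis sampler; the parameterization, the flat positive prior, and the simulated event times are our choices, not necessarily the paper's.

        import numpy as np

        def hawkes_loglik(t, T, mu, alpha, beta):
            # lambda(s) = mu + sum_{t_i < s} alpha * exp(-beta * (s - t_i))
            ll = 0.0
            for i in range(len(t)):
                ll += np.log(mu + np.sum(alpha * np.exp(-beta * (t[i] - t[:i]))))
            # minus the compensator, the integral of lambda over [0, T]
            return ll - mu * T - np.sum((alpha / beta) * (1.0 - np.exp(-beta * (T - t))))

        rng = np.random.default_rng(1)
        events = np.sort(rng.uniform(0.0, 100.0, 60)); T = 100.0

        theta = np.array([0.5, 0.5, 1.0])            # (mu, alpha, beta), all positive
        lp = hawkes_loglik(events, T, *theta)
        for _ in range(5000):
            prop = theta * np.exp(0.1 * rng.normal(size=3))   # log-space random walk
            lp_prop = hawkes_loglik(events, T, *prop)
            # flat prior on the positive parameters; include the log-Jacobian term
            if np.log(rng.uniform()) < lp_prop - lp + np.sum(np.log(prop / theta)):
                theta, lp = prop, lp_prop
        print("posterior draw (mu, alpha, beta):", np.round(theta, 3))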

  3. Bayesian analysis for extreme climatic events: A review

    Science.gov (United States)

    Chu, Pao-Shin; Zhao, Xin

    2011-11-01

    This article reviews Bayesian analysis methods applied to extreme climatic data. We particularly focus on applications to three problems related to extreme climatic events: detection of abrupt regime shifts, clustering of tropical cyclone tracks, and statistical forecasting of seasonal tropical cyclone activity. For identifying potential change points in an extreme event count series, a hierarchical Bayesian framework involving three layers - data, parameter, and hypothesis - is formulated to compute the posterior probability of shifts over time. For the data layer, a Poisson process with a gamma-distributed rate is presumed. For the hypothesis layer, multiple candidate hypotheses with different change points are considered. To calculate the posterior probability of each hypothesis and its associated parameters we develop an exact analytical formula, a Markov chain Monte Carlo (MCMC) algorithm, and a more sophisticated reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. The algorithms are applied to several rare event series: the annual tropical cyclone or typhoon counts over the central, eastern, and western North Pacific; the annual extremely heavy rainfall event counts at Manoa, Hawaii; and the annual heat wave frequency in France. Using an Expectation-Maximization (EM) algorithm, a Bayesian clustering method built on a mixture Gaussian model is applied to objectively classify historical, spaghetti-like tropical cyclone tracks (1945-2007) over the western North Pacific and the South China Sea into eight distinct track types. A regression-based approach to forecasting seasonal tropical cyclone frequency in a region is developed. Specifically, by adopting large-scale environmental conditions prior to the tropical cyclone season, a Poisson regression model is built for predicting seasonal tropical cyclone counts, and a probit regression model is alternatively developed toward a binary classification problem. With a non
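
    The Poisson-gamma data layer described above admits a closed-form marginal likelihood, so a single-change-point posterior can be written in a few lines, as in this sketch; the Gamma(a, b) hyperparameters and the toy count series are ours, not the review's.

        import numpy as np
        from scipy.special import gammaln

        def log_marginal(y, a=1.0, b=1.0):
            # log of the integral of prod_i Poisson(y_i | lam) * Gamma(lam | a, b) d lam
            y = np.asarray(y); n, S = len(y), y.sum()
            return (a * np.log(b) - gammaln(a) + gammaln(a + S)
                    - (a + S) * np.log(b + n) - np.sum(gammaln(y + 1)))

        def changepoint_posterior(y):
            # hypothesis k: the rate shifts between y[k-1] and y[k]
            logev = np.array([log_marginal(y[:k]) + log_marginal(y[k:])
                              for k in range(1, len(y))])
            post = np.exp(logev - logev.max())
            return post / post.sum()

        y = [2, 3, 1, 2, 3, 2, 7, 8, 6, 9, 7, 8]      # toy count series with a jump
        print(np.round(changepoint_posterior(y), 3))  # mass concentrates at k = 6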

  4. Trends in neural network technology. Neural network gijutsu no doko

    Energy Technology Data Exchange (ETDEWEB)

    Nishimura, K. (Toshiba Corp., Tokyo (Japan))

    1991-12-01

    The present and future of neural network technologies were reviewed. Neural networks simulate the neurons and synapses of the human brain, thus permitting the utilization of heuristic knowledge that is difficult to describe in a logical manner. Such networks can therefore solve optimization problems, difficult to solve with conventional computers, more rapidly while sacrificing a permissible degree of rigor. In light of these advantages, many attempts have been made to apply neural networks to a variety of engineering fields including character recognition, phonetic recognition, diagnosis, operation and so on. Now that these attempts have demonstrated the great potential of neural network technology, its application to practical problems will receive increasing attention. The necessity for fundamental studies on learning algorithms, modularization techniques, hardware technologies and so on will grow in conjunction with the above trends in application. 20 refs., 11 figs., 1 tab.

  5. Bayesian credible interval construction for Poisson statistics

    Institute of Scientific and Technical Information of China (English)

    ZHU Yong-Sheng

    2008-01-01

    The construction of the Bayesian credible (confidence) interval for a Poisson observable including both signal and background, with and without systematic uncertainties, is presented. Introducing the conditional probability satisfying the requirement that the background not be larger than the observed events to construct the Bayesian credible interval is also discussed. A Fortran routine, BPOCI, has been developed to implement the calculation.
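
    A hypothetical Python analogue of such a construction (the BPOCI routine itself is not reproduced here) is sketched below for a Poisson observable n with known background b, a flat prior on the signal s >= 0, and no systematic uncertainties. It returns a central interval; other orderings, such as highest posterior density, are equally possible design choices.

        import numpy as np

        def poisson_credible_interval(n, b, cl=0.90, s_max=50.0, grid=20001):
            # posterior p(s | n) proportional to (s + b)^n * exp(-(s + b)) on s >= 0
            s = np.linspace(0.0, s_max, grid)
            logpost = n * np.log(s + b) - (s + b)
            post = np.exp(logpost - logpost.max())
            cdf = np.cumsum(post)
            cdf /= cdf[-1]
            lo = s[np.searchsorted(cdf, (1.0 - cl) / 2.0)]
            hi = s[np.searchsorted(cdf, 1.0 - (1.0 - cl) / 2.0)]
            return lo, hi             # central credible interval for the signal

        print(poisson_credible_interval(n=5, b=1.2))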

  6. Bayesian Decision Theoretical Framework for Clustering

    Science.gov (United States)

    Chen, Mo

    2011-01-01

    In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…

  7. Using Bayesian Networks to Improve Knowledge Assessment

    Science.gov (United States)

    Millan, Eva; Descalco, Luis; Castillo, Gladys; Oliveira, Paula; Diogo, Sandra

    2013-01-01

    In this paper, we describe the integration and evaluation of an existing generic Bayesian student model (GBSM) into an existing computerized testing system within the Mathematics Education Project (PmatE--Projecto Matematica Ensino) of the University of Aveiro. This generic Bayesian student model had been previously evaluated with simulated…

  8. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using...... for complex networks can be derived and point out relevant literature....

  9. Compiling Relational Bayesian Networks for Exact Inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan

    2004-01-01

    We describe a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and ...

  11. Bayesian analysis of exoplanet and binary orbits

    CERN Document Server

    Schulze-Hartung, Tim; Henning, Thomas

    2012-01-01

    We introduce BASE (Bayesian astrometric and spectroscopic exoplanet detection and characterisation tool), a novel program for the combined or separate Bayesian analysis of astrometric and radial-velocity measurements of potential exoplanet hosts and binary stars. The capabilities of BASE are demonstrated using all publicly available data of the binary Mizar A.

  12. Computational methods for Bayesian model choice

    OpenAIRE

    Robert, Christian P.; Wraith, Darren

    2009-01-01

    In this note, we briefly survey some recent approaches to the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.
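
    The flavour of these estimators is easy to convey on a conjugate toy model where the exact evidence is known in closed form; the sketch below contrasts prior importance sampling with the (notoriously unstable) harmonic mean on an invented normal-mean example of ours.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        y = rng.normal(1.0, 1.0, 50)           # data with known unit variance
        tau2 = 100.0                           # prior: mu ~ N(0, tau2)

        loglik = lambda mu: np.sum(stats.norm.logpdf(y[:, None], mu, 1.0), axis=0)

        # Exact log evidence by conjugacy: y ~ N(0, I + tau2 * 11^T).
        exact = stats.multivariate_normal.logpdf(y, np.zeros(len(y)),
                                                 np.eye(len(y)) + tau2)

        # Importance sampling from the prior: evidence = E_prior[p(y | mu)].
        mu_prior = rng.normal(0.0, np.sqrt(tau2), 50000)
        llp = loglik(mu_prior)
        log_is = np.log(np.mean(np.exp(llp - llp.max()))) + llp.max()

        # Harmonic mean over posterior draws: 1 / E_post[1 / p(y | mu)].
        v = tau2 / (len(y) * tau2 + 1.0)       # posterior variance of mu
        mu_post = rng.normal(v * np.sum(y), np.sqrt(v), 50000)
        ll = loglik(mu_post)
        log_hm = ll.min() - np.log(np.mean(np.exp(-(ll - ll.min()))))

        print(exact, log_is, log_hm)           # harmonic mean is typically the least reliable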

  13. Bayesian data assimilation in shape registration

    KAUST Repository

    Cotter, C J

    2013-03-28

    In this paper we apply a Bayesian framework to the problem of geodesic curve matching. Given a template curve, the geodesic equations provide a mapping from initial conditions for the conjugate momentum onto topologically equivalent shapes. Here, we aim to recover the well-defined posterior distribution on the initial momentum which gives rise to observed points on the target curve; this is achieved by explicitly including a reparameterization in the formulation. Appropriate priors are chosen for the functions which together determine this field and the positions of the observation points, the initial momentum p0 and the reparameterization vector field ν, informed by regularity results about the forward model. Having done this, we illustrate how maximum likelihood estimators can be used to find regions of high posterior density, but also how we can apply recently developed Markov chain Monte Carlo methods on function spaces to characterize the whole of the posterior density. These illustrative examples also include scenarios where the posterior distribution is multimodal and irregular, leading us to the conclusion that knowledge of a state of global maximal posterior density does not always give us the whole picture, and full posterior sampling can give better quantification of likely states and the overall uncertainty inherent in the problem. © 2013 IOP Publishing Ltd.

  14. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision, such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms. PMID:27046838
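
    As a small concrete piece of the machinery described above, the log-determinant (Burg) divergence between two SPD matrices can be computed as follows; this is a sketch of the dissimilarity measure only, and the Wishart-based DP mixture built on top of it is not reproduced.

        import numpy as np

        def logdet_divergence(X, Y):
            # D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - n; zero iff X == Y
            n = X.shape[0]
            A = np.linalg.solve(Y, X)          # Y^{-1} X: same trace and determinant
            sign, logdet = np.linalg.slogdet(A)
            return np.trace(A) - logdet - n

        rng = np.random.default_rng(3)
        B = rng.normal(size=(4, 4))
        X = B @ B.T + 4.0 * np.eye(4)          # a toy SPD matrix
        print(logdet_divergence(X, X))         # 0.0
        print(logdet_divergence(X, np.eye(4))) # positive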

  15. 2nd Bayesian Young Statisticians Meeting

    CERN Document Server

    Bitto, Angela; Kastner, Gregor; Posekany, Alexandra

    2015-01-01

    The Second Bayesian Young Statisticians Meeting (BAYSM 2014) and the research presented here facilitate connections among researchers using Bayesian Statistics by providing a forum for the development and exchange of ideas. WU Vienna University of Business and Economics hosted BAYSM 2014 from September 18th to 19th. The guidance of renowned plenary lecturers and senior discussants is a critical part of the meeting and this volume, which follows publication of contributions from BAYSM 2013. The meeting's scientific program reflected the variety of fields in which Bayesian methods are currently employed or could be introduced in the future. Three brilliant keynote lectures by Chris Holmes (University of Oxford), Christian Robert (Université Paris-Dauphine), and Mike West (Duke University), were complemented by 24 plenary talks covering the major topics Dynamic Models, Applications, Bayesian Nonparametrics, Biostatistics, Bayesian Methods in Economics, and Models and Methods, as well as a lively poster session ...

  16. Handbook on neural information processing

    CERN Document Server

    Maggini, Marco; Jain, Lakhmi

    2013-01-01

    This handbook presents some of the most recent topics in neural information processing, covering both theoretical concepts and practical applications. The contributions include: deep architectures; recurrent, recursive, and graph neural networks; cellular neural networks; Bayesian networks; approximation capabilities of neural networks; semi-supervised learning; statistical relational learning; kernel methods for structured data; multiple classifier systems; self-organisation and modal learning; and applications to ...

  17. Use of SAMC for Bayesian analysis of statistical models with intractable normalizing constants

    KAUST Repository

    Jin, Ick Hoon

    2014-03-01

    Statistical inference for the models with intractable normalizing constants has attracted much attention. During the past two decades, various approximation- or simulation-based methods have been proposed for the problem, such as the Monte Carlo maximum likelihood method and the auxiliary variable Markov chain Monte Carlo methods. The Bayesian stochastic approximation Monte Carlo algorithm specifically addresses this problem: It works by sampling from a sequence of approximate distributions with their average converging to the target posterior distribution, where the approximate distributions can be achieved using the stochastic approximation Monte Carlo algorithm. A strong law of large numbers is established for the Bayesian stochastic approximation Monte Carlo estimator under mild conditions. Compared to the Monte Carlo maximum likelihood method, the Bayesian stochastic approximation Monte Carlo algorithm is more robust to the initial guess of model parameters. Compared to the auxiliary variable MCMC methods, the Bayesian stochastic approximation Monte Carlo algorithm avoids the requirement for perfect samples, and thus can be applied to many models for which perfect sampling is not available or very expensive. The Bayesian stochastic approximation Monte Carlo algorithm also provides a general framework for approximate Bayesian analysis. © 2012 Elsevier B.V. All rights reserved.

  18. A Bayesian Surrogate Model for Rapid Time Series Analysis and Application to Exoplanet Observations

    CERN Document Server

    Ford, Eric B; Veras, Dimitri

    2011-01-01

    We present a Bayesian surrogate model for the analysis of periodic or quasi-periodic time series data. We describe a computationally efficient implementation that enables Bayesian model comparison. We apply this model to simulated and real exoplanet observations. We discuss the results and demonstrate some of the challenges for applying our surrogate model to realistic exoplanet data sets. In particular, we find that analyses of real world data should pay careful attention to the effects of uneven spacing of observations and the choice of prior for the "jitter" parameter.

  19. Uncertainty analysis using Beta-Bayesian approach in nuclear safety code validation

    International Nuclear Information System (INIS)

    Highlights: • To meet the 95/95 criterion, Wilks' method is identical to the Bayesian approach. • The prior selection in the Bayesian approach strongly influences the number of code runs. • It is possible to utilize prior experience to reduce the number of code runs needed to meet the 95/95 criterion. • The variation of the probability for each code run is provided. - Abstract: Since best-estimate plus uncertainty analysis was approved by the Nuclear Regulatory Commission for nuclear reactor safety evaluation, several uncertainty assessment methods have been proposed and applied in the framework of best-estimate code validation in the nuclear industry. Among them, Wilks' method and the Bayesian approach are the two most popular statistical methods for uncertainty quantification. This study explores the inherent relation between the two methods using the Beta distribution function as the prior in the Bayesian analysis. Subsequently, Wilks' method can be considered a special case of the Beta-Bayesian approach, equivalent to the conservative case with Wallis' "pessimistic" prior in the Bayesian analysis. However, the results do depend on the chosen form of the pessimistic prior function. The analysis of mean and variance through the Beta-Bayesian approach provides insight into the Wilks' 95/95 results of different orders: the 95/95 results of Wilks' method become more accurate and more precise as the order increases. Furthermore, the Bayesian updating process is well demonstrated in code validation practice. The selection of the updating prior can make use of current experience of code failure and success statistics, so as to effectively predict the number of further numerical simulations needed to reach the 95/95 criterion.
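
    A sketch consistent with the relationship described above: with a Beta(a, b) prior on the coverage probability p and n successful code runs (no failures), the posterior is Beta(a + n, b), and one can search for the smallest n meeting the 95/95 criterion. The prior values below are illustrative, not taken from the paper.

        from scipy.stats import beta

        def runs_needed(a=1.0, b=1.0, coverage=0.95, confidence=0.95, n_max=500):
            # smallest n with P(p >= coverage | n successes) >= confidence
            for n in range(1, n_max):
                if beta.sf(coverage, a + n, b) >= confidence:
                    return n

        print(runs_needed())          # 58 with a uniform Beta(1, 1) prior
        print(runs_needed(a=1e-9))    # 59, approximating a pessimistic Beta(0, 1) prior
                                      # (Wilks' first-order sample size)
        print(runs_needed(a=20.0))    # fewer new runs when prior successes are credited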

  20. Computational modeling of neural activities for statistical inference

    CERN Document Server

    Kolossa, Antonio

    2016-01-01

    This authored monograph supplies empirical evidence for the Bayesian brain hypothesis by modeling event-related potentials (ERP) of the human electroencephalogram (EEG) during successive trials in cognitive tasks. The employed observer models are useful for computing probability distributions over observable events and hidden states, depending on which are present in the respective tasks. Bayesian model selection is then used to choose the model which best explains the ERP amplitude fluctuations. Thus, this book constitutes a decisive step towards a better understanding of the neural coding and computing of probabilities following Bayesian rules. The target audience primarily comprises research experts in the field of computational neuroscience, but the book may also be beneficial for graduate students who want to specialize in this field.