Bayesian regularization of diffusion tensor images
DEFF Research Database (Denmark)
Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif;
2007-01-01
several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...
Tensor completion for PDEs with uncertain coefficients and Bayesian Update
Litvinenko, Alexander
2017-03-05
In this work, we tried to show connections between the Bayesian update and tensor completion techniques. Usually, only a small/sparse vector/tensor of measurements is available. The typical measurement is a function of the solution. The solution of a stochastic PDE is a tensor, and so is the measurement. The idea is to use completion techniques to compute all "missing" values of the measurement tensor and only then apply the Bayesian technique.
Bayesian approach to magnetotelluric tensor decomposition
Directory of Open Access Journals (Sweden)
Michel Menvielle
2010-05-01
Magnetotelluric directional analysis and impedance tensor decomposition are basic tools to validate a local/regional composite electrical model of the underlying structure. Bayesian stochastic methods approach the problem of parameter estimation and uncertainty characterization in a fully probabilistic fashion, through the use of posterior model probabilities. We use the standard Groom-Bailey 3D local/2D regional composite model in our Bayesian approach. We assume that the experimental impedance estimates are contaminated with Gaussian noise and define the likelihood of a particular composite model with respect to the observed data. We use noninformative, flat priors over physically reasonable intervals for the standard Groom-Bailey decomposition parameters. We apply two numerical methods, a Markov chain Monte Carlo procedure based on the Gibbs sampler and a single-component adaptive Metropolis algorithm. From the posterior samples, we characterize the estimates and uncertainties of the individual decomposition parameters by using the respective marginal posterior probabilities. We conclude that the stochastic scheme performs reliably for a variety of models, including the multisite and multifrequency case with up to ...
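The single-component Metropolis step used in the abstract above can be sketched generically. The toy below targets a standard-normal posterior that merely stands in for one decomposition parameter; the actual Groom-Bailey likelihood is not reproduced here:

```python
import numpy as np

def metropolis_1d(log_post, x0, n_steps=5000, step=1.0, seed=0):
    """Single-component random-walk Metropolis sampler (toy sketch)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()          # propose a move on one parameter
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy posterior: standard normal, standing in for one decomposition parameter.
samples = metropolis_1d(lambda x: -0.5 * x**2, x0=3.0)
posterior_mean = samples[1000:].mean()  # discard burn-in
```

In a real adaptive variant, `step` would be tuned online from the running acceptance rate, one coordinate at a time.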
Surface tensor estimation from linear sections
DEFF Research Database (Denmark)
Kousholt, Astrid; Kiderlen, Markus; Hug, Daniel
2015-01-01
From Crofton's formula for Minkowski tensors we derive stereological estimators of translation invariant surface tensors of convex bodies in the n-dimensional Euclidean space. The estimators are based on one-dimensional linear sections. In a design-based setting we suggest three types of estimators. These are based on isotropic uniform random lines, vertical sections, and non-isotropic random lines, respectively. Further, we derive estimators of the specific surface tensors associated with a stationary process of convex particles in the model-based setting.
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be visually inspected later on many automatically plotted figures. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained, but also the posterior probability density function. (v) A space-time grid search, effectively combined with least-squares inversion of the moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, and by comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data. The software package, programmed in Python, has been designed to be as versatile as possible in ...
Bayesian Missile System Reliability from Point Estimates
2014-10-28
Report dated Oct 2014. Applies the Maximum Entropy Principle (MEP) to convert point estimates to probability distributions to be used as priors for Bayesian reliability analysis of missile data, and illustrates this approach by applying the priors to a Bayesian reliability model of a missile system. Subject terms: priors, Bayesian, missile.
SYNTHESIZED EXPECTED BAYESIAN METHOD OF PARAMETRIC ESTIMATE
Institute of Scientific and Technical Information of China (English)
Ming HAN; Yuanyao DING
2004-01-01
This paper develops a new method of parametric estimation, termed the "synthesized expected Bayesian method". When samples of products are tested and no failure events occur, the definition of the expected Bayesian estimate is introduced and estimates of the failure probability and failure rate are provided. After some failure information is introduced by making an extra test, a synthesized expected Bayesian method is defined and used to estimate the failure probability, failure rate, and some other parameters of exponential and Weibull population distributions. Finally, calculations are performed on practical problems, which show that the synthesized expected Bayesian method is feasible and easy to operate.
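The conjugate machinery behind zero-failure estimates can be illustrated as follows. With a Beta(a, b) prior on the failure probability p and zero failures in n tests (likelihood (1-p)^n), the posterior is Beta(a, b+n); an "expected Bayesian" style estimate then averages the posterior mean over a hyperprior. The hyperprior choice below (a = 1, b uniform on [1, c]) is an illustrative assumption, not the paper's exact construction:

```python
import numpy as np

def bayes_failure_prob(n, a=1.0, b=1.0):
    """Posterior mean of the failure probability p after n tests with zero
    failures, under a Beta(a, b) prior: posterior is Beta(a, b + n)."""
    return a / (a + b + n)

def e_bayes_failure_prob(n, c=5.0, m=10001):
    """Average the posterior mean over a uniform hyperprior b ~ U(1, c),
    with a fixed at 1 (illustrative hyperprior, approximated on a grid)."""
    bs = np.linspace(1.0, c, m)
    return bayes_failure_prob(n, 1.0, bs).mean()

p_b = bayes_failure_prob(n=20)     # 1 / 22
p_eb = e_bayes_failure_prob(n=20)  # ~ ln(26/22) / 4, slightly below p_b
```

Averaging over the hyperprior pulls the estimate below the single-prior value here, because larger b makes the prior more optimistic about reliability.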
Contending non-double-couple source components with hierarchical Bayesian moment tensor inversion
Mustac, M.; Tkalcic, H.
2015-12-01
Seismic moment tensors can aid the discrimination between earthquakes and explosions. However, the isotropic component can be found in a large number of earthquakes simply as a consequence of earthquake location, poorly modeled structure or noise in the data. Recently, we have developed a method for moment tensor inversion, capable of retrieving parameter uncertainties together with their optimal values, using probabilistic Bayesian inference. It has been applied to data from a complex volcanic environment in the Long Valley Caldera (LVC), California, and confirmed a large isotropic source component. We now extend the application to two different environments where the existence of non-double-couple source components is likely. The method includes notable treatment of the noise, utilizing pre-event noise to estimate the noise covariance matrix. This is extended throughout the inversion, where noise variance is a "hyperparameter" that determines the level of data fit. On top of that, different noise parameters for each station are used as weights, naturally penalizing stations with noisy data. In the LVC case, this means increasing the amount of information from two stations at moderate distances, which results in a 1 km deeper estimate for the source location. This causes a change in the non-double-couple components in the inversions assuming a simple diagonal or exponential covariance matrix, but not in the inversion assuming a more complicated, attenuated cosine covariance matrix. Combining a sophisticated noise treatment with a powerful Bayesian inversion technique can give meaningful uncertainty estimates for long-period (20-50 s) data, provided an appropriate regional structure model.
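The covariance-weighted idea above — stations with noisier data are naturally down-weighted through the noise covariance matrix — reduces, in the linear Gaussian case, to generalized least squares. A minimal sketch on a synthetic linear problem (the forward matrix G here is random, standing in for the linear dependence of waveforms on moment-tensor components, not computed from Green's functions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear forward problem d = G m + noise with 6 model parameters,
# echoing the 6 independent moment-tensor components.
G = rng.normal(size=(50, 6))
m_true = np.array([1.0, -0.5, 0.2, 0.0, 0.3, -0.1])
sigma = 0.05
C = sigma**2 * np.eye(50)                 # noise covariance (diagonal here)
d = G @ m_true + sigma * rng.normal(size=50)

# Covariance-weighted (generalized) least squares:
#   m_hat = (G^T C^-1 G)^-1 G^T C^-1 d
Ci = np.linalg.inv(C)
m_hat = np.linalg.solve(G.T @ Ci @ G, G.T @ Ci @ d)
```

With a full (non-diagonal) C estimated from pre-event noise, the same formula also suppresses correlated noise components, which is the role the covariance matrix plays in the inversion described above.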
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. The differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling were not found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
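For contrast with the variational approach, a minimal Gibbs sampler for a one-way random-effects model — a simplified stand-in for the genetic/residual variance-component model, with illustrative vague inverse-gamma priors — might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated one-way random-effects data: y_ij = mu + u_j + e_ij
q, n = 30, 10                                   # groups, records per group
mu_true, s2u_true, s2e_true = 5.0, 2.0, 1.0
u_true = rng.normal(0.0, np.sqrt(s2u_true), q)
y = mu_true + np.repeat(u_true, n) + rng.normal(0.0, np.sqrt(s2e_true), q * n)
group = np.repeat(np.arange(q), n)

def inv_gamma(shape, rate):
    """Draw from InvGamma(shape, rate): rate / Gamma(shape, 1)."""
    return rate / rng.gamma(shape)

mu, s2u, s2e = y.mean(), 1.0, 1.0
u = np.zeros(q)
keep_s2u, keep_s2e = [], []
for it in range(2000):
    # u_j | rest : normal with precision n/s2e + 1/s2u
    resid_mean = (y - mu).reshape(q, n).mean(axis=1)
    prec = n / s2e + 1.0 / s2u
    u = rng.normal((n / s2e) * resid_mean / prec, np.sqrt(1.0 / prec))
    # mu | rest : normal around the mean of the de-grouped data
    mu = rng.normal((y - u[group]).mean(), np.sqrt(s2e / y.size))
    # variance components | rest : inverse-gamma (vague priors a = b = 0.001)
    e = y - mu - u[group]
    s2e = inv_gamma(0.001 + y.size / 2, 0.001 + 0.5 * (e @ e))
    s2u = inv_gamma(0.001 + q / 2, 0.001 + 0.5 * (u @ u))
    if it >= 500:                               # discard burn-in
        keep_s2u.append(s2u); keep_s2e.append(s2e)

s2u_hat, s2e_hat = np.mean(keep_s2u), np.mean(keep_s2e)
```

A variational Bayesian treatment of the same model would replace the sampling loop with coordinate updates of factorized approximate posteriors, which is where the speed advantage reported above comes from.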
Bayesian estimation and tracking a practical guide
Haug, Anton J
2012-01-01
A practical approach to estimating and tracking dynamic systems in real-world applications. Much of the literature on performing estimation for non-Gaussian systems is short on practical methodology, while Gaussian methods often lack a cohesive derivation. Bayesian Estimation and Tracking addresses the gap in the field on both accounts, providing readers with a comprehensive overview of methods for estimating both linear and nonlinear dynamic systems driven by Gaussian and non-Gaussian noise. Featuring a unified approach to Bayesian estimation and tracking, the book emphasizes the derivation
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation … analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed …
A Bayesian Framework for Combining Valuation Estimates
Yee, Kenton K
2007-01-01
Obtaining more accurate equity value estimates is the starting point for stock selection, value-based indexing in a noisy market, and beating benchmark indices through tactical style rotation. Unfortunately, discounted cash flow, method of comparables, and fundamental analysis typically yield discrepant valuation estimates. Moreover, the valuation estimates typically disagree with market price. Can one form a superior valuation estimate by averaging over the individual estimates, including market price? This article suggests a Bayesian framework for combining two or more estimates into a superior valuation estimate. The framework justifies the common practice of averaging over several estimates to arrive at a final point estimate.
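Under independent Gaussian errors, the Bayesian combination of unbiased estimates reduces to an inverse-variance (precision-weighted) average, which formalizes the "averaging over several estimates" the abstract describes. The numbers below are purely illustrative:

```python
import numpy as np

def combine_estimates(values, variances):
    """Precision-weighted (inverse-variance) combination of independent,
    unbiased estimates under Gaussian assumptions; returns the combined
    estimate and its (smaller) variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    combined = np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)
    combined_var = 1.0 / np.sum(w)
    return combined, combined_var

# Hypothetical DCF estimate, comparables estimate, and market price,
# each with a stated error variance (all numbers illustrative).
v, var = combine_estimates([100.0, 110.0, 104.0], [16.0, 25.0, 4.0])
```

Note that the combined variance is always below the smallest input variance, which is the Bayesian justification for mixing in market price even when it is noisy.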
Bayesian Estimation Supersedes the "t" Test
Kruschke, John K.
2013-01-01
Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional "t" tests) when certainty in the estimate is…
Approximation for Bayesian Ability Estimation.
1987-02-18
The marginal posterior pdfs of the ability and item parameters are given by equations (4) and (5); as shown in Tsutakawa and Lin, the posterior is approximated using the inverse of the Hessian of the log of (27) with respect to the parameters, evaluated at the mode, and then, under regularity conditions, the marginal posterior pdf of the ability parameter is … [equations garbled in source]. Cited: Journal of Educational Statistics, 11, 33-56 (two-way contingency tables); Lindley, D.V. (1980), "Approximate Bayesian methods," Trabajos de Estadística, 31.
Phylogenetic estimation with partial likelihood tensors
Sumner, J G
2008-01-01
We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach.
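Felsenstein's partial likelihood vectors, which the tensor method above generalizes, can be sketched for a three-taxon tree under the Jukes-Cantor model. The tree shape and branch lengths below are illustrative only:

```python
import numpy as np

def jc_matrix(t):
    """Jukes-Cantor transition probabilities for branch length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.full((4, 4), diff) + np.diag([same - diff] * 4)

def leaf_vector(state):
    """Partial likelihood vector for an observed leaf state (0..3 = A,C,G,T)."""
    v = np.zeros(4)
    v[state] = 1.0
    return v

def prune(child_vectors, branch_lengths):
    """Partial likelihood vector at a node from its children (Felsenstein)."""
    out = np.ones(4)
    for v, t in zip(child_vectors, branch_lengths):
        out *= jc_matrix(t) @ v
    return out

# Tree ((A:0.1, B:0.1):0.05, C:0.2); one site with pattern A=A, B=A, C=G.
internal = prune([leaf_vector(0), leaf_vector(0)], [0.1, 0.1])
root = prune([internal, leaf_vector(2)], [0.05, 0.2])
site_likelihood = 0.25 * root.sum()   # uniform root base frequencies
```

The tensor formulation replaces these per-node vectors with higher-order arrays, which is what enables the lexicographic-sorting savings described above.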
Tensor DoA estimation with directional elements
Raimondi, Francesca; DELEVOYE, Elisabeth; Comon, Pierre
2016-01-01
This paper introduces the directivity gain pattern as a physical diversity for tensor array processing, in addition to time and space shift diversities. We show that the tensor formulation allows one to estimate Directions of Arrival (DoAs) under the assumption of an unknown gain pattern, improving on the performance of the omnidirectional case. The proposed approach is then applied to biologically inspired acoustic elements.
Estimates of the Nucleon Tensor Charge
Gamberg, Leonard; Goldstein, Gary R.
2001-01-01
Like the axial vector charges, defined from the forward nucleon matrix element of the axial vector current on the light cone, the nucleon tensor charge, defined from the corresponding matrix element of the tensor current, is essential for characterizing the momentum and spin structure of the nucleon. Because there must be a helicity flip of the struck quark in order to probe the transverse spin polarization of the nucleon, the transversity distribution (and thus the tensor charge) decouples at leading twist in deep inelastic scattering, although no such suppression appears in Drell-Yan processes. This makes the tensor charge difficult to measure and its non-conservation makes its prediction model dependent. We present a different approach. Exploiting an approximate SU(6)xO(3) symmetric mass degeneracy of the light axial vector mesons (a1(1260), b1(1235) and h1(1170)) and using pole dominance, we calculate the tensor charge. The result is simple in form and depends on the decay constants of the axial vector me...
Exact and Approximate Quadratures for Curvature Tensor Estimation
Langer, Torsten; Belyaev, Alexander; Seidel, Hans-Peter; Greiner, Günther; Hornegger, Joachim; Niemann, Heinrich; Stamminger, Marc
2005-01-01
Accurate estimations of geometric properties of a surface from its discrete approximation are important for many computer graphics and geometric modeling applications. In this paper, we derive exact quadrature formulae for mean curvature, Gaussian curvature, and the Taubin integral representation of the curvature tensor. The exact quadratures are then used to obtain reliable estimates of the curvature tensor of a smooth surface approximated by a dense triangle me...
Bayesian Estimation of Thermonuclear Reaction Rates
Iliadis, Christian; Coc, Alain; Timmes, Frank; Starrfield, Sumner
2016-01-01
The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied in the past to this problem, all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extra-solar planets, gravitational waves, and type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present the first astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the d(p,$\\gamma$)$^3$He, $^3$He($^3$He,2p)$^4$He, and $^3$He($\\alpha$,$\\gamma$)$^7$Be reactions,...
Bayesian Estimation of Thermonuclear Reaction Rates
Iliadis, C.; Anderson, K. S.; Coc, A.; Timmes, F. X.; Starrfield, S.
2016-11-01
The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics, in particular, has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)3He, 3He(3He,2p)4He, and 3He(α,γ)7Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.
A Fast Iterative Bayesian Inference Algorithm for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2013-01-01
… representation of the Bessel K probability density function; a highly efficient, fast iterative Bayesian inference method is then applied to the proposed model. The resulting estimator outperforms other state-of-the-art Bayesian and non-Bayesian estimators, either by yielding lower mean squared estimation error …
Bayesian parameter estimation for effective field theories
Wesolowski, S; Furnstahl, R J; Phillips, D R; Thapaliya, A
2015-01-01
We present procedures based on Bayesian statistics for effective field theory (EFT) parameter estimation from data. The extraction of low-energy constants (LECs) is guided by theoretical expectations that supplement such information in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools are developed that analyze the fit and ensure that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems and the extraction of LECs for the nucleon mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
Bayesian parameter estimation for effective field theories
Wesolowski, S.; Klco, N.; Furnstahl, R. J.; Phillips, D. R.; Thapaliya, A.
2016-07-01
We present procedures based on Bayesian statistics for estimating, from data, the parameters of effective field theories (EFTs). The extraction of low-energy constants (LECs) is guided by theoretical expectations in a quantifiable way through the specification of Bayesian priors. A prior for natural-sized LECs reduces the possibility of overfitting, and leads to a consistent accounting of different sources of uncertainty. A set of diagnostic tools is developed that analyzes the fit and ensures that the priors do not bias the EFT parameter estimation. The procedures are illustrated using representative model problems, including the extraction of LECs for the nucleon-mass expansion in SU(2) chiral perturbation theory from synthetic lattice data.
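With a Gaussian likelihood and a Gaussian "naturalness" prior on the LECs, the MAP estimate has a closed, ridge-regression-like form. The polynomial fit below is a toy stand-in for a real EFT expansion; the coefficient values, noise scale, and prior width are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "EFT": y = a0 + a1 x + a2 x^2 with natural-sized coefficients.
a_true = np.array([0.5, -1.2, 0.8])
x = np.linspace(0.0, 0.5, 20)
y = np.polyval(a_true[::-1], x) + 0.01 * rng.normal(size=x.size)

X = np.vander(x, 3, increasing=True)   # design matrix [1, x, x^2]
sigma, abar = 0.01, 2.0                # noise scale; naturalness prior width

# MAP under Gaussian likelihood and prior a_k ~ N(0, abar^2):
#   a_hat = (X^T X / sigma^2 + I / abar^2)^{-1} X^T y / sigma^2
A = X.T @ X / sigma**2 + np.eye(3) / abar**2
a_hat = np.linalg.solve(A, X.T @ y / sigma**2)
```

The prior term `I / abar**2` is exactly what curbs overfitting when higher-order terms are poorly constrained by the data, mirroring the role of the naturalness prior described above.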
Bayesian mixture models for spectral density estimation
Cadonna, Annalisa
2017-01-01
We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal distribution …
Bayesian nonparametric estimation for Quantum Homodyne Tomography
Naulet, Zacharie; Barat, Eric
2016-01-01
We estimate the quantum state of a light beam from results of quantum homodyne tomography noisy measurements performed on identically prepared quantum systems. We propose two Bayesian nonparametric approaches. The first approach is based on mixture models and is illustrated through simulation examples. The second approach is based on random basis expansions. We study the theoretical performance of the second approach by quantifying the rate of contraction of the posterior distribution around ...
Comparison of two global digital algorithms for Minkowski tensor estimation
DEFF Research Database (Denmark)
The geometry of real-world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions.
An Approximate Bayesian Fundamental Frequency Estimator
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2012-01-01
Joint fundamental frequency and model order estimation is an important problem in several applications such as speech and music processing. In this paper, we develop an approximate estimation algorithm for these quantities using Bayesian inference. The inference about the fundamental frequency and the model order is based on a probability model which corresponds to a minimum of prior information. From this probability model, we give the exact posterior distributions on the fundamental frequency and the model order, and we also present analytical approximations of these distributions which lower …
Bayesian feature selection to estimate customer survival
Figini, Silvia; Giudici, Paolo; Brooks, S P
2006-01-01
We consider the problem of estimating the lifetime value of customers when a large number of features are present in the data. In order to measure lifetime value we use survival analysis models to estimate customer tenure. In such a context, a number of classical modelling challenges arise. We show how our proposed Bayesian methods perform, and compare them with classical churn models on a real case study. More specifically, based on data from a media service company, our aim will be to p...
Bayesian Estimation and Inference Using Stochastic Electronics.
Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André
2016-01-01
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to their low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
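The Bayesian recursive equation solved by such an HMM tracker is, in the discrete case, the standard forward filter. A minimal floating-point sketch follows; the transition and observation models are illustrative, and no stochastic circuits are involved:

```python
import numpy as np

n = 10                                   # discrete 1-D positions

# Transition model: stay with p = 0.8, step left/right with p = 0.1 each
# (probability mass folded back at the boundaries).
T = np.zeros((n, n))
for i in range(n):
    T[i, i] = 0.8
    T[i, max(i - 1, 0)] += 0.1
    T[i, min(i + 1, n - 1)] += 0.1

# Observation model: sensor reports the true cell with p = 0.7,
# any cell uniformly otherwise (rows already sum to 1).
O = np.full((n, n), 0.3 / n) + 0.7 * np.eye(n)

def forward_step(belief, z):
    """One step of the Bayesian recursive (HMM forward) filter."""
    predicted = T.T @ belief             # time update: p(x_t | z_1..t-1)
    updated = O[:, z] * predicted        # measurement update, O[x, z] = p(z | x)
    return updated / updated.sum()       # normalize to a probability vector

belief = np.full(n, 1.0 / n)             # uniform prior over positions
for z in [4, 4, 5, 5, 6]:                # a short stream of noisy observations
    belief = forward_step(belief, z)
estimate = int(np.argmax(belief))
```

The stochastic-electronics implementation encodes each of these probabilities as a bit-stream rate rather than a floating-point number, but the recursion it realizes is the same.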
Bayesian depth estimation from monocular natural images.
Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C
2017-05-01
Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
Bayesian integer frequency offset estimator for MIMO-OFDM systems
Institute of Scientific and Technical Information of China (English)
(author not listed)
2008-01-01
Carrier frequency offset (CFO) in MIMO-OFDM systems can be decoupled into two parts: the fractional frequency offset (FFO) and the integer frequency offset (IFO). The problem of IFO estimation is addressed and a new IFO estimator based on the Bayesian philosophy is proposed. It is also shown that the Bayesian IFO estimator is optimal among all IFO estimators. Furthermore, the Bayesian estimator can take advantage of oversampling, so that better performance can be obtained. Finally, numerical results show the optimality of the Bayesian estimator and validate the theoretical analysis.
Robust Estimation of Trifocal Tensor Using Messy Genetic Algorithm
Institute of Scientific and Technical Information of China (English)
HU Mingxing; YUAN Baozong; TANG Xiaofang
2003-01-01
Given three partially overlapping views of a scene from which a set of point or line correspondences have been extracted, 3D structure and camera motion parameters can be represented by the trifocal tensor, which is the key to many problems of computer vision among three views. This paper addresses the problem of robustly estimating the trifocal tensor employing a new method based on a messy genetic algorithm, which uses each gene to stand for a triplet of correspondences, and takes every chromosome as a minimum subset for trifocal tensor estimation. The method eventually converges to a near-optimal solution and is relatively unaffected by outliers. Experiments with both synthetic data and real images show that our method is more robust and precise than other typical methods because it can efficiently detect and delete bad corresponding points, which include both bad locations and false matches.
Estimation of tensors and tensor-derived measures in diffusional kurtosis imaging.
Tabesh, Ali; Jensen, Jens H; Ardekani, Babak A; Helpern, Joseph A
2011-03-01
This article presents two related advancements to the diffusional kurtosis imaging estimation framework to increase its robustness to noise, motion, and imaging artifacts. The first advancement substantially improves the estimation of diffusion and kurtosis tensors parameterizing the diffusional kurtosis imaging model. Rather than utilizing conventional unconstrained least squares methods, the tensor estimation problem is formulated as linearly constrained linear least squares, where the constraints ensure physically and/or biologically plausible tensor estimates. The exact solution to the constrained problem is found via convex quadratic programming methods or, alternatively, an approximate solution is determined through a fast heuristic algorithm. The computationally more demanding quadratic programming-based method is more flexible, allowing for an arbitrary number of diffusion weightings and different gradient sets for each diffusion weighting. The heuristic algorithm is suitable for real-time settings such as on clinical scanners, where run time is crucial. The advantage offered by the proposed constrained algorithms is demonstrated using in vivo human brain images. The proposed constrained methods allow for shorter scan times and/or higher spatial resolution for a given fidelity of the diffusional kurtosis imaging parametric maps. The second advancement increases the efficiency and accuracy of the estimation of mean and radial kurtoses by applying exact closed-form formulae.
Estimation of Tensors and Tensor-Derived Measures in Diffusional Kurtosis Imaging
Tabesh, Ali; Jensen, Jens H.; Ardekani, Babak A.; Helpern, Joseph A.
2010-01-01
This paper presents two related advancements to the diffusional kurtosis imaging (DKI) estimation framework to increase its robustness to noise, motion, and imaging artifacts. The first advancement substantially improves the estimation of diffusion and kurtosis tensors parameterizing the DKI model. Rather than utilizing conventional unconstrained least squares (LS) methods, the tensor estimation problem is formulated as linearly constrained linear LS, where the constraints ensure physically and/or biologically plausible tensor estimates. The exact solution to the constrained problem is found via convex quadratic programming methods or, alternatively, an approximate solution is determined through a fast heuristic algorithm. The computationally more demanding quadratic programming-based method is more flexible, allowing for an arbitrary number of diffusion weightings and different gradient sets for each diffusion weighting. The heuristic algorithm is suitable for real-time settings such as on clinical scanners, where run time is crucial. The advantage offered by the proposed constrained algorithms is demonstrated using in vivo human brain images. The proposed constrained methods allow for shorter scan times and/or higher spatial resolution for a given fidelity of the DKI parametric maps. The second advancement increases the efficiency and accuracy of the estimation of mean and radial kurtoses by applying exact closed-form formulae. PMID:21337412
Bayesian phylogenetic estimation of fossil ages
Drummond, Alexei J.; Stadler, Tanja
2016-01-01
Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth–death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard deviation (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the ‘morphological clock', and for uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses. This article is part of the themed issue ‘Dating species divergences
Hallo, Miroslav; Asano, Kimiyuki; Gallovič, František
2017-09-01
On April 16, 2016, Kumamoto prefecture in the Kyushu region, Japan, was devastated by a shallow M_JMA 7.3 earthquake. The series of foreshocks started with an M_JMA 6.5 foreshock 28 h before the mainshock. The foreshocks originated in the Hinagu fault zone, which intersects the Futagawa fault zone that hosted the mainshock; hence, the tectonic background for this earthquake sequence is rather complex. Here we infer centroid moment tensors (CMTs) for 11 events with M_JMA between 4.8 and 6.5, using strong-motion records of the K-NET, KiK-net and F-net networks. We use the upgraded Bayesian full-waveform inversion code ISOLA-ObsPy, which takes into account the uncertainty of the velocity model. Such an approach allows us to reliably assess the uncertainty of the CMT parameters, including the centroid position. The solutions show significant systematic spatial and temporal variations throughout the sequence. Foreshocks are right-lateral, steeply dipping strike-slip events connected to the NE-SW shear zone. Those located close to the intersection of the Hinagu and Futagawa fault zones dip slightly to the ESE, while those in the southern area dip to the WNW. Contrarily, aftershocks are mostly normal dip-slip events related to the N-S extensional tectonic regime. Most of the deviatoric moment tensors contain only a minor CLVD component, which can be attributed to the velocity model uncertainty. Nevertheless, two of the CMTs involve a significant CLVD component, which may reflect a complex rupture process. Decomposition of those moment tensors into two pure shear moment tensors suggests combined right-lateral strike-slip and normal dip-slip mechanisms, consistent with the tectonic setting of the intersection of the Hinagu and Futagawa fault zones.
Tensor completion for estimating missing values in visual data
Liu, Ji
2013-01-01
In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small amount of samples and it can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement; it employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution. The FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem. The HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches. The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC and between Fa
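The tensor trace norm in this line of work is built from the nuclear norms of the tensor's mode-n unfoldings (matricizations). The following is a minimal illustrative sketch of mode-n unfolding for a 3-way tensor in pure Python; the function name and the nested-list representation are my own, not from the paper:

```python
# Mode-n unfolding of a 3-way tensor stored as nested lists.
# Hypothetical sketch (not the authors' code): the tensor trace norm used
# by SiLRTC/FaLRTC/HaLRTC sums nuclear norms of such unfoldings.

def unfold(tensor, mode):
    """Arrange the mode-`mode` fibers of a 3-way tensor as matrix rows."""
    I = len(tensor)          # dimension 0
    J = len(tensor[0])       # dimension 1
    K = len(tensor[0][0])    # dimension 2
    if mode == 0:
        return [[tensor[i][j][k] for k in range(K) for j in range(J)]
                for i in range(I)]
    if mode == 1:
        return [[tensor[i][j][k] for k in range(K) for i in range(I)]
                for j in range(J)]
    return [[tensor[i][j][k] for j in range(J) for i in range(I)]
            for k in range(K)]

# 2 x 2 x 2 example
T = [[[1, 2], [3, 4]],
     [[5, 6], [7, 8]]]
print(unfold(T, 0))  # [[1, 3, 2, 4], [5, 7, 6, 8]]
```

Each unfolding is an ordinary matrix, so matrix-completion machinery (nuclear norm, SVD thresholding) can be applied per mode and the results combined.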
Bayesian Estimation of a Mixture Model
Directory of Open Access Journals (Sweden)
Ilhem Merah
2015-05-01
Full Text Available We present the properties of a bathtub-curve reliability model, introduced by Idée and Pierrat (2010), that has both sufficient adaptability and a minimal number of parameters. The model is a mixture of a Gamma distribution G(2, 1/θ) and a new distribution L(θ). We are interested in Bayesian estimation of the parameters and survival function of this model under a squared-error loss function and a non-informative prior, using the approximations of Lindley (1980) and Tierney and Kadane (1986). Using a statistical sample of 60 failure data relative to a technical device, we illustrate the results derived. Based on a simulation study, comparisons are made between these two methods and the maximum likelihood method for this two-parameter model.
Bayesian approach to decompression sickness model parameter estimation.
Howle, L E; Weber, P W; Nichols, J M
2017-03-01
We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.
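The contrast the abstract draws can be made concrete on a toy model. The sketch below (a generic Bernoulli example, not the decompression-sickness model from the paper) shows the MLE as a single point versus the Bayesian posterior, from which an interval or mean can be read off:

```python
# Illustrative contrast of ML and Bayesian estimation on a toy Bernoulli
# model (NOT the paper's decompression-sickness model): the MLE is a single
# point, while the Bayesian approach yields a whole posterior distribution.

def likelihood(p, heads, flips):
    return p**heads * (1 - p)**(flips - heads)

heads, flips = 7, 10
grid = [i / 1000 for i in range(1, 1000)]

# Maximum likelihood: the single best point.
mle = max(grid, key=lambda p: likelihood(p, heads, flips))

# Bayesian: normalize the likelihood under a flat prior into a posterior,
# then summarize the full distribution rather than a point.
weights = [likelihood(p, heads, flips) for p in grid]
total = sum(weights)
posterior = [w / total for w in weights]
post_mean = sum(p * w for p, w in zip(grid, posterior))

print(round(mle, 2))        # 0.7 -- equals heads/flips
print(round(post_mean, 2))  # 0.67 -- posterior mean (heads+1)/(flips+2)
```

Credible intervals, as mentioned in the abstract, come directly from the normalized posterior weights by accumulating probability mass around the mode.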
Single channel signal component separation using Bayesian estimation
Institute of Scientific and Technical Information of China (English)
Cai Quanwei; Wei Ping; Xiao Xianci
2007-01-01
A Bayesian estimation method to separate multicomponent signals from a single-channel observation is presented in this paper. By using basis function projection, the component separation becomes a problem of estimating a limited set of parameters. Then, a Bayesian model for estimating the parameters is set up. The reversible jump MCMC (Markov chain Monte Carlo) algorithm is adopted to perform the Bayesian computation. The method can jointly estimate the parameters of each component and the number of components. Simulation results demonstrate that the method has a low SNR threshold and good performance.
Nonparametric Bayesian drift estimation for multidimensional stochastic differential equations
Gugushvili, S.; Spreij, P.
2014-01-01
We consider nonparametric Bayesian estimation of the drift coefficient of a multidimensional stochastic differential equation from discrete-time observations on the solution of this equation. Under suitable regularity conditions, we establish posterior consistency in this context.
Bayesian methods to estimate urban growth potential
Smith, Jordan W.; Smart, Lindsey S.; Dorning, Monica; Dupéy, Lauren Nicole; Méley, Andréanne; Meentemeyer, Ross K.
2017-01-01
Urban growth often influences the production of ecosystem services. The impacts of urbanization on landscapes can subsequently affect landowners’ perceptions, values and decisions regarding their land. Within land-use and land-change research, very few models of dynamic landscape-scale processes like urbanization incorporate empirically-grounded landowner decision-making processes. Very little attention has focused on the heterogeneous decision-making processes that aggregate to influence broader-scale patterns of urbanization. We examine the land-use tradeoffs faced by individual landowners in one of the United States’ most rapidly urbanizing regions − the urban area surrounding Charlotte, North Carolina. We focus on the land-use decisions of non-industrial private forest owners located across the region’s development gradient. A discrete choice experiment is used to determine the critical factors influencing individual forest owners’ intent to sell their undeveloped properties across a series of experimentally varied scenarios of urban growth. Data are analyzed using a hierarchical Bayesian approach. The estimates derived from the survey data are used to modify a spatially-explicit trend-based urban development potential model, derived from remotely-sensed imagery and observed changes in the region’s socioeconomic and infrastructural characteristics between 2000 and 2011. This modeling approach combines the theoretical underpinnings of behavioral economics with spatiotemporal data describing a region’s historical development patterns. By integrating empirical social preference data into spatially-explicit urban growth models, we begin to more realistically capture processes as well as patterns that drive the location, magnitude and rates of urban growth.
Dimensionality reduction in Bayesian estimation algorithms
Directory of Open Access Journals (Sweden)
G. W. Petty
2013-03-01
Full Text Available An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals – whether Bayesian or not – lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.
A Modified Extended Bayesian Method for Parameter Estimation
Institute of Scientific and Technical Information of China (English)
[Author not listed]
2007-01-01
This paper presents a modified extended Bayesian method for parameter estimation. In this method the mean value of the a priori estimation is taken from the values of the estimated parameters in the previous iteration step. In this way, the parameter covariance matrix can be automatically updated during the estimation procedure, thereby avoiding the selection of an empirical parameter. Because the extended Bayesian method can be regarded as a Tikhonov regularization, this new method is more stable than both the least-squares method and the maximum likelihood method. The validity of the proposed method is illustrated by two examples: one based on simulated data and one based on real engineering data.
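Since the abstract notes that the extended Bayesian method can be regarded as a Tikhonov regularization, a minimal generic sketch of why a Tikhonov (ridge) term stabilizes least squares may help; this is an illustration of the general principle, not the paper's algorithm:

```python
# Why a Tikhonov (ridge) term stabilizes least squares, in the spirit of
# the extended Bayesian method above (generic illustration, not the paper's
# method). For a scalar model y = a*x + noise, plain LS gives a = Sxy/Sxx,
# which blows up when Sxx is tiny; the regularized estimate
# a = Sxy/(Sxx + lam) stays bounded.

def ls_slope(xs, ys, lam=0.0):
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / (sxx + lam)

# Nearly degenerate design: x values are almost zero.
xs = [1e-6, 2e-6, -1e-6]
ys = [0.001, -0.002, 0.0015]

print(ls_slope(xs, ys))           # around -750: huge, noise-dominated
print(ls_slope(xs, ys, lam=1.0))  # near 0: bounded, stable
```

In the paper's method the analogue of `lam` is the prior covariance term, which is updated automatically between iterations instead of being fixed empirically.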
Bayesian Estimation of Wave Spectra – Proper Formulation of ABIC
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2007-01-01
It is possible to estimate on-site wave spectra using measured ship responses applied to Bayesian modelling based on two pieces of prior information: the wave spectrum must be smooth both directional-wise and frequency-wise. This paper introduces two hyperparameters into Bayesian modelling and, hence, gives a proper formulation of ABIC (a Bayesian Information Criterion), in contrast to the improper formulation of ABIC when only one hyperparameter is included. From a numerical example, the paper illustrates that the optimum pair of hyperparameters, determined by use of ABIC, corresponds...
Robust Bayesian Regularized Estimation Based on t Regression Model
Directory of Open Access Journals (Sweden)
Zean Li
2015-01-01
Full Text Available The t distribution is a useful extension of the normal distribution, which can be used for statistical modeling of data sets with heavy tails, and provides robust estimation. In this paper, in view of the advantages of Bayesian analysis, we propose a new robust coefficient estimation and variable selection method based on Bayesian adaptive Lasso t regression. A Gibbs sampler is developed based on the Bayesian hierarchical model framework, where we treat the t distribution as a mixture of normal and gamma distributions and assign different penalization parameters to different regression coefficients. We also consider the Bayesian t regression with adaptive group Lasso and obtain the Gibbs sampler from the posterior distributions. Both simulation studies and a real data example show that our method performs well compared with other existing methods when the error distribution has heavy tails and/or outliers.
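The normal-gamma mixture representation mentioned in the abstract can be sketched directly: a Student-t draw is a standard normal divided by the square root of an independent gamma variable. The snippet below is a generic stdlib illustration of that representation, not the paper's Gibbs sampler (which targets regression coefficients):

```python
import random
import statistics

# Scale-mixture representation underlying the paper's Gibbs sampler:
# if z ~ N(0, 1) and g ~ Gamma(shape=nu/2, rate=nu/2), then z / sqrt(g)
# follows a Student-t distribution with nu degrees of freedom.
# (Generic illustration only.)

def sample_t(nu, rng):
    z = rng.gauss(0.0, 1.0)
    # random.gammavariate takes (shape, scale); rate nu/2 -> scale 2/nu.
    g = rng.gammavariate(nu / 2.0, 2.0 / nu)
    return z / g ** 0.5

rng = random.Random(42)
draws = [sample_t(nu=10, rng=rng) for _ in range(20000)]

# For nu = 10 the variance should be near nu/(nu - 2) = 1.25.
print(statistics.variance(draws))
```

Conditioning on the latent gamma variables turns the heavy-tailed t likelihood into a weighted normal likelihood, which is what makes the Gibbs updates tractable.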
Tensor based structure estimation in multi-channel images
DEFF Research Database (Denmark)
Schou, Jesper; Dierking, Wolfgang; Skriver, Henning
2000-01-01
In the second part, tensors are used to represent the structure information. This approach has the advantage that tensors can be averaged either spatially or across several images, and the resulting tensor provides information on the average strength as well as the orientation of the structure...
Estimating Tree Height-Diameter Models with the Bayesian Method
Directory of Open Access Journals (Sweden)
Xiongqing Zhang
2014-01-01
Full Text Available Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has the distinct advantage over classical methods that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model for data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of predicted values in comparison to those for the classical method, and the credible bands of parameters with informative priors were also narrower than with uninformative priors or the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2.
Bayesian parameter estimation by continuous homodyne detection
DEFF Research Database (Denmark)
Kiilerich, Alexander Holm; Molmer, Klaus
2016-01-01
We simulate the process of continuous homodyne detection of the radiative emission from a quantum system, and we investigate how a Bayesian analysis can be employed to determine unknown parameters that govern the system evolution. Measurement backaction quenches the system dynamics at all times, and we show that the ensuing transient evolution is more sensitive to system parameters than the steady state of the system. The parameter sensitivity can be quantified by the Fisher information, and we investigate numerically and analytically how the temporal noise correlations in the measurement signal...
Bayesian parameter estimation by continuous homodyne detection
Kiilerich, Alexander Holm; Mølmer, Klaus
2016-09-01
We simulate the process of continuous homodyne detection of the radiative emission from a quantum system, and we investigate how a Bayesian analysis can be employed to determine unknown parameters that govern the system evolution. Measurement backaction quenches the system dynamics at all times and we show that the ensuing transient evolution is more sensitive to system parameters than the steady state of the system. The parameter sensitivity can be quantified by the Fisher information, and we investigate numerically and analytically how the temporal noise correlations in the measurement signal contribute to the ultimate sensitivity limit of homodyne detection.
Bayesian and maximum likelihood estimation of genetic maps
DEFF Research Database (Denmark)
York, Thomas L.; Durrett, Richard T.; Tanksley, Steven;
2005-01-01
There has recently been increased interest in the use of Markov Chain Monte Carlo (MCMC)-based Bayesian methods for estimating genetic maps. The advantage of these methods is that they can deal accurately with missing data and genotyping errors. Here we present an extension of the previous methods that makes the Bayesian method applicable to large data sets. We present an extensive simulation study examining the statistical properties of the method and comparing it with the likelihood method implemented in Mapmaker. We show that the Maximum A Posteriori (MAP) estimator of the genetic distances...
Directory of Open Access Journals (Sweden)
Rania, M. Shalaby
2015-10-01
Full Text Available This paper deals with Bayesian and non-Bayesian methods for estimating the parameters of the bivariate Pareto (BP) distribution based on censored samples, with shape parameters λ and known scale parameter β. The maximum likelihood estimators (MLEs) of the unknown parameters are derived. The Bayes estimators are obtained with respect to the squared-error loss function, and the prior distributions allow for prior dependence among the components of the parameter vector. Posterior distributions for the parameters of interest are derived and their properties are described. If the scale parameter is known, the Bayes estimators of the unknown parameters can be obtained in explicit form under the assumption of independent priors. An extensive computer simulation is used to compare the performance of the proposed estimators using MathCAD (14).
Bayesian parameter estimation for chiral effective field theory
Wesolowski, Sarah; Furnstahl, Richard; Phillips, Daniel; Klco, Natalie
2016-09-01
The low-energy constants (LECs) of a chiral effective field theory (EFT) interaction in the two-body sector are fit to observable data using a Bayesian parameter estimation framework. By using Bayesian prior probability distributions (pdfs), we quantify relevant physical expectations such as LEC naturalness and include them in the parameter estimation procedure. The final result is a posterior pdf for the LECs, which can be used to propagate uncertainty resulting from the fit to data to the final observable predictions. The posterior pdf also allows an empirical test of operator redundancy and other features of the potential. We compare results of our framework with other fitting procedures, interpreting the underlying assumptions in Bayesian probabilistic language. We also compare results from fitting all partial waves of the interaction simultaneously to cross-section data with results from fitting to extracted phase shifts, appropriately accounting for correlations in the data. Supported in part by the NSF and DOE.
Introducing two hyperparameters in Bayesian estimation of wave spectra
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2008-01-01
An estimate of the on-site wave spectrum can be obtained from measured ship responses by use of Bayesian modelling, which means that the wave spectrum is found as the optimum solution from a probabilistic viewpoint. The paper describes the introduction of two hyperparameters into Bayesian modelling, so that the prior information included in the modelling is based on two constraints: the wave spectrum must be smooth directional-wise as well as frequency-wise. Traditionally, only one hyperparameter has been used to control the amount of smoothing applied in both the frequency and directional ranges. From numerical simulations of stochastic response measurements, it is shown that the optimal hyperparameters, determined by use of ABIC (a Bayesian Information Criterion), correspond to the best estimate of the wave spectrum, which is not always the case when only one hyperparameter is included.
Rediscovery of Good-Turing estimators via Bayesian nonparametrics.
Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye
2016-03-01
The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library.
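The classical Good-Turing estimate at the heart of this comparison is simple to state: the probability that the next observation is a previously unseen species is estimated by n1/n, where n1 is the number of species observed exactly once. A minimal sketch with toy data (the sample below is illustrative, not from the article):

```python
from collections import Counter

# Classical Good-Turing estimator of the "missing mass": the probability
# that the next observation is a previously unseen species is n1/n, where
# n1 = number of species seen exactly once, n = sample size.
# (Illustrative toy data, not from the article.)

def good_turing_missing_mass(sample):
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

sample = ["a", "a", "a", "b", "b", "c", "d", "e", "f", "f"]
# Singletons: c, d, e -> n1 = 3, n = 10.
print(good_turing_missing_mass(sample))  # 0.3
```

The article's result is that Bayesian nonparametric estimators under a two-parameter Poisson-Dirichlet prior converge, for large samples, to smoothed versions of exactly this kind of frequency-of-frequencies estimate.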
Estimation of the magnetic field gradient tensor using the Swarm constellation
DEFF Research Database (Denmark)
Kotsiaros, Stavros; Finlay, Chris; Olsen, Nils
2014-01-01
For the first time, part of the magnetic field gradient tensor is estimated in space by the Swarm mission. We investigate the possibility of a more complete estimation of the gradient tensor exploiting the Swarm constellation. The East-West gradients can be approximated by observations from...
Quantum System Identification: Hamiltonian Estimation using Spectral and Bayesian Analysis
Schirmer, S G
2009-01-01
Identifying the Hamiltonian of a quantum system from experimental data is considered. General limits on the identifiability of model parameters with limited experimental resources are investigated, and a specific Bayesian estimation procedure is proposed and evaluated for a model system where a priori information about the Hamiltonian's structure is available.
Bayesian estimation of the discrete coefficient of determination.
Chen, Ting; Braga-Neto, Ulisses M
2016-12-01
The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
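The quantity being inferred here can be stated plainly: CoD = (ε0 − ε)/ε0, where ε0 is the error of the best constant prediction of the target and ε is the error of the best prediction from the discrete predictor. Below is a sketch of the *empirical* (plug-in) CoD on toy data; the article's contribution, the Bayesian MMSE/OBP estimators, is not reproduced here:

```python
from collections import Counter

# Empirical discrete coefficient of determination:
# CoD = (eps0 - eps) / eps0, with eps0 the error of the best constant
# prediction of y and eps the error of the best prediction of y from x.
# (Toy data; the article's Bayesian estimators are not reproduced.)

def empirical_cod(pairs):
    n = len(pairs)
    ys = [y for _, y in pairs]
    eps0 = 1 - Counter(ys).most_common(1)[0][1] / n
    # Best predictor: for each x, predict the majority y among its pairs.
    by_x = {}
    for x, y in pairs:
        by_x.setdefault(x, []).append(y)
    correct = sum(Counter(v).most_common(1)[0][1] for v in by_x.values())
    eps = 1 - correct / n
    return (eps0 - eps) / eps0

# x mostly determines y, so the CoD is well above zero.
pairs = [(0, 0), (0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 1), (1, 0)]
print(empirical_cod(pairs))  # 0.5
```

The Bayesian estimators in the article replace these raw empirical error rates with posterior expectations over the unknown class-conditional distributions, which matters most at small sample sizes.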
Sequential Bayesian technique: An alternative approach for software reliability estimation
Indian Academy of Sciences (India)
S Chatterjee; S S Alam; R B Misra
2009-04-01
This paper proposes a sequential Bayesian approach, similar to the Kalman filter, for estimating the reliability growth or decay of software. The main advantage of the proposed method is that it shows the variation of the parameter over time as new failure data become available. The usefulness of the method is demonstrated with some real-life data.
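The Kalman-filter-like recursion alluded to above can be sketched generically: each new observation updates a Gaussian belief (mean, variance) about a slowly drifting parameter. This is an illustration of the predict/correct recursion only, not the paper's software-reliability model:

```python
# Generic scalar sketch of a sequential (Kalman-filter-like) Bayesian
# update: a Gaussian belief (mean m, variance v) about a drifting parameter
# is revised as each new observation arrives.
# (Illustration of the recursion, not the paper's reliability model.)

def sequential_update(m, v, obs, obs_var, drift_var):
    v_pred = v + drift_var           # predict: the parameter may drift
    k = v_pred / (v_pred + obs_var)  # gain: trust in the new observation
    m_new = m + k * (obs - m)        # correct the mean toward the data
    v_new = (1 - k) * v_pred         # uncertainty shrinks after the update
    return m_new, v_new

m, v = 0.0, 1.0
for obs in [1.0, 1.2, 0.9, 1.1]:
    m, v = sequential_update(m, v, obs, obs_var=0.5, drift_var=0.01)
print(m, v)
```

The drift term keeps the posterior variance from collapsing to zero, which is what lets the estimate track reliability growth or decay over time rather than converging to a fixed value.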
The Role of Parametric Assumptions in Adaptive Bayesian Estimation
Alcala-Quintana, Rocio; Garcia-Perez, Miguel A.
2004-01-01
Variants of adaptive Bayesian procedures for estimating the 5% point on a psychometric function were studied by simulation. Bias and standard error were the criteria to evaluate performance. The results indicated a superiority of (a) uniform priors, (b) model likelihood functions that are odd symmetric about threshold and that have parameter…
Introduction to applied Bayesian statistics and estimation for social scientists
Lynch, Scott M
2007-01-01
"Introduction to Applied Bayesian Statistics and Estimation for Social Scientists" covers the complete process of Bayesian statistical analysis in great detail, from the development of a model through the process of making statistical inference. The key feature of this book is that it covers the models most commonly used in social science research - including the linear regression model, generalized linear models, hierarchical models, and multivariate regression models - and it thoroughly develops each real-data example in painstaking detail. The first part of the book provides a detailed
Seismic moment tensors and estimated uncertainties in southern Alaska
Silwal, Vipul; Tape, Carl
2016-04-01
We present a moment tensor catalog of 106 earthquakes in southern Alaska, and we perform a conceptually based uncertainty analysis for 21 of them. For each earthquake, we use both body waves and surface waves to do a grid search over double couple moment tensors and source depths in order to find the minimum of the misfit function. Our uncertainty parameter or, rather, our confidence parameter is the average value of the curve 𝒫(V), where 𝒫(V) is the posterior probability as a function of the fractional volume V of moment tensor space surrounding the minimum misfit moment tensor. As a supplemental means for characterizing and visualizing uncertainties, we generate moment tensor samples of the posterior probability. We perform a series of inversion tests to quantify the impact of certain decisions made within moment tensor inversions and to make comparisons with existing catalogs. For example, using an L1 norm in the misfit function provides more reliable solutions than an L2 norm, especially in cases when all available waveforms are used. Using body waves in addition to surface waves, as well as using more stations, leads to the most accurate moment tensor solutions.
Some Bayesian statistical techniques useful in estimating frequency and density
Johnson, D.H.
1977-01-01
This paper presents some elementary applications of Bayesian statistics to problems faced by wildlife biologists. Bayesian confidence limits for frequency of occurrence are shown to be generally superior to classical confidence limits. Population density can be estimated from frequency data if the species is sparsely distributed relative to the size of the sample plot. For other situations, limits are developed based on the normal distribution and prior knowledge that the density is non-negative, which insures that the lower confidence limit is non-negative. Conditions are described under which Bayesian confidence limits are superior to those calculated with classical methods; examples are also given on how prior knowledge of the density can be used to sharpen inferences drawn from a new sample.
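The Bayesian confidence limits for frequency of occurrence discussed above can be sketched with stdlib tools: if a species occurs on k of n sample plots and the occurrence probability p gets a flat prior, the posterior is Beta(k+1, n-k+1), and its quantiles give the limits. The grid approximation below is a generic illustration; the paper's exact formulation may differ:

```python
# Bayesian credible limits for a frequency of occurrence, in the spirit of
# the paper: k occurrences on n plots with a flat prior on p gives a
# Beta(k+1, n-k+1) posterior, whose quantiles we approximate on a grid.
# (Generic illustration; the paper's exact formulation may differ.)

def beta_quantile(k, n, q, steps=100000):
    # Unnormalized Beta(k+1, n-k+1) density on a grid, then invert the CDF.
    dens = [((i / steps) ** k) * ((1 - i / steps) ** (n - k))
            for i in range(steps + 1)]
    total = sum(dens)
    acc = 0.0
    for i, d in enumerate(dens):
        acc += d / total
        if acc >= q:
            return i / steps
    return 1.0

k, n = 3, 20  # e.g. a species found on 3 of 20 plots
lower = beta_quantile(k, n, 0.025)
upper = beta_quantile(k, n, 0.975)
print(lower, upper)
```

Unlike classical limits, the lower credible limit is non-negative by construction, which is one of the advantages the paper emphasizes.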
A Bayesian estimation of the helioseismic solar age
Bonanno, Alfio
2015-01-01
The helioseismic determination of the solar age has been a subject of several studies because it provides us with an independent estimation of the age of the solar system. We present the Bayesian estimates of the helioseismic age of the Sun, which are determined by means of calibrated solar models that employ different equations of state and nuclear reaction rates. We use 17 frequency separation ratios $r_{02}(n)$ ...
A tensor approach to the estimation of hydraulic conductivities in ...
African Journals Online (AJOL)
2006-07-03
Jul 3, 2006 ... The HC values computed from the data measured on the weathered or ... Keywords: hydraulic conductivity tensor, roughness, combined stress, hydraulic aperture, Table Mountain ... the anisotropic nature of studied media.
Bayesian estimation of the network autocorrelation model
Dittrich, D.; Leenders, R.T.A.J.; Mulder, J.
2017-01-01
The network autocorrelation model has been extensively used by researchers interested in modeling social influence effects in social networks. The most common inferential method for the model is classical maximum likelihood estimation. This approach, however, has known problems such as negative bias of
A comparison of the Bayesian and frequentist approaches to estimation
Samaniego, Francisco J
2010-01-01
This monograph contributes to the area of comparative statistical inference. Attention is restricted to the important subfield of statistical estimation. The book is intended for an audience having a solid grounding in probability and statistics at the level of the year-long undergraduate course taken by statistics and mathematics majors. The necessary background on Decision Theory and the frequentist and Bayesian approaches to estimation is presented and carefully discussed in Chapters 1-3. The 'threshold problem' - identifying the boundary between Bayes estimators which tend to outperform st
Sparse Bayesian learning for DOA estimation with mutual coupling.
Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi
2015-10-16
Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
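For context, the core SBL machinery (without the mutual-coupling extension this paper adds) is a small evidence-maximization loop. The dictionary sizes, noise level, and sparse signal below are arbitrary choices for a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sbl(A, y, sigma2=0.01, iters=50):
    """Basic sparse Bayesian learning for y = A x + noise: iterate the
    EM evidence-maximization update for the prior variances gamma."""
    n, m = A.shape
    gamma = np.ones(m)
    for _ in range(iters):
        # Posterior covariance and mean of the sparse signal x.
        Gamma = np.diag(gamma)
        Sigma_y = sigma2 * np.eye(n) + A @ Gamma @ A.T
        K = Gamma @ A.T @ np.linalg.inv(Sigma_y)
        mu = K @ y
        Sigma = Gamma - K @ A @ Gamma
        # EM update: gamma_i <- E[x_i^2]; most gammas shrink to zero.
        gamma = mu ** 2 + np.diag(Sigma)
    return mu

# Toy problem: 2-sparse signal in an overcomplete dictionary.
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[5, 17]] = [1.0, -0.8]
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = sbl(A, y)
```

The shrinkage of most `gamma` entries toward zero is what produces the sparse DOA estimate; the paper's contribution is updating mutual coupling coefficients jointly within this loop.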
Sparse Bayesian Learning for DOA Estimation with Mutual Coupling
Directory of Open Access Journals (Sweden)
Jisheng Dai
2015-10-01
Full Text Available Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
Information flow among neural networks with Bayesian estimation
Institute of Scientific and Technical Information of China (English)
LI Yan; LI XiaoLi; OUYANG GaoXiang; GUAN XinPing
2007-01-01
Estimating the interaction among neural networks is an interesting issue in neuroscience. Some methods have been proposed to estimate the coupling strength among neural networks; however, few attempts have been made to estimate the coupling direction (information flow) among neural networks. It is known that the Bayesian estimator is based on a priori knowledge and the probability of event occurrence. In this paper, a new method is proposed to estimate coupling directions among neural networks with conditional mutual information that is estimated by Bayesian estimation. First, this method is applied to analyze simulated EEG series generated by a nonlinear lumped-parameter model. In comparison with conditional mutual information based on Shannon entropy, it is found that this method is more successful in estimating the coupling direction, and is insensitive to the length of the EEG series. Therefore, this method is suitable for analyzing short time series in practice. Second, we demonstrate how this method can be applied to the analysis of human intracranial epileptic electroencephalogram (EEG) recordings, and to indicate the coupling directions among neural networks. This method therefore helps to localize the epileptic focus.
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
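The "simple" likelihood model (independent Gaussian simulation errors, no AR(1) term) pairs naturally with a random-walk Metropolis sampler. The toy "hydrological model" below is a single linear-reservoir coefficient, an assumption for the sketch rather than the Ecomag model itself:

```python
import math, random

random.seed(1)

def log_like(theta, q_obs, forcing, sigma):
    """Simple likelihood model: independent Gaussian residuals between
    simulated and observed streamflow (no autoregressive term)."""
    simulate = lambda x: theta * x        # toy stand-in for the model
    return sum(-0.5 * ((q - simulate(x)) / sigma) ** 2
               for q, x in zip(q_obs, forcing))

def metropolis(q_obs, forcing, sigma=0.2, n=5000, step=0.02):
    """Random-walk Metropolis sampling of theta's posterior (flat prior)."""
    theta = 1.0
    ll = log_like(theta, q_obs, forcing, sigma)
    samples = []
    for _ in range(n):
        prop = theta + random.gauss(0.0, step)
        llp = log_like(prop, q_obs, forcing, sigma)
        if math.log(random.random()) < llp - ll:   # accept/reject
            theta, ll = prop, llp
        samples.append(theta)
    return samples

forcing = [float(i % 10 + 1) for i in range(50)]   # synthetic input
q_obs = [0.7 * x for x in forcing]                 # true theta = 0.7
post = metropolis(q_obs, forcing)
theta_hat = sum(post[1000:]) / len(post[1000:])    # posterior mean
```

The retained samples after burn-in characterize both the parameter estimate and its uncertainty, which is what the study uses to compare the three likelihood formulations.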
Bayesian parameter estimation for nonlinear modelling of biological pathways
Directory of Open Access Journals (Sweden)
Ghasemi Omid
2011-12-01
Full Text Available Abstract Background The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linearly parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. Results We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly
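The transformation step described above (Runge-Kutta turning a differential equation into a difference equation whose next state depends only on the current one) can be sketched for a single Hill-type pathway; the specific rate law and parameter values are illustrative assumptions:

```python
def hill(x, vmax, k, n):
    """Hill equation reaction rate."""
    return vmax * x ** n / (k ** n + x ** n)

def rk4_step(x, dt, vmax, k, n):
    """One classical Runge-Kutta (RK4) step for dx/dt = hill(x) - x,
    i.e. Hill-type production minus linear decay. The result is a
    difference equation x_{t+1} = f(x_t; params), which is what lets an
    MCMC likelihood be evaluated state by state."""
    f = lambda u: hill(u, vmax, k, n) - u
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Simulate a trajectory from the difference equation.
x, traj = 0.3, []
for _ in range(200):
    x = rk4_step(x, 0.05, vmax=2.0, k=0.5, n=2.0)
    traj.append(x)
```

For these parameters the trajectory settles at the stable fixed point x = 1 + sqrt(0.75) of the production-decay balance; in the paper's setting, MCMC proposes parameter values and scores them by how well such one-step predictions match the time series.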
Collective animal behavior from Bayesian estimation and probability matching.
Directory of Open Access Journals (Sweden)
Alfonso Pérez-Escudero
2011-11-01
Full Text Available Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is mainly based on empirical fits to observations, with less emphasis on first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probabilistic matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform, taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model derives very simple rules of interaction in animal collectives that depend only on two types of reliability parameters: one that each animal assigns to the other animals, and another given by the quality of the non-social information. We test our model by deriving theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.
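The two-stage model can be condensed into a few lines. Here `s` is the reliability an animal assigns to each observed conspecific and the prior carries the non-social information; both numbers are illustrative, not the paper's fitted values:

```python
import random

def posterior_best(n_a, n_b, s=2.0, prior_a=0.5):
    """Stage 1 (Bayesian estimation): posterior probability that option
    A is the better one, given that n_a animals chose A and n_b chose
    B. Each observed animal is assumed s times more likely to pick the
    better option, so the likelihood ratio is s ** (n_a - n_b)."""
    pa = prior_a * s ** n_a
    pb = (1.0 - prior_a) * s ** n_b
    return pa / (pa + pb)

def choose(n_a, n_b):
    """Stage 2 (probability matching): pick A with probability equal to
    the Bayesian-estimated probability that A is best."""
    return 'A' if random.random() < posterior_best(n_a, n_b) else 'B'
```

With s = 2, three animals at A versus one at B gives a 0.8 probability of choosing A; the rule depends only on the count difference, echoing the paper's finding that the interaction rules reduce to a small number of reliability parameters.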
Bayesian estimation of generalized exponential distribution under noninformative priors
Moala, Fernando Antonio; Achcar, Jorge Alberto; Tomazella, Vera Lúcia Damasceno
2012-10-01
The generalized exponential distribution, proposed by Gupta and Kundu (1999), is a good alternative to standard lifetime distributions such as the exponential, Weibull or gamma. Several authors have considered the problem of Bayesian estimation of the parameters of the generalized exponential distribution, assuming independent gamma priors and other informative priors. In this paper, we consider a Bayesian analysis of the generalized exponential distribution by assuming conventional noninformative prior distributions, such as the Jeffreys and reference priors, to estimate the parameters. These priors are compared with independent gamma priors for both parameters. The comparison is carried out by examining the frequentist coverage probabilities of Bayesian credible intervals. We show that the maximal data information prior leads to an improper posterior distribution for the parameters of a generalized exponential distribution. It is also shown that the choice of the parameter of interest is very important for the reference prior, as different choices lead to different reference priors in this case. Numerical inference is illustrated for the parameters by considering data sets of different sizes and using MCMC (Markov Chain Monte Carlo) methods.
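The evaluation criterion used above (frequentist coverage of Bayesian credible intervals) is easy to illustrate on a simpler case: an exponential rate under the Jeffreys prior, whose posterior is Gamma(n, sum of the data). The distribution, sample size, and Monte Carlo settings are assumptions for this sketch, not the paper's generalized exponential setup:

```python
import random

random.seed(7)

def credible_interval_rate(data, draws=2000, level=0.95):
    """Equal-tailed credible interval for an exponential rate under the
    Jeffreys prior p(lam) ~ 1/lam; the posterior is Gamma(n, rate=sum),
    sampled here rather than inverted analytically."""
    n, s = len(data), sum(data)
    post = sorted(random.gammavariate(n, 1.0 / s) for _ in range(draws))
    a = (1.0 - level) / 2.0
    return post[int(a * draws)], post[int((1.0 - a) * draws) - 1]

def coverage(lam=2.0, n=30, reps=200):
    """Frequentist check: how often does the 95% credible interval
    contain the true rate?"""
    hits = 0
    for _ in range(reps):
        data = [random.expovariate(lam) for _ in range(n)]
        lo, hi = credible_interval_rate(data)
        hits += lo <= lam <= hi
    return hits / reps

cov = coverage()   # sits close to the nominal 0.95 for this prior
```

A prior is judged well calibrated when this empirical coverage tracks the nominal credible level, which is exactly the comparison the paper runs for its candidate priors.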
Bayesian error estimation in density-functional theory
DEFF Research Database (Denmark)
Mortensen, Jens Jørgen; Kaasbjerg, Kristen; Frederiksen, Søren Lund
2005-01-01
We present a practical scheme for performing error estimates for density-functional theory calculations. The approach, which is based on ideas from Bayesian statistics, involves creating an ensemble of exchange-correlation functionals by comparing with an experimental database of binding energies ... for molecules and solids. Fluctuations within the ensemble can then be used to estimate errors relative to experiment on calculated quantities such as binding energies, bond lengths, and vibrational frequencies. It is demonstrated that the error bars on energy differences may vary by orders of magnitude ...
Bayesian ensemble approach to error estimation of interatomic potentials
DEFF Research Database (Denmark)
Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.;
2004-01-01
Using a Bayesian approach, a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method ... is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates ... of the actual errors for the potentials.
Bayesian Estimation of Small Effects in Exercise and Sports Science.
Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J
2016-01-01
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
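The probabilistic statements quoted above are simple functionals of posterior samples. The sketch below uses stand-in normal draws in place of real MCMC output, with an assumed smallest-substantial-effect threshold:

```python
import random

random.seed(3)

# Stand-in posterior draws for the LHTL-minus-IHE difference in
# hemoglobin-mass change (illustrative location and scale, not the
# paper's fitted posterior).
diff = [random.gauss(2.1, 1.0) for _ in range(10000)]

threshold = 0.4   # assumed smallest substantial effect
p_substantial = sum(d > threshold for d in diff) / len(diff)
```

A statement such as "probability 0.96 that LHTL yields a substantially greater increase" is exactly this kind of tail probability, read directly off the posterior rather than inferred from a p-value.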
The use of strain tensor to estimate thoracic tumors deformation
Energy Technology Data Exchange (ETDEWEB)
Michalski, Darek, E-mail: michalskid@upmc.edu; Huq, M. Saiful; Bednarz, Greg; Heron, Dwight E. [Department of Radiation Oncology, University of Pittsburgh Cancer Institute, Pittsburgh, Pennsylvania 15232 (United States)
2014-07-15
Purpose: Respiration-induced kinematics of thoracic tumors suggests a simple analogy with elasticity, where a strain tensor is used to characterize the volumes of interest. The application of the biomechanical framework allows for the objective determination of tumor characteristics. Methods: Four-dimensional computed tomography provides snapshots of the patient's anatomy at the end of inspiration and expiration. Image registration was used to obtain the displacement vector fields and deformation fields, which allow for the determination of the strain tensor. Its departure from the identity matrix gauges the departure of the medium from rigidity. The tensorial characteristic of each GTV voxel was determined and averaged. To this end, the standard Euclidean matrix norm as well as the Log-Euclidean norm were employed. Tensorial anisotropy was gauged with the fractional anisotropy measure, which is based on the normalized variance of the tensor's eigenvalues. Anisotropy was also evaluated with the geodesic distance, in the Log-Euclidean framework, of a given strain tensor to its closest isotropic counterpart. Results: The averaged strain tensor was determined for each of the 15 retrospectively analyzed thoracic GTVs. The amplitude of GTV motion varied from 0.64 to 4.21 cm, with an average of 1.20 cm. The GTV size ranged from 5.16 to 149.99 cc, with an average of 43.19 cc. The tensorial analysis shows that the deformation is minimal and the tensorial anisotropy is small. The Log-Euclidean distance of the averaged strain tensors from the identity matrix ranged from 0.06 to 0.31, with an average of 0.19. The Frobenius distance from the identity matrix is similar and ranged from 0.06 to 0.35, with an average of 0.21. Their fractional anisotropy ranged from 0.02 to 0.12, with an average of 0.07. Their geodesic anisotropy ranged from 0.03 to 0.16, with an average of 0.09. These values also indicate insignificant deformation. Conclusions: The tensorial framework
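The fractional anisotropy measure mentioned above (normalized variance of the eigenvalues) and the Frobenius distance from the identity are both a few lines of linear algebra; the example strain tensor is an assumed mild stretch, not patient data:

```python
import numpy as np

def fractional_anisotropy(T):
    """Fractional anisotropy of a symmetric 3x3 tensor, based on the
    normalized variance of its eigenvalues."""
    lam = np.linalg.eigvalsh(T)
    num = ((lam - lam.mean()) ** 2).sum()
    den = (lam ** 2).sum()
    return float(np.sqrt(1.5 * num / den))

identity = np.eye(3)                    # rigid motion: no deformation
stretch = np.diag([1.10, 1.00, 0.95])   # assumed mild anisotropic strain

fa_rigid = fractional_anisotropy(identity)   # exactly 0 for rigid motion
fa_strain = fractional_anisotropy(stretch)   # about 0.075
frob_dist = float(np.linalg.norm(stretch - identity))
```

Values like fa_strain ≈ 0.075 and frob_dist ≈ 0.11 fall inside the 0.02 to 0.12 and 0.06 to 0.35 ranges reported above, consistent with the conclusion that the deformation is small.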
Alvizuri, Celso; Silwal, Vipul; Krischer, Lion; Tape, Carl
2017-04-01
A seismic moment tensor is a 3 × 3 symmetric matrix that provides a compact representation of seismic events within Earth's crust. We develop an algorithm to estimate moment tensors and their uncertainties from observed seismic data. For a given event, the algorithm performs a grid search over the six-dimensional space of moment tensors by generating synthetic waveforms at each grid point and then evaluating a misfit function between the observed and synthetic waveforms. 'The' moment tensor M for the event is then the moment tensor with minimum misfit. To describe the uncertainty associated with M, we first convert the misfit function to a probability function. The uncertainty, or rather the confidence, is then given by the 'confidence curve' P(V), where P(V) is the probability that the true moment tensor for the event lies within the neighborhood of M that has fractional volume V. The area under the confidence curve provides a single, abbreviated 'confidence parameter' for M. We apply the method to data from events in different regions and tectonic settings: small (Mw 4) earthquakes in the southern Alaska subduction zone, and natural and man-made events at the Nevada Test Site. Moment tensor uncertainties allow us to better discriminate among moment tensor source types and to assign physical processes to the events.
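The confidence parameter (area under the confidence curve P(V)) can be illustrated on a one-dimensional stand-in for moment tensor space; the grid size and posterior shapes below are arbitrary:

```python
import numpy as np

def confidence_parameter(prob, order):
    """Average of the confidence curve P(V): visit cells in order of
    distance from the minimum-misfit solution, accumulate posterior
    probability, and average the curve over fractional volume V."""
    curve = np.cumsum(prob[order] / prob.sum())  # P(V) on a uniform V grid
    return float(curve.mean())                   # area under the curve

n = 1000
dist = np.abs(np.arange(n) - 500)           # distance from the best cell
order = np.argsort(dist)

peaked = np.exp(-0.5 * (dist / 10.0) ** 2)  # well-constrained posterior
flat = np.ones(n)                           # unconstrained posterior

c_peaked = confidence_parameter(peaked, order)   # close to 1
c_flat = confidence_parameter(flat, order)       # close to 0.5
```

A posterior concentrated near the minimum-misfit solution drives the parameter toward 1, while a flat posterior gives about 0.5, which is what makes the area under P(V) usable as a single confidence number per event.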
Recursive camera-motion estimation with the trifocal tensor.
Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen; Or, Siu Hang
2006-10-01
In this paper, an innovative extended Kalman filter (EKF) algorithm for pose tracking using the trifocal tensor is proposed. In the EKF, a constant-velocity motion model is used as the dynamic system, and the trifocal-tensor constraint is incorporated into the measurement model. The proposed method has the advantages of those structure- and-motion-based approaches in that the pose sequence can be computed with no prior information on the scene structure. It also has the strengths of those model-based algorithms in which no updating of the three-dimensional (3-D) structure is necessary in the computation. This results in a stable, accurate, and efficient algorithm. Experimental results show that the proposed approach outperformed other existing EKFs that tackle the same problem. An extension to the pose-tracking algorithm has been made to demonstrate the application of the trifocal constraint to fast recursive 3-D structure recovery.
Adaptation of Tensor Voting to Image Structure Estimation
Moreno,Rodrigo; Pizarro, Luis; Burgeth, Bernhard; Weickert, Joachim; Garcia, Miguel Angel; Puig, Domenec
2012-01-01
Bringing together key researchers in disciplines ranging from visualization and image processing to applications in structural mechanics, fluid dynamics, elastography, and numerical mathematics, the workshop that generated this edited volume was the third in the successful Dagstuhl series. Its aim, reflected in the quality and relevance of the papers presented, was to foster collaboration and fresh lines of inquiry in the analysis and visualization of tensor fields, which offer a concise mode...
Regularized Positive-Definite Fourth Order Tensor Field Estimation from DW-MRI★
Barmpoutis, Angelos; Vemuri, Baba C.; Howland, Dena; Forder, John R.
2009-01-01
In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing, a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this tensor approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been used clinically for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order tensor approximation fails to capture complex local tissue structures, e.g. crossing fibers, and as a result, the scalar quantities derived from these tensors are grossly inaccurate at such locations. In this paper we employ a 4th order symmetric positive-definite (SPD) tensor approximation to represent the diffusivity function and present a novel technique to estimate these tensors from the DW-MRI data while guaranteeing the SPD property. Several higher-order tensor approximations of the diffusivity function have been reported in the literature, but none of them guarantees the positivity of the estimates, which is a fundamental constraint since negative values of the diffusivity are not meaningful. In this paper we represent the 4th-order tensors as ternary quartics and then apply Hilbert's theorem on ternary quartics along with the Iwasawa parametrization to guarantee an SPD 4th-order tensor approximation from the DW-MRI data. The performance of this model is demonstrated on synthetic data as well as real DW-MRIs from a set of excised control and injured rat spinal cords, showing accurate estimation of scalar quantities such as generalized anisotropy and trace as well as fiber orientations. PMID:19063978
Regularized positive-definite fourth order tensor field estimation from DW-MRI.
Barmpoutis, Angelos; Hwang, Min Sig; Howland, Dena; Forder, John R; Vemuri, Baba C
2009-03-01
In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing, a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this tensor approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been used clinically for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order tensor approximation fails to capture complex local tissue structures, e.g. crossing fibers, and as a result, the scalar quantities derived from these tensors are grossly inaccurate at such locations. In this paper we employ a 4th order symmetric positive-definite (SPD) tensor approximation to represent the diffusivity function and present a novel technique to estimate these tensors from the DW-MRI data while guaranteeing the SPD property. Several higher-order tensor approximations of the diffusivity function have been reported in the literature, but none of them guarantees the positivity of the estimates, which is a fundamental constraint since negative values of the diffusivity are not meaningful. In this paper we represent the 4th-order tensors as ternary quartics and then apply Hilbert's theorem on ternary quartics along with the Iwasawa parametrization to guarantee an SPD 4th-order tensor approximation from the DW-MRI data. The performance of this model is demonstrated on synthetic data as well as real DW-MRIs from a set of excised control and injured rat spinal cords, showing accurate estimation of scalar quantities such as generalized anisotropy and trace as well as fiber orientations.
Experimental Bayesian Quantum Phase Estimation on a Silicon Photonic Chip.
Paesani, S; Gentile, A A; Santagati, R; Wang, J; Wiebe, N; Tew, D P; O'Brien, J L; Thompson, M G
2017-03-10
Quantum phase estimation is a fundamental subroutine in many quantum algorithms, including Shor's factorization algorithm and quantum simulation. However, so far results have cast doubt on its practicability for near-term, nonfault tolerant, quantum devices. Here we report experimental results demonstrating that this intuition need not be true. We implement a recently proposed adaptive Bayesian approach to quantum phase estimation and use it to simulate molecular energies on a silicon quantum photonic device. The approach is verified to be well suited for prethreshold quantum processors by investigating its superior robustness to noise and decoherence compared to the iterative phase estimation algorithm. This shows a promising route to unlock the power of quantum phase estimation much sooner than previously believed.
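The adaptive Bayesian approach boils down to repeated Bayes updates of a phase distribution, using the standard phase-estimation outcome statistics P(0 | phi) = (1 + cos(M(phi - theta)))/2. The depth schedule and random control phases below are simplifications for a sketch, not the paper's experimentally optimized protocol:

```python
import math, random

random.seed(5)

phi_true = 1.234                      # unknown phase to be estimated
N = 1000
grid = [2 * math.pi * k / N for k in range(N)]
post = [1.0 / N] * N                  # uniform prior on [0, 2*pi)

def likelihood(outcome, phi, M, theta):
    """Probability of the observed outcome for a phase-estimation
    circuit of depth M with control phase theta."""
    p0 = 0.5 * (1.0 + math.cos(M * (phi - theta)))
    return p0 if outcome == 0 else 1.0 - p0

for step in range(40):
    M = 1 + step // 4                 # slowly increase circuit depth
    theta = random.uniform(0.0, 2 * math.pi)
    # Simulate one measurement, then do a grid-based Bayes update.
    outcome = 0 if random.random() < likelihood(0, phi_true, M, theta) else 1
    post = [p * likelihood(outcome, phi, M, theta)
            for p, phi in zip(post, grid)]
    z = sum(post)
    post = [p / z for p in post]

phi_hat = max(zip(post, grid))[1]     # posterior mode
```

Deeper circuits sharpen the likelihood but alias the phase; interleaving shallow and deep measurements, as here, is what lets the posterior lock onto a single mode.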
Remote sensing image fusion based on Bayesian linear estimation
Institute of Scientific and Technical Information of China (English)
GE ZhiRong; WANG Bin; ZHANG LiMing
2007-01-01
A new remote sensing image fusion method based on statistical parameter estimation is proposed in this paper. More specifically, Bayesian linear estimation (BLE) is applied to observation models between remote sensing images with different spatial and spectral resolutions. The proposed method only estimates the mean vector and covariance matrix of the high-resolution multispectral (MS) images, instead of assuming the joint distribution between the panchromatic (PAN) image and the low-resolution multispectral image. Furthermore, the proposed method can enhance the spatial resolution of several principal components of the MS images, while the traditional Principal Component Analysis (PCA) method is limited to enhancing only the first principal component. Experimental results with real MS images and a PAN image of Landsat ETM+ demonstrate that the proposed method performs better than traditional methods based on statistical parameter estimation, the PCA-based method and the wavelet-based method.
An assessment of Bayesian bias estimator for numerical weather prediction
Directory of Open Access Journals (Sweden)
J. Son
2008-12-01
Full Text Available Various statistical methods are used to process operational Numerical Weather Prediction (NWP products with the aim of reducing forecast errors and they often require sufficiently large training data sets. Generating such a hindcast data set for this purpose can be costly and a well designed algorithm should be able to reduce the required size of these data sets.
This issue is investigated with the relatively simple case of bias correction, by comparing a Bayesian algorithm of bias estimation with the conventionally used empirical method. As available forecast data sets are not large enough for a comprehensive test, synthetically generated time series representing the analysis (truth) and forecast are used to increase the sample size. Since these synthetic time series retain the statistical characteristics of the observations and operational NWP model output, the results of this study can be extended to real observations and forecasts, and this is confirmed by a preliminary test with real data.
By using the climatological mean and standard deviation of the meteorological variable in question and the statistical relationship between the forecast and the analysis, the Bayesian bias estimator outperforms the empirical approach in terms of the accuracy of the estimated bias, and it can reduce the required size of the training sample by a factor of 3. This advantage of the Bayesian approach arises because it is less susceptible to sampling error in consecutive sampling. These results suggest that a carefully designed statistical procedure may reduce the need for the costly generation of large hindcast data sets.
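The climatological-prior idea can be sketched as a conjugate normal shrinkage estimator of the forecast bias; all variances below are assumptions for the sketch, not fitted NWP statistics:

```python
def empirical_bias(residuals):
    """Conventional estimate: sample mean of forecast-minus-analysis
    residuals."""
    return sum(residuals) / len(residuals)

def bayesian_bias(residuals, prior_mean=0.0, prior_var=4.0, obs_var=1.0):
    """Conjugate-normal estimate: shrink the sample-mean bias toward a
    climatological prior, weighting by the two precisions. With few
    training samples the prior damps the sampling error; with many
    samples its influence becomes negligible."""
    n = len(residuals)
    w = (n / obs_var) / (n / obs_var + 1.0 / prior_var)
    return w * empirical_bias(residuals) + (1.0 - w) * prior_mean

res = [1.4, 2.0, 0.9]        # a short (n = 3) training sample
b_emp = empirical_bias(res)
b_bay = bayesian_bias(res)   # pulled slightly toward the prior mean 0
```

The shrinkage is strongest exactly when the training sample is short, which is why the Bayesian estimator can match the empirical one with roughly a third of the training data.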
BAYESIAN ESTIMATION OF ERLANG DISTRIBUTION UNDER DIFFERENT PRIOR DISTRIBUTIONS
Directory of Open Access Journals (Sweden)
Abdul Haq
2011-01-01
Full Text Available This paper addresses the problem of Bayesian estimation of the parameters of the Erlang distribution under a squared error loss function by assuming different independent informative priors as well as joint priors for both shape and scale parameters. The motivation is to explore the most appropriate prior for the Erlang distribution among different priors. A comparison of the Bayes estimates and their risks for different choices of the values of the hyperparameters is also presented. Finally, we illustrate the results using a simulation study as well as by doing real data analysis.
A Bayesian-style approach to estimating LISA science capability
Baker, John; Marsat, Sylvain
2017-01-01
A full understanding of LISA's science capability will require accurate models of incident waveform signals and the instrumental response. While Fisher matrix analysis is useful for some estimates, a Bayesian characterization of simulated probability distributions is needed for understanding important cases at the limit of LISA's capability. We apply fast analysis algorithms enabling accurate treatment using EOB waveforms with relevant higher modes and the full-featured LISA response to study these aspects of LISA science capability. Supported by NASA grant 11-ATP-046.
Bayesian estimation of keyword confidence in Chinese continuous speech recognition
Institute of Scientific and Technical Information of China (English)
HAO Jie; LI Xing
2003-01-01
In a syllable-based speaker-independent Chinese continuous speech recognition system based on classical Hidden Markov Model (HMM), a Bayesian approach of keyword confidence estimation is studied, which utilizes both acoustic layer scores and syllable-based statistical language model (LM) score. The Maximum a posteriori (MAP) confidence measure is proposed, and the forward-backward algorithm calculating the MAP confidence scores is deduced. The performance of the MAP confidence measure is evaluated in keyword spotting application and the experiment results show that the MAP confidence scores provide high discriminability for keyword candidates. Furthermore, the MAP confidence measure can be applied to various speech recognition applications.
Bayesian Estimation for the Order of INAR(q) Model
Institute of Scientific and Technical Information of China (English)
Miao Guan-hong; Wang De-hui
2016-01-01
In this paper, we consider the problem of determining the order of the INAR(q) model on the basis of Bayesian estimation theory. The Bayesian estimator for the order is given with respect to a squared-error loss function. The consistency of the estimator is discussed, and the results of a simulation study for the estimation method are presented.
Bayesian estimation of animal movement from archival and satellite tags.
Directory of Open Access Journals (Sweden)
Michael D Sumner
Full Text Available The reliable estimation of animal location, and of its associated error, is fundamental to animal ecology. There are many existing techniques for handling location error, but these are often ad hoc or are used in isolation from each other. In this study we present a Bayesian framework for determining location that uses all the data available, is flexible across tagging techniques, and provides location estimates with built-in measures of uncertainty. Bayesian methods allow the contributions of multiple data sources to be decomposed into manageable components. We illustrate with two examples for two different location methods: satellite tracking and light-level geo-location. We show that many of the problems associated with uncertainty are reduced and quantified by our approach. The approach can use any available information, such as existing knowledge of the animal's potential range, light levels or direct location estimates, auxiliary data, and movement models. It provides a substantial contribution to the handling of uncertainty in archival tag and satellite tracking data using readily available tools.
Bayesian-Pearson Divergence Estimator Based on Grouped Data
Institute of Scientific and Technical Information of China (English)
Baoxue Zhang; Qingxun Meng
2004-01-01
A new method, combined with a Bayesian approach, for estimating the parameter in the distribution function F(x; θ) by using grouped data is developed in this paper. The support of F(x; θ) is divided into disjoint intervals -∞ = x0 < x1 < … < xk-1
MAP estimators and their consistency in Bayesian nonparametric inverse problems
Dashti, M.; Law, K. J. H.; Stuart, A. M.; Voss, J.
2013-09-01
We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map {G} applied to u. We adopt a Bayesian approach to the problem and work in a setting where the prior measure is specified as a Gaussian random field μ0. We work under a natural set of conditions on the likelihood which implies the existence of a well-posed posterior measure, μy. Under these conditions, we show that the maximum a posteriori (MAP) estimator is well defined as the minimizer of an Onsager-Machlup functional defined on the Cameron-Martin space of the prior; thus, we link a problem in probability with a problem in the calculus of variations. We then consider the case where the observational noise vanishes and establish a form of Bayesian posterior consistency for the MAP estimator. We also prove a similar result for the case where the observation of {G}(u) can be repeated as many times as desired with independent identically distributed noise. The theory is illustrated with examples from an inverse problem for the Navier-Stokes equation, motivated by problems arising in weather forecasting, and from the theory of conditioned diffusions, motivated by problems arising in molecular dynamics.
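For a finite-dimensional linear forward map with Gaussian noise and a Gaussian prior, the MAP estimator described above is the minimizer of a quadratic Onsager-Machlup functional and has a closed form. The sketch below is a hedged finite-dimensional analogue, not the paper's function-space setting:

```python
import numpy as np

def map_estimate(G, y, C0, sigma2):
    """MAP estimator for y = G u + noise, noise ~ N(0, sigma2 * I),
    with a Gaussian prior u ~ N(0, C0).  The MAP point minimises the
    Onsager-Machlup functional
        J(u) = ||y - G u||^2 / (2 sigma2) + (1/2) u^T C0^{-1} u,
    which for linear G is solved by the normal equations below."""
    A = G.T @ G / sigma2 + np.linalg.inv(C0)
    return np.linalg.solve(A, G.T @ y / sigma2)
```

As the observational noise vanishes (sigma2 → 0) the MAP point reproduces the data, mirroring the posterior-consistency regime discussed in the abstract; with a very tight prior it shrinks toward the prior mean.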
Bayesian estimation of HIV-1 dynamics in vivo.
Ushakova, Anastasia; Pettersen, Frank Olav; Mæland, Arild; Lindqvist, Bo Henry; Kvale, Dag
2015-03-01
A statistical analysis of viral dynamics in HIV-1 infected patients undergoing structured treatment interruptions was performed using a novel model that accounts for treatment efficiency as well as total CD8+ T cell counts. A brief review of parameter estimates obtained in other studies is given, pointing to a considerable variation in the estimated values. A Bayesian approach to parameter estimation was used with longitudinal measurements of CD4+ and CD8+ T cell counts and HIV RNA. We describe an estimation procedure which uses spline approximations of CD8+ T cell dynamics. This approach reduces the number of parameters that must be estimated and is especially helpful when the CD8+ T cell growth function has a delayed dependence on the past. Seven important parameters related to HIV-1 in-host dynamics were estimated, most of them treated as global parameters across the group of patients. The estimated values were mainly in keeping with the estimates obtained in other reports, but our paper also introduces estimates of some new parameters which supplement the current knowledge. The method was also tested on a simulated data set.
BAYESIAN ESTIMATION IN SHARED COMPOUND POISSON FRAILTY MODELS
Directory of Open Access Journals (Sweden)
David D. Hanagal
2015-06-01
Full Text Available In this paper, we study the compound Poisson distribution as the shared frailty distribution, with two different baseline distributions, namely the Pareto and linear failure rate distributions, for modeling survival data. We use the Markov Chain Monte Carlo (MCMC) technique to estimate the parameters of the proposed models by introducing a Bayesian estimation procedure. In the present study, a simulation is done to compare the true values of the parameters with the estimated values. We fit the proposed models to a real-life bivariate survival data set of McGilchrist and Aisbett (1991) related to kidney infection. Also, we present a comparison study for the same data using a model selection criterion, and suggest the better frailty model of the two proposed frailty models.
A Bayesian approach to estimating the prehepatic insulin secretion rate
DEFF Research Database (Denmark)
Andersen, Kim Emil; Højbjerre, Malene
The prehepatic insulin secretion rate of the pancreatic β-cells is not directly measurable, since part of the secreted insulin is absorbed by the liver prior to entering the blood stream. However, C-peptide is co-secreted equimolarly and is not absorbed by the liver, allowing for the estimation of the prehepatic insulin secretion rate. We consider a stochastic differential equation model that combines both insulin and C-peptide concentrations in plasma to estimate the prehepatic insulin secretion rate. Previously this model has been analysed in an iterative deterministic set-up, where the time courses of insulin and C-peptide subsequently are used as known forcing functions. In this work we adopt a Bayesian graphical model to describe the unified model simultaneously. We develop a model that also accounts for both measurement error and process variability. The parameters are estimated...
Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates
Directory of Open Access Journals (Sweden)
Piotr Białowolski
2012-03-01
Full Text Available The aim of this paper is to construct a forecasting model oriented towards predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that makes it possible to forecast the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for a full and controlled overview of all econometric models which can be obtained from a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.
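A toy rendering of the BACE idea can be sketched by fitting every OLS submodel and averaging forecasts with BIC-based approximate posterior weights; this is a common simplification assumed here for illustration, and the paper's exact weighting scheme may differ:

```python
import itertools
import numpy as np

def bace_forecast(X, y, x_new):
    """Illustrative Bayesian Averaging of Classical Estimators:
    fit every non-empty subset of regressors by OLS, weight each
    submodel by exp(-BIC/2) (an approximate posterior model weight),
    and return the weighted-average forecast at x_new."""
    n, p = X.shape
    bics, preds = [], []
    for k in range(1, p + 1):
        for cols in itertools.combinations(range(p), k):
            Xs = X[:, cols]
            beta, res, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = float(res[0]) if res.size else float(np.sum((y - Xs @ beta) ** 2))
            bics.append(n * np.log(max(rss, 1e-12) / n) + k * np.log(n))
            preds.append(float(x_new[list(cols)] @ beta))
    bics = np.array(bics)
    # Shift before exponentiating for numerical stability.
    w = np.exp(-(bics - bics.min()) / 2.0)
    w /= w.sum()
    return float(w @ np.array(preds))
```

With data generated almost exactly by one regressor, the weight mass concentrates on submodels containing it, and the averaged forecast tracks the true relationship.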
Theoretical estimate on tensor-polarization asymmetry in proton-deuteron Drell-Yan process
Kumano, S.; Song, Qin-Tao
2016-09-01
Tensor-polarized parton distribution functions are new quantities in spin-1 hadrons such as the deuteron, and they could probe new quark-gluon dynamics in hadron and nuclear physics. In charged-lepton deep inelastic scattering, they are studied by the twist-2 structure functions b1 and b2. The HERMES Collaboration found unexpectedly large b1 values compared to a naive theoretical expectation based on the standard deuteron model. The situation should be significantly improved in the near future by an approved experiment to measure b1 at Thomas Jefferson National Accelerator Facility (JLab). There is also an interesting indication in the HERMES result that finite antiquark tensor polarization exists. It could play an important role in clarifying the mechanism of tensor structure at the quark-gluon level. The tensor-polarized antiquark distributions are not easily determined from charged-lepton deep inelastic scattering; however, they can be measured in a proton-deuteron Drell-Yan process with a tensor-polarized deuteron target. In this article, we estimate the tensor-polarization asymmetry for a possible Fermilab Main-Injector experiment by using optimum tensor-polarized parton distribution functions to explain the HERMES measurement. We find that the asymmetry is typically a few percent. If it is measured, it could probe new hadron physics, and such studies could create an interesting field of high-energy spin physics. In addition, we find that a significant tensor-polarized gluon distribution should exist due to Q2 evolution, even if it were zero at a low Q2 scale. The tensor-polarized gluon distribution has never been observed, so it is an interesting future project.
A robust bayesian estimate of the concordance correlation coefficient.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
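Before any robust Bayesian treatment, Lin's concordance correlation coefficient itself is a simple sample statistic; the plain, non-robust version is sketched below for reference (the paper's contribution is the robust Student's-t Bayesian model, not this formula):

```python
import numpy as np

def concordance_correlation(x, y):
    """Sample concordance correlation coefficient (Lin's CCC):
    CCC = 2 * s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2).
    Measures both correlation and agreement with the identity line."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

Perfect agreement gives CCC = 1, while a constant location shift reduces it even when the Pearson correlation stays at 1, which is why CCC is preferred for agreement assessment.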
Root Sparse Bayesian Learning for Off-Grid DOA Estimation
Dai, Jisheng; Bao, Xu; Xu, Weichao; Chang, Chunqi
2017-01-01
The performance of existing sparse Bayesian learning (SBL) methods for off-grid DOA estimation depends on the trade-off between accuracy and computational workload. To speed up the off-grid SBL method while retaining reasonable accuracy, this letter describes a computationally efficient root SBL method for off-grid DOA estimation, where a coarse refinable grid, whose sampled locations are viewed as adjustable parameters, is adopted. We utilize an expectation-maximization (EM) algorithm to iteratively refine this coarse grid, and illustrate that each updated grid point can be obtained simply as the root of a certain polynomial. Simulation results demonstrate that the computational complexity is significantly reduced and the modeling error can be almost eliminated.
Low-Complexity Bayesian Estimation of Cluster-Sparse Channels
Ballal, Tarig
2015-09-18
This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as underwater acoustic channel impulse responses, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol-error-rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.
Bayesian fusion algorithm for improved oscillometric blood pressure estimation.
Forouzanfar, Mohamad; Dajani, Hilmi R; Groza, Voicu Z; Bolic, Miodrag; Rajan, Sreeraman; Batkin, Izmail
2016-11-01
A variety of oscillometric algorithms have been recently proposed in the literature for estimation of blood pressure (BP). However, these algorithms possess specific strengths and weaknesses that should be taken into account before selecting the most appropriate one. In this paper, we propose a fusion method to exploit the advantages of the oscillometric algorithms and circumvent their limitations. The proposed fusion method is based on the computation of the weighted arithmetic mean of the oscillometric algorithms estimates, and the weights are obtained using a Bayesian approach by minimizing the mean square error. The proposed approach is used to fuse four different oscillometric blood pressure estimation algorithms. The performance of the proposed method is evaluated on a pilot dataset of 150 oscillometric recordings from 10 subjects. It is found that the mean error and standard deviation of error are reduced relative to the individual estimation algorithms by up to 7 mmHg and 3 mmHg in estimation of systolic pressure, respectively, and by up to 2 mmHg and 3 mmHg in estimation of diastolic pressure, respectively.
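For independent, unbiased estimators, the MSE-minimizing weighted arithmetic mean reduces to inverse-variance weighting; the sketch below assumes that simplified setting (the paper's Bayesian derivation of the weights is more involved):

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse several blood-pressure estimates by a weighted arithmetic
    mean whose weights minimise the mean squared error.  For
    independent unbiased estimators this is inverse-variance
    weighting: w_i proportional to 1 / var_i."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return float(w @ np.asarray(estimates, dtype=float))
```

Equal variances recover the plain average, while a less reliable algorithm's estimate is automatically down-weighted.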
Analysis of Wave Directional Spreading by Bayesian Parameter Estimation
Institute of Scientific and Technical Information of China (English)
钱桦; 莊士贤; 高家俊
2002-01-01
A spatial array of wave gauges installed on an observation platform has been designed and arranged to measure the local features of winter monsoon directional waves off the Taishi coast of Taiwan. A new method, named the Bayesian Parameter Estimation Method (BPEM), is developed and adopted to determine the main direction and the directional spreading parameter of directional spectra. The BPEM can be considered a regression analysis that finds the maximum joint probability of the parameters which best approximate the observed data from the Bayesian viewpoint. The analysis of field wave data demonstrates the strong dependency of the characteristics of normalized directional spreading on the wave age. The Mitsuyasu-type empirical formula of the directional spectrum is therefore modified to be representative of the monsoon wave field. Moreover, it is suggested that Smax can be expressed as a function of wave steepness, with the values of Smax decreasing with increasing steepness. Finally, a local directional spreading model, which is simple to use in engineering practice, is proposed.
Estimating parameters in stochastic systems: A variational Bayesian approach
Vrettas, Michail D.; Cornford, Dan; Opper, Manfred
2011-11-01
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz '63 (3D model). As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz '96 system. In our investigation we compare this new approach with a variety of other well known methods, such as the hybrid Monte Carlo, dual unscented Kalman filter, and full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) part of the model evolution equations using our new methods.
Optimal Bayesian experimental design for contaminant transport parameter estimation
Tsilifis, Panagiotis; Hajali, Paris
2015-01-01
Experimental design is crucial for inference where limitations in the data collection procedure are present due to cost or other restrictions. Optimal experimental designs determine parameters that in some appropriate sense make the data the most informative possible. In a Bayesian setting this is translated to updating to the best possible posterior. Information theoretic arguments have led to the formation of the expected information gain as a design criterion. This can be evaluated mainly by Monte Carlo sampling and maximized by using stochastic approximation methods, both known for being computationally expensive tasks. We propose an alternative framework where a lower bound of the expected information gain is used as the design criterion. In addition to alleviating the computational burden, this also addresses issues concerning estimation bias. The problem of permeability inference in a large contaminated area is used to demonstrate the validity of our approach where we employ the massively parallel vers...
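The expected information gain mentioned above is commonly evaluated by nested Monte Carlo; the sketch below uses a hypothetical linear-Gaussian toy model, assumed purely for illustration (it is not the contaminant-transport problem and not the paper's lower-bound criterion):

```python
import numpy as np

def expected_information_gain(d, n_outer=500, n_inner=500, seed=0):
    """Nested Monte Carlo estimate of the expected information gain
    (EIG) of a scalar design d for an illustrative toy model:
    prior theta ~ N(0, 1), observation y = d * theta + N(0, 1).
    EIG(d) = E[log p(y | theta, d) - log p(y | d)]."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_outer)          # outer prior draws
    y = d * theta + rng.standard_normal(n_outer)  # simulated data
    # log-likelihood of each y under the theta that generated it
    log_lik = -0.5 * (y - d * theta) ** 2 - 0.5 * np.log(2.0 * np.pi)
    # marginal likelihood p(y) estimated with fresh inner prior draws
    theta_in = rng.standard_normal(n_inner)
    dens = np.exp(-0.5 * (y[:, None] - d * theta_in[None, :]) ** 2) \
        / np.sqrt(2.0 * np.pi)
    log_marg = np.log(dens.mean(axis=1))
    return float(np.mean(log_lik - log_marg))
```

For this toy model the exact EIG is 0.5 * log(1 + d^2), so the estimator can be sanity-checked against the analytic value; the cost of the nested loop is exactly the computational burden the paper's lower-bound criterion is designed to avoid.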
Application of Bayesian Networks for Estimation of Individual Psychological Characteristics
Litvinenko, Alexander
2017-07-19
In this paper we apply Bayesian networks for developing more accurate final overall estimations of psychological characteristics of an individual, based on psychological test results. Psychological tests which identify how much an individual possesses a certain factor are very popular and quite common in the modern world. We call this value for a given factor -- the final overall estimation. Examples of factors could be stress resistance, the readiness to take a risk, the ability to concentrate on certain complicated work and many others. An accurate qualitative and comprehensive assessment of human potential is one of the most important challenges in any company or collective. The most common way of studying psychological characteristics of each single person is testing. Psychologists and sociologists are constantly working on improvement of the quality of their tests. Despite serious work, done by psychologists, the questions in tests often do not produce enough feedback due to the use of relatively poor estimation systems. The overall estimation is usually based on personal experiences and the subjective perception of a psychologist or a group of psychologists about the investigated psychological personality factors.
Finding Clocks in Genes: A Bayesian Approach to Estimate Periodicity
Directory of Open Access Journals (Sweden)
Yan Ren
2016-01-01
Full Text Available Identification of rhythmic gene expression, from metabolic cycles to circadian rhythms, is crucial for understanding the gene regulatory networks and functions of these biological processes. Recently, two algorithms, JTK_CYCLE and ARSER, have been developed to estimate the periodicity of rhythmic gene expression. JTK_CYCLE performs well for long or less noisy time series, while ARSER performs well for detecting a single rhythmic category. However, observing gene expression at high temporal resolution is not always feasible, and many scientists are interested in exploring both ultradian and circadian rhythmic categories simultaneously. In this paper, a new algorithm, named autoregressive Bayesian spectral regression (ABSR), is proposed. It estimates the period of time-course experimental data and classifies gene expression profiles into multiple rhythmic categories simultaneously. Through simulation studies, it is shown that ABSR substantially improves the accuracy of periodicity estimation and clustering of rhythmic categories as compared to JTK_CYCLE and ARSER for data with low temporal resolution. Moreover, ABSR is insensitive to rhythmic patterns. This new scheme is applied to existing time-course mouse liver data to estimate the period of rhythms and classify the genes into ultradian, circadian, and arrhythmic categories. It is observed that 49.2% of the circadian profiles detected by JTK_CYCLE with 1-hour resolution are also detected by ABSR with only 4-hour resolution.
Point and Interval Estimation on the Degree and the Angle of Polarization. A Bayesian approach
Maier, Daniel; Santangelo, Andrea
2014-01-01
Linear polarization measurements provide access to two quantities, the degree (DOP) and the angle of polarization (AOP). The aim of this work is to give a complete and concise overview of how to analyze polarimetric measurements. We review interval estimations for the DOP with a frequentist and a Bayesian approach. Point estimations for the DOP and interval estimations for the AOP are further investigated with a Bayesian approach to match observational needs. Point and interval estimations are calculated numerically for frequentist and Bayesian statistics. Monte Carlo simulations are performed to clarify the meaning of the calculations. Under observational conditions, the true DOP and AOP are unknown, so that classical statistical considerations - based on true values - are not directly usable. In contrast, Bayesian statistics handles unknown true values very well and produces point and interval estimations for DOP and AOP, directly. Using a Bayesian approach, we show how to choose DOP point estimations based...
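A Bayesian point estimate of the DOP can be sketched by combining a Rice likelihood for the measured DOP with a flat prior on [0, 1]; this likelihood is a standard textbook choice assumed here only for illustration, and the paper's exact likelihood may differ:

```python
import numpy as np

def dop_posterior(p_meas, sigma, n_grid=1001):
    """Posterior over the true degree of polarization p0 given one
    measured DOP value p_meas with noise level sigma.  Assumes a Rice
    likelihood for the measurement and a flat prior on [0, 1]."""
    grid = np.linspace(0.0, 1.0, n_grid)
    # Rice density of p_meas for each candidate true value p0 on the grid
    lik = (p_meas / sigma**2) \
        * np.exp(-(p_meas**2 + grid**2) / (2.0 * sigma**2)) \
        * np.i0(p_meas * grid / sigma**2)
    dx = grid[1] - grid[0]
    post = lik / (lik.sum() * dx)  # normalise on the grid
    return grid, post
```

Because the true DOP is unknown under observational conditions, the posterior over p0 (rather than a frequentist construction around a true value) directly yields point estimates (posterior mean or mode) and credible intervals.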
Poster: Comparison of two global digital algorithms for Minkowski tensor estimation
DEFF Research Database (Denmark)
Christensen, Sabrina Tang
We present a comparison of two global digital algorithms for estimation of Minkowski tensors of sets in dimension two with positive reach given only a digitisation of the set. We give recommendations for the choice of variables utilised by the algorithms and examine accuracy of the latter...
Reliable dual tensor model estimation in single and crossing fibers based on Jeffreys prior
J. Yang (Jianfei); D.H.J. Poot; M.W.A. Caan (Matthan); Su, T. (Tanja); C.B. Majoie (Charles); L.J. van Vliet (Lucas); F. Vos (Frans)
2016-01-01
Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD).
Bayesian parameter estimation in spectral quantitative photoacoustic tomography
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2016-03-01
Photoacoustic tomography (PAT) is an imaging technique combining the strong contrast of optical imaging with the high spatial resolution of ultrasound imaging. These strengths are achieved via the photoacoustic effect, where the spatial absorption of a light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, the study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field via acoustic inverse problem approaches from the measured ultrasound data. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is ill-posed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described with a chromophore-concentration-weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (the power-law factor and the exponent), and the Grüneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated that estimation of all parameters of interest is possible with the approach.
Bayesian hierarchical grouping: Perceptual grouping as mixture estimation.
Froyen, Vicky; Feldman, Jacob; Singh, Manish
2015-10-01
We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian hierarchical grouping (BHG). In BHG, we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are "owned" by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz.
Institute of Scientific and Technical Information of China (English)
Gao Chunwen; Xu Jingzhen; Richard Sinding-Larsen
2005-01-01
A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.
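The core of such an analysis is a posterior sampler; a generic random-walk Metropolis sketch is shown below (illustrative only, with a hypothetical target; it is not Smith's discovery-process model or the Gibbs scheme of the paper):

```python
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler.  Draws from a posterior known
    only up to a normalising constant via its log-density log_post.
    Each proposal is a Gaussian perturbation of the current state,
    accepted with probability min(1, post(prop) / post(current))."""
    rng = np.random.default_rng(seed)
    x = x0
    lp = log_post(x)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples[i] = x
    return samples
```

As the abstract notes, the samples let one quantify all inferences directly (posterior means, credible intervals), assess statistical errors, and monitor convergence during sampling.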
Estimate on Spin Asymmetry for Drell-Yan Process at Fermilab with Tensor-Polarized Deuteron
Kumano, S
2016-01-01
There are four new structure functions for the spin-1 deuteron in comparison with the ones for the spin-1/2 proton, and they are called $b_1$, $b_2$, $b_3$, and $b_4$. The twist-2 structure functions $b_1$ and $b_2$ are expressed by tensor-polarized parton distribution functions in the deuteron. HERMES measurements of $b_1$ are much different from the prediction of the standard deuteron model with D-state admixture. It indicates that the structure functions $b_1$ and $b_2$ probe an interesting new aspect in the deuteron. There is an approved experiment at JLab to measure $b_1$ and it is expected to start in 2019. On the other hand, the measurement of tensor-polarized distributions is under consideration at Fermilab by the Drell-Yan process with the unpolarized proton beam and tensor-polarized deuteron target. It is expected to provide crucial information on tensor-polarized antiquark distributions. Since the distributions are small quantities, it is important to estimate the tensor-polarized spin asymmetry th...
Bridging data across studies using frequentist and Bayesian estimation.
Zhang, Teng; Lipkovich, Ilya; Marchenko, Olga
2017-01-01
In drug development programs, an experimental treatment is evaluated across different populations and/or disease types using multiple studies conducted in countries around the world. In order to show the efficacy and safety in a specific population, a bridging study may be required. There are therapeutic areas for which enrolling patients in a trial is very challenging. Therefore, it is of interest to utilize the available historical information from previous studies. However, the treatment effect may vary across different subpopulations/disease types; therefore, directly utilizing outcomes from historical studies may result in a biased estimation of the treatment effect under investigation in the target trial. In this article, we propose novel approaches using both frequentist and Bayesian frameworks that allow borrowing information from historical studies while accounting for relevant patient covariates via propensity-based weighting. We evaluate the operating characteristics of the proposed methods in a simulation study and demonstrate that under certain conditions these methods may lead to improved estimation of a treatment effect.
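The propensity-based weighting idea can be sketched in a few lines: fit a logistic model for membership in the current trial, then weight each historical patient by the estimated odds of belonging to the current-trial population before pooling outcomes. All data, covariates, and parameter values below are illustrative assumptions, not the article's method or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: current trial (z=1) and historical study (z=0),
# with one covariate x that shifts both enrolment and outcome.
n_cur, n_hist = 100, 300
x_cur = rng.normal(0.5, 1.0, n_cur)
x_hist = rng.normal(0.0, 1.0, n_hist)
y_hist = 1.0 + 0.8 * x_hist + rng.normal(0, 1, n_hist)

x = np.concatenate([x_cur, x_hist])
z = np.concatenate([np.ones(n_cur), np.zeros(n_hist)])

# Fit a logistic model P(z=1 | x) by Newton-Raphson (propensity score).
X = np.column_stack([np.ones_like(x), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (z - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

# Weight each historical patient by the odds of belonging to the
# current-trial population, then form a weighted outcome mean.
p_hist = 1.0 / (1.0 + np.exp(-X[n_cur:] @ beta))
w = p_hist / (1 - p_hist)
borrowed_mean = np.sum(w * y_hist) / np.sum(w)
print(f"propensity-weighted historical mean: {borrowed_mean:.3f}")
```

The weighting up-weights historical patients who resemble the current-trial population, which is the mechanism by which bias from population drift is reduced.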
Bayesian Approaches to Imputation, Hypothesis Testing, and Parameter Estimation
Ross, Steven J.; Mackey, Beth
2015-01-01
This chapter introduces three applications of Bayesian inference to common and novel issues in second language research. After a review of the critiques of conventional hypothesis testing, our focus centers on ways Bayesian inference can be used for dealing with missing data, for testing theory-driven substantive hypotheses without a default null…
Transmit Array Interpolation for DOA Estimation via Tensor Decomposition in 2-D MIMO Radar
Cao, Ming-Yang; Vorobyov, Sergiy A.; Hassanien, Aboulnasr
2017-10-01
In this paper, we propose a two-dimensional (2D) joint transmit array interpolation and beamspace design for planar array mono-static multiple-input-multiple-output (MIMO) radar for direction-of-arrival (DOA) estimation via tensor modeling. Our underlying idea is to map the transmit array to a desired array and suppress the transmit power outside the spatial sector of interest. In doing so, the signal-to-noise ratio is improved at the receive array. Then, we fold the received data along each dimension into a tensorial structure and apply tensor-based methods to obtain DOA estimates. In addition, we derive a closed-form expression for the DOA estimation bias caused by interpolation errors and argue for using a specially designed look-up table to compensate for the bias. The corresponding Cramér-Rao bound (CRB) is also derived. Simulation results are provided to show the performance of the proposed method and to compare it to the CRB.
Bayesian Approach in Estimation of Scale Parameter of Nakagami Distribution
Directory of Open Access Journals (Sweden)
Azam Zaka
2014-08-01
The Nakagami distribution is a flexible lifetime distribution that may offer a good fit to some failure data sets. It has applications in the attenuation of wireless signals traversing multiple paths, deriving unit hydrographs in hydrology, medical imaging studies, etc. In this research, we obtain Bayesian estimators of the scale parameter of the Nakagami distribution. For the posterior distribution of this parameter, we consider Uniform, Inverse Exponential and Levy priors. The three loss functions taken up are the Squared Error Loss Function (SELF), the Quadratic Loss Function and the Precautionary Loss Function (PLF). The performance of an estimator is assessed on the basis of its relative posterior risk. Monte Carlo simulations are used to compare the performance of the estimators. It is found that the PLF produces the least posterior risk when the uniform prior is used, while SELF is best when the Inverse Exponential and Levy priors are used.
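For the uniform-prior case the Bayes estimators under SELF and PLF have closed forms, since (with known shape m) the posterior of the scale is inverse-gamma. The sketch below is a minimal illustration under assumed values of the shape, true scale, and sample size; it is not the authors' simulation code:

```python
import numpy as np

rng = np.random.default_rng(1)
m, omega, n = 2.0, 4.0, 200          # shape (known), true scale, sample size

# Nakagami(m, omega) variates: if Y ~ Gamma(m, scale=omega/m), sqrt(Y) is Nakagami.
x = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=n))
S = np.sum(x**2)

# Under a flat (uniform) prior on omega, the posterior is inverse-gamma
# with shape a = n*m - 1 and scale b = m*S.
a, b = n * m - 1, m * S
post_mean = b / (a - 1)                        # Bayes estimator under SELF
plf_est = np.sqrt(b**2 / ((a - 1) * (a - 2)))  # Bayes estimator under PLF
print(post_mean, plf_est)
```

The PLF estimator is the square root of the posterior second moment, so it always sits slightly above the posterior mean; comparing their relative posterior risks over repeated draws reproduces the kind of comparison the abstract describes.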
Macnab, Ying C
2009-04-30
This paper presents Bayesian multivariate disease mapping and ecological regression models that take into account errors in covariates. Bayesian hierarchical formulations of multivariate disease models and covariate measurement models, with related methods of estimation and inference, are developed as an integral part of a Bayesian disability adjusted life years (DALYs) methodology for the analysis of multivariate disease or injury data and associated ecological risk factors and for small area DALYs estimation, inference, and mapping. The methodology facilitates the estimation of multivariate small area disease and injury rates and associated risk effects, evaluation of DALYs and 'preventable' DALYs, and identification of regions to which disease or injury prevention resources may be directed to reduce DALYs. The methodology interfaces and intersects the Bayesian disease mapping methodology and the global burden of disease framework such that the impact of disease, injury, and risk factors on population health may be evaluated to inform community health, health needs, and priority considerations for disease and injury prevention. A burden of injury study on road traffic accidents in local health areas in British Columbia, Canada, is presented as an illustrative example.
Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management
A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...
Efficient Bayesian Learning in Social Networks with Gaussian Estimators
Mossel, Elchanan
2010-01-01
We propose a simple and efficient Bayesian model of iterative learning on social networks. This model is efficient in two senses: the process both results in an optimal belief, and can be carried out with modest computational resources for large networks. This result extends Condorcet's Jury Theorem to general social networks, while preserving rationality and computational feasibility. The model consists of a group of agents who belong to a social network, so that a pair of agents can observe each other's actions only if they are neighbors. We assume that the network is connected and that the agents have full knowledge of the structure of the network. The agents try to estimate some state of the world S (say, the price of oil a year from today). Each agent has a private measurement of S. This is modeled, for agent v, by a number S_v picked from a Gaussian distribution with mean S and standard deviation one. Accordingly, agent v's prior belief regarding S is a normal distribution with mean S_v and standard dev...
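A minimal sketch of iterative estimate-sharing on a network: here a 5-agent path graph, with simple neighbourhood averaging standing in for the full Bayesian update (the two coincide in the equal-variance Gaussian case). The graph, state, and noise scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
S = 3.0                      # true state of the world
n = 5
# Path graph: each agent observes only its neighbours' current estimates.
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A /= A.sum(axis=1, keepdims=True)   # row-stochastic averaging weights

est = S + rng.normal(0, 1, n)       # private Gaussian measurements S_v
for _ in range(100):
    est = A @ est                    # each round: average over neighbourhood

print(est)
```

On a connected graph the estimates contract to a consensus value that pools the private measurements, which is the computationally modest behaviour the abstract highlights.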
Bayesian estimation of the hemodynamic response function in functional MRI
Marrelec, G.; Benali, H.; Ciuciu, P.; Poline, J.-B.
2002-05-01
Functional MRI (fMRI) is a recent, non-invasive technique allowing for the evolution of brain processes to be dynamically followed in various cognitive or behavioral tasks. In BOLD fMRI, what is actually measured is only indirectly related to neuronal activity through a process that is still under investigation. A convenient way to analyze BOLD fMRI data consists of considering the whole brain as a system characterized by a transfer response function, called the Hemodynamic Response Function (HRF). Precise and robust estimation of the HRF has not been achieved yet: parametric methods tend to be robust but require too strong constraints on the shape of the HRF, whereas non-parametric models are not reliable since the problem is badly conditioned. We therefore propose a full Bayesian, non-parametric method that makes use of basic but relevant a priori knowledge about the underlying physiological process to make robust inference about the HRF. We show that this model is very robust to decreasing signal-to-noise ratio and to the actual noise sampling distribution. We finally apply the method to real data, revealing a wide variety of HRF shapes.
Estimating seabed scattering mechanisms via Bayesian model selection.
Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan
2014-10-01
A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: Interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data where volume scattering is determined as the dominant scattering mechanism. Comparison of inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur.
Clement-Spychala, Meagan E; Couper, David; Zhu, Hongtu; Muller, Keith E
2010-01-01
The diffusion tensor imaging (DTI) protocol characterizes diffusion anisotropy locally in space, thus providing rich detail about white matter tissue structure. Although useful metrics for diffusion tensors have been defined, the statistical properties of the measures have been little studied. Assuming homogeneity within a region makes it possible to apply Wishart distribution theory. First, it is shown that common DTI metrics are simple functions of known test statistics. The average diffusion coefficient (ADC) corresponds to the trace of a Wishart, and is also described as the generalized (multivariate) variance, the average variance of the principal components. Therefore ADC has a known exact distribution (a positively weighted quadratic form in Gaussians) as well as a simple and accurate approximation (Satterthwaite) in terms of a scaled chi-square. Of particular interest is that fractional anisotropy (FA) values for given regions of interest are functions of the Geisser-Greenhouse (GG) sphericity estimator. The GG sphericity estimator can be approximated well by a linear transformation of a squared beta random variable. Simulated data demonstrate that the fits work well for simulated diffusion tensors. Applying traditional beta density estimation techniques to histograms of FA values from a region allows a histogram of hundreds or thousands of values to be represented in terms of just two estimates for the beta parameters. Thus using the approximate distribution eliminates the "curse of dimensionality" for FA values. A parallel result holds for ADC.
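Summarizing a region's FA histogram by two beta parameters can be done, for instance, by the method of moments; the sketch below uses simulated stand-in values rather than real FA data, and a Beta(4, 6) generator chosen arbitrarily to exercise the fit:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in FA values for a region of interest (true distribution unknown;
# here simulated as Beta(4, 6) just to exercise the fit).
fa = rng.beta(4, 6, size=2000)

# Method-of-moments fit of a Beta(a, b): match sample mean and variance.
m, v = fa.mean(), fa.var()
common = m * (1 - m) / v - 1
a_hat, b_hat = m * common, (1 - m) * common
print(a_hat, b_hat)
```

The thousands of voxel values collapse to the pair (a_hat, b_hat), which is the dimensionality reduction the abstract describes.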
Ely, Gregory
2013-01-01
In this paper we present a novel technique for micro-seismic localization using a group sparse penalization that is robust to the focal mechanism of the source and requires only a velocity model of the stratigraphy rather than a full Green's function model of the earth's response. In this technique we construct a set of perfect delta detector responses, one for each detector in the array, to a seismic event at a given location and impose a group sparsity across the array. This scheme is independent of the moment tensor and exploits the time compactness of the incident seismic signal. Furthermore we present a method for improving the inversion of the moment tensor and Green's function when the geometry of seismic array is limited. In particular we demonstrate that both Tikhonov regularization and truncated SVD can improve the recovery of the moment tensor and be robust to noise. We evaluate our algorithm on synthetic data and present error bounds for both estimation of the moment tensor as well as localization...
Combining the boundary shift integral and tensor-based morphometry for brain atrophy estimation
Michalkiewicz, Mateusz; Pai, Akshay; Leung, Kelvin K.; Sommer, Stefan; Darkner, Sune; Sørensen, Lauge; Sporring, Jon; Nielsen, Mads
2016-03-01
Brain atrophy estimated from structural magnetic resonance images (MRIs) is widely used as an imaging surrogate marker for Alzheimer's disease. Its utility has been limited by the large degree of variance and the consequently high sample size estimates. The only consistent and reasonably powerful atrophy estimation method has been the boundary shift integral (BSI). In this paper, we first propose a tensor-based morphometry (TBM) method to measure voxel-wise atrophy, which we then combine with BSI. The combined model decreases the sample size estimates significantly when compared to BSI and TBM alone.
Bayesian estimation of regularization parameters for deformable surface models
Energy Technology Data Exchange (ETDEWEB)
Cunningham, G.S.; Lehovich, A.; Hanson, K.M.
1999-02-20
In this article the authors build on their past attempts to reconstruct a 3D, time-varying bolus of radiotracer from first-pass data obtained by the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest total artificial heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tubes. The model for the radiotracer distribution at a given time is a closed surface parameterized by 482 vertices that are connected to make 960 triangles, with nonuniform intensity variations of radiotracer allowed inside the surface on a voxel-to-voxel basis. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework, as is the weighted norm of the gradient of the voxellated grid. MAP estimates for the vertices, interior intensity voxels and background count level are produced. The strength of the priors, or hyperparameters, is determined by maximizing the probability of the data given the hyperparameters, called the evidence. The evidence is calculated by first assuming that the posterior is approximately normal in the values of the vertices and voxels, and then by evaluating the integral of the multi-dimensional normal distribution. This integral (which requires evaluating the determinant of a covariance matrix) is computed by applying a recent algorithm from Bai et al. that calculates the needed determinant efficiently. The authors demonstrate that the radiotracer is highly inhomogeneous in early time frames, as suspected in earlier reconstruction attempts that assumed a uniform intensity of radiotracer within the closed surface, and that the optimal choice of hyperparameters is substantially different for different time frames.
Liu, Kai; Cui, Meng-Ying; Cao, Peng; Wang, Jiang-Bo
2016-01-01
On urban arterials, travel time estimation is challenging, especially from various data sources. In particular, fusing loop detector data and probe vehicle data to estimate travel time is a troublesome issue, given that the data are uncertain, imprecise and even conflicting. In this paper, we propose an improved data-fusion methodology for link travel time estimation. Link travel times are simultaneously pre-estimated using loop detector data and probe vehicle data, after which Bayesian fusion is applied to fuse the estimated travel times. Next, iterative Bayesian estimation is proposed to improve the Bayesian fusion by incorporating two strategies: 1) a substitution strategy, which replaces the less accurate travel time estimation from one sensor with the current fused travel time; and 2) specially designed convergence conditions, which restrict the estimated travel time to a reasonable range. The estimation results show that the proposed method outperforms the probe-vehicle-based method, the loop-detector-based method and single Bayesian fusion, reducing the mean absolute percentage error to 4.8%. Additionally, iterative Bayesian estimation performs better for lighter traffic flows, when the variability of travel time is higher than in other periods.
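Treating the two pre-estimates as Gaussians, the fusion step is a precision-weighted mean; the substitution strategy then feeds the fused value back in and fuses again until the change is negligible. The travel-time means and variances below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def gaussian_fuse(mu1, var1, mu2, var2):
    """Precision-weighted (Bayesian) fusion of two Gaussian estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    return mu, 1.0 / (w1 + w2)

# Pre-estimated link travel times (seconds) from the two sensors.
loop_mu, loop_var = 62.0, 9.0     # loop detectors: noisier estimate
probe_mu, probe_var = 55.0, 4.0   # probe vehicles: tighter estimate

mu, var = gaussian_fuse(loop_mu, loop_var, probe_mu, probe_var)
# Iterative step (simplified): replace the less accurate source with the
# fused value and fuse again, stopping when the change is negligible.
for _ in range(10):
    new_mu, var = gaussian_fuse(mu, var, probe_mu, probe_var)
    if abs(new_mu - mu) < 1e-6:
        break
    mu = new_mu
print(round(mu, 2))
```

The fused estimate always lands between the two sources and carries a smaller variance than either, which is the basic property single Bayesian fusion exploits before the iterative refinement.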
Rainfall estimation using raingages and radar — A Bayesian approach: 1. Derivation of estimators
Seo, D.-J.; Smith, J. A.
1991-03-01
Procedures for estimating rainfall from radar and raingage observations are constructed in a Bayesian framework. Given that the number of raingage measurements is typically very small, mean and variance of gage rainfall are treated as uncertain parameters. Under the assumption that log gage rainfall and log radar rainfall are jointly multivariate normal, the estimation problem is equivalent to lognormal co-kriging with uncertain mean and variance of the gage rainfall field. The posterior distribution is obtained under the assumption that the prior for the mean and inverse of the variance of log gage rainfall is normal-gamma 2. Estimate and estimation variance do not have closed-form expressions, but can be easily evaluated by numerically integrating two single integrals. To reduce computational burden associated with evaluating sufficient statistics for the likelihood function, an approximate form of parameter updating is given. Also, as a further approximation, the parameters are updated using raingage measurements only, yielding closed-form expressions for estimate and estimation variance in the Gaussian domain. With a reduction in the number of radar rainfall data in constructing covariance matrices, computational requirements for the estimation procedures are not significantly greater than those for simple co-kriging. Given their generality, the estimation procedures constructed in this work are considered to be applicable in various estimation problems involving an undersampled main variable and a densely sampled auxiliary variable.
Non-Parametric Bayesian State Space Estimator for Negative Information
Directory of Open Access Journals (Sweden)
Guillaume de Chambrier
2017-09-01
Simultaneous Localization and Mapping (SLAM) is concerned with the development of filters to accurately and efficiently infer the state parameters (position, orientation, etc.) of an agent and aspects of its environment, commonly referred to as the map. A mapping system is necessary for the agent to achieve situatedness, which is a precondition for planning and reasoning. In this work, we consider an agent who is given the task of finding a set of objects. The agent has limited perception and can only sense the presence of objects if direct contact is made; as a result, most of the sensing is negative information. In the absence of recurrent sightings or direct measurements of objects, there are no correlations from the measurement errors that can be exploited. This renders SLAM estimators for which this fact is their backbone, such as EKF-SLAM, ineffective. In addition, for our setting, no assumptions are made with respect to the marginals (beliefs) of both the agent and the objects (map). From the loose assumptions we stipulate regarding the marginals and measurements, we adopt a histogram parametrization. We introduce a Bayesian State Space Estimator (BSSE), which we name the Measurement Likelihood Memory Filter (MLMF), in which the values of the joint distribution are not parametrized; instead, we directly apply changes from the measurement-integration step to the marginals. This is achieved by keeping track of the history of the likelihood functions' parameters. We demonstrate that the MLMF gives the same filtered marginals as a histogram filter and show two implementations: MLMF and scalable-MLMF, both of which have a linear space complexity. The original MLMF retains an exponential time complexity (although an order of magnitude smaller than the histogram filter), while the scalable-MLMF introduces an independence assumption so as to have a linear time complexity. We further quantitatively demonstrate the scalability of our algorithm with 25 beliefs having up to
Estimation of dislocation density from precession electron diffraction data using the Nye tensor.
Leff, A C; Weinberger, C R; Taheri, M L
2015-06-01
The Nye tensor offers a means to estimate the geometrically necessary dislocation density of a crystalline sample based on measurements of the orientation changes within individual crystal grains. In this paper, the Nye tensor theory is applied to precession electron diffraction automated crystallographic orientation mapping (PED-ACOM) data acquired using a transmission electron microscope (TEM). The resulting dislocation density values are mapped in order to visualize the dislocation structures present in a quantitative manner. These density maps are compared with other related methods of approximating local strain dependencies in dislocation-based microstructural transitions from orientation data. The effect of acquisition parameters on density measurements is examined. By decreasing the step size and spot size during data acquisition, an increasing fraction of the dislocation content becomes accessible. Finally, the method described herein is applied to the measurement of dislocation emission during in situ annealing of Cu in TEM in order to demonstrate the utility of the technique for characterizing microstructural dynamics.
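In one dimension, for a single slip system, the Nye-tensor estimate reduces to the familiar relation rho_GND ≈ |d(theta)/dx| / b. The sketch below applies it to a hypothetical orientation line scan; the step size, Burgers vector, and misorientation values are assumed illustrations, not the paper's acquisition parameters:

```python
import numpy as np

# 1-D sketch: misorientation angle (radians) sampled along a line scan
# with step size h; the GND density follows from the orientation gradient.
b_mag = 2.56e-10       # Burgers vector magnitude for Cu (m)
h = 50e-9              # assumed acquisition step size (m)
theta = np.deg2rad(np.array([0.0, 0.02, 0.05, 0.09, 0.14]))

rho = np.abs(np.gradient(theta, h)) / b_mag   # dislocations per m^2
print(rho)
```

As the abstract notes, shrinking the step size h makes smaller orientation gradients resolvable, so a larger fraction of the dislocation content becomes accessible to the measurement.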
Tensor methods for parameter estimation and bifurcation analysis of stochastic reaction networks.
Liao, Shuohao; Vejchodský, Tomáš; Erban, Radek
2015-07-06
Stochastic modelling of gene regulatory networks provides an indispensable tool for understanding how random events at the molecular level influence cellular functions. A common challenge of stochastic models is to calibrate a large number of model parameters against the experimental data. Another difficulty is to study how the behaviour of a stochastic model depends on its parameters, i.e. whether a change in model parameters can lead to a significant qualitative change in model behaviour (bifurcation). In this paper, tensor-structured parametric analysis (TPA) is developed to address these computational challenges. It is based on recently proposed low-parametric tensor-structured representations of classical matrices and vectors. This approach enables simultaneous computation of the model properties for all parameter values within a parameter space. The TPA is illustrated by studying the parameter estimation, robustness, sensitivity and bifurcation structure in stochastic models of biochemical networks. A Matlab implementation of the TPA is available at http://www.stobifan.org.
A Bayesian approach to estimating causal vaccine effects on binary post-infection outcomes.
Zhou, Jincheng; Chu, Haitao; Hudgens, Michael G; Halloran, M Elizabeth
2016-01-15
To estimate causal effects of vaccine on post-infection outcomes, Hudgens and Halloran (2006) defined a post-infection causal vaccine efficacy estimand VEI based on the principal stratification framework. They also derived closed forms for the maximum likelihood estimators of the causal estimand under some assumptions. Extending their research, we propose a Bayesian approach to estimating the causal vaccine effects on binary post-infection outcomes. The identifiability of the causal vaccine effect VEI is discussed under different assumptions on selection bias. The performance of the proposed Bayesian method is compared with the maximum likelihood method through simulation studies and two case studies - a clinical trial of a rotavirus vaccine candidate and a field study of pertussis vaccination. For both case studies, the Bayesian approach provided similar inference as the frequentist analysis. However, simulation studies with small sample sizes suggest that the Bayesian approach provides smaller bias and shorter confidence interval length.
Institute of Scientific and Technical Information of China (English)
ZHANG Jian; YE Jian-shu; ZHAO Xin-ming
2007-01-01
The finite strip controlling equation of a pinned curved box was deduced on the basis of Novozhilov theory and the flexibility method, and the problem of the continuous curved box was resolved. The dynamic Bayesian error function of the displacement parameters of the continuous curved box was found, and the corresponding formulas for the dynamic Bayesian expectation and variance were derived. After a method for the automatic search of the step length was put forward, the optimization estimation computing formulas were obtained by adopting the conjugate gradient method. The steps of dynamic Bayesian estimation are then given in detail. Through the analysis of a classic example, a criterion for judging the precision of the known information is obtained, along with some other important conclusions about the dynamic Bayesian stochastic estimation of the displacement parameters of a continuous curved box.
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in the parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. An important obstacle in employing fuzzy Bayesian inference in groundwater numerical modeling applications, however, is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert
Bayesian Estimation of the DINA Model with Gibbs Sampling
Culpepper, Steven Andrew
2015-01-01
A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
Improving Mantel-Haenszel DIF Estimation through Bayesian Updating
Zwick, Rebecca; Ye, Lei; Isham, Steven
2012-01-01
This study demonstrates how the stability of Mantel-Haenszel (MH) DIF (differential item functioning) methods can be improved by integrating information across multiple test administrations using Bayesian updating (BU). The authors conducted a simulation that showed that this approach, which is based on earlier work by Zwick, Thayer, and Lewis,…
Bayesian Estimation and Hypothesis Tests for a Circular GLM
Mulder, K.T.
2016-01-01
Circular data are data measured in angles or directions. Although they occur in a wide variety of scientific fields, the number of methods for their analysis is limited. We develop a GLM-like model for circular data within the Bayesian framework, using the von Mises distribution. The model allows
Bayesian Approach to the Best Estimate of the Hubble Constant
Institute of Scientific and Technical Information of China (English)
王晓峰; 陈黎; 李宗伟
2001-01-01
A Bayesian approach is used to derive the probability distribution (PD) of the Hubble constant H0 from recent measurements including supernovae Ia, the Tully-Fisher relation, population II and physical methods. The discrepancies among these PDs are briefly discussed. The combined value of all the measurements is obtained, with a 95% confidence interval of 58.7 < H0 < 67.3 (km·s⁻¹·Mpc⁻¹).
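Combining independent, approximately Gaussian PDs of H0 amounts to a precision-weighted mean; the per-method means and dispersions below are placeholders for illustration, not the paper's actual inputs:

```python
import numpy as np

# Hypothetical (mu, sigma) summaries for each method's H0 PD, in km/s/Mpc.
measurements = [(65.0, 5.0), (60.0, 4.0), (63.0, 6.0), (62.0, 3.0)]

# Combined PD under independence: product of Gaussians, i.e. a
# precision-weighted mean and combined variance.
w = np.array([1.0 / s**2 for _, s in measurements])
mu = np.array([m for m, _ in measurements])
h0 = np.sum(w * mu) / np.sum(w)
sigma = 1.0 / np.sqrt(np.sum(w))
lo, hi = h0 - 1.96 * sigma, h0 + 1.96 * sigma
print(f"H0 = {h0:.1f}, 95% interval ({lo:.1f}, {hi:.1f})")
```

The combined dispersion is necessarily smaller than that of the tightest single method, which is why pooling discrepant PDs can still narrow the confidence interval.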
Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.
Directory of Open Access Journals (Sweden)
Elise Payzan-LeNestour
Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for the human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of the human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.
A Bayesian Approach for Graph-constrained Estimation for High-dimensional Regression.
Sun, Hokeun; Li, Hongzhe
Many different biological processes are represented by network graphs such as regulatory networks, metabolic pathways, and protein-protein interaction networks. Since genes that are linked on the networks usually have biologically similar functions, the linked genes form molecular modules to affect the clinical phenotypes/outcomes. Similarly, in large-scale genetic association studies, many SNPs are in high linkage disequilibrium (LD), which can also be summarized as a LD graph. In order to incorporate the graph information into regression analysis with high dimensional genomic data as predictors, we introduce a Bayesian approach for graph-constrained estimation (Bayesian GRACE) and regularization, which controls the amount of regularization for sparsity and smoothness of the regression coefficients. The Bayesian estimation with their posterior distributions can provide credible intervals for the estimates of the regression coefficients along with standard errors. The deviance information criterion (DIC) is applied for model assessment and tuning parameter selection. The performance of the proposed Bayesian approach is evaluated through simulation studies and is compared with Bayesian Lasso and Bayesian Elastic-net procedures. We demonstrate our method in an analysis of data from a case-control genome-wide association study of neuroblastoma using a weighted LD graph.
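When the prior precision on the coefficients takes the form lam1*I + lam2*L, with L the graph Laplacian, the MAP estimate has a simple ridge-like closed form. The toy sketch below on a chain graph illustrates that penalty structure only; it is not the authors' full Bayesian GRACE sampler, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 50, 6
# Chain graph over the 6 predictors: neighbouring coefficients
# are encouraged to be smooth via the graph Laplacian L.
L = np.zeros((p, p))
for i in range(p - 1):
    L[i, i] += 1.0
    L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0
    L[i + 1, i] -= 1.0

beta_true = np.array([1.0, 1.1, 0.9, 0.0, 0.0, 0.1])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(0, 0.5, n)

lam1, lam2 = 0.5, 2.0   # ridge and graph-smoothness penalties
# MAP estimate under a Gaussian prior with precision lam1*I + lam2*L:
beta_hat = np.linalg.solve(X.T @ X + lam1 * np.eye(p) + lam2 * L, X.T @ y)
print(np.round(beta_hat, 2))
```

The Laplacian term shrinks differences between linked coefficients toward zero, which is how network-linked genes (or SNPs in LD) end up with similar estimated effects.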
A Bayesian Estimator for Linear Calibration Error Effects in Thermal Remote Sensing
Morgan, J A
2005-01-01
The Bayesian Land Surface Temperature estimator previously developed has been extended to include the effects of imperfectly known gain and offset calibration errors. It is possible to treat both gain and offset as nuisance parameters and, by integrating over an uninformative range for their magnitudes, eliminate the dependence of surface temperature and emissivity estimates upon the exact calibration error.
Genetic analysis of rare disorders: Bayesian estimation of twin concordance rates
van den Berg, Stéphanie Martine; Hjelmborg, J.
2012-01-01
Twin concordance rates provide insight into the possibility of a genetic background for a disease. These concordance rates are usually estimated within a frequentistic framework. Here we take a Bayesian approach. For rare diseases, estimation methods based on asymptotic theory cannot be applied due
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights into designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel full Newton-type algorithms for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percentage of relative error in estimating the trace and a lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
DEFF Research Database (Denmark)
Pedersen, Thorkild Find
2003-01-01
Rotating and reciprocating mechanical machines emit acoustic noise and vibrations when they operate. Typically, the noise and vibrations are concentrated in narrow frequency bands related to the running speed of the machine. The frequency of the running speed is referred to as the fundamental...... of an adaptive comb filter is derived for tracking non-stationary signals. The estimation problem is then rephrased in terms of the Bayesian statistical framework. In the Bayesian framework both parameters and observations are considered stochastic processes. The result of the estimation is an expression...
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), that is, the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models against the selection of over-fitted ones by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with
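The evidence computation described above (averaging the likelihood over the prior) can be sketched with plain importance sampling on a conjugate toy model, where the exact evidence is available for comparison. The single Gaussian proposal fitted to posterior samples stands in for GMIS's mixture-plus-bridge-sampling machinery; the model and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=20)        # data; model: theta ~ N(0,1), y_i ~ N(theta,1)
n = len(y)

# Conjugate posterior (stands in for DREAM posterior samples)
post_var = 1.0 / (1.0 + n)
post_mean = post_var * y.sum()
theta_post = rng.normal(post_mean, np.sqrt(post_var), size=5000)

# Gaussian proposal fitted to the posterior samples
mu, sd = theta_post.mean(), theta_post.std()
theta = rng.normal(mu, sd, size=5000)

def log_norm(x, m, s):
    return -0.5 * np.log(2 * np.pi * s**2) - 0.5 * ((x - m) / s) ** 2

# Importance-sampling identity: Z = E_q[ prior * likelihood / proposal ]
log_w = (log_norm(theta, 0.0, 1.0)
         + np.array([log_norm(y, t, 1.0).sum() for t in theta])
         - log_norm(theta, mu, sd))
evidence = np.exp(log_w - log_w.max()).mean() * np.exp(log_w.max())
print(evidence)
```

Because the proposal nearly matches the posterior, the weights are almost constant and the estimator is very precise; for multimodal or heavy-tailed posteriors, this is where the mixture fit and bridge sampling of GMIS become necessary.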
Prior processes and their applications nonparametric Bayesian estimation
Phadia, Eswar G
2016-01-01
This book presents a systematic and comprehensive treatment of various prior processes that have been developed over the past four decades for dealing with the Bayesian approach to solving selected nonparametric inference problems. This revised edition has been substantially expanded to reflect the current interest in this area. After an overview of different prior processes, it examines the now pre-eminent Dirichlet process and its variants including hierarchical processes, then addresses new processes such as dependent Dirichlet, local Dirichlet, time-varying and spatial processes, all of which exploit the countable mixture representation of the Dirichlet process. It subsequently discusses various neutral to right type processes, including gamma and extended gamma, beta and beta-Stacy processes, and then describes the Chinese Restaurant, Indian Buffet and infinite gamma-Poisson processes, which prove to be very useful in areas such as machine learning, information retrieval and featural modeling. Tailfree and P...
Inference-less Density Estimation using Copula Bayesian Networks
Elidan, Gal
2012-01-01
We consider learning continuous probabilistic graphical models in the face of missing data. For non-Gaussian models, learning the parameters and structure of such models depends on our ability to perform efficient inference, and can be prohibitive even for relatively modest domains. Recently, we introduced the Copula Bayesian Network (CBN) density model - a flexible framework that captures complex high-dimensional dependency structures while offering direct control over the univariate marginals, leading to improved generalization. In this work we show that the CBN model also offers significant computational advantages when training data is partially observed. Concretely, we leverage the specialized form of the model to derive a computationally amenable learning objective that is a lower bound on the log-likelihood function. Importantly, our energy-like bound circumvents the need for costly inference of an auxiliary distribution, thus facilitating practical learning of high-dimensional densities. We demonstr...
Estimation of dislocation density from precession electron diffraction data using the Nye tensor
Energy Technology Data Exchange (ETDEWEB)
Leff, A.C. [Department of Materials Science & Engineering, Drexel University, Philadelphia, PA (United States); Weinberger, C.R. [Department of Mechanical Engineering and Mechanics, Drexel University, Philadelphia, PA (United States); Taheri, M.L., E-mail: mtaheri@coe.drexel.edu [Department of Materials Science & Engineering, Drexel University, Philadelphia, PA (United States)
2015-06-15
The Nye tensor offers a means to estimate the geometrically necessary dislocation density of a crystalline sample based on measurements of the orientation changes within individual crystal grains. In this paper, the Nye tensor theory is applied to precession electron diffraction automated crystallographic orientation mapping (PED-ACOM) data acquired using a transmission electron microscope (TEM). The resulting dislocation density values are mapped in order to visualize the dislocation structures present in a quantitative manner. These density maps are compared with other related methods of approximating local strain dependencies in dislocation-based microstructural transitions from orientation data. The effect of acquisition parameters on density measurements is examined. By decreasing the step size and spot size during data acquisition, an increasing fraction of the dislocation content becomes accessible. Finally, the method described herein is applied to the measurement of dislocation emission during in situ annealing of Cu in TEM in order to demonstrate the utility of the technique for characterizing microstructural dynamics. - Highlights: • Developed a method of mapping GND density using orientation mapping data from TEM. • As acquisition length-scale is decreased, all dislocations are considered GNDs. • Dislocation emission and corresponding grain rotation quantified.
Bayesian Estimates of Autocorrelations in Single-Case Designs
Shadish, William R.; Rindskopf, David M.; Hedges, Larry V.; Sullivan, Kristynn J.
2012-01-01
Researchers in the single-case design tradition have debated the size and importance of the observed autocorrelations in those designs. All of the past estimates of the autocorrelation in that literature have taken the observed autocorrelation estimates as the data to be used in the debate. However, estimates of the autocorrelation are subject to…
Bayesian estimation inherent in a Mexican-hat-type neural network.
Takiyama, Ken
2016-05-01
Brain functions, such as perception, motor control and learning, and decision making, have been explained within a Bayesian framework: to decrease the effects of noise inherent in the human nervous system or the external environment, our brain integrates sensory and a priori information in a Bayesian optimal manner. However, it remains unclear how Bayesian computations are implemented in the brain. Herein, I address this issue by analyzing a Mexican-hat-type neural network, which has been used as a model of the visual cortex, motor cortex, and prefrontal cortex. I analytically demonstrate that the dynamics of an order parameter in the model corresponds exactly to a variational inference of a linear Gaussian state-space model, a Bayesian estimation, when the strength of recurrent synaptic connectivity is appropriately stronger than that of an external stimulus, a plausible condition in the brain. This exact correspondence can reveal the relationship between the parameters in the Bayesian estimation and those in the neural network, providing insight for understanding brain functions.
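The Bayesian computation the network is shown to implement reduces, in the static scalar case, to precision-weighted integration of a prior and a noisy sensory cue, i.e. the posterior of a linear Gaussian model. The numbers below are illustrative assumptions, not parameters from the paper.

```python
# Posterior of a linear Gaussian model: combine a prior N(prior_mean, prior_var)
# with an observation N(obs, obs_var) by weighting each with its precision.
prior_mean, prior_var = 0.0, 4.0   # a priori information (assumed values)
obs, obs_var = 2.0, 1.0            # noisy sensory cue (assumed values)

w = (1 / obs_var) / (1 / obs_var + 1 / prior_var)   # weight on the sensory cue
post_mean = w * obs + (1 - w) * prior_mean
post_var = 1.0 / (1 / obs_var + 1 / prior_var)      # posterior precision adds
print(post_mean, post_var)  # 1.6, 0.8
```

The more reliable cue dominates; in the paper this weighting emerges from the ratio of recurrent connectivity strength to external stimulus strength.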
Directory of Open Access Journals (Sweden)
Hua Xu
2013-01-01
Full Text Available Estimation of distribution algorithms (EDAs), as an extension of genetic algorithms, sample new solutions from a probabilistic model that characterizes the distribution of promising solutions in the search space at each generation. This paper introduces and evaluates a novel estimation of distribution algorithm, called the L1-regularized Bayesian optimization algorithm, L1BOA. In L1BOA, Bayesian networks as probabilistic models are learned in two steps. First, candidate parents of each variable in the Bayesian network are detected by means of L1-regularized logistic regression, with the aim of leading to a sparse but nearly optimal network structure. Second, a greedy search, restricted to the candidate parent-child pairs, is deployed to identify the final structure. Compared with the Bayesian optimization algorithm (BOA), L1BOA improves the efficiency of structure learning due to the reduction and automated control of network complexity introduced with L1-regularized learning. Experimental studies on different types of benchmark problems show that L1BOA not only outperforms BOA when no prior knowledge about problem structure is available, but also achieves and even exceeds the best performance of BOA that applies explicit controls on network complexity. Furthermore, Bayesian networks built by L1BOA and BOA during evolution are analysed and compared, which demonstrates that L1BOA is able to build simpler, yet more accurate probabilistic models.
Bayesian estimation of the multifractality parameter for image texture using a Whittle approximation
Combrexelle, Sébastien; Dobigeon, Nicolas; Tourneret, Jean-Yves; McLaughlin, Steve; Abry, Patrice
2014-01-01
Texture characterization is a central element in many image processing applications. Multifractal analysis is a useful signal and image processing tool, yet the accurate estimation of multifractal parameters for image texture remains a challenge. This is due mainly to the fact that current estimation procedures consist of performing linear regressions across frequency scales of the two-dimensional (2D) dyadic wavelet transform, for which only a few such scales are computable for images. The strongly non-Gaussian nature of multifractal processes, combined with their complicated dependence structure, makes it difficult to develop suitable models for parameter estimation. Here, we propose a Bayesian procedure that addresses the difficulties in the estimation of the multifractality parameter. The originality of the procedure is threefold: the construction of a generic semi-parametric statistical model for the logarithm of wavelet leaders; the formulation of Bayesian estimators that are associated with this ...
Modeling hypoxia in the Chesapeake Bay: Ensemble estimation using a Bayesian hierarchical model
Stow, Craig A.; Scavia, Donald
2009-02-01
Quantifying parameter and prediction uncertainty in a rigorous framework can be an important component of model skill assessment. Generally, models with lower uncertainty will be more useful for prediction and inference than models with higher uncertainty. Ensemble estimation, an idea with deep roots in the Bayesian literature, can be useful to reduce model uncertainty. It is based on the idea that simultaneously estimating common or similar parameters among models can result in more precise estimates. We demonstrate this approach using the Streeter-Phelps dissolved oxygen sag model fit to 29 years of data from Chesapeake Bay. Chesapeake Bay has a long history of bottom water hypoxia and several models are being used to assist management decision-making in this system. The Bayesian framework is particularly useful in a decision context because it can combine both expert-judgment and rigorous parameter estimation to yield model forecasts and a probabilistic estimate of the forecast uncertainty.
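The Streeter-Phelps dissolved oxygen sag model at the core of this ensemble study has a closed form for the oxygen deficit. The sketch below uses illustrative parameter values, not the Chesapeake Bay estimates from the paper.

```python
import numpy as np

def streeter_phelps(t, L0, D0, kd, kr):
    """Oxygen deficit D(t) (mg/L) at travel time t (days) for initial BOD L0,
    initial deficit D0, deoxygenation rate kd, and reaeration rate kr (1/day)."""
    return (kd * L0 / (kr - kd)) * (np.exp(-kd * t) - np.exp(-kr * t)) \
        + D0 * np.exp(-kr * t)

# Illustrative parameters (assumed, not the Chesapeake Bay fit)
t = np.linspace(0.0, 20.0, 201)
D = streeter_phelps(t, L0=10.0, D0=1.0, kd=0.3, kr=0.5)
t_crit = t[np.argmax(D)]   # time of the maximum deficit, i.e. the DO sag minimum
print(t_crit, D.max())
```

In a Bayesian ensemble fit, kd, kr, and L0 get priors and shared parameters are estimated jointly across years, which is what tightens the forecast uncertainty the abstract describes.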
Varvia, Petri; Rautiainen, Miina; Seppänen, Aku
2017-04-01
Hyperspectral remote sensing data carry information on the leaf area index (LAI) of forests, and thus in principle, LAI can be estimated based on the data by inverting a forest reflectance model. However, LAI is usually not the only unknown in a reflectance model; especially, the leaf spectral albedo and understory reflectance are also not known. If the uncertainties of these parameters are not accounted for, the inversion of a forest reflectance model can lead to biased estimates for LAI. In this paper, we study the effects of reflectance model uncertainties on LAI estimates, and further, investigate whether the LAI estimates could recover from these uncertainties with the aid of Bayesian inference. In the proposed approach, the unknown leaf albedo and understory reflectance are estimated simultaneously with LAI from hyperspectral remote sensing data. The feasibility of the approach is tested with numerical simulation studies. The results show that in the presence of unknown parameters, the Bayesian LAI estimates which account for the model uncertainties outperform the conventional estimates that are based on biased model parameters. Moreover, the results demonstrate that the Bayesian inference can also provide feasible measures for the uncertainty of the estimated LAI.
Multi-Pitch Estimation and Tracking Using Bayesian Inference in Block Sparsity
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jakobsson, Andreas; Jensen, Jesper Rindom
2015-01-01
tracking of the found sources, without posing detailed a priori assumptions of the number of harmonics for each source. The method incorporates a Bayesian prior and assigns data-dependent regularization coefficients to efficiently incorporate both earlier and future data blocks in the tracking of estimates...
Energy Technology Data Exchange (ETDEWEB)
Terwilliger, Thomas C [Los Alamos National Laboratory; Adams, Paul D [LBNL; Read, Randy J [UNIV OF CAMBRIDGE; Mccoy, Airlie J [UNIV OF CAMBRIDGE
2008-01-01
Ten measures of experimental electron-density-map quality are examined and the skewness of electron density is found to be the best indicator of actual map quality. A Bayesian approach to estimating map quality is developed and used in the PHENIX AutoSol wizard to make decisions during automated structure solution.
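The skewness indicator singled out above is simple to compute: a good map concentrates density in sharp positive peaks (atoms) on a flat solvent background, giving a positively skewed density histogram. The synthetic "maps" below are illustrative assumptions standing in for real electron-density grids.

```python
import numpy as np

rng = np.random.default_rng(7)

def skewness(x):
    """Third standardized moment of the density values."""
    x = np.asarray(x, float)
    return np.mean((x - x.mean()) ** 3) / np.std(x) ** 3

noise_map = rng.normal(size=100_000)                      # featureless map: skew ~ 0
good_map = np.concatenate([rng.normal(size=95_000),       # flat background
                           rng.normal(6.0, 1.0, 5_000)])  # sharp atomic peaks
print(skewness(noise_map), skewness(good_map))
```

The strongly positive skew of the peaked map versus the near-zero skew of pure noise is the signal the AutoSol wizard uses to rank candidate solutions.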
Estimating the tensor-to-scalar ratio and the effect of residual foreground contamination
Energy Technology Data Exchange (ETDEWEB)
Fantaye, Y.; Leach, S.M.; Baccigalupi, C. [SISSA, Astrophysics Sector, via Bonomea 265, Trieste 34136 (Italy); Stivoli, F. [INRIA, Laboratoire de Recherche en Informatique, Université Paris-Sud 11, Bâtiment 490, 91405 Orsay Cedex (France); Grain, J. [CNRS, Institut d' Astrophysique Spatiale, Université Paris-Sud 11, Bâtiments 120-121, 91405 Orsay Cedex (France); Tristram, M. [CNRS, Laboratoire de l' Accélérateur Linéaire, Université Paris-Sud 11, Bâtiment 200, 91898 Orsay Cedex (France); Stompor, R., E-mail: fantaye@sissa.it, E-mail: stivoli@gmail.com, E-mail: julien.grain@ias.u-psud.fr, E-mail: leach@sissa.it, E-mail: tristram@lal.in2p3.fr, E-mail: bacci@sissa.it, E-mail: radek@apc.univ-paris7.fr [CNRS, Laboratoire Astroparticule and Cosmologie, 10 rue A. Domon et L. Duquet, F-75205 Paris Cedex 13 (France)
2011-08-01
We consider future balloon-borne and ground-based suborbital experiments designed to search for inflationary gravitational waves, and investigate the impact of residual foregrounds that remain in the estimated cosmic microwave background maps. This is achieved by propagating foreground modelling uncertainties from the component separation, under the assumption of a spatially uniform foreground frequency scaling, through to the power spectrum estimates, and up to measurement of the tensor to scalar ratio in the parameter estimation step. We characterize the error covariance due to subtracted foregrounds, and find it to be subdominant compared to instrumental noise and sample variance in our simulated data analysis. We model the unsubtracted residual foreground contribution using a two-parameter power law and show that marginalization over these foreground parameters is effective in accounting for a bias due to excess foreground power at low l. We conclude that, at least in the suborbital experimental setups we have simulated, foreground errors may be modeled and propagated up to parameter estimation with only a slight degradation of the target sensitivity of these experiments derived neglecting the presence of the foregrounds.
Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy
2012-01-01
Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization... The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art sparse methods.
Directory of Open Access Journals (Sweden)
Shu Wing Ho
2011-12-01
Full Text Available The valuation of options and many other derivative instruments requires an estimation of ex-ante or forward-looking volatility. This paper adopts a Bayesian approach to estimate stock price volatility. We find evidence that overall Bayesian volatility estimates more closely approximate the implied volatility of stocks derived from traded call and put options prices compared to historical volatility estimates sourced from IVolatility.com (“IVolatility”). Our evidence suggests use of the Bayesian approach to estimate volatility can provide a more accurate measure of ex-ante stock price volatility and will be useful in the pricing of derivative securities where the implied stock price volatility cannot be observed.
Directory of Open Access Journals (Sweden)
Bin Ran
Full Text Available Traffic state estimation from a floating car system is a challenging problem. The low penetration rate and random distribution mean that available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%.
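The completion idea can be sketched on a matrix unfolding: iteratively impute the missing traffic-state entries with a low-rank reconstruction, keeping observed entries fixed. The paper uses a genuine tensor completion framework; the rank, sizes, and observation fraction below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
links, times = 20, 30
latent = rng.random((links, 2)) @ rng.random((2, times))  # rank-2 "traffic state"
mask = rng.random((links, times)) < 0.4                   # 40% observed by floating cars

X = np.where(mask, latent, latent[mask].mean())           # initialize missing entries
for _ in range(100):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :2] * s[:2]) @ Vt[:2]                   # rank-2 approximation
    X = np.where(mask, latent, X_low)                     # keep observed values fixed

err = np.abs(X - latent)[~mask].mean()                    # error on unobserved entries
print(err)
```

The recovered unobserved entries are far more accurate than mean-filling because the low-rank structure couples links and time slots, which is the multi-way correlation the tensor formulation exploits across additional modes (e.g. day of week).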
Ran, Bin; Song, Li; Zhang, Jian; Cheng, Yang; Tan, Huachun
2016-01-01
TensorLy: Tensor Learning in Python
Kossaifi, Jean; Panagakis, Yannis; Pantic, Maja
2016-01-01
Tensor methods are gaining increasing traction in machine learning. However, there are scant to no resources available to perform tensor learning and decomposition in Python. To answer this need we developed TensorLy. TensorLy is a state of the art general purpose library for tensor learning. Writte
Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L
2016-02-10
Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure.
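The two-dimensional Monte Carlo idea described above can be sketched directly: draw a shared error once per realization (it moves the whole dose vector together) and an unshared error per subject. All error magnitudes and cohort sizes below are illustrative assumptions, not the Kazakhstan study's dosimetry.

```python
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_realizations = 100, 500
true_dose = rng.lognormal(mean=0.0, sigma=0.5, size=n_subjects)  # assumed doses

# Shared error: drawn once per realization, applied to every subject.
shared = rng.lognormal(0.0, 0.2, size=(n_realizations, 1))
# Unshared error: drawn independently per subject and realization.
unshared = rng.lognormal(0.0, 0.3, size=(n_realizations, n_subjects))

dose_vectors = true_dose * shared * unshared   # shape (n_realizations, n_subjects)

# Shared error moves an entire vector together, so the spread of the cohort mean
# across realizations reflects uncertainty that single-best-estimate methods miss.
cohort_means = dose_vectors.mean(axis=1)
print(cohort_means.std())
```

Feeding these multiple plausible dose vectors into the risk model, rather than one best estimate per person, is what lets the Bayesian model averaging propagate the shared component into wider, better-calibrated credible intervals.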
Totaro, C.; Orecchio, B.; Presti, D.; Scolaro, S.; Neri, G.
2016-09-01
A new high-quality waveform inversion focal mechanism database of the Calabrian Arc region has been compiled by integrating 292 mechanisms selected from literature and catalogs with 146 newly computed solutions. The new database has then been used for computation of posterior density distributions of stress tensor components by a Bayesian method never applied in south Italy before the present study. The application of this method to the enhanced database has allowed us to provide a detailed picture of seismotectonic stress regimes in this very complex area where lithospheric unit configuration and geodynamic engines are still strongly debated. Our results well constrain the extensional domain of Calabrian Arc and the compressional one of the southernmost Tyrrhenian Sea. In addition, previously undetected transcurrent regimes have been identified in the Ionian offshore. The new information released here will furnish useful tools and constraints for future geodynamic investigations.
Estimating dependability of programmable systems using bayesian belief nets
Energy Technology Data Exchange (ETDEWEB)
Gran, Bjoern Axel; Dahll, Gustav
2000-05-15
The research programme at the Halden Project on software safety assessment is augmented through a joint project with Kongsberg Defence and Aerospace AS and Det Norske Veritas. The objective of this project is to investigate the possibility to combine the Bayesian Belief Net (BBN) methodology with a software safety standard. The report discusses software safety standards in general, with respect to how they can be used to measure software safety. The possibility to transfer the requirements of a software safety standard into a BBN is also investigated. The aim is to utilise the BBN methodology and associated tools, by transferring the software safety measurement into a probabilistic quantity. In this way software can be included in a total probabilistic safety analysis. This project was performed by applying the method for an evaluation of a real, safety related programmable system which was developed according to the avionic standard DO-178B. The test case, the standard, and the BBN methodology are shortly described. This is followed by a description of the construction of the BBN used in this project. This includes the topology of the BBN, the elicitation of probabilities and the making of observations. Based on this a variety of computations are made using the SERENE methodology and the HUGIN tool. Observations and conclusions are made on the basis of the findings from this process. This report should be considered as a progress report in a more long-term activity on the use of BBNs as support for safety assessment of programmable systems. (Author). 23 refs., 9 figs., tabs
Eadie, Gwendolyn; Harris, William
2016-01-01
We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie, Harris, & Widrow (2015) and Eadie & Harris (2016) and builds upon the preliminary reports by Eadie et al (2015a,c). The method uses a distribution function $f(\\mathcal{E},L)$ to model the galaxy and kinematic data from satellite objects such as globular clusters to trace the Galaxy's gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie & Harris (2016), and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and in...
Institute of Scientific and Technical Information of China (English)
Sheng Zheng
2013-01-01
The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper treats the RFC problem in a Bayesian framework. It uses the unbiased Markov chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, with an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model inside the Bayesian inversion framework. In contrast to global optimization algorithms, Bayesian MCMC yields not only approximate solutions but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian MCMC algorithm is applied to both simulated and real radar sea-clutter data, with reference refractivity profiles obtained from helicopter soundings. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation and helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
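The contrast drawn here between a point-optimizing method and Bayesian MCMC can be illustrated with a minimal random-walk Metropolis sampler. The forward model below (a linear map from one "refractivity" parameter to noisy observations) is purely illustrative and stands in for the split-step FFT parabolic equation model of the paper; the point is that the sampler returns a posterior distribution rather than a single best fit.

```python
import math
import random

def metropolis(log_post, x0, n_samples=6000, step=0.3, seed=1):
    """Random-walk Metropolis: returns draws from the posterior of a scalar."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        lp_c = log_post(cand)
        # Accept with probability min(1, posterior ratio)
        if lp_c - lp >= 0 or rng.random() < math.exp(lp_c - lp):
            x, lp = cand, lp_c
        samples.append(x)
    return samples

# Toy forward model: observations depend linearly on one parameter m.
V = [1.0, 2.0, 3.0, 4.0]
data_rng = random.Random(0)
TRUE_M, NOISE = 1.3, 0.2
data = [TRUE_M * v + data_rng.gauss(0.0, NOISE) for v in V]

def log_post(m):
    if not 0.0 < m < 3.0:                       # flat prior on a physical interval
        return float("-inf")
    return -sum((y - m * v) ** 2 for y, v in zip(data, V)) / (2 * NOISE ** 2)

draws = metropolis(log_post, x0=0.5)[1000:]     # discard burn-in
post_mean = sum(draws) / len(draws)
post_sd = (sum((d - post_mean) ** 2 for d in draws) / len(draws)) ** 0.5
```

Here `post_sd` is the uncertainty estimate that a pure global optimizer would not provide.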
Bayesian Regression and Neuro-Fuzzy Methods Reliability Assessment for Estimating Streamflow
Directory of Open Access Journals (Sweden)
Yaseen A. Hamaamin
2016-07-01
Full Text Available Accurate and efficient estimation of streamflow in a watershed's tributaries is a prerequisite for viable water resources management. This study couples process-driven and data-driven methods of streamflow forecasting as a more efficient and cost-effective approach to water resources planning and management. Two data-driven methods, Bayesian regression and the adaptive neuro-fuzzy inference system (ANFIS), were tested separately as a faster alternative to a calibrated and validated Soil and Water Assessment Tool (SWAT) model to predict streamflow in the Saginaw River Watershed of Michigan. For the data-driven modeling process, four structures were assumed and tested: general, temporal, spatial, and spatiotemporal. Results showed that both Bayesian regression and ANFIS can replicate global (watershed) and local (subbasin) results similar to a calibrated SWAT model. At the global level, Bayesian regression and ANFIS model performance were satisfactory, based on Nash-Sutcliffe efficiencies of 0.99 and 0.97, respectively. At the subbasin level, the Bayesian regression and ANFIS models were satisfactory for 155 and 151 out of 155 subbasins, respectively. Overall, the most accurate method was a spatiotemporal Bayesian regression model that outperformed the other models at both global and local scales. Nevertheless, all ANFIS models also performed satisfactorily at both scales.
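The Nash-Sutcliffe efficiencies quoted above compare model predictions against the mean of the observations: a value of 1 is a perfect fit, and 0 means the model is no better than always predicting the observed mean. A minimal sketch, with made-up flow values rather than the Saginaw data:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Illustrative observed vs simulated streamflow (arbitrary units)
obs = [10.0, 12.0, 9.0, 14.0, 11.0]
sim = [10.5, 11.5, 9.2, 13.6, 11.3]
nse = nash_sutcliffe(obs, sim)
```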
A Bayesian Framework for Remaining Useful Life Estimation
National Aeronautics and Space Administration — The estimation of remaining useful life (RUL) of a faulty component is at the center of system prognostics and health management. It gives operators a potent tool in...
The unbearable uncertainty of Bayesian divergence time estimation
Institute of Scientific and Technical Information of China (English)
Mario DOS REIS; Ziheng YANG
2013-01-01
Divergence time estimation using molecular sequence data relying on uncertain fossil calibrations is an unconventional statistical estimation problem. As the sequence data provide information about distances only, estimation of absolute times and rates has to rely on information in the prior, so that the model is only semi-identifiable. In this paper, we use a combination of mathematical analysis, computer simulation, and real data analysis to examine the uncertainty in posterior time estimates when the amount of sequence data increases. The analysis extends the infinite-sites theory of Yang and Rannala, which predicts the posterior distribution of divergence times and rate when the amount of data approaches infinity. We found that the posterior credibility interval in general decreases and reaches a non-zero limit as the data size increases. However, for the node with the most precise fossil calibration (as measured by the interval width divided by the mid value), sequence data do not really make the time estimate any more precise. We propose a finite-sites theory which predicts that the square of the posterior interval width approaches its infinite-data limit at the rate 1/n, where n is the sequence length. We suggest a procedure to partition the uncertainty of posterior time estimates into that due to uncertainties in fossil calibrations and that due to sampling errors in the sequence data. We evaluate the impact of conflicting fossil calibrations on posterior time estimation and point out that narrow credibility intervals or overly precise time estimates can be produced by conflicting or erroneous fossil calibrations.
SNP based heritability estimation using a Bayesian approach
DEFF Research Database (Denmark)
Krag, Kristian; Janss, Luc; Mahdi Shariati, Mohammad;
2013-01-01
Heritability is a central element in quantitative genetics. New molecular markers to assess genetic variance and heritability are continually under development. The availability of molecular single nucleotide polymorphism (SNP) markers can be applied for estimation of variance components and heritability...
Regularized Positive-Definite Fourth Order Tensor Field Estimation from DW-MRI★
2008-01-01
In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing, a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this tensor approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been clinically used for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order tensor approximation fails to capture complex...
Asymptotic accuracy of Bayesian estimation for a single latent variable.
Yamazaki, Keisuke
2015-09-01
In data science and machine learning, hierarchical parametric models, such as mixture models, are often used. They contain two kinds of variables: observable variables, which represent the parts of the data that can be directly measured, and latent variables, which represent the underlying processes that generate the data. Although there has been an increase in research on the estimation accuracy for observable variables, the theoretical analysis of estimating latent variables has not been thoroughly investigated. In a previous study, we determined the accuracy of a Bayes estimation for the joint probability of the latent variables in a dataset, and we proved that the Bayes method is asymptotically more accurate than the maximum-likelihood method. However, the accuracy of the Bayes estimation for a single latent variable remains unknown. In the present paper, we derive the asymptotic expansions of the error functions, which are defined by the Kullback-Leibler divergence, for two types of single-variable estimations when the statistical regularity is satisfied. Our results indicate that the accuracies of the Bayes and maximum-likelihood methods are asymptotically equivalent and clarify that the Bayes method is only advantageous for multivariable estimations.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
Energy Technology Data Exchange (ETDEWEB)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined by the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
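A minimal sketch of the chain described here, under assumed constants and synthetic logs (the study's exact road-load model, drive cycle, and noise level are not given, so the simplified equation and all numbers below are stand-ins): propose a parameter set, score it against measured load, and accept or reject by the probability ratio so that the chain history approximates the parameter distribution.

```python
import math
import random

G, RHO, CRR = 9.81, 1.2, 0.008           # assumed-known constants

def road_load(mass, cda, v, a):
    """Simplified road load: inertial + rolling resistance + aerodynamic drag."""
    return mass * a + CRR * mass * G + 0.5 * RHO * cda * v ** 2

# Synthetic "logged" operation: (speed m/s, accel m/s^2) and measured load (N)
rng = random.Random(0)
ops = [(5.0 + i, 0.1 * (i % 3)) for i in range(20)]
TRUE_MASS, TRUE_CDA, NOISE = 1500.0, 0.6, 20.0
loads = [road_load(TRUE_MASS, TRUE_CDA, v, a) + rng.gauss(0, NOISE) for v, a in ops]

def log_prob(mass, cda):
    if not (500 < mass < 3000 and 0.1 < cda < 2.0):   # flat prior bounds
        return float("-inf")
    return -sum((f - road_load(mass, cda, v, a)) ** 2
                for f, (v, a) in zip(loads, ops)) / (2 * NOISE ** 2)

state, lp = (1000.0, 1.0), log_prob(1000.0, 1.0)
chain = []
for _ in range(20000):
    cand = (state[0] + rng.gauss(0, 30.0), state[1] + rng.gauss(0, 0.05))
    lp_c = log_prob(*cand)
    if lp_c - lp >= 0 or rng.random() < math.exp(lp_c - lp):  # probability ratio
        state, lp = cand, lp_c
    chain.append(state)

tail = chain[5000:]                       # discard burn-in
m_est = sum(m for m, _ in tail) / len(tail)
cda_est = sum(c for _, c in tail) / len(tail)
```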
Energy Technology Data Exchange (ETDEWEB)
Heasler, Patrick G.; Posse, Christian; Hylden, Jeff L.; Anderson, Kevin K.
2007-06-13
This paper presents a nonlinear Bayesian regression algorithm for the purpose of detecting and estimating gas plume content from hyper-spectral data. Remote sensing data, by its very nature, is collected under less controlled conditions than laboratory data. As a result, the physics-based model that is used to describe the relationship between the observed remote-sensing spectra and the terrestrial (or atmospheric) parameters that we desire to estimate is typically littered with many unknown "nuisance" parameters (parameters that we are not interested in estimating, but that also appear in the model). Bayesian methods are well suited for this context as they automatically incorporate the uncertainties associated with all nuisance parameters into the error estimates of the parameters of interest. The nonlinear Bayesian regression methodology is illustrated on realistic simulated data from a three-layer model for longwave infrared (LWIR) measurements from a passive instrument. The results show that this approach should permit more accurate estimation as well as a more reasonable description of estimate uncertainty.
Bayesian optimization of perfusion and transit time estimation in PASL-MRI.
Santos, Nuno; Sanches, João; Figueiredo, Patrícia
2010-01-01
Pulsed Arterial Spin Labeling (PASL) techniques potentially allow the absolute, non-invasive quantification of brain perfusion and arterial transit time. This can be achieved by fitting a kinetic model to the data acquired at a number of inversion time points (TI). The intrinsically low SNR of PASL data, together with the uncertainty in the model parameters, can hinder the estimation of the parameters of interest. Here, a two-compartment kinetic model is used to estimate perfusion and transit time, based on a Maximum a Posteriori (MAP) criterion. A priori information concerning the physiological variation of the multiple model parameters is used to guide the solution. Monte Carlo simulations are performed to compare the accuracy of our proposed Bayesian estimation method with a conventional Least Squares (LS) approach, using four different sets of TI points. Each set is obtained either with a uniform distribution or an optimal sampling strategy designed based on the same MAP criterion. We show that the estimation errors are minimized when our proposed Bayesian estimation method is employed in combination with an optimal set of sampling points. In conclusion, our results indicate that PASL perfusion and transit time measurements would benefit from a Bayesian approach for the optimization of both the sampling strategy and the estimation algorithm, whereby prior information on the parameters is used.
Eadie, Gwendolyn M.; Springford, Aaron; Harris, William E.
2017-02-01
We present a hierarchical Bayesian method for estimating the total mass and mass profile of the Milky Way Galaxy. The new hierarchical Bayesian approach further improves the framework presented by Eadie et al. and Eadie and Harris and builds upon the preliminary reports by Eadie et al. The method uses a distribution function f(ℰ, L) to model the Galaxy and kinematic data from satellite objects, such as globular clusters (GCs), to trace the Galaxy's gravitational potential. A major advantage of the method is that it not only includes complete and incomplete data simultaneously in the analysis, but also incorporates measurement uncertainties in a coherent and meaningful way. We first test the hierarchical Bayesian framework, which includes measurement uncertainties, using the same data and power-law model assumed in Eadie and Harris and find the results are similar but more strongly constrained. Next, we take advantage of the new statistical framework and incorporate all possible GC data, finding a cumulative mass profile with Bayesian credible regions. This profile implies a mass within 125 kpc of 4.8 × 10^11 M_⊙ with a 95% Bayesian credible region of (4.0–5.8) × 10^11 M_⊙. Our results also provide estimates of the true specific energies of all the GCs. By comparing these estimated energies to the measured energies of GCs with complete velocity measurements, we observe that (the few) remote tracers with complete measurements may play a large role in determining a total mass estimate of the Galaxy. Thus, our study stresses the need for more remote tracers with complete velocity measurements.
Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.
Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J
2012-10-01
Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.
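The two-phase idea described here, learn a covariance structure first and then reuse it to shape the proposal, can be sketched on a toy correlated target. This is a generic adaptive Metropolis sketch with an assumed 2-D Gaussian target standing in for the variance-component posterior, not the authors' hybrid Gibbs / integrated-likelihood sampler.

```python
import numpy as np

rng = np.random.default_rng(42)

# Target: a correlated 2-D Gaussian posterior (stand-in for variance components)
COV = np.array([[1.0, 0.8], [0.8, 1.0]])
PREC = np.linalg.inv(COV)

def log_post(x):
    return -0.5 * x @ PREC @ x

def run_chain(x0, n, prop_cov):
    """Random-walk Metropolis with a multivariate normal proposal."""
    x = np.asarray(x0, float)
    lp = log_post(x)
    out = np.empty((n, 2))
    for i in range(n):
        cand = rng.multivariate_normal(x, prop_cov)
        lp_c = log_post(cand)
        if lp_c - lp >= 0 or rng.uniform() < np.exp(lp_c - lp):
            x, lp = cand, lp_c
        out[i] = x
    return out

# Phase 1 (learning): crude isotropic proposal, just to learn the covariance
learn = run_chain([0.0, 0.0], 4000, 0.5 * np.eye(2))
learned_cov = np.cov(learn[1000:].T)

# Phase 2: proposal shaped by the learned covariance (2.38^2/d is a common scale)
tuned = run_chain(learn[-1], 8000, (2.38 ** 2 / 2) * learned_cov)
est_corr = np.corrcoef(tuned[2000:].T)[0, 1]
```

Because the phase-2 proposal matches the target's correlation structure, the tuned chain mixes along the ridge of the posterior rather than across it.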
Directory of Open Access Journals (Sweden)
Hamid Reza Khalkhali
2016-09-01
Full Text Available Background: Often there is no access to a sufficient sample size to estimate prevalence using the direct estimator in all areas. The aim of this study was to compare the small-area Bayesian method and the direct method in estimating the prevalence of steatosis in obese and overweight children. Materials and Methods: This cross-sectional study was conducted on 150 overweight and obese children aged 2 to 15 years referred to the children's digestive clinic of Urmia University of Medical Sciences, Iran, in 2013. After body mass index (BMI) calculation, overweight and obese children were assessed with the primary obesity screening tests. Children with steatosis confirmed by abdominal ultrasonography were then referred to the laboratory for further tests. Steatosis prevalence was estimated by the direct and Bayesian methods, and their efficiency was evaluated using the jackknife mean-square error method. The study data were analyzed using the OpenBUGS 3.1.2 and R 2.15.2 software. Results: The findings indicated that the estimated prevalence of steatosis in children using the Bayesian and direct methods was between 0.3098 and 0.493, and 0.355 and 0.560, respectively, in health districts; 0.3098 and 0.502, and 0.355 and 0.550 in education districts; 0.321 and 0.582, and 0.357 and 0.615 in age groups; and 0.313 and 0.429, and 0.383 and 0.536 in sex groups. In general, according to the results, the mean-square error of the Bayesian estimation was smaller than that of the direct estimation (P
Estimating Correlations with Missing Data, A Bayesian Approach.
Gross, Alan L.; Torres-Quevedo, Rocio
1995-01-01
The posterior distribution of the bivariate correlation is analytically derived given a data set where "X" is completely observed, but "Y" is missing at random for a portion of the sample. Interval estimates of the correlation are constructed from the posterior distribution in terms of the highest density regions. (SLD)
Fully Bayesian Estimation of Data from Single Case Designs
Rindskopf, David
2013-01-01
Single case designs (SCDs) generally consist of a small number of short time series in two or more phases. The analysis of SCDs statistically fits in the framework of a multilevel model, or hierarchical model. The usual analysis does not take into account the uncertainty in the estimation of the random effects. This not only has an effect on the…
A Sparse Bayesian Learning Algorithm With Dictionary Parameter Estimation
DEFF Research Database (Denmark)
Hansen, Thomas Lundgaard; Badiu, Mihai Alin; Fleury, Bernard Henri
2014-01-01
This paper concerns sparse decomposition of a noisy signal into atoms which are specified by unknown continuous-valued parameters. An example could be estimation of the model order, frequencies and amplitudes of a superposition of complex sinusoids. The common approach is to reduce the continuous...
Directory of Open Access Journals (Sweden)
Robertson Patrick
2010-01-01
Full Text Available Multipath remains one of the most critical problems in satellite navigation, particularly in urban environments, where the received navigation signals can be affected by blockage, shadowing, and multipath reception. The latest multipath mitigation algorithms are based on the concept of sequential Bayesian estimation and improve receiver performance by exploiting the temporal constraints of the channel dynamics. In this paper, we specifically address the problem of estimating and adjusting the number of multipath replicas considered by the receiver algorithm. An efficient implementation via a two-fold marginalized Bayesian filter is presented, in which a particle filter, grid-based filters, and Kalman filters are suitably combined in order to mitigate the multipath channel by efficiently estimating its time-variant parameters in a track-before-detect fashion. Results based on an experimentally derived set of channel data corresponding to a typical urban propagation environment confirm the benefit of our novel approach.
Matsumoto, S.
2016-09-01
The stress field is a key factor controlling earthquake occurrence and crustal evolution. In this study, we propose an approach for determining the stress field in a region using seismic moment tensors, based on the classical equations of plasticity theory. Seismic activity is a phenomenon that relaxes crustal stress and creates plastic strain in a medium through faulting, which suggests that the medium can be treated as a plastic body. According to the constitutive relation of plasticity theory, the increment of the plastic strain tensor is proportional to the deviatoric stress tensor. Simple mathematical manipulation then yields an inversion method for estimating the stress field in a region. The method is tested on shallow earthquakes occurring on Kyushu Island, Japan.
A Bayesian framework for parameter estimation in dynamical models.
Directory of Open Access Journals (Sweden)
Flávio Codeço Coelho
Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results into agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful use of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands, and Portugal.
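As a concrete instance of the kind of model being calibrated, here is a minimal SIR simulator (Euler integration over fractions of a closed population). The parameter values are illustrative only, not the fitted European values; in a calibration loop such a simulator would sit inside the likelihood.

```python
def sir_step(s, i, r, beta, gamma, dt=0.1):
    """One Euler step of the SIR equations (s, i, r are population fractions)."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

def simulate_prevalence(beta, gamma, days=120, steps_per_day=10):
    """Return the daily-resolution infectious-fraction trajectory."""
    s, i, r = 0.999, 0.001, 0.0
    traj = []
    for _ in range(days * steps_per_day):
        s, i, r = sir_step(s, i, r, beta, gamma, dt=1.0 / steps_per_day)
        traj.append(i)
    return traj

# Illustrative parameters: R0 = beta / gamma = 2
inc = simulate_prevalence(beta=0.4, gamma=0.2)
peak = max(inc)
```

For R0 = 2 the analytic peak prevalence is 1 - (1 + ln 2)/2 ≈ 0.153, which the Euler trajectory reproduces closely.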
TensorLy: Tensor Learning in Python
Kossaifi, Jean; Panagakis, Yannis; Pantic, Maja
2016-01-01
Tensor methods are gaining increasing traction in machine learning. However, there are few resources available for performing tensor learning and decomposition in Python. To answer this need we developed TensorLy. TensorLy is a state-of-the-art general-purpose library for tensor learning. Written in Python, it aims to follow the standards adopted by the main projects of the Python scientific community and to integrate fully with them. It allows for fast and straightforward tensor d...
Bayesian STSA estimation using masking properties and generalized Gamma prior for speech enhancement
Parchami, Mahdi; Zhu, Wei-Ping; Champagne, Benoit; Plourde, Eric
2015-12-01
We consider the estimation of the speech short-time spectral amplitude (STSA) using a parametric Bayesian cost function and speech prior distribution. First, new schemes are proposed for the estimation of the cost function parameters, using an initial estimate of the speech STSA along with the noise masking feature of the human auditory system. This information is further employed to derive a new technique for the gain flooring of the STSA estimator. Next, to achieve better compliance with the noisy speech in the estimator's gain function, we take advantage of the generalized Gamma distribution in order to model the STSA prior and propose an SNR-based scheme for the estimation of its corresponding parameters. It is shown that in Bayesian STSA estimators, the exploitation of a rough STSA estimate in the parameter selection for the cost function and the speech prior leads to more efficient control on the gain function values. Performance evaluation in different noisy scenarios demonstrates the superiority of the proposed methods over the existing parametric STSA estimators in terms of the achieved noise reduction and introduced speech distortion.
Estimating uncertainty and reliability of social network data using Bayesian inference.
Farine, Damien R; Strandburg-Peshkin, Ariana
2015-09-01
Social network analysis provides a useful lens through which to view the structure of animal societies, and as a result its use is increasingly widespread. One challenge that many studies of animal social networks face is dealing with limited sample sizes, which introduces the potential for a high level of uncertainty in estimating the rates of association or interaction between individuals. We present a method based on Bayesian inference to incorporate uncertainty into network analyses. We test the reliability of this method at capturing both local and global properties of simulated networks, and compare it to a recently suggested method based on bootstrapping. Our results suggest that Bayesian inference can provide useful information about the underlying certainty in an observed network. When networks are well sampled, observed networks approach the real underlying social structure. However, when sampling is sparse, Bayesian inferred networks can provide realistic uncertainty estimates around edge weights. We also suggest a potential method for estimating the reliability of an observed network given the amount of sampling performed. This paper highlights how relatively simple procedures can be used to estimate uncertainty and reliability in studies using animal social network analysis.
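The edge-weight uncertainty described here can be made concrete with a conjugate sketch: if two individuals are seen associating in x of n sampling periods, a Beta(a, b) prior on their association rate gives a Beta(a + x, b + n - x) posterior. This is a generic Bayesian treatment in the same spirit as the paper, not the authors' exact formulation.

```python
import math

def beta_pdf(p, a, b):
    """Beta(a, b) density at p (0 < p < 1), via log-gamma for stability."""
    ln = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
          + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))
    return math.exp(ln)

def credible_interval(x, n, a=1.0, b=1.0, level=0.95, grid=4000):
    """Central credible interval for an edge weight with x co-sightings in n
    samples; posterior is Beta(a + x, b + n - x), inverted on a midpoint grid."""
    a_p, b_p = a + x, b + n - x
    ps = [(k + 0.5) / grid for k in range(grid)]
    cdf, acc = [], 0.0
    for p in ps:
        acc += beta_pdf(p, a_p, b_p) / grid
        cdf.append(acc)
    lo = next(p for p, c in zip(ps, cdf) if c >= (1 - level) / 2)
    hi = next(p for p, c in zip(ps, cdf) if c >= 1 - (1 - level) / 2)
    return lo, hi

# Sparse sampling gives a wide interval; more sampling narrows it
lo1, hi1 = credible_interval(2, 5)      # 2 of 5 periods together
lo2, hi2 = credible_interval(20, 50)    # same rate, ten times the sampling
```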
Directory of Open Access Journals (Sweden)
Pengpeng Jiao
2014-01-01
Full Text Available Time-dependent turning movement flows are very important input data for intelligent transportation systems but cannot be detected directly by current traffic surveillance systems. Existing estimation models have proved insufficiently accurate and reliable across all intervals. An improved way to address this problem is to develop a combined model framework that integrates multiple submodels running simultaneously. This paper first presents a back-propagation neural network model to estimate dynamic turning movements, solved using a self-adaptive learning rate and gradient descent with momentum. Second, it develops an efficient Kalman filtering model and designs a revised sequential Kalman filtering algorithm. Based on the Bayesian method, using both historical data and currently estimated results for error calibration, the paper then integrates the two submodels into a Bayesian combined model framework and proposes a corresponding algorithm. A field survey was conducted at an intersection in Beijing to collect both time series of link counts and actual time-dependent turning movement flows, including historical and present data. The reported estimation results show that the Bayesian combined model is much more accurate and stable than the other models.
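The error-calibration step that weights the two submodels can be illustrated with inverse-variance (precision) weighting, the standard Bayesian fusion rule under Gaussian errors. The numbers below are hypothetical, and the paper's actual combination algorithm is more elaborate; this only shows why the fused estimate leans toward the historically better submodel and has lower variance than either.

```python
def combine(estimates, error_vars):
    """Inverse-variance fusion: weight each submodel by 1 / (its error variance).
    Returns the fused estimate and its (reduced) variance."""
    weights = [1.0 / v for v in error_vars]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

# Hypothetical turning-movement estimates from the two submodels (veh/interval),
# with error variances taken from historical calibration
nn_est, kf_est = 120.0, 132.0
nn_var, kf_var = 64.0, 16.0
fused, fused_var = combine([nn_est, kf_est], [nn_var, kf_var])
```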
Directory of Open Access Journals (Sweden)
Y. Paudel
2013-03-01
Full Text Available This study applies Bayesian Inference to estimate flood risk for 53 dyke ring areas in the Netherlands, and focuses particularly on the data scarcity and extreme behaviour of catastrophe risk. The probability density curves of flood damage are estimated through Monte Carlo simulations. Based on these results, flood insurance premiums are estimated using two different practical methods that each account in different ways for an insurer's risk aversion and the dispersion rate of loss data. This study is of practical relevance because insurers have been considering the introduction of flood insurance in the Netherlands, which is currently not generally available.
An, Lihua; Fung, Karen Y; Krewski, Daniel
2010-09-01
Spontaneous adverse event reporting systems are widely used to identify adverse reactions to drugs following their introduction into the marketplace. In this article, a James-Stein type shrinkage estimation strategy was developed in a Bayesian logistic regression model to analyze pharmacovigilance data. This method is effective in detecting signals as it combines information and borrows strength across medically related adverse events. Computer simulation demonstrated that the shrinkage estimator is uniformly better than the maximum likelihood estimator in terms of mean squared error. This method was used to investigate the possible association of a series of diabetic drugs and the risk of cardiovascular events using data from the Canada Vigilance Online Database.
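The dominance result mentioned here (shrinkage uniformly better than maximum likelihood in mean squared error) can be reproduced with the classic James-Stein estimator on simulated Gaussian effects. This simplified sketch shrinks toward zero and does not implement the paper's Bayesian logistic regression; it only demonstrates the borrowing-strength mechanism.

```python
import random

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein estimator shrinking a vector of observed
    effects toward zero; dominates the MLE (y itself) when len(y) >= 3."""
    p = len(y)
    s = sum(v * v for v in y)
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / s)
    return [factor * v for v in y]

rng = random.Random(7)
p, reps = 10, 2000
theta = [0.5] * p                  # true effects for p related adverse events
mse_mle = mse_js = 0.0
for _ in range(reps):
    y = [t + rng.gauss(0, 1) for t in theta]      # noisy observed effects
    js = james_stein(y)
    mse_mle += sum((a - t) ** 2 for a, t in zip(y, theta)) / reps
    mse_js += sum((a - t) ** 2 for a, t in zip(js, theta)) / reps
```

With p = 10 dimensions, the MLE's total squared error averages about p = 10, while the shrinkage estimator's is substantially lower.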
Improved Estimates of the Milky Way's Disk Scale Length From Hierarchical Bayesian Techniques
Licquia, Timothy C
2016-01-01
The exponential scale length ($L_d$) of the Milky Way's (MW's) disk is a critical parameter for describing the global physical size of our Galaxy, important both for interpreting other Galactic measurements and helping us to understand how our Galaxy fits into extragalactic contexts. Unfortunately, current estimates span a wide range of values and often are statistically incompatible with one another. Here, we aim to determine an improved, aggregate estimate for $L_d$ by utilizing a hierarchical Bayesian (HB) meta-analysis technique that accounts for the possibility that any one measurement has not properly accounted for all statistical or systematic errors. Within this machinery we explore a variety of ways of modeling the nature of problematic measurements, and then use a Bayesian model averaging technique to derive net posterior distributions that incorporate any model-selection uncertainty. Our meta-analysis combines 29 different (15 visible and 14 infrared) photometric measurements of $L_d$ available in ...
Kennedy, Paula L; Woodbury, Allan D
2002-01-01
In ground water flow and transport modeling, the heterogeneous nature of porous media has a considerable effect on the resulting flow and solute transport. Some method of generating the heterogeneous field from a limited dataset of uncertain measurements is required. Bayesian updating is one method that interpolates from an uncertain dataset using the statistics of the underlying probability distribution function. In this paper, Bayesian updating was used to determine the heterogeneous natural log transmissivity field for a carbonate and a sandstone aquifer in southern Manitoba. It was determined that the transmissivity in m2/sec followed a natural log normal distribution for both aquifers, with means of -7.2 and -8.0 for the carbonate and sandstone aquifers, respectively. The variograms were calculated using an estimator developed by Li and Lake (1994). Fractal nature was not evident in the variogram from either aquifer. The Bayesian updating heterogeneous field provided good results even in cases where little data were available. A large transmissivity zone in the sandstone aquifer was created by the Bayesian procedure, which is not a reflection of any deterministic consideration but a natural outcome of updating a prior probability distribution function with observations. The statistical model returns a result that is very reasonable; that is, homogeneous in regions where little or no information is available to alter an initial state. No long-range correlation trends or fractal behavior of the log-transmissivity field was observed in either aquifer over a distance of about 300 km.
A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates
Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh
2016-10-01
We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood prone basin in Southeast Brazil. The results show a significant reduction of uncertainty estimates of flood quantile estimates over the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the
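The flood quantiles at issue follow directly from the GEV quantile function, x_T = mu + (sigma/xi) * ((-ln(1 - 1/T))^(-xi) - 1), which reduces to the Gumbel form as xi -> 0. A sketch under hypothetical posterior draws (the Brazilian gauge values are not reproduced here): evaluating the quantile once per MCMC draw is exactly how the Bayesian credible bands for the flood quantiles are formed.

```python
import math

def gev_quantile(T, mu, sigma, xi):
    """Return level for return period T under GEV(mu, sigma, xi)."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                   # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + sigma / xi * (y ** (-xi) - 1.0)

# Hypothetical posterior draws of (mu, sigma, xi); in the paper these would
# come from the hierarchical MCMC sampler
draws = [(500.0, 150.0, 0.10), (520.0, 140.0, 0.15), (480.0, 160.0, 0.05)]
q100 = sorted(gev_quantile(100, *d) for d in draws)   # 100-year flood per draw
```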
2006-01-01
Tuberculosis can be studied at the population level by genotyping strains of Mycobacterium tuberculosis isolated from patients. We use an approximate Bayesian computational method in combination with a stochastic model of tuberculosis transmission and mutation of a molecular marker to estimate the net transmission rate, the doubling time, and the reproductive value of the pathogen. This method is applied to a published data set of tuberculosis genotypes from San Francisco based on the marker ...
Bayesian mass and age estimates for transiting exoplanet host stars
Maxted, P F L; Southworth, J
2014-01-01
The mean density of a star transited by a planet, brown dwarf or low mass star can be accurately measured from its light curve. This measurement can be combined with other observations to estimate its mass and age by comparison with stellar models. Our aim is to calculate the posterior probability distributions for the mass and age of a star given its density, effective temperature, metallicity and luminosity. We computed a large grid of stellar models that densely sample the appropriate mass and metallicity range. The posterior probability distributions are calculated using a Markov-chain Monte-Carlo method. The method has been validated by comparison to the results of other stellar models and by applying the method to stars in eclipsing binary systems with accurately measured masses and radii. We have explored the sensitivity of our results to the assumed values of the mixing-length parameter, $\\alpha_{\\rm MLT}$, and initial helium mass fraction, Y. For a star with a mass of 0.9 solar masses and an age of 4...
Release the BEESTS: Bayesian Estimation of Ex-Gaussian STop-Signal Reaction Time Distributions
Directory of Open Access Journals (Sweden)
Dora Matzke
2013-12-01
Full Text Available The stop-signal paradigm is frequently used to study response inhibition. In this paradigm, participants perform a two-choice response time task where the primary task is occasionally interrupted by a stop-signal that prompts participants to withhold their response. The primary goal is to estimate the latency of the unobservable stop response (stop-signal reaction time, or SSRT). Recently, Matzke, Dolan, Logan, Brown, and Wagenmakers (in press) have developed a Bayesian parametric approach that allows for the estimation of the entire distribution of SSRTs. The Bayesian parametric approach assumes that SSRTs are ex-Gaussian distributed and uses Markov chain Monte Carlo sampling to estimate the parameters of the SSRT distribution. Here we present an efficient and user-friendly software implementation of the Bayesian parametric approach, BEESTS, that can be applied to individual as well as hierarchical stop-signal data. BEESTS comes with an easy-to-use graphical user interface and provides users with summary statistics of the posterior distribution of the parameters as well as various diagnostic tools to assess the quality of the parameter estimates. The software is open source and runs on Windows and OS X operating systems. In sum, BEESTS allows experimental and clinical psychologists to estimate entire distributions of SSRTs and hence facilitates the more rigorous analysis of stop-signal data.
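The ex-Gaussian assumption at the heart of BEESTS is simply a Gaussian plus an independent exponential component. The sketch below (hypothetical parameter values, stdlib only, not the BEESTS implementation) simulates SSRTs under that assumption and checks the well-known moment identities mean = mu + tau and variance = sigma^2 + tau^2:

```python
import random
import statistics

def ex_gaussian(mu, sigma, tau, n, seed=1):
    """Draw n ex-Gaussian SSRTs: a Gaussian(mu, sigma) component plus an
    independent Exponential(tau) component."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau) for _ in range(n)]

# Hypothetical SSRT parameters in milliseconds
rts = ex_gaussian(mu=200.0, sigma=30.0, tau=50.0, n=20000)
mean_rt = statistics.fmean(rts)   # expectation is mu + tau = 250 ms
sd_rt = statistics.stdev(rts)     # expectation is sqrt(sigma^2 + tau^2) ~ 58 ms
```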
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as from non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.
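The BIC-based weighting described above has a simple closed form: each model's weight is proportional to exp(-ΔBIC/2). A minimal sketch (the K estimates and BIC values below are hypothetical, not the paper's data):

```python
import math

def bma_weights(bics):
    """BMA weights from BIC values: w_i proportional to exp(-dBIC_i / 2),
    normalized to sum to one."""
    best = min(bics)
    raw = [math.exp(-(b - best) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def bma_mean(estimates, bics):
    """Model-averaged estimate, e.g. of hydraulic conductivity."""
    return sum(w * e for w, e in zip(bma_weights(bics), estimates))

# Hypothetical per-model K estimates (m/day) and BICs for TS-FL, ANN, NF
k_hat = bma_mean([12.0, 15.0, 30.0], [100.0, 100.0, 112.0])
w = bma_weights([100.0, 100.0, 112.0])   # third model is nearly discarded
```

This reproduces the parsimony effect noted in the abstract: a model 12 BIC units worse receives a weight near zero even if its point estimate differs substantially.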
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python
Directory of Open Access Journals (Sweden)
Thomas V Wiecki
2013-08-01
Full Text Available The diffusion model is a commonly used tool to infer latent psychological processes underlying decision making, and to link them to neural mechanisms based on reaction times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of reaction time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs
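The generative process that HDDM fits can be illustrated without the toolbox itself: a noisy evidence trace drifts between two absorbing boundaries. The sketch below is a self-contained Euler-scheme simulation (not the HDDM API; parameter values are hypothetical):

```python
import random

def ddm_trial(v, a, z, dt=0.001, sigma=1.0, rng=None):
    """One drift-diffusion trial by Euler simulation: evidence x starts at
    z, drifts at rate v with Gaussian noise of scale sigma, and is
    absorbed at 0 or a. Returns (hit_upper_boundary, response_time)."""
    rng = rng or random.Random()
    x, t = z, 0.0
    step_sd = sigma * dt ** 0.5
    while 0.0 < x < a:
        x += v * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (x >= a, t)

rng = random.Random(42)
trials = [ddm_trial(v=1.0, a=2.0, z=1.0, rng=rng) for _ in range(500)]
p_upper = sum(hit for hit, _ in trials) / len(trials)   # upper-boundary choice rate
```

With a positive drift rate and an unbiased starting point, most trials terminate at the upper ("correct") boundary, matching the qualitative behavior the model is meant to capture.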
Energy Technology Data Exchange (ETDEWEB)
Kim, Joo Yeon; Lee, Seung Hyun; Park, Tai Jin [Korean Association for Radiation Application, Seoul (Korea, Republic of)
2016-06-15
Any real application of Bayesian inference must acknowledge that both the prior distribution and the likelihood function are only more or less convenient approximations to whatever the analyzer's true beliefs might be. If the inferences from the Bayesian analysis are to be trusted, it is important to determine that they are robust to such variations of prior and likelihood as might also be consistent with the analyzer's stated beliefs. Robust Bayesian inference was applied to atmospheric dispersion assessment using a Gaussian plume model. The scope of contamination was specified through uncertainty in the distribution type and parametric variability. The probability distributions of the model parameters were contaminated with symmetric unimodal and unimodal distributions, and the distribution of the sector-averaged relative concentrations was then calculated by applying the contaminated priors to the model parameters. The sector-averaged concentrations for each stability class were compared under the symmetric unimodal and unimodal priors, respectively, as the contaminated ones within the class of ε-contamination. Although ε was assumed to be 10%, the medians obtained with the symmetric unimodal priors agreed within 10% with those obtained with the plausible priors, whereas the medians obtained with the unimodal priors agreed only within 20% at a few downwind distances. Robustness was assessed by estimating how strongly the results of the Bayesian inferences vary under reasonable variations of the plausible priors. From these robust inferences, it is reasonable to apply the symmetric unimodal priors when analyzing the robustness of the Bayesian inferences.
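The ε-contamination idea can be made concrete in a conjugate toy problem: the prior is a mixture (1 - ε)·π₀ + ε·q, and the posterior mean mixes the component posteriors weighted by their marginal likelihoods. A minimal sketch for a normal mean with known variance (all numbers hypothetical, not the plume-model priors):

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def post_mean(prior_mu, prior_var, ybar, n, sigma2):
    """Conjugate posterior mean for a normal mean with known data variance."""
    prec = 1 / prior_var + n / sigma2
    return (prior_mu / prior_var + n * ybar / sigma2) / prec

def contaminated_post_mean(eps, base, contam, ybar, n, sigma2):
    """Posterior mean under the eps-contaminated prior
    (1 - eps) * N(base) + eps * N(contam); each component is weighted by
    its marginal likelihood for the observed sample mean."""
    weights, means = [], []
    for w, (mu0, v0) in [(1 - eps, base), (eps, contam)]:
        weights.append(w * normal_pdf(ybar, mu0, v0 + sigma2 / n))
        means.append(post_mean(mu0, v0, ybar, n, sigma2))
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, means)) / total

# Hypothetical numbers, with eps = 10% contamination as in the study
args = dict(ybar=0.5, n=25, sigma2=4.0)
m_base = contaminated_post_mean(0.0, (0.0, 1.0), (3.0, 9.0), **args)
m_cont = contaminated_post_mean(0.1, (0.0, 1.0), (3.0, 9.0), **args)
```

The small shift between `m_base` and `m_cont` is the kind of robustness statement the abstract makes: a 10% contamination moves the posterior summary only slightly.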
Estimation model of life insurance claims risk for cancer patients by using Bayesian method
Sukono; Suyudi, M.; Islamiyati, F.; Supian, S.
2017-01-01
This paper discusses a model for estimating the risk of life insurance claims for cancer patients using a Bayesian method. To estimate the claim risk, the insurance participant data are grouped into two sets: the number of policies issued and the number of claims incurred. Model estimation is done using a Bayesian approach, and the estimator is then used to estimate the claim risk for each age group and sex. The estimation results indicate that the risk premium for insured males aged less than 30 years is 0.85; for ages 30 to 40 years, 3.58; for ages 41 to 50 years, 1.71; for ages 51 to 60 years, 2.96; and for those aged over 60 years, 7.82. For insured women aged less than 30 years it is 0.56; for ages 30 to 40 years, 3.21; for ages 41 to 50 years, 0.65; for ages 51 to 60 years, 3.12; and for those aged over 60 years, 9.99. This study is useful in determining the risk premium in homogeneous groups based on gender and age.
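A standard Bayesian building block for claim counts per policy is the conjugate Poisson-Gamma pair, whose posterior mean has a one-line closed form. The sketch below uses hypothetical counts and prior hyperparameters, not the paper's data or its exact model:

```python
def posterior_claim_rate(claims, policies, a=0.5, b=0.5):
    """Posterior mean claim rate under a Poisson likelihood for the number
    of claims among `policies` exposures, with a Gamma(a, b) prior on the
    rate; a and b are hypothetical weakly-informative values."""
    return (a + claims) / (b + policies)

# Hypothetical counts for two (sex, age-group) cells
rate_male_over_60 = posterior_claim_rate(claims=47, policies=6000)
rate_male_under_30 = posterior_claim_rate(claims=5, policies=5900)
```

Applied cell by cell over the age/sex groups, such posterior rates are what a risk-premium table like the one above is built from.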
Estimating Parameters in Physical Models through Bayesian Inversion: A Complete Example
Allmaras, Moritz
2013-02-07
All mathematical models of real-world phenomena contain parameters that need to be estimated from measurements, either for realistic predictions or simply to understand the characteristics of the model. Bayesian statistics provides a framework for parameter estimation in which uncertainties about models and measurements are translated into uncertainties in estimates of parameters. This paper provides a simple, step-by-step example, starting from a physical experiment and going through all of the mathematics, to explain the use of Bayesian techniques for estimating the coefficients of gravity and air friction in the equations describing a falling body. In the experiment we dropped an object from a known height and recorded the free fall using a video camera. The video recording was analyzed frame by frame to obtain the distance the body had fallen as a function of time, including measures of uncertainty in our data that we describe as probability densities. We explain the decisions behind the various choices of probability distributions and relate them to observed phenomena. Our measured data are then combined with a mathematical model of a falling body to obtain probability densities on the space of parameters we seek to estimate. We interpret these results and discuss sources of errors in our estimation procedure. © 2013 Society for Industrial and Applied Mathematics.
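A stripped-down version of that exercise fits on a grid: simulate noisy frame-by-frame fall distances, then evaluate the posterior of g pointwise. This sketch neglects air friction (which the paper does estimate) and uses synthetic data, so it illustrates only the grid-posterior mechanics:

```python
import math
import random

def grid_posterior_g(times, dists, sigma, g_grid):
    """Normalized posterior for gravity g on a grid, assuming the model
    d(t) = g * t^2 / 2 (air friction neglected for this sketch), Gaussian
    measurement noise, and a flat prior on g."""
    logpost = [sum(-(d - 0.5 * g * t * t) ** 2 / (2 * sigma ** 2)
                   for t, d in zip(times, dists)) for g in g_grid]
    m = max(logpost)
    w = [math.exp(lp - m) for lp in logpost]
    total = sum(w)
    return [x / total for x in w]

# Synthetic frame-by-frame data: distances at 10 time stamps, 1 cm noise
rng = random.Random(0)
times = [0.1 * k for k in range(1, 11)]
dists = [0.5 * 9.81 * t * t + rng.gauss(0.0, 0.01) for t in times]
g_grid = [9.0 + 0.002 * i for i in range(1001)]      # 9.00 .. 11.00 m/s^2
post = grid_posterior_g(times, dists, sigma=0.01, g_grid=g_grid)
g_map = g_grid[post.index(max(post))]                # posterior mode
```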
A Bayesian estimate of the concordance correlation coefficient with skewed data.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that data are normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network.
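For readers unfamiliar with the index itself, the sample CCC has a simple closed form. A minimal sketch of the point estimate (the Bayesian machinery of the paper replaces this with posterior inference):

```python
import statistics

def concordance_cc(x, y):
    """Sample concordance correlation coefficient (Lin's CCC):
    2 * s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    n = len(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

ccc_perfect = concordance_cc([1, 2, 3, 4], [1, 2, 3, 4])  # exact agreement
ccc_biased = concordance_cc([1, 2, 3, 4], [2, 3, 4, 5])   # constant offset lowers CCC
```

Unlike the Pearson correlation, which is 1 in both cases, the CCC penalizes the constant offset in the second pair.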
Röding, Magnus; Zagato, Elisa; Remaut, Katrien; Braeckmans, Kevin
2016-06-01
We present an approximate Bayesian computation scheme for estimating number concentrations of monodisperse diffusing nanoparticles in suspension by optical particle tracking microscopy. The method is based on the probability distribution of the time spent by a particle inside a detection region. We validate the method on suspensions of well-controlled reference particles. We illustrate its usefulness with an application in gene therapy, applying the method to estimate number concentrations of plasmid DNA molecules and the average number of DNA molecules complexed with liposomal drug delivery particles.
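The essence of an ABC rejection scheme like the one described can be shown with a deliberately simplified stand-in: a flat prior on the mean particle count in the detection region, a Poisson observation model, and the sample mean as summary statistic (the actual paper uses dwell-time distributions, which this sketch does not reproduce):

```python
import math
import random
import statistics

def poisson_draw(lam, rng):
    """Poisson sample via Knuth's method (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def abc_rejection(observed, n_draws=20000, lam_max=20.0, tol=0.5, seed=3):
    """ABC rejection sampler: draw lambda from a flat prior on
    [0, lam_max], simulate a data set of the same size, and accept lambda
    when the simulated sample mean is within tol of the observed one."""
    rng = random.Random(seed)
    obs_mean = statistics.fmean(observed)
    accepted = []
    for _ in range(n_draws):
        lam = rng.uniform(0.0, lam_max)
        sim = [poisson_draw(lam, rng) for _ in range(len(observed))]
        if abs(statistics.fmean(sim) - obs_mean) < tol:
            accepted.append(lam)
    return accepted

# Hypothetical counts of particles seen in the detection region
obs = [4, 6, 5, 5, 7, 3, 5, 6, 4, 5]
posterior = abc_rejection(obs)
lam_post_mean = statistics.fmean(posterior)
```

The accepted draws approximate the posterior; shrinking `tol` trades acceptance rate for approximation accuracy, which is the central tuning decision in ABC.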
Air Kerma Rate estimation by means of in-situ gamma spectrometry: a Bayesian approach.
Cabal, Gonzalo; Kluson, Jaroslav
2010-01-01
Bayesian inference is used to determine the Air Kerma Rate based on an in-situ gamma spectrum measurement performed with a NaI(Tl) scintillation detector. The procedure accounts for uncertainties in the measurement and in the mass energy transfer coefficients needed for the calculation. The WinBUGS program (Spiegelhalter et al., 1999) was used. The results show that the relative uncertainties in the Air Kerma estimate are about 1%, and that the choice of unfolding procedure may lead to a systematic error in the estimate of about 3%.
Air Kerma Rate estimation by means of in-situ gamma spectrometry: A Bayesian approach
Energy Technology Data Exchange (ETDEWEB)
Cabal, Gonzalo [Department of Dosimetry and Applications of Ionizing Radiation, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Brehova 7, 115 19 Prague 1 (Czech Republic); Department of Radiation Dosimetry, Nuclear Physics Institute, Academy of Sciences of the Czech Republic, Na Truhlarce 39/64, 180 86 Prague 8 (Czech Republic)], E-mail: cabal@ujf.cas.cz; Kluson, Jaroslav [Department of Dosimetry and Applications of Ionizing Radiation, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Brehova 7, 115 19 Prague 1 (Czech Republic)
2010-04-15
Bayesian inference is used to determine the Air Kerma Rate based on an in-situ gamma spectrum measurement performed with a NaI(Tl) scintillation detector. The procedure accounts for uncertainties in the measurement and in the mass energy transfer coefficients needed for the calculation. The WinBUGS program was used. The results show that the relative uncertainties in the Air Kerma estimate are about 1%, and that the choice of unfolding procedure may lead to a systematic error in the estimate of about 3%.
Directory of Open Access Journals (Sweden)
Tsiachristos I.
2014-07-01
Full Text Available Using a vector network analyzer equipped with a calibrated rectangular waveguide, the electric permittivity and the elements of the magnetic permeability tensor for Y3Fe5O12, ZnFe2O4 and NiFe2O4 are measured. The electric permittivity can be estimated from the body resonances (d = nλ/2) if a sufficiently long sample is used. The parameters of the magnetic permeability tensor can be estimated by comparing the experimental results with computer simulations that use the magnetic properties of the materials as derived from the magnetic measurements.
Phillips, J.D.; Nabighian, M.N.; Smith, D.V.; Li, Y.
2007-01-01
The Helbig method for estimating total magnetization directions of compact sources from magnetic vector components is extended so that tensor magnetic gradient components can be used instead. Depths of the compact sources can be estimated using the Euler equation, and their dipole moment magnitudes can be estimated using a least squares fit to the vector component or tensor gradient component data. © 2007 Society of Exploration Geophysicists.
Gupta, Cherry; Cobre, Juliana; Polpo, Adriano; Sinha, Debjayoti
2016-09-01
Existing cure-rate survival models are generally not convenient for modeling and estimating the survival quantiles of a patient with specified covariate values. This paper proposes a novel class of cure-rate models, the transform-both-sides cure-rate model (TBSCRM), that can be used to make inferences about both the cure-rate and the survival quantiles. We develop Bayesian inference about the covariate effects on the cure-rate as well as on the survival quantiles via Markov chain Monte Carlo (MCMC) tools. We also show that the TBSCRM-based Bayesian method outperforms methods based on existing cure-rate models in our simulation studies and in application to the breast cancer survival data from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER) database.
A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model
Higdon, Dave; Schunck, Nicolas; Sarich, Jason; Wild, Stefan M
2014-01-01
Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\\eta(\\theta)$, where $\\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \\eta(\\theta) + \\epsilon$, where $\\epsilon$ accounts for measurement, and possibly other, error sources. When non-linearity is present in $\\eta(\\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\\eta(\\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we pr...
A Bayesian method to estimate the neutron response matrix of a single crystal CVD diamond detector
Energy Technology Data Exchange (ETDEWEB)
Reginatto, Marcel; Araque, Jorge Guerrero; Nolte, Ralf; Zbořil, Miroslav; Zimbal, Andreas [Physikalisch-Technische Bundesanstalt, D-38116 Braunschweig (Germany); Gagnon-Moisan, Francis [Paul Scherrer Institut, CH-5232 Villigen (Switzerland)
2015-01-13
Detectors made from artificial chemical vapor deposition (CVD) single crystal diamond are very promising candidates for applications where high resolution neutron spectrometry in very high neutron fluxes is required, for example in fusion research. We propose a Bayesian method to estimate the neutron response function of the detector for a continuous range of neutron energies (in our case, 10 MeV ≤ E_n ≤ 16 MeV) based on a few measurements with quasi-monoenergetic neutrons. This method is needed because a complete set of measurements is not available and the alternative approach of using responses based on Monte Carlo calculations is not feasible. Our approach uses Bayesian signal-background separation techniques and radial basis function interpolation methods. We present the analysis of data measured at the PTB accelerator facility PIAF. The method is quite general and it can be applied to other particle detectors with similar characteristics.
Bayesian estimation of the Modified Omori Law parameters for the Iranian Plateau
Ommi, S.; Zafarani, H.; Smirnov, V. B.
2016-07-01
The forecasting of large aftershocks is a preliminary and critical step in seismic hazard analysis and seismic risk management. From a statistical point of view, it relies entirely on the estimation of the properties of aftershock sequences using a set of laws with well-defined parameters. Since the frequentist and Bayesian approaches are common tools to assess these parameter values, we compare the two approaches for the Modified Omori Law and a selection of mainshock-aftershock sequences in the Iranian Plateau. There is general agreement between the two methods, but the Bayesian approach appears to be more efficient as the number of recorded aftershocks decreases. Taking into account temporal variations of the b-value, the slope of the frequency-size distribution, the probability of occurrence of a strong aftershock or a larger main shock has been calculated in a finite time window using the parameters of the Modified Omori Law observed in the Iranian Plateau.
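The Modified Omori Law itself, n(t) = K / (t + c)^p, has a tractable point-process likelihood, so parameter recovery can be sketched end to end: simulate aftershock times from the law, then maximize the likelihood over p. The constants below are hypothetical; c is held fixed here, whereas a full analysis (frequentist or Bayesian) estimates K, c and p jointly:

```python
import math
import random

def omori_intensity(t, K, c, p):
    """Modified Omori law: aftershock rate K / (t + c)^p at time t after the mainshock."""
    return K / (t + c) ** p

def omori_integral(T, c, p):
    """Integral of (t + c)^(-p) from 0 to T."""
    if abs(p - 1.0) < 1e-9:
        return math.log((T + c) / c)
    return ((T + c) ** (1 - p) - c ** (1 - p)) / (1 - p)

def log_likelihood(times, T, c, p):
    """Point-process log-likelihood with K profiled out at its MLE N / I."""
    I = omori_integral(T, c, p)
    K = len(times) / I
    return sum(math.log(omori_intensity(t, K, c, p)) for t in times) - K * I

def simulate_times(n, T, c, p, rng):
    """Draw n aftershock times on [0, T] by inverting the Omori-law CDF."""
    a, b = c ** (1 - p), (T + c) ** (1 - p)
    return [(a + rng.random() * (b - a)) ** (1 / (1 - p)) - c for _ in range(n)]

# Simulate a sequence with p = 1.1, c = 0.05 day, then recover p on a grid
rng = random.Random(7)
times = simulate_times(2000, T=100.0, c=0.05, p=1.1, rng=rng)
grid = [0.7 + 0.01 * i for i in range(81)]       # candidate p values
p_hat = max(grid, key=lambda p: log_likelihood(times, 100.0, 0.05, p))
```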
Buyukada, Musa
2017-05-01
The main purpose of the present study was to incorporate the uncertainties in the thermal behavior of walnut hull (WH), lignite coal, and their various blends using a Bayesian approach. First, the thermal behavior of the materials was investigated under different temperatures, blend ratios, and heating rates. Results of ultimate and proximate analyses showed the main steps of the oxidation mechanism of the (co-)combustion process: thermal degradation started with the (hemi-)cellulosic compounds and finished with lignin. Finally, a partial sensitivity analysis based on a Bayesian approach (Markov chain Monte Carlo simulations) was applied to the best-fitting data-driven regression model. The main purpose of the uncertainty analysis was to point out the importance of the operating conditions (explanatory variables). Another important aspect of the present work is that it is the first performance evaluation study of various uncertainty estimation techniques in the (co-)combustion literature.
Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W
2007-07-01
Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth-order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
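With symmetric Dirichlet priors on each transition row, the Bayesian evidence for a kth-order chain is available in closed form (a product of Dirichlet-multinomial integrals), so model-order selection reduces to comparing a few log-evidences. A minimal sketch on a hypothetical binary sequence:

```python
import math
import random
from collections import defaultdict

def log_evidence(seq, k, alphabet, alpha=1.0):
    """Log marginal likelihood (Bayesian evidence) of a k-th order Markov
    chain with a symmetric Dirichlet(alpha) prior on each transition row,
    computed analytically via the Dirichlet-multinomial integral."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(k, len(seq)):
        counts[seq[i - k:i]][seq[i]] += 1
    A = alpha * len(alphabet)
    le = 0.0
    for row in counts.values():
        n = sum(row.values())
        le += math.lgamma(A) - math.lgamma(A + n)
        for s in alphabet:
            le += math.lgamma(alpha + row[s]) - math.lgamma(alpha)
    return le

# Hypothetical data: a first-order chain that alternates with probability 0.9
rng = random.Random(0)
symbols = ["0"]
for _ in range(2000):
    prev = symbols[-1]
    symbols.append(("1" if prev == "0" else "0") if rng.random() < 0.9 else prev)
seq = "".join(symbols)

orders = {k: log_evidence(seq, k, "01") for k in (0, 1, 2)}
best_k = max(orders, key=orders.get)   # evidence picks the true order
```

The evidence rewards the first-order model for capturing the alternation structure while penalizing the second-order model's extra parameters, which is exactly the trade-off the paper formalizes.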
Angle estimation via frequency diversity of the SIAR radar based on Bayesian theory
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
The orthogonal signals of a multi-carrier-frequency emission and multiple-antenna receipt module are used in SIAR radar. The corresponding received echo is equivalent to non-uniform spatial sampling after the frequency diversity process. Since using the traditional Fourier transform would result in a target spectrum with large sidelobes, the method presented in this paper first reorders the positions of the receiving antennas. Then, Bayesian maximum a posteriori estimation with an l2-norm weighted constraint is utilized to recover the equivalent uniform array echo. The simulations present the angle estimation precision for multiple targets under different SNRs, different numbers of virtual antennas and different elevations. The estimation results confirm the advantage of SIAR radar both in array expansion and in angle estimation.
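The MAP estimate with a Gaussian likelihood and an l2-norm (zero-mean Gaussian prior) constraint is the familiar ridge solution x = (AᵀA + λI)⁻¹Aᵀy. A minimal two-unknown sketch of that algebra (a toy linear system, not the radar signal model):

```python
def ridge_map_2x2(A, y, lam):
    """MAP estimate under a Gaussian likelihood with a zero-mean Gaussian
    prior (l2-norm weighted constraint): solves (A^T A + lam*I) x = A^T y,
    written out explicitly for two unknowns."""
    g11 = sum(r[0] * r[0] for r in A) + lam
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A) + lam
    b1 = sum(r[0] * yi for r, yi in zip(A, y))
    b2 = sum(r[1] * yi for r, yi in zip(A, y))
    det = g11 * g22 - g12 * g12
    return ((g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det)

# Hypothetical tiny system whose exact least-squares solution is x = (2, -1)
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
y = [2.0, -1.0, 1.0]
x1, x2 = ridge_map_2x2(A, y, lam=0.01)   # mildly shrunk towards zero
```

The same structure, at much larger dimension, is what recovers the equivalent uniform array echo from the non-uniform samples described above.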
Bayesian Switching Factor Analysis for Estimating Time-varying Functional Connectivity in fMRI.
Taghia, Jalil; Ryali, Srikanth; Chen, Tianwen; Supekar, Kaustubh; Cai, Weidong; Menon, Vinod
2017-03-03
There is growing interest in understanding the dynamical properties of functional interactions between distributed brain regions. However, robust estimation of temporal dynamics from functional magnetic resonance imaging (fMRI) data remains challenging due to limitations in extant multivariate methods for modeling time-varying functional interactions between multiple brain areas. Here, we develop a Bayesian generative model for fMRI time-series within the framework of hidden Markov models (HMMs). The model is a dynamic variant of the static factor analysis model (Ghahramani and Beal, 2000). We refer to this model as Bayesian switching factor analysis (BSFA) as it integrates factor analysis into a generative HMM in a unified Bayesian framework. In BSFA, brain dynamic functional networks are represented by latent states which are learnt from the data. Crucially, BSFA is a generative model which estimates the temporal evolution of brain states and transition probabilities between states as a function of time. An attractive feature of BSFA is the automatic determination of the number of latent states via Bayesian model selection arising from penalization of excessively complex models. Key features of BSFA are validated using extensive simulations on carefully designed synthetic data. We further validate BSFA using fingerprint analysis of multisession resting-state fMRI data from the Human Connectome Project (HCP). Our results show that modeling temporal dependencies in the generative model of BSFA results in improved fingerprinting of individual participants. Finally, we apply BSFA to elucidate the dynamic functional organization of the salience, central-executive, and default mode networks, three core neurocognitive systems with a central role in cognitive and affective information processing (Menon, 2011). Across two HCP sessions, we demonstrate a high level of dynamic interactions between these networks and determine that the salience network has the highest temporal
Lowman, L.; Barros, A. P.
2014-12-01
Computational modeling of surface erosion processes is inherently difficult because of the four-dimensional nature of the problem and the multiple temporal and spatial scales that govern individual mechanisms. Landscapes are modified via surface and fluvial erosion and exhumation, each of which takes place over a range of time scales. Traditional field measurements of erosion/exhumation rates are scale dependent, often valid for a single point-wise location or averaging over large areal extents and periods with intense and mild erosion. We present a method of remotely estimating erosion rates using a Bayesian hierarchical model based upon the stream power erosion law (SPEL). A Bayesian approach allows for estimating erosion rates using the deterministic relationship given by the SPEL and data on channel slopes and precipitation at the basin and sub-basin scale. The spatial scale associated with this framework is the elevation class, where each class is characterized by distinct morphologic behavior observed through different modes in the distribution of basin outlet elevations. Interestingly, the distributions of first-order outlets are similar in shape and extent to the distribution of precipitation events (i.e. individual storms) over a 14-year period (1998-2011). We demonstrate an application of the Bayesian hierarchical modeling framework for five basins and one intermontane basin located in the central Andes between 5°S and 20°S. Using remotely sensed data of current annual precipitation rates from the Tropical Rainfall Measuring Mission (TRMM) and topography from a high resolution (3 arc-seconds) digital elevation map (DEM), our erosion rate estimates are consistent with decadal-scale estimates based on landslide mapping and sediment flux observations and 1-2 orders of magnitude larger than most millennial and million year timescale estimates from thermochronology and cosmogenic nuclides.
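The deterministic core of the framework, the stream power erosion law E = K·Aᵐ·Sⁿ, is a one-liner; the Bayesian hierarchy places priors on its parameters rather than fixing them. A sketch with hypothetical typical exponent values:

```python
def stream_power_erosion(K, area, slope, m=0.5, n=1.0):
    """Stream power erosion law E = K * A^m * S^n, with drainage area A,
    channel slope S, and erodibility K; the exponents m, n used here are
    hypothetical typical values (the paper estimates such parameters
    within a Bayesian hierarchy rather than fixing them)."""
    return K * area ** m * slope ** n

e_gentle = stream_power_erosion(K=1e-5, area=2.5e9, slope=0.05)   # area in m^2
e_steep = stream_power_erosion(K=1e-5, area=2.5e9, slope=0.10)
```

With n = 1, doubling the channel slope doubles the predicted erosion rate, which is the kind of deterministic constraint the hierarchical model combines with TRMM precipitation and DEM slope data.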
A Bayesian framework to estimate diversification rates and their variation through time and space
Directory of Open Access Journals (Sweden)
Silvestro Daniele
2011-10-01
Full Text Available Abstract Background Patterns of species diversity are the result of speciation and extinction processes, and molecular phylogenetic data can provide valuable information to derive their variability through time and across clades. Bayesian Markov chain Monte Carlo methods offer a promising framework to incorporate phylogenetic uncertainty when estimating rates of diversification. Results We introduce a new approach to estimate diversification rates in a Bayesian framework over a distribution of trees under various constant and variable rate birth-death and pure-birth models, and test it on simulated phylogenies. Furthermore, speciation and extinction rates and their posterior credibility intervals can be estimated while accounting for non-random taxon sampling. The framework is particularly suitable for hypothesis testing using Bayes factors, as we demonstrate analyzing dated phylogenies of Chondrostoma (Cyprinidae) and Lupinus (Fabaceae). In addition, we develop a model that extends the rate estimation to a meta-analysis framework in which different data sets are combined in a single analysis to detect general temporal and spatial trends in diversification. Conclusions Our approach provides a flexible framework for the estimation of diversification parameters and hypothesis testing while simultaneously accounting for uncertainties in the divergence times and incomplete taxon sampling.
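The simplest model in that family, the constant-rate pure-birth (Yule) process, can be simulated and its rate recovered in a few lines, since with k lineages the waiting time to the next speciation is exponential with rate k·λ. A sketch with hypothetical values (the paper's framework additionally handles extinction, rate variation, and tree uncertainty):

```python
import random

def yule_total_lineage_time(lam, n_tips, rng):
    """Simulate a pure-birth (Yule) tree: with k lineages the waiting time
    to the next speciation is Exp(k * lam). Returns total lineage time."""
    total = 0.0
    for k in range(1, n_tips):
        total += k * rng.expovariate(k * lam)
    return total

# Repeatedly simulate and recover the speciation rate as
# (number of speciation events) / (total lineage time)
rng = random.Random(11)
true_rate, n_tips, reps = 0.5, 50, 200
estimates = [(n_tips - 1) / yule_total_lineage_time(true_rate, n_tips, rng)
             for _ in range(reps)]
avg = sum(estimates) / reps
```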
Software Development Effort Estimation using Fuzzy Bayesian Belief Network with COCOMO II
Directory of Open Access Journals (Sweden)
B.Chakraborty
2015-01-01
Full Text Available Software development has always been characterized by some metrics. One of the greatest challenges for software developers lies in predicting the development effort for a software system, which depends on developer abilities, size, complexity and other metrics. Several algorithmic cost estimation models such as Boehm's COCOMO, Albrecht's Function Point Analysis, Putnam's SLIM, ESTIMACS etc. are available, but every model has its own pros and cons in estimating development cost and effort. The most common reason is that the project data available in the initial stages of a project is often incomplete, inconsistent, uncertain and unclear. In this paper, a Bayesian probabilistic model has been explored to overcome the problems of uncertainty and imprecision, resulting in an improved process of software development effort estimation. This paper considers a software estimation approach using six key cost drivers in the COCOMO II model. The selected cost drivers are the inputs to the system. The concept of a Fuzzy Bayesian Belief Network (FBBN) has been introduced to improve the accuracy of the estimation. Results show that the values of MMRE (Mean Magnitude of Relative Error) and PRED obtained by means of FBBN are much better than the MMRE and PRED of the Fuzzy COCOMO II models. The validation of results was carried out on the NASA-93 dem COCOMO II dataset.
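The deterministic backbone that the FBBN wraps is the COCOMO II effort equation, PM = A·SizeᴱΠEMᵢ: multiplicative cost drivers scale a nonlinear size term. A sketch with placeholder calibration constants (A and E below are nominal-style values, not a calibrated local model; in full COCOMO II the exponent derives from the five scale factors):

```python
def cocomo_effort(ksloc, effort_multipliers, A=2.94, E=1.0997):
    """COCOMO II style effort in person-months:
    PM = A * Size^E * product(EM_i).
    A and E are placeholder nominal-style values; in practice they are
    calibrated to local project data."""
    pm = A * ksloc ** E
    for em in effort_multipliers:
        pm *= em
    return pm

pm_nominal = cocomo_effort(10.0, [1.0] * 6)                        # six nominal drivers
pm_harder = cocomo_effort(10.0, [1.10, 1.15, 1.0, 1.0, 1.0, 1.0])  # two drivers above nominal
```

In the paper's setting, the six cost-driver multipliers become uncertain (fuzzy/probabilistic) inputs to the belief network instead of fixed ratings.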
Miao, Zhiyong; Shi, Hongyang; Zhang, Yi; Xu, Fan
2017-10-01
In this paper, a new variational Bayesian adaptive cubature Kalman filter (VBACKF) is proposed for nonlinear state estimation. Although the conventional VBACKF performs better than cubature Kalman filtering (CKF) in solving nonlinear systems with time-varying measurement noise, its performance may degrade due to the uncertainty of the system model. To overcome this drawback, a multilayer feed-forward neural network (MFNN) is used to aid the conventional VBACKF, generalizing it to attain higher estimation accuracy and robustness. In the proposed neural-network-aided variational Bayesian adaptive cubature Kalman filter (NN-VBACKF), the MFNN is used to tune the state estimation of the VBACKF adaptively, and it is used for both state estimation and online training simultaneously. To evaluate the performance of the proposed method, it is compared with CKF and VBACKF via target tracking problems. The simulation results demonstrate that the estimation accuracy and robustness of the proposed method are better than those of the CKF and VBACKF.
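The cubature step underlying CKF-type filters propagates 2n equally weighted sigma points through the nonlinear model; a minimal sketch of their generation (an illustration with NumPy, not the authors' implementation) is:

```python
import numpy as np

# Third-degree spherical-radial cubature rule used by CKF-type filters:
# 2n points with equal weights 1/(2n), generated from the mean and a
# Cholesky square root of the covariance.

def cubature_points(mean, cov):
    """Return the 2n equally weighted cubature points as columns."""
    n = len(mean)
    L = np.linalg.cholesky(cov)  # matrix square root of cov
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    return mean[:, None] + L @ xi

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts = cubature_points(mean, cov)
```

The empirical mean and covariance of the 2n points reproduce `mean` and `cov` exactly, which is what makes the rule exact for polynomials up to third degree.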
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.
2017-02-01
This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach the entire time histories of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of a non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to jointly estimate the time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic
Verdoolaege, G.; Von Hellermann, M. G.; Jaspers, R.; Ichir, M. M.; Van Oost, G.
2006-11-01
The validation of diagnostic data from a nuclear fusion experiment is an important issue. The concept of an Integrated Data Analysis (IDA) allows the consistent estimation of plasma parameters from heterogeneous data sets. Here, the determination of the ion effective charge (Zeff) is considered. Several diagnostic methods exist for the determination of Zeff, but the results are in general not in agreement. In this work, the problem of Zeff estimation on the TEXTOR tokamak is approached from the perspective of IDA, in the framework of Bayesian probability theory. The ultimate goal is the estimation of a full Zeff profile that is consistent both with measured bremsstrahlung emissivities, as well as individual impurity spectral line intensities obtained from Charge Exchange Recombination Spectroscopy (CXRS). We present an overview of the various uncertainties that enter the calculation of a Zeff profile from bremsstrahlung data on the one hand, and line intensity data on the other hand. We discuss a simple linear and nonlinear Bayesian model permitting the estimation of a central value for Zeff and the electron density ne on TEXTOR from bremsstrahlung emissivity measurements in the visible, and carbon densities derived from CXRS. Both the central Zeff and ne are sampled using an MCMC algorithm. An outlook is given towards possible model improvements.
Explicit form of the Bayesian posterior estimate of a quantum state under the uninformative prior
Shchesnovich, V S
2014-01-01
An analytical solution for the posterior estimate in Bayesian tomography of the unknown quantum state of an arbitrary quantum system (with a finite-dimensional Hilbert space) is found. First, we derive the Bayesian estimate for a pure quantum state measured by a set of arbitrary rank-1 POVMs under the uninformative (i.e. the unitary invariant or Haar) prior. The expression for the estimate involves the matrix permanents of the Gram matrices with repeated rows and columns, with the matrix elements being the scalar products of vectors giving the measurement outcomes. Second, an unknown mixed state is treated by the Hilbert-Schmidt purification. In this case, under the uninformative prior for the combined pure state, the posterior estimate of the mixed state of the system is expressed through the matrix $\\alpha$-permanents of the Gram matrices of scalar products of vectors giving the measurement outcomes. In the mixed case, there is also a free integer parameter -- the Schmidt number -- which can be used to opti...
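The matrix permanents appearing in the posterior estimate can be evaluated with Ryser's inclusion-exclusion formula; the naive O(2^n n^2) sketch below is illustrative only, not the paper's code.

```python
from itertools import combinations

# Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S of
# (-1)^{|S|} * prod over rows of (sum of the row's entries in S).

def permanent(a):
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For a 2x2 matrix the permanent is ad + bc, e.g. perm([[1, 2], [3, 4]]) = 10; unlike the determinant, no sign alternation occurs, which is why permanents are exponentially hard in general.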
Rubtsov, Denis V; Griffin, Julian L
2007-10-01
The problem of model detection and parameter estimation for noisy signals arises in different areas of science and engineering including audio processing, seismology, electrical engineering, and NMR spectroscopy. We have adopted the Bayesian modeling framework to jointly detect and estimate signal resonances. This considers a model of the time-domain complex free induction decay (FID) signal as a sum of exponentially damped sinusoidal components. The number of model components and component parameters are considered unknown random variables to be estimated. A Reversible Jump Markov Chain Monte Carlo technique is used to draw samples from the joint posterior distribution on the subspaces of different dimensions. The proposed algorithm has been tested on synthetic data, the (1)H NMR FID of a standard of L-glutamic acid and a blood plasma sample. The detection and estimation performance is compared with Akaike information criterion (AIC), minimum description length (MDL) and the matrix pencil method. The results show the Bayesian algorithm superior in performance especially in difficult cases of detecting low-amplitude and strongly overlapping resonances in noisy signals.
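The FID model described above, a sum of exponentially damped complex sinusoids, can be simulated with a short sketch; the component parameters and noise level here are invented.

```python
import numpy as np

# Simulate a free induction decay (FID) as a sum of exponentially damped
# complex sinusoids: amp * exp(i*phase) * exp((i*2*pi*freq - damp) * t).
# Component values are hypothetical.

def fid(t, components, noise_sd=0.0, rng=None):
    """components: iterable of (amplitude, frequency_hz, damping, phase)."""
    signal = np.zeros_like(t, dtype=complex)
    for amp, freq, damp, phase in components:
        signal += amp * np.exp(1j * phase) * np.exp((2j * np.pi * freq - damp) * t)
    if noise_sd:
        if rng is None:
            rng = np.random.default_rng(0)
        signal += noise_sd * (rng.standard_normal(t.shape)
                              + 1j * rng.standard_normal(t.shape))
    return signal

t = np.array([0.0, 1.0])
sig = fid(t, [(2.0, 5.0, 1.0, 0.0)])
```

At t = 0 the signal equals the amplitude, and its magnitude decays as exp(-damp * t); reversible jump MCMC then treats the number of such components as unknown.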
Scheibehenne, Benjamin; Pachur, Thorsten
2015-04-01
To be useful, cognitive models with fitted parameters should show generalizability across time and allow accurate predictions of future observations. It has been proposed that hierarchical procedures yield better estimates of model parameters than do nonhierarchical, independent approaches, because the former's estimates for individuals within a group can mutually inform each other. Here, we examine Bayesian hierarchical approaches to evaluating model generalizability in the context of two prominent models of risky choice: cumulative prospect theory (Tversky & Kahneman, 1992) and the transfer-of-attention-exchange model (Birnbaum & Chavez, 1997). Using empirical data of risky choices collected for each individual at two time points, we compared the use of hierarchical versus independent, nonhierarchical Bayesian estimation techniques to assess two aspects of model generalizability: parameter stability (across time) and predictive accuracy. The relative performance of hierarchical versus independent estimation varied across the different measures of generalizability. The hierarchical approach improved parameter stability (in terms of a lower absolute discrepancy of parameter values across time) and predictive accuracy (in terms of deviance; i.e., likelihood). With respect to test-retest correlations and posterior predictive accuracy, however, the hierarchical approach did not outperform the independent approach. Further analyses suggested that this was due to strong correlations between some parameters within both models. Such intercorrelations make it difficult to identify and interpret single parameters and can induce high degrees of shrinkage in hierarchical models. Similar findings may also occur in the context of other cognitive models of choice.
Bayesian inference in genetic parameter estimation of visual scores in Nellore beef-cattle
2009-01-01
The aim of this study was to estimate the components of variance and genetic parameters for the visual scores which constitute the Morphological Evaluation System (MES), such as body structure (S), precocity (P) and musculature (M) in Nellore beef-cattle at the weaning and yearling stages, by using threshold Bayesian models. The information used for this was gleaned from visual scores of 5,407 animals evaluated at the weaning and 2,649 at the yearling stages. The genetic parameters for visual score traits were estimated through two-trait analysis, using the threshold animal model, with Bayesian statistics methodology and MTGSAM (Multiple Trait Gibbs Sampler for Animal Models) threshold software. Heritability estimates for S, P and M were 0.68, 0.65 and 0.62 (at weaning) and 0.44, 0.38 and 0.32 (at the yearling stage), respectively. Heritability estimates for S, P and M were found to be high, and so it is expected that these traits should respond favorably to direct selection. The visual scores evaluated at the weaning and yearling stages might be used in the composition of new selection indexes, as they presented sufficient genetic variability to promote genetic progress in such morphological traits. PMID:21637450
Institute of Scientific and Technical Information of China (English)
韩明
2013-01-01
Previously, the author introduced a new parameter estimation method, the E-Bayesian estimation method, to estimate the reliability derived from the binomial distribution: the definition of the E-Bayesian estimate of reliability was given, together with formulas for the E-Bayesian and hierarchical Bayesian estimates, but the properties of the E-Bayesian estimate were not provided. In this paper, properties of the E-Bayesian estimate of binomial reliability are provided.
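A rough numerical sketch of the E-Bayesian idea is to average the conjugate Bayes estimate of reliability over a hyperprior on the Beta parameters; the uniform hyperprior region (0, c] x (0, c] used below is an illustrative choice, not the author's specification.

```python
# E-Bayesian sketch for binomial reliability: average the Bayes estimate
# over a uniform hyperprior on the Beta hyperparameters (a, b).
# The hyperprior region and grid resolution are illustrative assumptions.

def bayes_reliability(s, n, a, b):
    """Posterior mean of reliability R given s successes in n trials
    and a Beta(a, b) prior on R."""
    return (s + a) / (n + a + b)

def e_bayes_reliability(s, n, c=1.0, grid=200):
    """Midpoint-rule average of the Bayes estimate over (a, b) in (0, c]^2."""
    total = 0.0
    for i in range(grid):
        a = (i + 0.5) * c / grid
        for j in range(grid):
            b = (j + 0.5) * c / grid
            total += bayes_reliability(s, n, a, b)
    return total / grid ** 2
```

Averaging over the hyperparameters removes the need to pick a single (a, b), which is the motivation behind the E-Bayesian construction.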
Bayesian Estimation and Prediction for Flexible Weibull Model under Type-II Censoring Scheme
Directory of Open Access Journals (Sweden)
Sanjay Kumar Singh
2013-01-01
Full Text Available We have developed the Bayesian estimation procedure for the flexible Weibull distribution under a Type-II censoring scheme, assuming Jeffrey's scale-invariant (noninformative) and Gamma (informative) priors for the model parameters. The interval estimation for the model parameters has been performed through normal approximation, bootstrap, and highest posterior density (HPD) procedures. Further, we have also derived the predictive posteriors and the corresponding predictive survival functions for the future observations based on Type-II censored data from the flexible Weibull distribution. Since the predictive posteriors are not in closed form, we propose to use Markov chain Monte Carlo (MCMC) methods to approximate the posteriors of interest. The performance of the Bayes estimators has also been compared with the classical estimators of the model parameters through a Monte Carlo simulation study. A real data set representing the time between failures of secondary reactor pumps has been analysed for illustration purposes.
Bayesian Model Averaging for Ensemble-Based Estimates of Solvation Free Energies
Gosink, Luke J; Reehl, Sarah M; Whitney, Paul D; Mobley, David L; Baker, Nathan A
2016-01-01
This paper applies the Bayesian Model Averaging (BMA) statistical ensemble technique to estimate small molecule solvation free energies. There is a wide range of methods for predicting solvation free energies, ranging from empirical statistical models to ab initio quantum mechanical approaches. Each of these methods is based on a set of conceptual assumptions that can affect a method's predictive accuracy and transferability. Using an iterative statistical process, we have selected and combined solvation energy estimates using an ensemble of 17 diverse methods from the SAMPL4 blind prediction study to form a single, aggregated solvation energy estimate. The ensemble design process evaluates the statistical information in each individual method as well as the performance of the aggregate estimate obtained from the ensemble as a whole. Methods that possess minimal or redundant information are pruned from the ensemble and the evaluation process repeats until aggregate predictive performance can no longer be improv...
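The aggregation step can be sketched as a posterior-weighted average; the log-evidence weighting below is a generic BMA illustration with invented values, not the study's iterative pruning procedure.

```python
import math

# Generic BMA-style aggregation: combine per-method estimates with weights
# proportional to exp(log evidence), computed as a numerically stable softmax.
# The estimates and log evidences are invented for illustration.

def bma_combine(estimates, log_evidences):
    m = max(log_evidences)
    weights = [math.exp(l - m) for l in log_evidences]  # stable softmax
    z = sum(weights)
    weights = [w / z for w in weights]
    combined = sum(w * e for w, e in zip(weights, estimates))
    return combined, weights

# Two hypothetical solvation free energy estimates (kcal/mol) with equal support.
combined, weights = bma_combine([-4.0, -6.0], [0.0, 0.0])
```

With equal evidence the combination reduces to a simple mean; unequal evidence shifts weight toward the better-supported method.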
Estimation of temporal gait parameters using Bayesian models on acceleration signals.
López-Nava, I H; Muñoz-Meléndez, A; Pérez Sanpablo, A I; Alessi Montero, A; Quiñones Urióstegui, I; Núñez Carrera, L
2016-01-01
The purpose of this study is to develop a system capable of performing calculation of temporal gait parameters using two low-cost wireless accelerometers and artificial intelligence-based techniques as part of a larger research project for conducting human gait analysis. Ten healthy subjects of different ages participated in this study and performed controlled walking tests. Two wireless accelerometers were placed on their ankles. Raw acceleration signals were processed in order to obtain gait patterns from characteristic peaks related to steps. A Bayesian model was implemented to classify the characteristic peaks into steps or nonsteps. The acceleration signals were segmented based on gait events, such as heel strike and toe-off, of actual steps. Temporal gait parameters, such as cadence, ambulation time, step time, gait cycle time, stance and swing phase time, simple and double support time, were estimated from segmented acceleration signals. Gait data-sets were divided into two groups of ages to test Bayesian models in order to classify the characteristic peaks. The mean error obtained from calculating the temporal gait parameters was 4.6%. Bayesian models are useful techniques that can be applied to classification of gait data of subjects at different ages with promising results.
A fuzzy Bayesian approach to flood frequency estimation with imprecise historical information
Salinas, José Luis; Kiss, Andrea; Viglione, Alberto; Viertl, Reinhard; Blöschl, Günter
2016-09-01
This paper presents a novel framework that links imprecision (through a fuzzy approach) and stochastic uncertainty (through a Bayesian approach) in estimating flood probabilities from historical flood information and systematic flood discharge data. The method exploits the linguistic characteristics of historical source material to construct membership functions, which may be wider or narrower, depending on the vagueness of the statements. The membership functions are either included in the prior distribution or the likelihood function to obtain a fuzzy version of the flood frequency curve. The viability of the approach is demonstrated by three case studies that differ in terms of their hydromorphological conditions (from an Alpine river with bedrock profile to a flat lowland river with extensive flood plains) and historical source material (including narratives, town and county meeting protocols, flood marks and damage accounts). The case studies are presented in order of increasing fuzziness (the Rhine at Basel, Switzerland; the Werra at Meiningen, Germany; and the Tisza at Szeged, Hungary). Incorporating imprecise historical information is found to reduce the range between the 5% and 95% Bayesian credibility bounds of the 100 year floods by 45% and 61% for the Rhine and Werra case studies, respectively. The strengths and limitations of the framework are discussed relative to alternative (non-fuzzy) methods. The fuzzy Bayesian inference framework provides a flexible methodology that fits the imprecise nature of linguistic information on historical floods as available in historical written documentation.
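A triangular membership function of the kind used to encode vague historical flood statements can be sketched as follows; the discharge bounds in the example are invented.

```python
# Triangular fuzzy membership function: grade 0 outside [low, high], rising
# linearly to 1 at the peak.  The flood-discharge bounds are hypothetical.

def triangular_membership(x, low, peak, high):
    """Membership grade in [0, 1] for a triangular fuzzy number."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)
```

Vaguer historical statements translate into wider triangles (larger high - low), which then propagate into wider fuzzy credibility bounds on the flood frequency curve.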
A method for Bayesian estimation of the probability of local intensity for some cities in Japan
Directory of Open Access Journals (Sweden)
G. C. Koravos
2002-06-01
Full Text Available Seismic hazard, in terms of the probability of exceedance of a given intensity in a given time span, was assessed for 12 sites in Japan. The method does not use any attenuation law. Instead, the dependence of local intensity on epicentral intensity I_0 is calculated directly from the data, using a Bayesian model. According to this model (Meroni et al., 1994), local intensity follows the binomial distribution with parameters (I_0, p). The parameter p is considered a random variable following the Beta distribution. In this manner, Bayesian estimates of p are assessed for various values of epicentral intensity and epicentral distance. In order to apply this model to the assessment of seismic hazard, the area under consideration is divided into seismic sources (zones) of known seismicity. The contribution of each source to the seismic hazard at every site is calculated according to the Bayesian model, and the result is the combined effect of all the sources. High probabilities of exceedance were calculated for the sites in the central part of the country, with hazard decreasing slightly towards the north and south parts.
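Marginalizing the Binomial(I_0, p) likelihood over the Beta-distributed p gives a beta-binomial distribution for local intensity, from which exceedance probabilities follow directly; the hyperparameter values below are illustrative.

```python
from math import comb, exp, lgamma

# Beta-binomial law for local intensity: K | p ~ Binomial(I0, p) with
# p ~ Beta(alpha, beta).  Hyperparameters are illustrative, not fitted values.

def log_beta_fn(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binomial_pmf(k, n, alpha, beta):
    return comb(n, k) * exp(log_beta_fn(k + alpha, n - k + beta)
                            - log_beta_fn(alpha, beta))

def prob_exceed(i_min, i0, alpha, beta):
    """Probability that local intensity is at least i_min, given epicentral i0."""
    return sum(beta_binomial_pmf(k, i0, alpha, beta) for k in range(i_min, i0 + 1))
```

Summing the tail of this distribution per source, and combining over all sources, gives the site hazard curve described above.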
Arenson, Ethan A.
2009-01-01
One of the problems inherent in variance component estimation centers around inadmissible estimates. Such estimates occur when there is more variability within groups, relative to between groups. This paper suggests a Bayesian approach to resolve inadmissibility by placing noninformative inverse-gamma priors on the variance components, and…
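For a single variance component with known (zero) mean, the inverse-gamma prior is conjugate, so the update is available in closed form; the weakly informative hyperparameters below are an illustrative choice, not the paper's.

```python
# Conjugate inverse-gamma update for a normal variance with known zero mean:
# sigma^2 ~ InvGamma(shape, scale).  Hyperparameters are illustrative.

def inv_gamma_posterior(residuals, shape=0.001, scale=0.001):
    """Return posterior (shape, scale) and posterior mean of sigma^2."""
    n = len(residuals)
    post_shape = shape + n / 2.0
    post_scale = scale + sum(r * r for r in residuals) / 2.0
    post_mean = post_scale / (post_shape - 1.0) if post_shape > 1.0 else float("nan")
    return post_shape, post_scale, post_mean

post_shape, post_scale, post_mean = inv_gamma_posterior([1.0, -1.0, 1.0, -1.0])
```

Because the inverse-gamma posterior places all its mass on positive values, the resulting variance estimates are admissible by construction, which is the point of the Bayesian resolution sketched in the abstract.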
Bayesian estimation in IRT models with missing values in background variables
Directory of Open Access Journals (Sweden)
Christian Aßmann
2015-12-01
Full Text Available Large scale assessment studies typically aim at investigating the relationship between persons' competencies and explanatory variables. Individual competencies are often estimated by explicitly including explanatory background variables in the corresponding Item Response Theory models. Since missing values in background variables inevitably occur, strategies to handle the uncertainty related to missing values in parameter estimation are required. We propose to adapt a Bayesian estimation strategy based on Markov Chain Monte Carlo techniques. Sampling from the posterior distribution of parameters is thereby enriched by sampling from the full conditional distribution of the missing values. We consider non-parametric as well as parametric approximations for the full conditional distributions of missing values, thus allowing for a flexible incorporation of metric as well as categorical background variables. We evaluate the validity of our approach with respect to statistical accuracy by a simulation study controlling the missing values generating mechanism. We show that the proposed Bayesian strategy allows for effective comparison of nested model specifications via gauging highest posterior density intervals of all involved model parameters. An illustration of the suggested approach uses data from the National Educational Panel Study on mathematical competencies of fifth grade students.
EEG-fMRI Bayesian framework for neural activity estimation: a simulation study
Croce, Pierpaolo; Basti, Alessio; Marzetti, Laura; Zappasodi, Filippo; Del Gratta, Cosimo
2016-12-01
Objective. Due to the complementary nature of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and given the possibility of simultaneous acquisition, the joint data analysis can afford a better understanding of the underlying neural activity estimation. In this simulation study we want to show the benefit of the joint EEG-fMRI neural activity estimation in a Bayesian framework. Approach. We built a dynamic Bayesian framework in order to perform joint EEG-fMRI neural activity time course estimation. The neural activity is originated by a given brain area and detected by means of both measurement techniques. We have chosen a resting state neural activity situation to address the worst case in terms of the signal-to-noise ratio. To infer information by EEG and fMRI concurrently we used a tool belonging to the sequential Monte Carlo (SMC) methods: the particle filter (PF). Main results. First, despite a high computational cost, we showed the feasibility of such an approach. Second, we obtained an improvement in neural activity reconstruction when using both EEG and fMRI measurements. Significance. The proposed simulation shows the improvements in neural activity reconstruction with EEG-fMRI simultaneous data. The application of such an approach to real data allows a better comprehension of the neural dynamics.
Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim
2015-01-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The Joint-EnKF directly updates the augmented state-parameter vector while the Dual-EnKF employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. In this paper, we reverse the order of the forecast-update steps following the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem, based on which we propose a new dual EnKF scheme, the Dual-EnKF$_{\\rm OSA}$. Compared to the Dual-EnKF, this introduces a new update step to the state in a fully consistent Bayesian framework...
Bayesian estimation of predator diet composition from fatty acids and stable isotopes
Directory of Open Access Journals (Sweden)
Philipp Neubauer
2015-04-01
Full Text Available Quantitative analysis of stable isotopes (SI) and, more recently, fatty acid profiles (FAP) are useful and complementary tools for estimating the relative contribution of different prey items in the diet of a predator. The combination of these two approaches, however, has thus far been limited and qualitative. We propose a mixing model for FAP that follows the Bayesian machinery employed in state-of-the-art mixing models for SI. This framework provides both point estimates and probability distributions for individual and population level diet proportions. Where fat content and conversion coefficients are available, they can be used to improve diet estimates. This model can be explicitly integrated with analogous models for SI to increase resolution and clarify predator–prey relationships. We apply our model to simulated data and an experimental dataset that allows us to illustrate modeling strategies and demonstrate model performance. Our methods are provided as an open source software package for the statistical computing environment R.
Bayesian Estimation for Land Surface Temperature Retrieval: The Nuisance of Emissivities
Morgan, J A
2004-01-01
An approach to the remote sensing of land surface temperature is developed using the methods of Bayesian inference. The starting point is the maximum entropy estimate for the posterior distribution of radiance in multiple bands. In order to convert this quantity to an estimator for surface temperature and emissivity with Bayes' theorem, it is necessary to obtain the joint prior probability for surface temperature and emissivity, given available prior knowledge. The requirement that any pair of distinct observers be able to relate their descriptions of radiance under arbitrary Lorentz transformations uniquely determines the prior probability. Perhaps surprisingly, surface temperature acts as a scale parameter, while emissivity acts as a location parameter, giving the prior probability P(T, emissivity | K) dT d(emissivity) = (const./T) dT d(emissivity). Given this result, it is a simple matter to construct estimators for surface temperature and emissivity. Monte Carlo simulations of land surface temperature retrieval in selected MODIS ...
Bayesian nonparametric estimation and consistency of mixed multinomial logit choice models
De Blasi, Pierpaolo; Lau, John W; 10.3150/09-BEJ233
2011-01-01
This paper develops nonparametric estimation for discrete choice models based on the mixed multinomial logit (MMNL) model. It has been shown that MMNL models encompass all discrete choice models derived under the assumption of random utility maximization, subject to the identification of an unknown distribution $G$. Noting the mixture model description of the MMNL, we employ a Bayesian nonparametric approach, using nonparametric priors on the unknown mixing distribution $G$, to estimate choice probabilities. We provide an important theoretical support for the use of the proposed methodology by investigating consistency of the posterior distribution for a general nonparametric prior on the mixing distribution. Consistency is defined according to an $L_1$-type distance on the space of choice probabilities and is achieved by extending to a regression model framework a recent approach to strong consistency based on the summability of square roots of prior probabilities. Moving to estimation, slightly different te...
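MMNL choice probabilities are logit (softmax) probabilities averaged over draws from the mixing distribution G; a minimal sketch with hypothetical utility draws is shown below.

```python
import math

# Mixed multinomial logit sketch: choice probabilities are logit probabilities
# averaged over draws of the random utilities.  The draws are hypothetical
# samples standing in for an unknown mixing distribution G.

def mnl_probs(utilities):
    """Standard logit choice probabilities (numerically stable softmax)."""
    m = max(utilities)
    expu = [math.exp(u - m) for u in utilities]
    z = sum(expu)
    return [e / z for e in expu]

def mixed_mnl_probs(utility_draws):
    """Average the logit probabilities over utility draws from G."""
    n_alt = len(utility_draws[0])
    acc = [0.0] * n_alt
    for u in utility_draws:
        acc = [a + p for a, p in zip(acc, mnl_probs(u))]
    return [a / len(utility_draws) for a in acc]

# Two symmetric draws over two alternatives.
mix = mixed_mnl_probs([[1.0, 0.0], [0.0, 1.0]])
```

A Bayesian nonparametric prior on G amounts to placing a prior over such collections of draws, with the posterior concentrating on mixing distributions consistent with observed choices.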
Heart rate variability estimation in photoplethysmography signals using Bayesian learning approach
Alwosheel, Ahmad; Alasaad, Amr
2016-01-01
Heart rate variability (HRV) has become a marker for various health and disease conditions. Photoplethysmography (PPG) sensors integrated in wearable devices such as smart watches and phones are widely used to measure heart activities. HRV requires accurate estimation of time interval between consecutive peaks in the PPG signal. However, PPG signal is very sensitive to motion artefact which may lead to poor HRV estimation if false peaks are detected. In this Letter, the authors propose a probabilistic approach based on Bayesian learning to better estimate HRV from PPG signal recorded by wearable devices and enhance the performance of the automatic multi scale-based peak detection (AMPD) algorithm used for peak detection. The authors’ experiments show that their approach enhances the performance of the AMPD algorithm in terms of number of HRV related metrics such as sensitivity, positive predictive value, and average temporal resolution. PMID:27382483
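Once peak-to-peak intervals are extracted from the PPG signal, common time-domain HRV metrics follow directly; the SDNN/RMSSD definitions below are standard assumptions rather than the Letter's own metrics, and the peak times are idealized.

```python
# Time-domain HRV metrics from detected peak times.  SDNN is the standard
# deviation of inter-beat intervals (IBIs); RMSSD is the root mean square of
# successive IBI differences.  Peak times here are idealized examples.

def hrv_metrics(peak_times_s):
    """Return (SDNN, RMSSD) in milliseconds from peak times in seconds."""
    ibis = [1000.0 * (b - a) for a, b in zip(peak_times_s, peak_times_s[1:])]
    mean = sum(ibis) / len(ibis)
    sdnn = (sum((x - mean) ** 2 for x in ibis) / len(ibis)) ** 0.5
    diffs = [b - a for a, b in zip(ibis, ibis[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return sdnn, rmssd
```

A single false peak injected by motion artefact splits one IBI into two short ones, inflating both metrics, which is why robust peak classification matters.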
Super-Resolution Using Hidden Markov Model and Bayesian Detection Estimation Framework
Directory of Open Access Journals (Sweden)
Humblot Fabrice
2006-01-01
Full Text Available This paper presents a new method for super-resolution (SR) reconstruction of a high-resolution (HR) image from several low-resolution (LR) images. The HR image is assumed to be composed of homogeneous regions. Thus, the a priori distribution of the pixels is modeled by a finite mixture model (FMM) and a Potts Markov model (PMM) for the labels. The whole a priori model is then a hierarchical Markov model. The LR images are assumed to be obtained from the HR image by lowpass filtering, arbitrary translation, decimation, and finally corruption by random noise. The problem is then put in a Bayesian detection and estimation framework, and appropriate algorithms are developed based on Markov chain Monte Carlo (MCMC) Gibbs sampling. At the end, we have not only an estimate of the HR image but also an estimate of the classification labels, which leads to a segmentation result.
Ait-El-Fquih, Boujemaa; El Gharamti, Mohamad; Hoteit, Ibrahim
2016-08-01
Ensemble Kalman filtering (EnKF) is an efficient approach to addressing uncertainties in subsurface groundwater models. The EnKF sequentially integrates field data into simulation models to obtain a better characterization of the model's state and parameters. These are generally estimated following joint and dual filtering strategies, in which, at each assimilation cycle, a forecast step by the model is followed by an update step with incoming observations. The joint EnKF directly updates the augmented state-parameter vector, whereas the dual EnKF empirically employs two separate filters, first estimating the parameters and then estimating the state based on the updated parameters. To develop a Bayesian consistent dual approach and improve the state-parameter estimates and their consistency, we propose in this paper a one-step-ahead (OSA) smoothing formulation of the state-parameter Bayesian filtering problem from which we derive a new dual-type EnKF, the dual EnKF_OSA. Compared with the standard dual EnKF, it imposes a new update step to the state, which is shown to enhance the performance of the dual approach with almost no increase in the computational cost. Numerical experiments are conducted with a two-dimensional (2-D) synthetic groundwater aquifer model to investigate the performance and robustness of the proposed dual EnKF_OSA, and to evaluate its results against those of the joint and dual EnKFs. The proposed scheme is able to successfully recover both the hydraulic head and the aquifer conductivity, providing further reliable estimates of their uncertainties. Furthermore, it is found to be more robust to different assimilation settings, such as the spatial and temporal distribution of the observations, and the level of noise in the data. Based on our experimental setups, it yields up to 25 % more accurate state and parameter estimations than the joint and dual approaches.
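The stochastic EnKF analysis step that the joint and dual schemes build on can be sketched as follows; the linear observation operator, dimensions, and values are illustrative, and this is not the proposed one-step-ahead dual scheme itself.

```python
import numpy as np

# Stochastic EnKF analysis step with perturbed observations.  The ensemble
# statistics supply the covariances; the setup below is a toy illustration.

def enkf_update(ensemble, obs, obs_op, obs_var, rng):
    """ensemble: (n_state, n_members); obs_op: (n_obs, n_state), linear."""
    n_obs = obs_op.shape[0]
    n_mem = ensemble.shape[1]
    hx = obs_op @ ensemble
    x_anom = ensemble - ensemble.mean(axis=1, keepdims=True)
    hx_anom = hx - hx.mean(axis=1, keepdims=True)
    pxh = x_anom @ hx_anom.T / (n_mem - 1)  # state-observation cross covariance
    phh = hx_anom @ hx_anom.T / (n_mem - 1) + obs_var * np.eye(n_obs)
    gain = pxh @ np.linalg.inv(phh)         # ensemble Kalman gain
    perturbed = obs[:, None] + np.sqrt(obs_var) * rng.standard_normal((n_obs, n_mem))
    return ensemble + gain @ (perturbed - hx)

rng = np.random.default_rng(42)
prior = rng.standard_normal((1, 200))       # toy prior ensemble ~ N(0, 1)
analysis = enkf_update(prior, np.array([5.0]), np.eye(1), 0.01, rng)
```

With a near-exact observation (small obs_var), the analysis ensemble collapses toward the observed value; the OSA variant reorders the forecast and update steps around this same building block.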
Bayesian Nonparametric Mixture Estimation for Time-Indexed Functional Data in R
Directory of Open Access Journals (Sweden)
Terrance D. Savitsky
2016-08-01
Full Text Available We present the R package growfunctions, which offers Bayesian nonparametric estimation models for analysis of dependent, noisy time series data indexed by a collection of domains. This data structure arises from combining periodically published government survey statistics, such as those reported in the Current Population Survey (CPS). The CPS publishes monthly, by-state estimates of employment levels, where each state expresses a noisy time series. Published state-level estimates from the CPS are composed from household survey responses in a model-free manner and express high levels of volatility due to insufficient sample sizes. Existing software solutions borrow information over a modeled time-based dependence to extract a de-noised time series for each domain. These solutions, however, ignore the dependence among the domains that may be additionally leveraged to improve estimation efficiency. The growfunctions package offers two fully nonparametric mixture models that simultaneously estimate both a time- and domain-indexed dependence structure for a collection of time series: (1) a Gaussian process (GP) construction, which is parameterized through the covariance matrix, estimates a latent function for each domain. The covariance parameters of the latent functions are indexed by domain under a Dirichlet process prior that permits estimation of the dependence among functions across the domains; (2) an intrinsic Gaussian Markov random field prior construction provides an alternative to the GP that expresses different computation and estimation properties. In addition to performing denoised estimation of latent functions from published domain estimates, growfunctions allows estimation of collections of functions for observation units (e.g., households), rather than aggregated domains, by accounting for an informative sampling design under which the probabilities for inclusion of observation units are related to the response variable. growfunctions includes plot
Freni, Gabriele; Mannina, Giorgio
In urban drainage modelling, uncertainty analysis is of undoubted necessity. However, uncertainty analysis in urban water-quality modelling is still in its infancy and only a few studies have been carried out. Therefore, several methodological aspects still need to be investigated and clarified, especially regarding water quality modelling. The use of the Bayesian approach for uncertainty analysis has been stimulated by its rigorous theoretical framework and by the possibility of evaluating the impact of new knowledge on the modelling predictions. Nevertheless, the Bayesian approach relies on some restrictive hypotheses that are not present in less formal methods like the Generalised Likelihood Uncertainty Estimation (GLUE). One crucial point in the application of the Bayesian method is the formulation of a likelihood function that is conditioned by the hypotheses made regarding the model residuals. Statistical transformations, such as the Box-Cox equation, are generally used to ensure the homoscedasticity of residuals. However, this practice may affect the reliability of the analysis, leading to incorrect uncertainty estimation. The present paper aims to explore the influence of the Box-Cox equation for environmental water quality models. To this end, five cases were considered, one of which used the “real” residuals distribution (i.e. drawn from available data). The analysis was applied to the Nocella experimental catchment (Italy), an agricultural and semi-urbanised basin where two sewer systems, two wastewater treatment plants and a river reach were monitored during both dry and wet weather periods. The results show that the uncertainty estimation is greatly affected by residual transformation and a wrong assumption may also affect the evaluation of model uncertainty. Less formal methods always provide an overestimation of modelling uncertainty with respect to the Bayesian method, but this effect is reduced if a wrong assumption is made regarding the
Evolution of the cerebellum as a neuronal machine for Bayesian state estimation
Paulin, M. G.
2005-09-01
The cerebellum evolved in association with the electric sense and vestibular sense of the earliest vertebrates. Accurate information provided by these sensory systems would have been essential for precise control of orienting behavior in predation. A simple model shows that individual spikes in electrosensory primary afferent neurons can be interpreted as measurements of prey location. Using this result, I construct a computational neural model in which the spatial distribution of spikes in a secondary electrosensory map forms a Monte Carlo approximation to the Bayesian posterior distribution of prey locations given the sense data. The neural circuit that emerges naturally to perform this task resembles the cerebellar-like hindbrain electrosensory filtering circuitry of sharks and other electrosensory vertebrates. The optimal filtering mechanism can be extended to handle dynamical targets observed from a dynamical platform; that is, to construct an optimal dynamical state estimator using spiking neurons. This may provide a generic model of cerebellar computation. Vertebrate motion-sensing neurons have specific fractional-order dynamical characteristics that allow Bayesian state estimators to be implemented elegantly and efficiently, using simple operations with asynchronous pulses, i.e. spikes. The computational neural models described in this paper represent a novel kind of particle filter, using spikes as particles. The models are specific and make testable predictions about computational mechanisms in cerebellar circuitry, while providing a plausible explanation of cerebellar contributions to aspects of motor control, perception and cognition.
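The spikes-as-particles idea can be caricatured with an ordinary bootstrap particle filter for a static 1-D target, where each incoming "spike" is treated as a noisy measurement of prey location. All numbers below are illustrative, not a model of any actual electrosensory circuit:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                      # number of particles (the "spikes")
true_x = 2.0                  # hypothetical prey location

# Prior: particles spread over plausible prey locations
particles = rng.uniform(-5.0, 5.0, size=N)

# Treat each afferent spike as a noisy measurement of prey location
for z in true_x + rng.normal(scale=0.5, size=20):
    # Weight particles by the Gaussian likelihood of the measurement
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    w /= w.sum()
    # Resample: the particle cloud approximates the Bayesian posterior
    particles = rng.choice(particles, size=N, p=w)
    # Small jitter (regularization) to avoid particle impoverishment
    particles += rng.normal(scale=0.05, size=N)

estimate = particles.mean()
```

After the measurement sweep, the particle cloud concentrates around the true location, mirroring how the posterior over prey position sharpens as spikes accumulate.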
Hierarchical Bayesian methods for estimation of parameters in a longitudinal HIV dynamic system.
Huang, Yangxin; Liu, Dacheng; Wu, Hulin
2006-06-01
HIV dynamics studies have significantly contributed to the understanding of HIV infection and antiviral treatment strategies. But most studies are limited to short-term viral dynamics due to the difficulty of establishing a relationship of antiviral response with multiple treatment factors such as drug exposure and drug susceptibility during long-term treatment. In this article, a mechanism-based dynamic model is proposed for characterizing long-term viral dynamics with antiretroviral therapy, described by a set of nonlinear differential equations without closed-form solutions. In this model we directly incorporate drug concentration, adherence, and drug susceptibility into a function of treatment efficacy, defined as an inhibition rate of virus replication. We investigate a Bayesian approach under the framework of hierarchical Bayesian (mixed-effects) models for estimating unknown dynamic parameters. In particular, interest focuses on estimating individual dynamic parameters. The proposed methods not only help to alleviate the difficulty in parameter identifiability, but also flexibly deal with sparse and unbalanced longitudinal data from individual subjects. For illustration purposes, we present one simulation example to implement the proposed approach and apply the methodology to a data set from an AIDS clinical trial. The basic concept of the longitudinal HIV dynamic systems and the proposed methodologies are generally applicable to any other biomedical dynamic systems.
Directory of Open Access Journals (Sweden)
Pretorius Albertus
2003-03-01
Full Text Available Abstract In the case of the mixed linear model the random effects are usually assumed to be normally distributed in both the Bayesian and classical frameworks. In this paper, the Dirichlet process prior was used to provide nonparametric Bayesian estimates for correlated random effects. This goal was achieved by providing a Gibbs sampler algorithm that allows these correlated random effects to have a nonparametric prior distribution. A sampling-based method is illustrated. This method, which transforms the genetic covariance matrix to an identity matrix so that the random effects are uncorrelated, is an extension of the theory and results of previous researchers. Also, by using Gibbs sampling and data augmentation, a simulation procedure was derived for estimating the precision parameter M associated with the Dirichlet process prior. All needed conditional posterior distributions are given. To illustrate the application, data from the Elsenburg Dormer sheep stud were analysed. A total of 3325 weaning weight records from the progeny of 101 sires were used.
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic [normal likelihood - r ≈ N (0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach proved adequate to the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modeling.
mBEEF: An accurate semi-local Bayesian error estimation density functional
Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas
2014-04-01
We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.
Estimation of under-reported visceral leishmaniasis (VL) cases in Bihar: a Bayesian approach
Directory of Open Access Journals (Sweden)
A Ranjan
2013-12-01
Full Text Available Background: Visceral leishmaniasis (VL) is a major health problem in the state of Bihar and adjoining areas in India. In the absence of any active surveillance mechanism for the disease, there seems to be gross under-reporting of VL cases. Objective: The objective of this study was to estimate the extent of under-reporting of VL cases in Bihar using pooled analysis of published papers. Method: We calculated the pooled common ratio (RRMH) based on three studies and combined it with a prior distribution of the ratio using the inverse-variance weighting method. A Bayesian method was used to estimate the posterior distribution of the “under-reporting factor” (ratio of unreported to reported cases). Results: The posterior distribution of the ratio of unreported to reported cases yielded a mean of 3.558, with 95% posterior limits of 2.81 and 4.50. Conclusion: The Bayesian approach gives evidence that the total number of VL cases in the state may be more than three times the currently reported figures.
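The inverse-variance combination of a pooled data estimate with a prior can be sketched on the log-ratio scale, where the normal-normal conjugate update applies. The numbers below are illustrative placeholders, not the study's actual inputs:

```python
import numpy as np

# Hypothetical inputs: a pooled ratio estimate (with standard error) from the
# data, and a prior ratio, both on the log scale where normality is assumed.
log_rr_data, se_data = np.log(3.4), 0.15     # illustrative, not the paper's values
log_rr_prior, se_prior = np.log(3.8), 0.25

# Inverse-variance (precision) weighting gives the normal-normal posterior
w_data, w_prior = 1 / se_data**2, 1 / se_prior**2
log_post = (w_data * log_rr_data + w_prior * log_rr_prior) / (w_data + w_prior)
se_post = np.sqrt(1 / (w_data + w_prior))

ratio = np.exp(log_post)                                    # posterior ratio
ci = np.exp(log_post + np.array([-1.96, 1.96]) * se_post)   # 95% limits
```

The posterior ratio lands between the data and prior estimates, weighted toward whichever is more precise, which is the mechanism behind the study's pooled "under-reporting factor".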
Directory of Open Access Journals (Sweden)
Wirichada Pan-ngum
Full Text Available BACKGROUND: Accuracy of rapid diagnostic tests for dengue infection has been repeatedly estimated by comparing those tests with reference assays. We hypothesized that those estimates might be inaccurate if the accuracy of the reference assays is not perfect. Here, we investigated this using statistical modeling. METHODS/PRINCIPAL FINDINGS: Data from a cohort study of 549 patients suspected of dengue infection presenting at Colombo North Teaching Hospital, Ragama, Sri Lanka, that described the application of our reference assay (a combination of Dengue IgM antibody capture ELISA and IgG antibody capture ELISA) and of three rapid diagnostic tests (Panbio NS1 antigen, IgM antibody, and IgG antibody rapid immunochromatographic cassette tests) were re-evaluated using Bayesian latent class models (LCMs). The estimated sensitivity and specificity of the reference assay were 62.0% and 99.6%, respectively. Prevalence of dengue infection (24.3%) and the sensitivities and specificities of the Panbio NS1 (45.9% and 97.9%), IgM (54.5% and 95.5%), and IgG (62.1% and 84.5%) tests estimated by Bayesian LCMs were significantly different from those estimated by assuming that the reference assay was perfect. Sensitivity, specificity, PPV and NPV for a combination of NS1, IgM and IgG cassette tests on admission samples were 87.0%, 82.8%, 62.0% and 95.2%, respectively. CONCLUSIONS: Our reference assay is an imperfect gold standard. In our setting, the combination of NS1, IgM and IgG rapid diagnostic tests could be used on admission to rule out dengue infection with a high level of accuracy (NPV 95.2%). Further evaluation of rapid diagnostic tests for dengue infection should include the use of appropriate statistical models.
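The predictive values quoted above follow from sensitivity, specificity, and prevalence via Bayes' theorem. This short sketch recovers the reported PPV and NPV for the combined admission tests to within rounding:

```python
def ppv_npv(sens, spec, prev):
    """Positive/negative predictive values via Bayes' theorem."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Reported values for the NS1 + IgM + IgG combination: sensitivity 87.0%,
# specificity 82.8%, dengue prevalence 24.3%
ppv, npv = ppv_npv(0.870, 0.828, 0.243)
```

Plugging in the reported figures gives PPV about 62% and NPV about 95.2%, matching the abstract, which illustrates why a combination with modest PPV can still rule out infection at this prevalence.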
Moment-tensor solutions estimated using optimal filter theory: Global seismicity, 2001
Sipkin, S.A.; Bufe, C.G.; Zirbes, M.D.
2003-01-01
This paper is the 12th in a series published yearly containing moment-tensor solutions computed at the US Geological Survey using an algorithm based on the theory of optimal filter design (Sipkin, 1982; Sipkin, 1986b). An inversion has been attempted for all earthquakes with a magnitude, mb or MS, of 5.5 or greater. Previous listings include solutions for earthquakes that occurred from 1981 to 2000 (Sipkin, 1986b; Sipkin and Needham, 1989; Sipkin and Needham, 1991; Sipkin and Needham, 1992; Sipkin and Needham, 1993; Sipkin and Needham, 1994a; Sipkin and Needham, 1994b; Sipkin and Zirbes, 1996; Sipkin and Zirbes, 1997; Sipkin et al., 1998; Sipkin et al., 1999; Sipkin et al., 2000a; Sipkin et al., 2000b; Sipkin et al., 2002). The entire USGS moment-tensor catalog can be obtained via anonymous FTP at ftp://ghtftp.cr.usgs.gov. After logging on, change directory to “momten”. This directory contains two compressed ASCII files that contain the finalized solutions, “mt.lis.Z” and “fmech.lis.Z”. “mt.lis.Z” contains the elements of the moment tensors along with detailed event information; “fmech.lis.Z” contains the decompositions into the principal axes and best double-couples. The fast moment-tensor solutions for more recent events that have not yet been finalized and added to the catalog are gathered by month in the files “jan01.lis.Z”, etc. “fmech.doc.Z” describes the various fields.
Using ancillary information to improve hypocenter estimation: Bayesian single event location (BSEL)
Energy Technology Data Exchange (ETDEWEB)
Anderson, Dale N [Los Alamos National Laboratory
2008-01-01
We have developed and tested an algorithm, Bayesian Single Event Location (BSEL), for estimating the location of a seismic event. The main driver for our research is the inadequate representation of ancillary information in the hypocenter estimation procedure. The added benefit is that we have also addressed instability issues often encountered with historical NLR solvers (e.g., non-convergence or seismically infeasible results). BSEL differs from established nonlinear regression techniques by using a Bayesian prior probability density function (prior PDF) to incorporate ancillary physical basis constraints about event location. P-wave arrival times from seismic events are used in the development. Depth, a focus of this paper, may be modeled with a prior PDF (potentially skewed) that captures physical basis bounds from surface wave observations. This PDF is constructed from a Rayleigh wave depth excitation eigenfunction that is based on the observed minimum period from a spectrogram analysis and estimated near-source elastic parameters. For example, if the surface wave is an Rg phase, it potentially provides a strong constraint for depth, which has important implications for remote monitoring of nuclear explosions. The proposed Bayesian algorithm is illustrated with events that demonstrate its congruity with established hypocenter estimation methods and its application potential. The BSEL method is applied to three events: (1) A shallow Mw 4 earthquake that occurred near Bardwell, KY on June 6, 2003, (2) the Mw 5.6 earthquake of July 26, 2005 that occurred near Dillon, MT, and (3) a deep Mw 5.7 earthquake that occurred off the coast of Japan on April 22, 1980. A strong Rg was observed from the Bardwell, KY earthquake that places very strong constraints on depth and origin time. No Rg was observed for the Dillon, MT earthquake, but we used the minimum observed period of a Rayleigh wave (7 seconds) to reduce the depth and origin time uncertainty. Because the Japan
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied.
Simultaneous estimation of noise variance and number of peaks in Bayesian spectral deconvolution
Tokuda, Satoru; Okada, Masato
2016-01-01
Heuristic identification of peaks from noisy complex spectra often leads to misunderstanding physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multi-peak spectra into single peaks statistically and is constructed in two steps. The first step is estimating both noise variance and number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multi-peak models are nonlinear and hierarchical. Our framework enables escaping from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy. We discuss a simulation demonstrating how efficient our framework is and show that estimating both noise variance and number of peaks prevents overfitting, overpenalizing, and misun...
Bayesian semiparametric power spectral density estimation in gravitational wave data analysis
Edwards, Matthew C; Christensen, Nelson
2015-01-01
The standard noise model in gravitational wave (GW) data analysis assumes detector noise is stationary and Gaussian distributed, with a known power spectral density (PSD) that is usually estimated using clean off-source data. Real GW data often depart from these assumptions, and misspecified parametric models of the PSD could result in misleading inferences. We propose a Bayesian semiparametric approach to improve this. We use a nonparametric Bernstein polynomial prior on the PSD, with weights attained via a Dirichlet process distribution, and update this using the Whittle likelihood. Posterior samples are obtained using a Metropolis-within-Gibbs sampler. We simultaneously estimate the reconstruction parameters of a rotating core collapse supernova GW burst that has been embedded in simulated Advanced LIGO noise. We also discuss an approach to deal with non-stationary data by breaking longer data streams into smaller and locally stationary components.
Off-grid Direction of Arrival Estimation Using Sparse Bayesian Inference
Yang, Zai; Zhang, Cishen
2011-01-01
This paper is focused on solving the narrowband direction of arrival estimation problem from a sparse signal reconstruction perspective. Existing sparsity-based methods have shown advantages over conventional ones but exhibit limitations in practical situations where the true directions are not in the sampling grid. A so-called off-grid model is broached to reduce the modeling error caused by the off-grid directions. An iterative algorithm is proposed in this paper to solve the resulting problem from a Bayesian perspective while joint sparsity among different snapshots is exploited by assuming the same Laplace prior. Like existing sparsity-based methods, the new approach applies to arbitrary sensor array and exhibits increased resolution and improved robustness to noise and source correlation. Moreover, our approach results in more accurate direction of arrival estimation, e.g., smaller bias and lower mean squared error. High precision can be obtained with a coarse sampling grid and, meanwhile, computational ...
Bayesian Regularization in a Neural Network Model to Estimate Lines of Code Using Function Points
Directory of Open Access Journals (Sweden)
K. K. Aggarwal
2005-01-01
Full Text Available It is well known that at the beginning of any project, the software industry needs to know how much it will cost to develop the software and how much time will be required. This paper examines the potential of using a neural network model for estimating the lines of code, once the functional requirements are known. Using the International Software Benchmarking Standards Group (ISBSG) Repository Data (Release 9) for the experiment, this paper examines the performance of a back-propagation feed-forward neural network in estimating the Source Lines of Code. Multiple training algorithms are used in the experiments. Results demonstrate that the neural network models trained using Bayesian Regularization provide the best results and are suitable for this purpose.
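Bayesian regularization in this sense places a Gaussian prior on the network weights, which is equivalent to an L2 penalty on the training objective. A minimal MAP-estimate sketch with a linear model stands in for the neural network here; the function-point data below is fabricated for illustration, not drawn from the ISBSG repository:

```python
import numpy as np

# Toy data: function points -> source lines of code (entirely made up)
fp = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
sloc = np.array([5.2, 10.1, 15.3, 19.8, 25.1])   # KLOC

X = np.column_stack([np.ones_like(fp), fp])      # bias + function points
alpha = 1e-4                                     # prior precision on weights

# MAP estimate under a Gaussian weight prior = ridge regression:
# minimize ||sloc - X w||^2 + alpha ||w||^2
w = np.linalg.solve(X.T @ X + alpha * np.eye(2), X.T @ sloc)
pred = X @ w                                     # fitted KLOC per project
```

In a full Bayesian-regularization training scheme the prior precision `alpha` is itself re-estimated from the evidence rather than fixed by hand; here it is simply set small for illustration.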
A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield
Kazama, Yoriko; Kujirai, Toshihiro
2014-10-01
A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information all together) by reducing the occurrence of multicollinearity, is proposed. Experiments conducted on soybean yields in Brazil fields with a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. In the case of the SSD-HB model, the mean absolute error between estimated yield of the target area and actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.
Energy Technology Data Exchange (ETDEWEB)
Terwilliger, T. C.; Adams, P. D.; Read, R. J.; McCoy, A. J.; Moriarty, Nigel W.; Grosse-Kunstleve, R. W.; Afonine, P. V.; Zwart, P. H.; Hung, L.-W.
2009-03-01
Estimates of the quality of experimental maps are important in many stages of structure determination of macromolecules. Map quality is defined here as the correlation between a map and the map calculated based on a final refined model. Here we examine 10 different measures of experimental map quality using a set of 1359 maps calculated by reanalysis of 246 solved MAD, SAD, and MIR datasets. A simple Bayesian approach to estimation of map quality from one or more measures is presented. We find that a Bayesian estimator based on the skew of histograms of electron density is the most accurate of the 10 individual Bayesian estimators of map quality examined, with a correlation between estimated and actual map quality of 0.90. A combination of the skew of electron density with the local correlation of rms density gives a further improvement in estimating map quality, with an overall correlation coefficient of 0.92. The PHENIX AutoSol Wizard carries out automated structure solution based on any combination of SAD, MAD, SIR, or MIR datasets. The Wizard is based on tools from the PHENIX package and uses the Bayesian estimates of map quality described here to choose the highest-quality solutions after experimental phasing.
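The skew-of-density idea can be illustrated with a toy "map": a featureless Gaussian noise map has near-zero skew, while adding a few strong positive peaks, as in a well-phased electron density map, drives the skew positive. The synthetic maps below are illustrative only and have no connection to the PHENIX datasets:

```python
import numpy as np

def density_skew(rho):
    """Sample skewness of the map's density values: well-phased maps tend to
    show positive skew (a few strong peaks over a flat background)."""
    d = rho - rho.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

rng = np.random.default_rng(2)
noise_map = rng.normal(size=100_000)                        # pure-noise map
peaks = (rng.random(100_000) < 0.02) * rng.gamma(4.0, 2.0, size=100_000)
good_map = noise_map + peaks                                # noise plus sparse peaks
```

A Bayesian estimator of map quality can then be calibrated on the empirical relationship between this skew statistic and the map-model correlation, which is the approach the abstract describes.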
A Bayesian approach to estimate the biomass of anchovies off the coast of Perú.
Quiroz, Zaida C; Prates, Marcos O; Rue, Håvard
2015-03-01
The Northern Humboldt Current System (NHCS) is the world's most productive ecosystem in terms of fish. In particular, the Peruvian anchovy (Engraulis ringens) is the major prey of the main top predators, like seabirds, fish, humans, and other mammals. In this context, it is important to understand the dynamics of the anchovy distribution to preserve it as well as to exploit its economic capacities. Using the data collected by the "Instituto del Mar del Perú" (IMARPE) during a scientific survey in 2005, we present a statistical analysis that has as main goals: (i) to adapt to the characteristics of the sampled data, such as spatial dependence, high proportions of zeros and big size of samples; (ii) to provide important insights on the dynamics of the anchovy population; and (iii) to propose a model for estimation and prediction of anchovy biomass in the NHCS offshore from Perú. These data were analyzed in a Bayesian framework using the integrated nested Laplace approximation (INLA) method. Further, to select the best model and to study the predictive power of each model, we performed model comparisons and predictive checks, respectively. Finally, we carried out a Bayesian spatial influence diagnostic for the preferred model. © 2014, The International Biometric Society.
Cruz-Marcelo, Alejandro; Ensor, Katherine B.; Rosner, Gary L.
2011-01-01
The term structure of interest rates is used to price defaultable bonds and credit derivatives, as well as to infer the quality of bonds for risk management purposes. We introduce a model that jointly estimates term structures by means of a Bayesian hierarchical model with a prior probability model based on Dirichlet process mixtures. The modeling methodology borrows strength across term structures for purposes of estimation. The main advantage of our framework is its ability to produce reliable estimators at the company level even when there are only a few bonds per company. After describing the proposed model, we discuss an empirical application in which the term structure of 197 individual companies is estimated. The sample of 197 consists of 143 companies with only one or two bonds. In-sample and out-of-sample tests are used to quantify the improvement in accuracy that results from approximating the term structure of corporate bonds with estimators by company rather than by credit rating, the latter being a popular choice in the financial literature. A complete description of a Markov chain Monte Carlo (MCMC) scheme for the proposed model is available as Supplementary Material. PMID:21765566
Gui, Guan; Xu, Li; Shan, Lin; Adachi, Fumiyuki
2014-01-01
In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel leads to severe intersymbol interference (ISI) over data transmission. A broadband channel is often described by very few dominant channel taps, which can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example, the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without a report of posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing any computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns in the training matrix. Computer simulations show that the proposed method can improve the estimation performance when compared with conventional SCE methods.
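For contrast with the proposed Bayesian estimator, here is a minimal orthogonal matching pursuit sketch for recovering a few dominant taps from pilot observations. Dimensions, tap positions, and the noise level are invented for illustration:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A that best
    explain y, refitting by least squares on the chosen support each step."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(3)
n_pilot, n_taps = 64, 128
A = rng.normal(size=(n_pilot, n_taps)) / np.sqrt(n_pilot)   # training matrix
h = np.zeros(n_taps); h[[5, 30, 77]] = [1.0, -0.8, 0.5]     # 3 dominant taps
y = A @ h + 0.01 * rng.normal(size=n_pilot)                 # received pilots
h_hat = omp(A, y, k=3)
```

OMP returns only a point estimate of the taps; the abstract's point is precisely that a Bayesian formulation additionally reports the posterior uncertainty that this greedy scheme discards.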
Directory of Open Access Journals (Sweden)
Rens van de Schoot
2015-03-01
Full Text Available Background: The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods: First, we show how to specify prior distributions and, by means of a sensitivity analysis, we demonstrate how to check the exact influence of the prior (mis)specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results: Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size, the more the results rely on the prior specification. Conclusion: We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information in Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
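The effect of an informative prior on a small sample can be seen in the conjugate normal-normal case: the posterior mean is a precision-weighted compromise between prior and data, and the posterior spread shrinks. The data and prior values below are invented for illustration, not the burn-survivor data:

```python
import numpy as np

def normal_posterior(prior_mean, prior_sd, data, sigma):
    """Posterior for a normal mean with known sigma and a normal prior:
    a precision-weighted compromise between prior and sample mean."""
    n = len(data)
    w_prior, w_data = 1 / prior_sd**2, n / sigma**2
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * np.mean(data))
    return post_mean, np.sqrt(post_var)

# Hypothetical small sample of symptom scores
data = np.array([4.1, 5.0, 3.8, 4.6])
# Informative prior vs an essentially flat one
m_inf, s_inf = normal_posterior(prior_mean=4.5, prior_sd=0.5, data=data, sigma=1.0)
m_flat, s_flat = normal_posterior(prior_mean=4.5, prior_sd=100.0, data=data, sigma=1.0)
```

With only four observations, the informative prior visibly tightens the posterior, which is the gain (and, if the prior is misspecified, the risk) the abstract's sensitivity analysis is designed to expose.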
Rotondi, R.
2009-04-01
According to the unified scaling theory, the probability distribution function of the recurrence time T is a scaled version of a base function, and the average value of T can be used as a scale parameter for the distribution. The base function must belong to the scale family of distributions: tested on different catalogues and for different scale levels, for Corral (2005) the (truncated) generalized gamma distribution is the best model; for German (2006), the Weibull distribution. The scaling approach should overcome the difficulty of estimating distribution functions over small areas, but theoretical limitations and partial instability of the estimated distributions have been pointed out in the literature. Our aim is to analyze the recurrence time of strong earthquakes that occurred in the Italian territory. To satisfy the hypotheses of independence and identical distribution, we have evaluated the times between events that occurred in each area of the Database of Individual Seismogenic Sources and then gathered them into eight tectonically coherent regions, each dominated by a well characterized geodynamic process. To solve problems such as paucity of data, presence of outliers, and uncertainty in the choice of the functional expression for the distribution of T, we have followed a nonparametric approach (Rotondi, 2009) in which: (a) maximum flexibility is obtained by assuming that the probability distribution is a random function belonging to a large function space, distributed as a stochastic process; (b) the nonparametric estimation method is robust when the data contain outliers; (c) Bayesian methodology allows exploiting different information sources, so that the model may fit well even with scarce samples. We have compared the hazard rates evaluated through the parametric and nonparametric approaches. References Corral A. (2005). Mixing of rescaled data and Bayesian inference for earthquake recurrence times, Nonlin. Proces. Geophys., 12, 89
Parkes, Brandon; Demeritt, David
2016-09-01
This paper describes a Bayesian statistical model for estimating flood frequency by combining uncertain annual maximum (AMAX) data from a river gauge with estimates of flood peak discharge from various historic sources that predate the period of instrument records. Such historic flood records promise to expand the time series data needed for reducing the uncertainty in return period estimates for extreme events, but the heterogeneity and uncertainty of historic records make them difficult to use alongside Flood Estimation Handbook and other standard methods for generating flood frequency curves from gauge data. Using the flow of the River Eden in Carlisle, Cumbria, UK as a case study, this paper develops a Bayesian model for combining historic flood estimates since 1800 with gauge data since 1967 to estimate the probability of low frequency flood events for the area, taking account of uncertainty in the discharge estimates. Results show a reduction in 95% confidence intervals of roughly 50% for annual exceedance probabilities of less than 0.0133 (return periods over 75 years) compared to standard flood frequency estimation methods using solely systematic data. Sensitivity analysis shows the model is sensitive to two model parameters, both of which concern the historic (pre-systematic) period of the time series. This highlights the importance of adequate consideration of historic channel and floodplain changes or possible bias in estimates of historic flood discharges. The next steps required to roll out this Bayesian approach for operational flood frequency estimation at other sites are also discussed.
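The likelihood combination at the heart of such models can be sketched in simplified form: exact Gumbel likelihood terms for the gauged annual maxima plus a binomial term for historic threshold exceedances. All numbers below (flows, threshold, counts) are synthetic and illustrative, not the River Eden data, and with a flat prior the MAP estimate reduces to a maximum-likelihood fit.

```python
import numpy as np
from scipy.stats import gumbel_r, binom
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical setup: 50 years of gauged annual maxima (m^3/s) drawn from
# a known Gumbel distribution ...
loc_true, scale_true = 900.0, 150.0
amax = gumbel_r.rvs(loc_true, scale_true, size=50, random_state=rng)

# ... plus a historic period: in 150 pre-gauge years, 3 floods are known
# to have exceeded a perception threshold of 1500 m^3/s
h_years, h_exceed, threshold = 150, 3, 1500.0

def neg_log_post(theta):
    loc, scale = theta
    if scale <= 0:
        return np.inf
    ll = gumbel_r.logpdf(amax, loc, scale).sum()    # systematic record
    p_exc = gumbel_r.sf(threshold, loc, scale)      # annual exceedance prob.
    ll += binom.logpmf(h_exceed, h_years, p_exc)    # censored historic info
    return -ll                                      # flat prior -> MAP = MLE

fit = minimize(neg_log_post, x0=[np.mean(amax), np.std(amax)],
               method="Nelder-Mead")
loc_hat, scale_hat = fit.x
q100 = gumbel_r.isf(0.01, loc_hat, scale_hat)       # 100-year flood estimate
```

A full Bayesian treatment would sample the posterior (e.g. by MCMC) rather than maximize it, which is where the narrower credible intervals for rare events come from.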
Shaweno, Debebe; Trauer, James M; Denholm, Justin T; McBryde, Emma S
2017-10-02
Reported tuberculosis (TB) incidence globally continues to be heavily influenced by expert opinion of case detection rates and ecological estimates of disease duration. Both approaches are recognised as having substantial variability and inaccuracy, leading to uncertainty in true TB incidence and other such derived statistics. We developed Bayesian binomial mixture geospatial models to estimate TB incidence and case detection rate (CDR) in Ethiopia. In these models the underlying true incidence was formulated as a partially observed Markovian process following a mixed Poisson distribution and the detected (observed) TB cases as a binomial distribution, conditional on CDR and true incidence. The models use notification data from multiple areas over several years and account for the existence of undetected TB cases and variability in true underlying incidence and CDR. The deviance information criterion (DIC) was used to select the best-performing model. A geospatial model was the best fitting approach. This model estimated that TB incidence in Sheka Zone increased from 198 (95% Credible Interval (CrI) 187, 233) per 100,000 population in 2010 to 232 (95% CrI 212, 253) per 100,000 population in 2014. The model revealed a wide discrepancy between the estimated incidence rate and notification rate, with the estimated incidence ranging from 1.4 (in 2014) to 1.7 (in 2010) times the notification rate (CDR of 71% and 60% respectively). Population density and TB incidence in neighbouring locations (spatial lag) predicted the underlying TB incidence, while health facility availability predicted higher CDR. Our model estimated trends in underlying TB incidence while accounting for undetected cases and revealed significant discrepancies between incidence and notification rates in rural Ethiopia. This provides an alternative approach to estimating incidence, entirely independent of the methods underlying current estimates, and is feasible to perform from routinely collected
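The core observation model described above (detected cases binomial, conditional on the CDR and a latent Poisson-distributed true count) can be illustrated with a small grid posterior in the spirit of an N-mixture model with replicate counts. The counts, grids and site structure below are simulated placeholders, not the Ethiopian notification data, and the geospatial components are omitted.

```python
import numpy as np
from scipy.stats import binom, poisson
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Simulated replicate counts: latent true cases n_s ~ Poisson(lam_true),
# detected counts y_st ~ Binomial(n_s, p_true); all numbers hypothetical
lam_true, p_true, n_sites, n_reps = 150, 0.6, 30, 4
n_latent = rng.poisson(lam_true, n_sites)
y = rng.binomial(n_latent[:, None], p_true, (n_sites, n_reps))

# Grid posterior (flat priors) over mean incidence lam and detection rate p
lam_grid = np.linspace(100, 220, 25)
p_grid = np.linspace(0.35, 0.9, 25)
n_vals = np.arange(351)                                   # latent-count support

log_pois = poisson.logpmf(n_vals[None, :], lam_grid[:, None])   # (lam, n)
log_post = np.zeros((lam_grid.size, p_grid.size))
for s in range(n_sites):
    # log P(y_s | n, p), summed over replicates, for each (p, n)
    lp = binom.logpmf(y[s][None, None, :], n_vals[None, :, None],
                      p_grid[:, None, None]).sum(axis=2)        # (p, n)
    # marginalize the latent true count n
    log_post += logsumexp(log_pois[:, None, :] + lp[None, :, :], axis=2)

i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
lam_hat, p_hat = lam_grid[i], p_grid[j]
```

Replicated counts of the same latent quantity are what separate incidence from detection; with a single count per site only the product of the two is identified.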
Cameriere, R; Pacifici, A; Pacifici, L; Polimeni, A; Federici, F; Cingolani, M; Ferrante, L
2016-01-01
Age estimation from teeth by radiological analysis, in both children and adolescents, has wide applications in several scientific and forensic fields. In 2006, Cameriere et al. proposed a regression method to estimate chronological age in children, according to measurements of open apices of permanent teeth. Although several regression models are used to analyze the relationship between age and dental development, one serious limitation is the unavoidable bias in age estimation when regression models are used. The aim of this paper is to develop a full Bayesian calibration method for age estimation in children according to the sum of open apices, S, of the seven left permanent mandibular teeth. This cross-sectional study included 2630 orthopantomographs (OPGs) from healthy living Italian subjects, aged between 4 and 17 years and with no obvious developmental abnormalities. All radiographs were in digital format and were processed by the ImageJ computer-aided drawing program. The distance between the inner sides of the open apex was measured for each tooth. Dental maturity was then evaluated according to the sum of normalized open apices (S). Intra- and inter-observer agreement was satisfactory, as assessed by the intra-class correlation coefficient of S on 50 randomly selected OPGs. Mean absolute errors were 0.72 years (standard deviation 0.60) and 0.73 years (standard deviation 0.61) in boys and girls, respectively. The mean interquartile range (MIQR) of the calibrating distribution was 1.37 years (standard deviation 0.46) and 1.51 years (standard deviation 0.52) in boys and girls, respectively. Estimate bias was βERR=-0.005 and 0.003 for boys and girls, corresponding to a bias of a few days for all individuals in the sample. Neither of the βERR values was significantly different from 0 (p>0.682). In conclusion, the Bayesian calibration method overcomes problems of bias in age estimation when regression models are used, and appears to be suitable for assessing both
A Bayesian approach to age estimation in modern Americans from the clavicle.
Langley-Shirley, Natalie; Jantz, Richard L
2010-05-01
Clavicles from 1289 individuals from cohorts spanning the 20th century were scored with two scoring systems. Transition analysis and Bayesian statistics were used to obtain robust age ranges that are less sensitive to the effects of age mimicry and developmental outliers than age ranges obtained using a percentile approach. Observer error tests showed that a simple three-phase scoring system proved the least subjective, while retaining accuracy levels. Additionally, significant sexual dimorphism was detected in the onset of fusion, with women commencing fusion at least a year earlier than men (women transition to fusion at approximately 15 years of age and men at 16 years). Significant secular trends were apparent in the onset of skeletal maturation, with modern Americans transitioning to fusion approximately 4 years earlier than early 20th century Americans and 3.5 years earlier than Korean War era Americans. These results underscore the importance of using modern standards to estimate age in modern individuals.
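Transition analysis of the kind used above models P(fused | age) with a cumulative probit and inverts it with Bayes' rule to get a posterior age range for an observed fusion stage. The transition age and spread below are illustrative guesses, not the paper's clavicle estimates, and the informative age-at-death prior is replaced by a flat one.

```python
import numpy as np
from scipy.stats import norm

# Transition-analysis sketch: P(fused | age) follows a cumulative probit.
# mu is the mean transition age, sigma its spread (illustrative values).
mu, sigma = 21.0, 2.5

ages = np.linspace(10, 40, 601)
da = ages[1] - ages[0]
prior = np.ones_like(ages)                  # flat prior on 10-40 years
p_fused = norm.cdf((ages - mu) / sigma)     # P(stage = fused | age)

def age_posterior(fused):
    like = p_fused if fused else 1.0 - p_fused
    post = like * prior
    return post / (post.sum() * da)         # normalize on the grid

post = age_posterior(fused=False)           # observation: not yet fused
cdf = np.cumsum(post) * da
lo = ages[np.searchsorted(cdf, 0.025)]
hi = ages[np.searchsorted(cdf, 0.975)]      # 95% posterior age interval
```

Because the likelihood, not the reference-sample age structure, drives the interval, estimates built this way are less distorted by age mimicry than percentile ranges.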
Convergent Bayesian formulations of blind source separation and electromagnetic source estimation
Knuth, Kevin H
2015-01-01
We consider two areas of research that have been developing in parallel over the last decade: blind source separation (BSS) and electromagnetic source estimation (ESE). BSS deals with the recovery of source signals when only mixtures of signals can be obtained from an array of detectors and the only prior knowledge consists of some information about the nature of the source signals. On the other hand, ESE utilizes knowledge of the electromagnetic forward problem to assign source signals to their respective generators, while information about the signals themselves is typically ignored. We demonstrate that these two techniques can be derived from the same starting point using the Bayesian formalism. This suggests a means by which new algorithms can be developed that utilize as much relevant information as possible. We also briefly mention some preliminary work that supports the value of integrating information used by these two techniques and review the kinds of information that may be useful in addressing the...
DEFF Research Database (Denmark)
2010-01-01
Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context...... of multiple genetic markers measured in multiple studies, based on the analysis of individual participant data. First, for a single genetic marker in one study, we show that the usual ratio of coefficients approach can be reformulated as a regression with heterogeneous error in the explanatory variable....... This can be implemented using a Bayesian approach, which is next extended to include multiple genetic markers. We then propose a hierarchical model for undertaking a meta-analysis of multiple studies, in which it is not necessary that the same genetic markers are measured in each study. This provides...
Bayesian joint estimation of non-Gaussianity and the power spectrum
Rocha, Graca; Magueijo, Joao; Hobson, Mike; Lasenby, Anthony
2001-01-01
We propose a rigorous, non-perturbative, Bayesian framework which enables one to jointly test Gaussianity and estimate the power spectrum of CMB anisotropies. It makes use of the Hilbert space of a harmonic oscillator to set up an exact likelihood function, dependent on the power spectrum and on a set of parameters $\alpha_i$, which are zero for Gaussian processes. The latter can be expressed as series of cumulants; indeed, they perturbatively reduce to cumulants. However they have the advantage that their variation is essentially unconstrained. Any truncation (i.e., a finite set of $\alpha_i$) therefore still produces a proper distribution - something which cannot be said of the only other such tool on offer, the Edgeworth expansion. We apply our method to Very Small Array (VSA) simulations based on signal Gaussianity, showing that our algorithm is indeed unbiased.
Directory of Open Access Journals (Sweden)
Auvinen Petri
2008-01-01
Full Text Available We propose a method for improving the quality of signal from DNA microarrays by using several scans at varying scanner sensitivities. A Bayesian latent intensity model is introduced for the analysis of such data. The method improves the accuracy with which expressions can be measured in all ranges and extends the dynamic range of measured gene expression at the high end. Our method is generic and can be applied to data from any organism, for imaging with any scanner that allows varying the laser power, and for extraction with any image analysis software. Results from a self-self hybridization data set illustrate an improved precision in the estimation of the expression of genes compared to what can be achieved by applying standard methods and using only a single scan.
Prince, Debra A; Konigsberg, Lyle W
2008-05-01
The present study analyzed apical translucency and periodontal recession on single-rooted teeth in order to generate age-at-death estimations using two inverse calibration methods and one Bayesian method. The three age estimates were compared to highlight inherent problems with the inverse calibration methods. The results showed that the Bayesian analysis reduced severity of several problems associated with adult skeletal age-at-death estimations. The Bayesian estimates produced a lower overall mean error, a higher correlation with actual age, reduced aging bias, reduced age mimicry, and reduced the age ranges associated with the most probable age as compared to the inverse calibration methods for this sample. This research concluded that periodontal recession cannot be used as a univariate age indicator, due to its low correlation with chronological age. Apical translucency yielded a high correlation with chronological age and was concluded to be an important age indicator. The Bayesian approach offered the most appropriate statistical analysis for the estimation of age-at-death with the current sample.
Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William
2014-03-01
The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC0-∞ and of any AUC0-∞-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC0-∞ and the tissue-to-plasma AUC0-∞ ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of AUC0-∞ and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC0-∞-based parameters such as the partition coefficient and drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
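The AUC0-∞ quantity being estimated is the classic NCA target: the area under the concentration-time curve to the last sample plus a terminal extrapolation C_last/λz. A deterministic sketch on synthetic mono-exponential data (not the mouse-brain study) shows the pieces the Bayesian version treats probabilistically:

```python
import numpy as np

# Synthetic mono-exponential concentration-time profile (true AUC0-inf = 50)
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24])       # h
C = 10.0 * np.exp(-0.2 * t)                        # mg/L

# log-trapezoidal rule over the declining profile (exact for exponentials)
auc_last = np.sum(np.diff(t) * (C[:-1] - C[1:]) / np.log(C[:-1] / C[1:]))

# terminal slope lambda_z from a log-linear fit to the last three samples
lam_z = -np.polyfit(t[-3:], np.log(C[-3:]), 1)[0]

# AUC0-inf = AUC to the last sample + extrapolated tail C_last / lambda_z
auc_inf = auc_last + C[-1] / lam_z
print(round(auc_inf, 2))   # -> 50.0
```

The Bayesian version places a distribution on the concentrations (and hence on AUC0-∞), which is what makes inference on ratios of AUCs from destructively sampled animals feasible.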
Lennox, Kristin P; Dahl, David B; Vannucci, Marina; Tsai, Jerry W
2009-06-01
Interest in predicting protein backbone conformational angles has prompted the development of modeling and inference procedures for bivariate angular distributions. We present a Bayesian approach to density estimation for bivariate angular data that uses a Dirichlet process mixture model and a bivariate von Mises distribution. We derive the necessary full conditional distributions to fit the model, as well as the details for sampling from the posterior predictive distribution. We show how our density estimation method makes it possible to improve current approaches for protein structure prediction by comparing the performance of the so-called "whole" and "half" position distributions. Current methods in the field are based on whole position distributions, as density estimation for the half positions requires techniques, such as ours, that can provide good estimates for small datasets. With our method we are able to demonstrate that half position data provides a better approximation for the distribution of conformational angles at a given sequence position, therefore providing increased efficiency and accuracy in structure prediction.
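A single mixture component of the kind described above is a bivariate von Mises density on the torus. The sketch below evaluates the (sine-variant) unnormalized density on a grid and normalizes it numerically; the concentration parameters are illustrative, and a Dirichlet process mixture estimate would be a weighted sum of such components.

```python
import numpy as np

# Sine-variant bivariate von Mises log-density (unnormalized) for a pair
# of backbone dihedral angles (phi, psi); parameter values are illustrative
def bvm_log(phi, psi, mu=0.0, nu=0.0, k1=2.0, k2=2.0, lam=0.5):
    return (k1 * np.cos(phi - mu) + k2 * np.cos(psi - nu)
            + lam * np.sin(phi - mu) * np.sin(psi - nu))

# normalize numerically on a grid over the torus [-pi, pi)^2
g = np.linspace(-np.pi, np.pi, 200, endpoint=False)
P, Q = np.meshgrid(g, g)
dens = np.exp(bvm_log(P, Q))
dens /= dens.sum() * (g[1] - g[0]) ** 2

# a DP mixture density estimate is a weighted sum of such components,
# with weights and parameters sampled from the posterior
```

The interaction term lam couples the two angles, which a product of two univariate von Mises densities cannot capture.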
Hawes, Matthew; Mihaylova, Lyudmila; Septier, Francois; Godsill, Simon
2017-03-01
The problem of estimating the dynamic direction of arrival of far field signals impinging on a uniform linear array, with mutual coupling effects, is addressed. This work proposes two novel approaches able to provide accurate solutions, including at the endfire regions of the array. Firstly, a Bayesian compressive sensing Kalman filter is developed, which accounts for the predicted estimated signals rather than using the traditional sparse prior. The posterior probability density function of the received source signals and the expression for the related marginal likelihood function are derived theoretically. Next, a Gibbs sampling based approach with indicator variables in the sparsity prior is developed. This allows sparsity to be explicitly enforced in different ways, including when an angle is too far from the previous estimate. The proposed approaches are validated and evaluated over different test scenarios and compared to the traditional relevance vector machine based method. An improved accuracy in terms of average root mean square error values is achieved (up to 73.39% for the modified relevance vector machine based approach and 86.36% for the Gibbs sampling based approach). The proposed approaches prove to be particularly useful for direction of arrival estimation when the angle of arrival moves into the endfire region of the array.
Bayesian Mass Estimates of the Milky Way II: The dark and light sides of parameter assumptions
Eadie, Gwendolyn M
2016-01-01
We present mass and mass profile estimates for the Milky Way Galaxy using the Bayesian analysis developed by Eadie et al. (2015b) and using globular clusters (GCs) as tracers of the Galactic potential. The dark matter and GCs are assumed to follow different spatial distributions; we assume power-law model profiles and use the model distribution functions described in Evans et al. (1997); Deason et al. (2011, 2012a). We explore the relationships between assumptions about model parameters and how these assumptions affect mass profile estimates. We also explore how using subsamples of the GC population beyond certain radii affects mass estimates. After exploring the posterior distributions of different parameter assumption scenarios, we conclude that a conservative estimate of the Galaxy's mass within 125 kpc is $5.22\times10^{11} M_{\odot}$, with a $50\%$ probability region of $(4.79, 5.63) \times10^{11} M_{\odot}$. Extrapolating out to the virial radius, we obtain a virial mass for the Milky Way of $6.82\times10^{...
Sánchez Gil, M. Carmen; Berihuete, Angel; Alfaro, Emilio J.; Pérez, Enrique; Sarro, Luis M.
2015-09-01
One of the fundamental goals of modern astronomy is to estimate the physical parameters of galaxies from images in different spectral bands. We present a hierarchical Bayesian model for obtaining age maps from images in the Hα line (taken with the Taurus Tunable Filter, TTF), the ultraviolet band (far UV or FUV, from GALEX) and the infrared bands (24, 70 and 160 microns (μm), from Spitzer). As shown in [1], we present the burst ages for young stellar populations in the nearby and nearly face-on galaxy M74. As shown in that work, the Hα to FUV flux ratio gives a good relative indicator of very recent star formation history (SFH). As a nascent star-forming region evolves, the Hα line emission declines earlier than the UV continuum, leading to a decrease in the Hα/FUV ratio. Through a specific star-forming galaxy model (Starburst 99, SB99), we can obtain the corresponding theoretical Hα/FUV ratio to compare with our observed flux ratios, and thus estimate the ages of the observed regions. Due to the nature of the problem, it is necessary to propose a model of high complexity to take into account the uncertainties involved and the interrelationship between parameters when the Hα/FUV flux ratio mentioned above is obtained. To address the complexity of the model, we propose a Bayesian hierarchical model, where a joint probability distribution is defined to determine the parameters (age, metallicity, IMF) from the observed data, in this case the observed Hα/FUV flux ratios. The joint distribution of the parameters is described through an i.i.d. (independent and identically distributed) sample generated through MCMC (Markov chain Monte Carlo) techniques.
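Inverting a monotone model ratio-age relation, with measurement error propagated by Monte Carlo, can be sketched as follows. The calibration table is made up for illustration and is not an SB99 output, and the full hierarchical treatment of metallicity and IMF is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration curve: Halpha/FUV flux ratio vs burst age.
# Values are invented for illustration, not an SB99 output.
age_myr = np.array([1, 2, 3, 4, 5, 6, 7, 8])
model_ratio = np.array([3.0, 2.2, 1.5, 1.0, 0.6, 0.35, 0.2, 0.1])

def age_from_ratio(r):
    # np.interp needs increasing x, so feed it the flipped (ratio, age) table
    return np.interp(r, model_ratio[::-1], age_myr[::-1])

# propagate the flux-ratio measurement error by Monte Carlo
obs_ratio, obs_err = 1.0, 0.15
samples = age_from_ratio(rng.normal(obs_ratio, obs_err, 10_000))
age_est, age_sd = samples.mean(), samples.std()
```

Pushing the noisy ratio through the inverse curve gives a distribution of ages rather than a single value, which is the basic idea the hierarchical model formalizes.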
Roy, Vivekananda; Evangelou, Evangelos; Zhu, Zhengyuan
2016-03-01
Spatial generalized linear mixed models (SGLMMs) are popular models for spatial data with a non-Gaussian response. Binomial SGLMMs with logit or probit link functions are often used to model spatially dependent binomial random variables. It is known that for independent binomial data, the robit regression model provides a more robust (against extreme observations) alternative to the more popular logistic and probit models. In this article, we introduce a Bayesian spatial robit model for spatially dependent binomial data. Since constructing a meaningful prior on the link function parameter as well as the spatial correlation parameters in SGLMMs is difficult, we propose an empirical Bayes (EB) approach for the estimation of these parameters as well as for the prediction of the random effects. The EB methodology is implemented by efficient importance sampling methods based on Markov chain Monte Carlo (MCMC) algorithms. Our simulation study shows that the robit model is robust against model misspecification, and our EB method results in estimates with less bias than full Bayesian (FB) analysis. The methodology is applied to a Celastrus orbiculatus dataset and a Rhizoctonia root disease dataset. For the former, which is known to contain outlying observations, the robit model is shown to do better for predicting the spatial distribution of an invasive species. For the latter, our approach performs as well as the classical models for predicting the disease severity for a root disease, as the probit link is shown to be appropriate. Although this article treats binomial SGLMMs for brevity, the EB methodology is more general and can be applied to other types of SGLMMs. In the accompanying R package geoBayes, implementations for other SGLMMs such as Poisson and Gamma SGLMMs are provided.
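The robit link mentioned above is simply a Student-t CDF in place of the probit's normal CDF; its heavier tails damp the influence of extreme observations. A minimal sketch (ν = 7 is an arbitrary illustrative choice, not a value from the paper):

```python
import numpy as np
from scipy.stats import t, norm
from scipy.special import expit

# Robit link: a Student-t CDF with nu degrees of freedom in place of the
# probit's normal CDF; nu is the extra link-function parameter
def robit(eta, nu=7):
    return t.cdf(eta, df=nu)

eta = np.linspace(-6, 6, 13)
p_robit = robit(eta)
p_probit = norm.cdf(eta)
p_logit = expit(eta)

# heavier tails: extreme linear predictors map to less extreme
# probabilities, which is what makes the fit robust to outliers
print(p_robit[0] > p_probit[0])   # -> True
```

As ν grows the robit approaches the probit, so the link parameter interpolates between robust and classical fits.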
Carreno, Joseph J; Lomaestro, Ben; Tietjan, John; Lodise, Thomas P
2017-03-13
This study evaluated the predictive performance of a Bayesian PK estimation method (ADAPT V) for estimating the 24-hour vancomycin area under the curve (AUC) with limited PK sampling in adult obese patients receiving vancomycin for suspected or confirmed Gram-positive infections. This was an IRB-approved prospective evaluation of 12 patients. Patients had a median (95% CI) age of 61 years (39 - 71), creatinine clearance of 86 mL/min (75 - 120), and body mass index of 45 kg/m(2) (40 - 52). For each patient, five PK concentrations were measured and 4 different vancomycin population PK models were used as Bayesian priors to estimate the vancomycin AUC (AUCFULL). Using each PK model as a prior, data-depleted PK subsets were used to estimate the 24-hour AUC (i.e., peak and trough data [AUCPT], midpoint and trough data [AUCMT], and trough-only data [AUCT]). The 24-hour AUC derived from the full data set (AUCFULL) was compared to the AUC derived from the data-depleted subsets (AUCPT, AUCMT, AUCT) for each model. For the 4 sets of analyses, AUCFULL estimates ranged from 437 to 489 mg-h/L. The AUCPT provided the best approximation of AUCFULL; AUCMT and AUCT tended to overestimate AUCFULL. Further prospective studies are needed to evaluate the impact of AUC monitoring in clinical practice, but the findings from this study suggest the vancomycin AUC can be estimated with good precision and accuracy with limited PK sampling using Bayesian PK estimation software.
Estimations of Pareto-eigenvalues for Higher-order Tensors
Institute of Scientific and Technical Information of China (English)
徐凤; 凌晨
2015-01-01
Eigenvalue complementarity problems of tensors have many practical applications and are closely connected with a class of higher-order homogeneous polynomial optimization problems, which are NP-hard. In this paper, some estimation properties of Pareto-eigenvalues of higher-order tensors are studied. We also prove that all the Pareto-eigenvalues of a symmetric strong M-tensor or a monotone tensor are positive.
Bayesian estimation of false-negative rate in a clinical trial of sentinel node biopsy.
Newcombe, Robert G
2007-08-15
Estimating the false-negative rate is a major issue in evaluating sentinel node biopsy (SNB) for staging cancer. In a large multicentre trial of SNB for intra-operative staging of clinically node-negative breast cancer, two sources of information on the false-negative rate are available. Direct information is available from a preliminary validation phase: all patients underwent SNB followed by axillary nodal clearance or sampling. Of 803 patients with successful sentinel node localization, 19 (2.4 per cent) were classed as false negatives. Indirect information is also available from the randomized phase. Ninety-seven (25.4 per cent) of 382 control patients undergoing axillary clearance had positive axillae. In the experimental group, 94/366 (25.7 per cent) were apparently node positive. Taking a simple difference of these proportions gives a point estimate of -0.3 per cent for the proportion of patients who had positive axillae but were missed by SNB. This estimate is clearly inadmissible. In this situation, a Bayesian analysis yields interpretable point and interval estimates. We consider the single proportion estimate from the validation phase; the difference between independent proportions from the randomized phase, both unconstrained and constrained to non-negativity; and combined information from the two parts of the study. As well as tail-based and highest posterior density interval estimates, we examine three obvious point estimates, the posterior mean, median and mode. Posterior means and medians are similar for the validation and randomized phases separately and combined, all between 2 and 3 per cent, indicating similarity rather than conflict between the two data sources.
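The constrained difference of proportions from the randomized phase can be sketched with posterior simulation: under uniform Beta(1, 1) priors each arm's proportion has a Beta posterior, and the non-negativity constraint corresponds to keeping only draws with a non-negative difference. This is a simplified sketch, not the paper's exact analysis.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)

# Randomized-phase counts from the abstract; Beta(1,1) priors give Beta
# posteriors Beta(x + 1, n - x + 1) for each arm's node-positive proportion
n_ctrl, x_ctrl = 382, 97     # axillary clearance arm
n_snb, x_snb = 366, 94       # sentinel node biopsy arm
m = 100_000

p_ctrl = beta.rvs(x_ctrl + 1, n_ctrl - x_ctrl + 1, size=m, random_state=rng)
p_snb = beta.rvs(x_snb + 1, n_snb - x_snb + 1, size=m, random_state=rng)
diff = p_ctrl - p_snb                    # proportion missed by SNB

# constrained version: truncate the posterior to non-negative differences
kept = diff[diff >= 0]
med_unc, med_con = np.median(diff), np.median(kept)
```

Unlike the raw -0.3 per cent point estimate, the constrained posterior is supported on admissible values, so its median and intervals are directly interpretable.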
Grecu, Mircea; Olson, William S.
2006-01-01
Precipitation estimation from satellite passive microwave radiometer observations is a problem that does not have a unique solution that is insensitive to errors in the input data. Traditionally, to make this problem well posed, a priori information derived from physical models or independent, high-quality observations is incorporated into the solution. In the present study, a database of precipitation profiles and associated brightness temperatures is constructed to serve as a priori information in a passive microwave radiometer algorithm. The precipitation profiles are derived from a Tropical Rainfall Measuring Mission (TRMM) combined radar radiometer algorithm, and the brightness temperatures are TRMM Microwave Imager (TMI) observed. Because the observed brightness temperatures are consistent with those derived from a radiative transfer model embedded in the combined algorithm, the precipitation brightness temperature database is considered to be physically consistent. The database examined here is derived from the analysis of a month-long record of TRMM data that yields more than a million profiles of precipitation and associated brightness temperatures. These profiles are clustered into a tractable number of classes based on the local sea surface temperature, a radiometer-based estimate of the echo-top height (the height beyond which the reflectivity drops below 17 dBZ), and brightness temperature principal components. For each class, the mean precipitation profile, brightness temperature principal components, and probability of occurrence are determined. The precipitation brightness temperature database supports a radiometer-only algorithm that incorporates a Bayesian estimation methodology. In the Bayesian framework, precipitation estimates are weighted averages of the mean precipitation values corresponding to the classes in the database, with the weights being determined according to the similarity between the observed brightness temperature principal
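The Bayesian weighting step described above can be sketched as follows: the retrieval is a weighted average of class-mean precipitation values, with weights combining each class's occurrence probability and a Gaussian match between observed and class-mean brightness temperatures. All database values below are synthetic placeholders, not TRMM data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for the a priori database: class-mean brightness
# temperatures (Tb), class-mean rain rates and class occurrence priors
n_classes, n_channels = 20, 5
db_tb = rng.normal(250, 15, (n_classes, n_channels))   # K
db_rain = rng.gamma(2.0, 2.0, n_classes)               # mm/h
prior = np.full(n_classes, 1.0 / n_classes)
sigma = 5.0                                            # assumed Tb error (K)

def retrieve(tb_obs):
    # posterior class weights: prior x Gaussian likelihood of observed Tb
    log_w = (np.log(prior)
             - 0.5 * np.sum((tb_obs - db_tb) ** 2, axis=1) / sigma**2)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return np.sum(w * db_rain)             # Bayesian weighted-mean estimate

# an observation close to class 3 should retrieve ~ that class's rain rate
est = retrieve(db_tb[3] + rng.normal(0, 2, n_channels))
```

Subtracting the maximum before exponentiating keeps the weight computation numerically stable even when class mismatches are large.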
Naganathan, Athi N; Perez-Jimenez, Raul; Muñoz, Victor; Sanchez-Ruiz, Jose M
2011-10-14
The realization that folding free energy barriers can be small enough to result in significant population of the species at the barrier top has spawned several methods to estimate folding barriers from equilibrium experiments. Some of these approaches are based on fitting the experimental thermogram measured by differential scanning calorimetry (DSC) to a one-dimensional representation of the folding free-energy surface (FES). Different physical models have been used to represent the FES: (1) a Landau quartic polynomial as a function of the total enthalpy, which acts as an order parameter; (2) the projection onto a structural order parameter (i.e., the number of native residues or native contacts) of the free energy of all the conformations generated by Ising-like statistical mechanical models; and (3) mean-field models that define conformational entropy and stabilization energy as functions of a continuous local order parameter. The fundamental question that emerges is how we can obtain robust, model-independent estimates of the thermodynamic folding barrier from the analysis of DSC experiments. Here we address this issue by comparing the performance of various FES models in interpreting the thermogram of a protein with a marginal folding barrier. We chose the small α-helical protein PDD, which folds and unfolds in microseconds, crossing a free energy barrier previously estimated at ~1 RT. The fits of the PDD thermogram to the various models and assumptions produce FES with a consistently small free energy barrier separating the folded and unfolded ensembles. However, the fits vary in quality as well as in the estimated barrier. Applying a Bayesian probabilistic analysis, we rank the fit performance using a statistically rigorous criterion that leads to a global estimate of the folding barrier and its precision, which for PDD is 1.3 ± 0.4 kJ mol(-1). This result confirms that PDD folds over a minor barrier consistent with the downhill folding regime. We have further
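One common approximation to this kind of Bayesian ranking of competing fits is BIC-based model weighting (the paper itself uses a fuller probabilistic analysis). With hypothetical fit results, not the PDD numbers:

```python
import numpy as np

# Hypothetical results for three FES models fitted to one thermogram:
# (residual sum of squares, number of parameters, fitted barrier, kJ/mol)
fits = [(0.80, 4, 1.1), (0.78, 6, 1.6), (0.85, 3, 1.0)]
n = 120                                   # number of DSC data points

# BIC for Gaussian residuals, up to an additive constant shared by all fits
bics = np.array([n * np.log(rss / n) + k * np.log(n) for rss, k, _ in fits])
w = np.exp(-0.5 * (bics - bics.min()))    # relative model weights
w /= w.sum()

# model-averaged (weighted) barrier estimate
barrier = float(np.sum(w * np.array([b for _, _, b in fits])))
```

Weighting by fit quality penalized for parameter count is what lets a single global barrier estimate emerge from fits that individually disagree.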
Gugushvili, S.; Spreij, P.
2016-01-01
We consider the problem of non-parametric estimation of the deterministic dispersion coefficient of a linear stochastic differential equation based on discrete time observations on its solution. We take a Bayesian approach to the problem and under suitable regularity assumptions derive the posterior
Bayesian Estimation of Fugitive Methane Point Source Emission Rates from a Single Downwind High-Frequency Gas Sensor
With the tremendous advances in onshore oil and gas exploration and production (E&P) capability comes the realization that new tools are needed to support env...
Bayesian Mass Estimates of the Milky Way: The Dark and Light Sides of Parameter Assumptions
Eadie, Gwendolyn M.; Harris, William E.
2016-10-01
We present mass and mass profile estimates for the Milky Way (MW) Galaxy using the Bayesian analysis developed by Eadie et al. and using globular clusters (GCs) as tracers of the Galactic potential. The dark matter and GCs are assumed to follow different spatial distributions; we assume power-law model profiles and use the model distribution functions described in Evans et al. and Deason et al. We explore the relationships between assumptions about model parameters and how these assumptions affect mass profile estimates. We also explore how using subsamples of the GC population beyond certain radii affects mass estimates. After exploring the posterior distributions of different parameter assumption scenarios, we conclude that a conservative estimate of the Galaxy's mass within 125 kpc is $5.22\times10^{11} M_{\odot}$, with a 50% probability region of $(4.79, 5.63)\times10^{11} M_{\odot}$. Extrapolating out to the virial radius, we obtain a virial mass for the MW of $6.82\times10^{11} M_{\odot}$ with 50% credible region of $(6.06, 7.53)\times10^{11} M_{\odot}$ ($r_{vir}=185^{+7}_{-7}$ kpc). If we consider only the GCs beyond 10 kpc, then the virial mass is $9.02\,(5.69, 10.86)\times10^{11} M_{\odot}$ ($r_{vir}=198^{+19}_{-24}$ kpc). We also arrive at an estimate of the velocity anisotropy parameter $\beta$ of the GC population, which is $\beta=0.28$ with a 50% credible region of $(0.21, 0.35)$. Interestingly, the mass estimates are sensitive to both the dark matter halo potential and visible matter tracer parameters, but are not very sensitive to the anisotropy parameter.
Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming
2011-06-01
Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.
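A stripped-down version of combining a landuse-regression trend with geostatistical dependence is regression kriging: fit the trend by least squares, then krige the residuals. The sketch below uses synthetic data and an exponential covariance, and omits the BME machinery for soft knowledge bases.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic monitoring network: coordinates, a landuse covariate (traffic)
# and PM2.5 = linear trend + spatially correlated residual
n = 60
xy = rng.uniform(0, 10, (n, 2))                    # km
traffic = rng.uniform(0, 1, n)

def cov(d, sill=4.0, corr_len=3.0):
    return sill * np.exp(-d / corr_len)            # exponential covariance

d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
L = np.linalg.cholesky(cov(d) + 1e-8 * np.eye(n))
pm = 10.0 + 8.0 * traffic + L @ rng.normal(size=n)

# step 1: landuse-regression trend by least squares
X = np.column_stack([np.ones(n), traffic])
coef, *_ = np.linalg.lstsq(X, pm, rcond=None)
resid = pm - X @ coef

# step 2: simple kriging of the residuals at an unmonitored location
target, target_traffic = np.array([5.0, 5.0]), 0.5
d0 = np.linalg.norm(xy - target, axis=1)
weights = np.linalg.solve(cov(d) + 1e-8 * np.eye(n), cov(d0))
estimate = coef[0] + coef[1] * target_traffic + weights @ resid
```

The trend captures the covariate-driven part of the field and the kriged residual captures what nearby monitors know that the covariates do not.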
Jat, Prahlad; Serre, Marc L
2016-12-01
Widespread contamination of surface water chloride is an emerging environmental concern. Consequently, accurate and cost-effective methods are needed to estimate chloride along all river miles of potentially contaminated watersheds. Here we introduce a Bayesian Maximum Entropy (BME) space/time geostatistical estimation framework that uses river distances, and we compare it with Euclidean BME to estimate surface water chloride from 2005 to 2014 in the Gunpowder-Patapsco, Severn, and Patuxent subbasins in Maryland. River BME improves the cross-validation R² by 23.67% over Euclidean BME, and river BME maps are significantly different from Euclidean BME maps, indicating that it is important to use river BME maps to assess water quality impairment. The river BME maps of chloride concentration show wide contamination throughout Baltimore and Columbia-Ellicott cities, the disappearance of a clean buffer separating these two large urban areas, and the emergence of multiple localized pockets of contamination in surrounding areas. The number of impaired river miles increased by 0.55% per year in 2005-2009 and by 1.23% per year in 2011-2014, corresponding to a marked acceleration of the rate of impairment. Our results support the need for control measures and increased monitoring of unassessed river miles.
Online estimation of lithium-ion battery capacity using sparse Bayesian learning
Hu, Chao; Jain, Gaurav; Schmidt, Craig; Strief, Carrie; Sullivan, Melani
2015-09-01
Lithium-ion (Li-ion) rechargeable batteries are used as one of the major energy storage components for implantable medical devices. Reliability of Li-ion batteries used in these devices has been recognized as of high importance by a broad range of stakeholders, including medical device manufacturers, regulatory agencies, patients, and physicians. To ensure a Li-ion battery operates reliably, it is important to develop health monitoring techniques that accurately estimate the capacity of the battery throughout its lifetime. This paper presents a sparse Bayesian learning method that utilizes the charge voltage and current measurements to estimate the capacity of a Li-ion battery used in an implantable medical device. The Relevance Vector Machine (RVM) is employed as a probabilistic kernel regression method to learn the complex dependency of the battery capacity on the characteristic features that are extracted from the charge voltage and current measurements. Owing to the sparsity property of the RVM, the proposed method generates a reduced-scale regression model that consumes only a small fraction of the CPU time required by a full-scale model, which makes online capacity estimation computationally efficient. Ten years of continuous cycling data and post-explant cycling data obtained from Li-ion prismatic cells are used to verify the performance of the proposed method.
Tanaka, Mark M; Francis, Andrew R; Luciani, Fabio; Sisson, S A
2006-07-01
Tuberculosis can be studied at the population level by genotyping strains of Mycobacterium tuberculosis isolated from patients. We use an approximate Bayesian computational method in combination with a stochastic model of tuberculosis transmission and mutation of a molecular marker to estimate the net transmission rate, the doubling time, and the reproductive value of the pathogen. This method is applied to a published data set from San Francisco of tuberculosis genotypes based on the marker IS6110. The mutation rate of this marker has previously been studied, and we use those estimates to form a prior distribution of mutation rates in the inference procedure. The posterior point estimates of the key parameters of interest for these data are as follows: net transmission rate, 0.69/year [95% credibility interval (C.I.) 0.38, 1.08]; doubling time, 1.08 years (95% C.I. 0.64, 1.82); and reproductive value 3.4 (95% C.I. 1.4, 79.7). These figures suggest a rapidly spreading epidemic, consistent with observations of the resurgence of tuberculosis in the United States in the 1980s and 1990s.
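The approximate Bayesian computation (ABC) approach used above can be illustrated with a toy rejection sampler. Everything below (the birth/mutation simulator, the fixed mutation rate, the genotype-count summary statistic, and the flat prior bounds) is an illustrative stand-in, not the paper's actual transmission model:

```python
import random

random.seed(0)

def simulate_clusters(birth_rate, n_final=50):
    """Toy birth process: grow genotype clusters until n_final individuals.
    Each event is either a transmission (copies an existing genotype) or a
    mutation (starts a new genotype), per a fixed illustrative mutation rate."""
    mutation_rate = 0.2  # assumed known here; the paper places a prior on it
    clusters = [1]
    while sum(clusters) < n_final:
        total = birth_rate + mutation_rate
        if random.random() < birth_rate / total:
            i = random.randrange(len(clusters))
            clusters[i] += 1  # transmission within an existing cluster
        else:
            clusters.append(1)  # mutation creates a new genotype
    return clusters

def summary(clusters):
    """Summary statistic: number of distinct genotypes."""
    return len(clusters)

def abc_rejection(observed_summary, n_draws=2000, tol=2):
    """Keep prior draws whose simulated summary is within tol of the data."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(0.1, 2.0)  # flat prior on net transmission rate
        if abs(summary(simulate_clusters(theta)) - observed_summary) <= tol:
            accepted.append(theta)
    return accepted

post = abc_rejection(observed_summary=12)
est = sum(post) / len(post)  # posterior mean of the accepted draws
```

The accepted draws approximate the posterior; replacing the simulator and summary statistics with the paper's transmission model and IS6110 cluster-size distribution recovers the scheme described above.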
Simultaneous Estimation of Noise Variance and Number of Peaks in Bayesian Spectral Deconvolution
Tokuda, Satoru; Nagata, Kenji; Okada, Masato
2017-02-01
The heuristic identification of peaks from noisy complex spectra often leads to misunderstanding of the physical and chemical properties of matter. In this paper, we propose a framework based on Bayesian inference, which enables us to separate multipeak spectra into single peaks statistically and consists of two steps. The first step is estimating both the noise variance and the number of peaks as hyperparameters based on Bayes free energy, which generally is not analytically tractable. The second step is fitting the parameters of each peak function to the given spectrum by calculating the posterior density, which has a problem of local minima and saddles since multipeak models are nonlinear and hierarchical. Our framework enables the escape from local minima or saddles by using the exchange Monte Carlo method and calculates Bayes free energy via the multiple histogram method. We discuss a simulation demonstrating how efficient our framework is and show that estimating both the noise variance and the number of peaks prevents overfitting, overpenalizing, and misunderstanding the precision of parameter estimation.
Astraatmadja, Tri L
2016-01-01
Estimating a distance by inverting a parallax is only valid in the absence of noise. As most stars in the Gaia catalogue will have non-negligible fractional parallax errors, we must treat distance estimation as a constrained inference problem. Here we investigate the performance of various priors for estimating distances, using a simulated Gaia catalogue of one billion stars. We use three minimalist, isotropic priors, as well as an anisotropic prior derived from the observability of stars in a Milky Way model. The two priors that assume a uniform distribution of stars (either in distance or in space density) give poor results: the root mean square fractional distance error, f_RMS, grows far in excess of 100% once the fractional parallax error, f_true, is larger than 0.1. A prior assuming an exponentially decreasing space density with increasing distance performs well once its single scale length parameter has been set to an appropriate value: f_RMS is roughly equal to f_true for f_true < 0.4, yet does not in...
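The effect of the exponentially decreasing space density prior can be sketched with a minimal grid search for the posterior mode. The scale length, units, and example values below are illustrative assumptions, not the paper's calibrated choices:

```python
import math

def distance_posterior_mode(parallax, sigma, scale_length=1.35, r_max=50.0, n=5000):
    """Grid search for the posterior mode of distance r (kpc), given a measured
    parallax with Gaussian error sigma (true parallax = 1/r), under the
    exponentially decreasing space density prior p(r) proportional to
    r^2 * exp(-r / L)."""
    best_r, best_logp = None, -math.inf
    for i in range(1, n + 1):
        r = r_max * i / n
        log_prior = 2.0 * math.log(r) - r / scale_length
        log_like = -0.5 * ((parallax - 1.0 / r) / sigma) ** 2
        logp = log_prior + log_like
        if logp > best_logp:
            best_r, best_logp = r, logp
    return best_r

# With a small fractional parallax error, the posterior mode stays close to
# the naive inversion 1/parallax; the prior only nudges it outward slightly.
r_hat = distance_posterior_mode(parallax=1.0, sigma=0.05)
```

As the fractional error grows, the likelihood flattens and the prior increasingly dominates, which is exactly why the choice of prior matters in the regime the abstract describes.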
Directory of Open Access Journals (Sweden)
Borna Müller
Full Text Available BACKGROUND: Bovine tuberculosis (BTB) today primarily affects developing countries. In Africa, the disease is present essentially on the whole continent; however, little accurate information on its distribution and prevalence is available. Also, attempts to evaluate diagnostic tests for BTB in naturally infected cattle are scarce and mostly complicated by the absence of knowledge of the true disease status of the tested animals. However, diagnostic test evaluation in a given setting is a prerequisite for the implementation of local surveillance schemes and control measures. METHODOLOGY/PRINCIPAL FINDINGS: We subjected a slaughterhouse population of 954 Chadian cattle to single intra-dermal comparative cervical tuberculin (SICCT) testing and two recently developed fluorescence polarization assays (FPA). Using a Bayesian modeling approach we computed the receiver operating characteristic (ROC) curve of each diagnostic test, the true disease prevalence in the sampled population, and the disease status of all sampled animals in the absence of knowledge of the true disease status of the sampled animals. In our Chadian setting, SICCT performed better if the cut-off for positive test interpretation was lowered from >4 mm (OIE standard cut-off) to >2 mm. Using this cut-off, SICCT showed a sensitivity and specificity of 66% and 89%, respectively. Both FPA tests showed sensitivities below 50% but specificities above 90%. The true disease prevalence was estimated at 8%. Altogether, 11% of the sampled animals showed gross visible tuberculous lesions. However, modeling of the BTB disease status of the sampled animals indicated that 72% of the suspected tuberculosis lesions detected during standard meat inspections were due to pathogens other than Mycobacterium bovis. CONCLUSIONS/SIGNIFICANCE: Our results have important implications for BTB diagnosis in a high-incidence sub-Saharan African setting and demonstrate the practicability of our Bayesian approach for
Energy Technology Data Exchange (ETDEWEB)
Sasaki, Makoto; Kudo, Kohsuke; Uwano, Ikuko; Goodwin, Jonathan; Higuchi, Satomi; Ito, Kenji; Yamashita, Fumio [Iwate Medical University, Division of Ultrahigh Field MRI, Institute for Biomedical Sciences, Yahaba (Japan); Boutelier, Timothe; Pautot, Fabrice [Olea Medical, Department of Research and Innovation, La Ciotat (France); Christensen, Soren [University of Melbourne, Department of Neurology and Radiology, Royal Melbourne Hospital, Victoria (Australia)
2013-10-15
A new deconvolution algorithm, the Bayesian estimation algorithm, was reported to improve the precision of parametric maps created using perfusion computed tomography. However, it remains unclear whether quantitative values generated by this method are more accurate than those generated using optimized deconvolution algorithms of other software packages. Hence, we compared the accuracy of the Bayesian and deconvolution algorithms by using a digital phantom. The digital phantom data, in which concentration-time curves reflecting various known values for cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer delays were embedded, were analyzed using the Bayesian estimation algorithm as well as delay-insensitive singular value decomposition (SVD) algorithms of two software packages that were the best benchmarks in a previous cross-validation study. Correlation and agreement of quantitative values of these algorithms with true values were examined. CBF, CBV, and MTT values estimated by all the algorithms showed strong correlations with the true values (r = 0.91-0.92, 0.97-0.99, and 0.91-0.96, respectively). In addition, the values generated by the Bayesian estimation algorithm for all of these parameters showed good agreement with the true values [intraclass correlation coefficient (ICC) = 0.90, 0.99, and 0.96, respectively], while MTT values from the SVD algorithms were suboptimal (ICC = 0.81-0.82). Quantitative analysis using a digital phantom revealed that the Bayesian estimation algorithm yielded CBF, CBV, and MTT maps strongly correlated with the true values and MTT maps with better agreement than those produced by delay-insensitive SVD algorithms. (orig.)
A Bayesian beta distribution model for estimating rainfall IDF curves in a changing climate
Lima, Carlos H. R.; Kwon, Hyun-Han; Kim, Jin-Young
2016-09-01
The estimation of intensity-duration-frequency (IDF) curves for rainfall data comprises a classical task in hydrology studies to support a variety of water resources projects, including urban drainage and the design of flood control structures. In a changing climate, however, traditional approaches based on historical records of rainfall and on the stationarity assumption can be inadequate and lead to poor estimates of rainfall intensity quantiles. Climate change scenarios built on General Circulation Models offer a way to access and estimate future changes in spatial and temporal rainfall patterns at the daily scale at best, which is not as fine a temporal resolution as required (e.g., hours) to directly estimate IDF curves. In this paper we propose a novel methodology based on a four-parameter beta distribution to estimate IDF curves conditioned on the observed (or simulated) daily rainfall, which becomes the time-varying upper bound of the updated nonstationary beta distribution. The inference is conducted in a Bayesian framework that provides a better way to take into account the uncertainty in the model parameters when building the IDF curves. The proposed model is tested using rainfall data from four stations located in South Korea and projected climate change Representative Concentration Pathways (RCPs) scenarios 6 and 8.5 from the Met Office Hadley Centre HadGEM3-RA model. The results show that the developed model fits the historical data as well as the traditional Generalized Extreme Value (GEV) distribution but is able to produce future IDF curves that significantly differ from the historically based IDF curves. The proposed model predicts, for the stations and RCP scenarios analysed in this work, an increase in the intensity of extreme rainfalls of short duration with long return periods.
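The building block here, a beta density with shifted and scaled support, can be sketched as follows. The parameter names and the normalization check are illustrative; the paper's time-varying upper bound and the Bayesian inference machinery are not reproduced:

```python
import math

def beta4_pdf(x, p, q, lower, upper):
    """Four-parameter beta density on [lower, upper]; in the paper's setup
    the upper bound is tied to the observed (or simulated) daily rainfall."""
    if not (lower < x < upper):
        return 0.0
    z = (x - lower) / (upper - lower)  # map to the standard (0, 1) support
    log_beta = math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    log_pdf = (p - 1) * math.log(z) + (q - 1) * math.log(1 - z) - log_beta
    return math.exp(log_pdf) / (upper - lower)  # Jacobian of the rescaling

# Sanity check: the density should integrate to one over its support
# (midpoint rule with hypothetical shape parameters p=2, q=3 on [0, 10]).
n = 20000
total = sum(beta4_pdf(10.0 * (i + 0.5) / n, 2.0, 3.0, 0.0, 10.0)
            for i in range(n)) * (10.0 / n)
```

Making `upper` a function of each day's rainfall, and placing priors on `p` and `q`, is what turns this fixed density into the nonstationary model the abstract describes.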
Chen, Jinsong; Hubbard, Susan S.; Williams, Kenneth H.; Pride, Steve; Li, Li; Steefel, Carl; Slater, Lee
2009-08-01
We develop a state-space Bayesian framework to combine time-lapse geophysical data with other types of information for quantitative estimation of biogeochemical parameters during bioremediation. We consider characteristics of end products of biogeochemical transformations as state vectors, which evolve under constraints of local environments through evolution equations, and consider time-lapse geophysical data as available observations, which could be linked to the state vectors through petrophysical models. We estimate the state vectors and their associated unknown parameters over time using Markov chain Monte Carlo sampling methods. To demonstrate the use of the state-space approach, we apply it to complex resistivity data collected during laboratory column biostimulation experiments that were poised to precipitate iron and zinc sulfides during sulfate reduction. We develop a petrophysical model based on sphere-shaped cells to link the sulfide precipitate properties to the time-lapse geophysical attributes and estimate volume fraction of the sulfide precipitates, fraction of the dispersed, sulfide-encrusted cells, mean radius of the aggregated clusters, and permeability over the course of the experiments. Results of the case study suggest that the developed state-space approach permits the use of geophysical data sets for providing quantitative estimates of end-product characteristics and hydrological feedbacks associated with biogeochemical transformations. Although tested here on laboratory column experiment data sets, the developed framework provides the foundation needed for quantitative field-scale estimation of biogeochemical parameters over space and time using direct, but often sparse wellbore data with indirect, but more spatially extensive geophysical data sets.
Estimates of European emissions of methyl chloroform using a Bayesian inversion method
Directory of Open Access Journals (Sweden)
M. Maione
2014-03-01
Full Text Available Methyl chloroform (MCF) is a man-made chlorinated solvent contributing to the destruction of stratospheric ozone and is controlled under the Montreal Protocol on Substances that Deplete the Ozone Layer. Long-term, high-frequency observations of MCF carried out at three European sites show a constant decline in the background mixing ratios of MCF. However, we observe persistent, non-negligible mixing ratio enhancements of MCF in pollution episodes, suggesting unexpectedly high ongoing emissions in Europe. In order to identify the source regions and to estimate the magnitude of such emissions, we have used a Bayesian inversion method and a point source analysis, based on high-frequency long-term observations at the three European sites. The inversion identified south-eastern France (SEF) as a region with enhanced MCF emissions. This estimate was confirmed by the point source analysis. We performed this analysis using an eleven-year data set, from January 2002 to December 2012. Overall emissions estimated for the European study domain decreased nearly exponentially from 1.1 Gg yr⁻¹ in 2002 to 0.32 Gg yr⁻¹ in 2012, of which the estimated emissions from the SEF region accounted for 0.49 Gg yr⁻¹ in 2002 and 0.20 Gg yr⁻¹ in 2012. The European estimates are a significant fraction of the total semi-hemispheric (30–90° N) emissions, contributing a minimum of 9.8% in 2004 and a maximum of 33.7% in 2011, of which on average 50% are from the SEF region. On the global scale, the SEF region is thus responsible for a minimum of 2.6% (in 2003) to a maximum of 10.3% (in 2009) of global MCF emissions.
Demir, M. T.; Copty, N. K.; Trinchero, P.; Sanchez-Vila, X.
2013-12-01
Groundwater flow and contaminant transport are strongly influenced by the spatial variability of subsurface flow parameters. However, the interpretation of pumping test data used for subsurface characterization is normally performed using conventional methods that are based on the assumption of aquifer homogeneity. In recent years, hydraulic tomography has been proposed by some researchers to address the limitations of conventional site characterization methods. Hydraulic tomography involves sequential pumping at one of a series of wells and observing the drawdown due to pumping at adjacent wells. The interpretation of the drawdown data from hydraulic tomography has mostly been performed using formal inverse procedures for the estimation of the spatial variability of the flow parameters. The purpose of this study is to develop a method for the estimation of the statistical spatial structure of the transmissivity from hydraulic tomography data. The method relies on the pumping test interpretation procedure of Copty et al. (2011), which uses the time-drawdown data and its time derivative at each observation well to estimate the spatially averaged transmissivity as a function of radial distance from the pumping well. A Bayesian approach is then used to identify the statistical parameters of the transmissivity field (i.e. variance and integral scale). The approach compares the estimated transmissivity as a function of radial distance from the pumping well to the probability density function of the spatially-averaged transmissivity. The method is evaluated using synthetically-generated pumping test data for a range of input parameters. This application demonstrates that, through a relatively simple procedure, additional information on the spatial structure of the transmissivity may be inferred from pumping test data. Results indicate that as the number of available pumping tests increases, the reliability of the estimated transmissivity statistical parameters also increases.
Bayesian data analysis: estimating the efficacy of T'ai Chi as a case study.
Carpenter, Jacque; Gajewski, Byron; Teel, Cynthia; Aaronson, Lauren S
2008-01-01
Bayesian inference provides a formal framework for updating knowledge by combining prior knowledge with current data. Over the past 10 years, the Bayesian paradigm has become a popular analytic tool in health research. Although the nursing literature contains examples of Bayes' theorem applications to clinical decision making, it lacks an adequate introduction to Bayesian data analysis. Bayesian data analysis is introduced through a fully Bayesian model for determining the efficacy of tai chi as an illustrative example. The mechanics of using Bayesian models to combine prior knowledge, or data from previous studies, with observed data from a current study are discussed. The primary outcome in the illustrative example was physical function. Three prior probability distributions (priors) were generated for physical function using data from a similar study found in the literature. Each prior was combined with the likelihood from observed data in the current study to obtain a posterior probability distribution. In each case, the posterior distribution showed that the probability that the control group is better than the tai chi treatment group was low. Bayesian analysis is a valid technique that allows the researcher to manage varying amounts of data appropriately. As advancements in computer software continue, Bayesian techniques will become more accessible. Researchers must educate themselves on applications for Bayesian inference, as well as its methods and implications for future research.
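The mechanics of combining prior knowledge with current data have a closed form in the conjugate normal case, sketched below. The numbers are purely hypothetical, not the study's physical-function data:

```python
def normal_update(prior_mean, prior_var, data_mean, data_var):
    """Conjugate normal-normal update: combine a normal prior with the
    likelihood of an observed sample mean (variance assumed known) to get
    the posterior mean and variance. Precisions (inverse variances) add."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / data_var)
    post_mean = post_var * (prior_mean / prior_var + data_mean / data_var)
    return post_mean, post_var

# Hypothetical numbers: a diffuse prior from an earlier study, combined with
# a more precise estimate from the current trial.
m, v = normal_update(prior_mean=2.0, prior_var=4.0, data_mean=5.0, data_var=1.0)
```

Note how the posterior mean is a precision-weighted average that sits closer to the more precise source (here, the current data), and the posterior variance is smaller than either input variance. This is the "updating knowledge" step the abstract describes, applied once per prior.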
Liepe, Juliane; Kirk, Paul; Filippi, Sarah; Toni, Tina; Barnes, Chris P.; Stumpf, Michael P.H.
2016-01-01
As modeling becomes a more widespread practice in the life- and biomedical sciences, we require reliable tools to calibrate models against ever more complex and detailed data. Here we present an approximate Bayesian computation framework and software environment, ABC-SysBio, which enables parameter estimation and model selection in the Bayesian formalism using Sequential Monte-Carlo approaches. We outline the underlying rationale, discuss the computational and practical issues, and provide detailed guidance as to how the important tasks of parameter inference and model selection can be carried out in practice. Unlike other available packages, ABC-SysBio is highly suited for investigating in particular the challenging problem of fitting stochastic models to data. Although computationally expensive, the additional insights gained in the Bayesian formalism more than make up for this cost, especially in complex problems. PMID:24457334
Biedermann, A; Bozza, S; Taroni, F
2011-03-20
Part I of this series of articles focused on the construction of graphical probabilistic inference procedures, at various levels of detail, for assessing the evidential value of gunshot residue (GSR) particle evidence. The proposed models--in the form of Bayesian networks--address the issues of background presence of GSR particles, analytical performance (i.e., the efficiency of evidence searching and analysis procedures) and contamination. The use and practical implementation of Bayesian networks for case pre-assessment is also discussed. This paper, Part II, concentrates on Bayesian parameter estimation. This topic complements Part I in that it offers means for producing estimates usable for the numerical specification of the proposed probabilistic graphical models. Bayesian estimation procedures are given a primary focus of attention because they allow the scientist to combine (his/her) prior knowledge about the problem of interest with newly acquired experimental data. The present paper also considers further topics such as the sensitivity of the likelihood ratio due to uncertainty in parameters and the study of likelihood ratio values obtained for members of particular populations (e.g., individuals with or without exposure to GSR).
Huang, Susie Shih-Yin; Strathe, Anders Bjerring; Hung, Silas S O; Boston, Raymond C; Fadel, James G
2012-03-01
The biological function of selenium (Se) is determined by its form and concentration. Selenium is an essential micronutrient for all vertebrates; however, at environmental levels, it is a potent toxin. In the San Francisco Bay-Delta, Se pollution threatens top predatory fish, including white sturgeon. A multi-compartmental Bayesian hierarchical model was developed to estimate the fractional rates of absorption, disposition, and elimination of selenocompounds in white sturgeon, from tissue measurements obtained in a previous study (Huang et al., 2012). This modeling methodology allows for a population-based approach to estimating kinetic physiological parameters in white sturgeon. Briefly, thirty juvenile white sturgeon (five per treatment) were orally intubated with a control (no selenium) or a single dose of Se (500 μg Se/kg body weight) in the form of one inorganic (selenite) or four organic selenocompounds: selenocystine (SeCys), l-selenomethionine (SeMet), Se-methylseleno-l-cysteine (MSeCys), or selenoyeast (SeYeast). Blood and urine Se were measured at intervals throughout the 48 h post-intubation period and eight tissues were sampled at 48 h. The model is composed of four state variables, conceptually the gut (Q1), blood (Q2), and tissue (Q3); and urine (Q0), all in units of μg Se. Six kinetic parameters were estimated: the fractional rates [1/h] of absorption, tissue disposition, tissue release, and urinary elimination (k12, k23, k32, and k20), the proportion of the absorbed dose eliminated through the urine (f20), and the distribution blood volume (V; percent body weight, BW). The parameter k12 was higher in sturgeon given the organic Se forms, in the descending order of MSeCys > SeMet > SeCys > selenite > SeYeast. The parameters k23 and k32 followed similar patterns, and f20 was lowest in fish given MSeCys. Selenium form did not affect k20 or V. The parameter differences observed can be attributed to the different mechanisms of transmucosal transport.
Gil, M Carmen Sánchez; Alfaro, Emilio J; Pérez, Enrique; Sarro, Luis M
2015-01-01
One of the fundamental goals of modern astronomy is to estimate the physical parameters of galaxies from images in different spectral bands. We present a hierarchical Bayesian model for obtaining age maps from images in the Hα line (taken with the Taurus Tunable Filter, TTF), the ultraviolet band (far UV or FUV, from GALEX), and infrared bands (24, 70 and 160 microns (μm), from Spitzer). As shown in Sánchez-Gil et al. (2011), we present the burst ages for young stellar populations in the nearby and nearly face-on galaxy M74. As shown in that previous work, the Hα to FUV flux ratio gives a good relative indicator of very recent star formation history (SFH). As a nascent star-forming region evolves, the Hα line emission declines earlier than the UV continuum, leading to a decrease in the Hα/FUV ratio. Through a specific star-forming galaxy model (Starburst 99, SB99), we can obtain the corresponding theoretical Hα/FUV ratio to compare with our observed flux ratios, and thus estimate the ages of...
Institute of Scientific and Technical Information of China (English)
LI ZhiQiang; GUO BaoCheng; LI JunBing; HE ShunPing; CHEN YiYu
2008-01-01
The genus Sinocyclocheilus is distributed only in the Yun-Gui Plateau and its surrounding region, with more than 10 cave species showing different degrees of degeneration of eyes and pigmentation, with remarkable adaptations. To date, published morphological and molecular phylogenetic hypotheses of Sinocyclocheilus from prior works are very different, and the relationships within the genus are still far from clear. We obtained the sequences of cytochrome b (cyt b) and NADH dehydrogenase subunit 4 (ND4) of 34 species within Sinocyclocheilus, which represent the densest taxon sampling to date. We performed Bayesian mixed-model analyses with this data set. Under this phylogenetic framework, we estimated the divergence times of recovered clades using different methods under a relaxed molecular clock. Our phylogenetic results supported the monophyly of Sinocyclocheilus and showed that this genus can be subdivided into 6 major clades. In addition, an earlier finding demonstrating the polyphyly of the cave species and the most basal position of S. jii was corroborated. Relaxed divergence-time estimation suggested that Sinocyclocheilus originated in the late Miocene, about 11 million years ago (Ma), which is older than previously assumed.
Energy Technology Data Exchange (ETDEWEB)
Maragakis, Paul; Ritort, Felix; Bustamante, Carlos; Karplus, Martin; Crooks, Gavin E.
2008-07-08
The Jarzynski equality and the fluctuation theorem relate equilibrium free energy differences to nonequilibrium measurements of the work. These relations extend to single-molecule experiments that have probed the finite-time thermodynamics of proteins and nucleic acids. The effects of experimental error and instrument noise have not been considered previously. Here, we present a Bayesian formalism for estimating free energy changes from nonequilibrium work measurements that compensates for instrument noise and combines data from multiple driving protocols. We reanalyze a recent set of experiments in which a single RNA hairpin is unfolded and refolded using optical tweezers at three different rates. Interestingly, the fastest and farthest-from-equilibrium measurements contain the least instrumental noise and, therefore, provide a more accurate estimate of the free energies than a few slow, more noisy, near-equilibrium measurements. The methods we propose here will extend the scope of single-molecule experiments; they can be used in the analysis of data from measurements with atomic force microscopy, optical tweezers, and magnetic tweezers.
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene
2016-01-01
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the time, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases stratified by the HIV infections at different years are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891
Revisiting the 4-Parameter Item Response Model: Bayesian Estimation and Application.
Culpepper, Steven Andrew
2016-12-01
There has been renewed interest in Barton and Lord's (An upper asymptote for the three-parameter logistic item response model (Tech. Rep. No. 80-20). Educational Testing Service, 1981) four-parameter item response model. This paper presents a Bayesian formulation that extends Béguin and Glas (MCMC estimation and some model fit analysis of multidimensional IRT models. Psychometrika, 66 (4):541-561, 2001) and proposes a model for the four-parameter normal ogive (4PNO) model. Monte Carlo evidence is presented concerning the accuracy of parameter recovery. The simulation results support the use of less informative uniform priors for the lower and upper asymptotes, which is an advantage to prior research. Monte Carlo results provide some support for using the deviance information criterion and [Formula: see text] index to choose among models with two, three, and four parameters. The 4PNO is applied to 7491 adolescents' responses to a bullying scale collected under the 2005-2006 Health Behavior in School-Aged Children study. The results support the value of the 4PNO to estimate lower and upper asymptotes in large-scale surveys.
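The 4PNO item characteristic curve itself is simple to write down. The sketch below uses an illustrative parameterization (discrimination a, difficulty b, lower asymptote c, upper asymptote d); the Bayesian MCMC estimation machinery of the paper is not shown:

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_correct_4pno(theta, a, b, c, d):
    """Four-parameter normal ogive: the lower asymptote c (e.g. guessing)
    and upper asymptote d (e.g. slipping) bracket the usual ogive curve,
    so the response probability runs from c up to d as ability theta grows."""
    return c + (d - c) * phi(a * theta - b)
```

Setting c = 0 and d = 1 recovers the two-parameter normal ogive; freeing c and d is what lets the model absorb guessing and careless errors, the asymptotes the abstract says are estimated with uniform priors.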
Wideband DOA Estimation via Sparse Bayesian Learning over a Khatri-Rao Dictionary
Directory of Open Access Journals (Sweden)
Yujian Pan
2015-06-01
Full Text Available This paper deals with wideband direction-of-arrival (DOA) estimation by exploiting the multiple measurement vectors (MMV) based sparse Bayesian learning (SBL) framework. First, the array covariance matrices at different frequency bins are focused to the reference frequency by the conventional focusing technique and then transformed into vector form. Then a matrix called the Khatri-Rao dictionary is constructed by using the Khatri-Rao product, and the multiple focused array covariance vectors are set as the new observations. DOA estimation is to find the sparsest representations of the new observations over the Khatri-Rao dictionary via SBL. The performance of the proposed method is compared with other well-known focusing-based wideband algorithms and the Cramér-Rao lower bound (CRLB). The results show that it achieves higher resolution and accuracy and can reach the CRLB under relatively demanding conditions. Moreover, the method imposes no restriction on the pattern of signal power spectral density and, due to the increased number of rows of the dictionary, it can resolve more sources than sensors.
Lee, Duncan; Rushworth, Alastair; Sahu, Sujit K
2014-06-01
Estimation of the long-term health effects of air pollution is a challenging task, especially when modeling spatial small-area disease incidence data in an ecological study design. The challenge comes from the unobserved underlying spatial autocorrelation structure in these data, which is accounted for using random effects modeled by a globally smooth conditional autoregressive model. These smooth random effects confound the effects of air pollution, which are also globally smooth. To avoid this collinearity a Bayesian localized conditional autoregressive model is developed for the random effects. This localized model is flexible spatially, in the sense that it is not only able to model areas of spatial smoothness, but also it is able to capture step changes in the random effects surface. This methodological development allows us to improve the estimation performance of the covariate effects, compared to using traditional conditional auto-regressive models. These results are established using a simulation study, and are then illustrated with our motivating study on air pollution and respiratory ill health in Greater Glasgow, Scotland in 2011. The model shows substantial health effects of particulate matter air pollution and nitrogen dioxide, whose effects have been consistently attenuated by the currently available globally smooth models.
Gerstgrasser, Matthias; Nicholls, Sarah; Stout, Michael; Smart, Katherine; Powell, Chris; Kypraios, Theodore; Stekel, Dov
2016-06-01
Biolog phenotype microarrays (PMs) enable simultaneous, high throughput analysis of cell cultures in different environments. The output is high-density time-course data showing redox curves (approximating growth) for each experimental condition. The software provided with the Omnilog incubator/reader summarizes each time-course as a single datum, so most of the information is not used. However, the time courses can be extremely varied and often contain detailed qualitative (shape of curve) and quantitative (values of parameters) information. We present a novel, Bayesian approach to estimating parameters from Phenotype Microarray data, fitting growth models using Markov Chain Monte Carlo (MCMC) methods to enable high throughput estimation of important information, including length of lag phase, maximal "growth" rate and maximum output. We find that the Baranyi model for microbial growth is useful for fitting Biolog data. Moreover, we introduce a new growth model that allows for diauxic growth with a lag phase, which is particularly useful where Phenotype Microarrays have been applied to cells grown in complex mixtures of substrates, for example in industrial or biotechnological applications, such as worts in brewing. Our approach provides more useful information from Biolog data than existing, competing methods, and allows for valuable comparisons between data series and across different models.
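The Baranyi model mentioned above can be sketched as follows (a minimal illustration in one standard parameterization; the parameter names and the Gaussian error model are assumptions, not the authors' exact formulation). An MCMC sampler such as random-walk Metropolis would target this log-likelihood plus a log-prior:

```python
import math

def baranyi_log_count(t, y0, ymax, mu_max, lag):
    # Baranyi-Roberts model for log cell density; h0 = mu_max * lag encodes
    # the physiological state of the culture at inoculation.
    h0 = mu_max * lag
    A = t + (1.0 / mu_max) * math.log(
        math.exp(-mu_max * t) + math.exp(-h0) - math.exp(-mu_max * t - h0))
    return y0 + mu_max * A - math.log(
        1.0 + (math.exp(mu_max * A) - 1.0) / math.exp(ymax - y0))

def log_likelihood(params, times, obs, sigma=0.1):
    # Gaussian measurement error around the model curve (assumed noise model).
    y0, ymax, mu_max, lag = params
    return sum(
        -0.5 * ((o - baranyi_log_count(t, y0, ymax, mu_max, lag)) / sigma) ** 2
        for t, o in zip(times, obs))
```

The curve starts at y0, stays near it for roughly `lag` time units, then grows at rate mu_max toward the plateau ymax, which maps directly onto the lag length, maximal rate and maximum output discussed above.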
Dumitru, Mircea; Mohammad-Djafari, Ali; Sain, Simona Baghai
2016-12-01
The toxicity and efficacy of more than 30 anticancer agents present very high variations, depending on the dosing time. Therefore, biologists studying the circadian rhythm require a very precise method for estimating the periodic component (PC) vector of chronobiological signals. Moreover, in recent developments, not only the dominant period or the PC vector but also their stability or variability present a crucial interest. In cancer treatment experiments, the recorded signals corresponding to different phases of treatment are short, from 7 days for the synchronization segment to 2 or 3 days for the after-treatment segment. When studying the stability of the dominant period, we have to consider very short signals relative to the prior knowledge of the dominant period, placed in the circadian domain. The classical approaches, based on Fourier transform (FT) methods, are inefficient (i.e., lack precision) given the particularities of the data (i.e., their short length). Another particularity of the signals considered in such experiments is the level of noise: such signals are very noisy, and establishing the periodic components associated with the biological phenomena and distinguishing them from those associated with the noise are difficult tasks. In this paper, we propose a new method for the estimation of the PC vector of biomedical signals, using prior biological information and considering a model that accounts for the noise. The experiments developed in the cancer treatment context record signals expressing a limited number of periods. This is prior information that can be translated as sparsity of the PC vector. The proposed method treats PC vector estimation as an Inverse Problem (IP), using general Bayesian inference to infer the unknowns of the model, i.e., the PC vector but also the hyperparameters (i.e., the variances). The sparsity prior information is modeled using a sparsity-enforcing prior law
Bayesian Estimation of Speciation and Extinction from Incomplete Fossil Occurrence Data
Silvestro, Daniele; Schnitzler, Jan; Liow, Lee Hsiang; Antonelli, Alexandre; Salamin, Nicolas
2015-01-01
The temporal dynamics of species diversity are shaped by variations in the rates of speciation and extinction, and there is a long history of inferring these rates using first and last appearances of taxa in the fossil record. Understanding diversity dynamics critically depends on unbiased estimates of the unobserved times of speciation and extinction for all lineages, but the inference of these parameters is challenging due to the complex nature of the available data. Here, we present a new probabilistic framework to jointly estimate species-specific times of speciation and extinction and the rates of the underlying birth-death process based on the fossil record. The rates are allowed to vary through time independently of each other, and the probability of preservation and sampling is explicitly incorporated in the model to estimate the true lifespan of each lineage. We implement a Bayesian algorithm to assess the presence of rate shifts by exploring alternative diversification models. Tests on a range of simulated data sets reveal the accuracy and robustness of our approach against violations of the underlying assumptions and various degrees of data incompleteness. Finally, we demonstrate the application of our method with the diversification of the mammal family Rhinocerotidae and reveal a complex history of repeated and independent temporal shifts of both speciation and extinction rates, leading to the expansion and subsequent decline of the group. The estimated parameters of the birth-death process implemented here are directly comparable with those obtained from dated molecular phylogenies. Thus, our model represents a step towards integrating phylogenetic and fossil information to infer macroevolutionary processes. PMID:24510972
Directory of Open Access Journals (Sweden)
Kamaljit Kaur
2015-01-01
Full Text Available Bayesian estimators of the Gini index and a poverty measure are obtained for the Pareto distribution under censored and complete setups. The estimators are obtained using two noninformative priors, namely, the uniform prior and Jeffreys' prior, and one conjugate prior, under the assumption of the Linear Exponential (LINEX) loss function. Using simulation techniques, the relative efficiency of the proposed estimators under different priors and loss functions is obtained. The performances of the proposed estimators have been compared on the basis of their simulated risks obtained under the LINEX loss function.
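The Bayes estimator under LINEX loss has a convenient closed form, -(1/a) ln E[exp(-a·θ)], which can be approximated from posterior samples. The sketch below is a generic illustration of that formula, not the paper's Pareto-specific derivation:

```python
import math

def linex_bayes_estimate(posterior_samples, a):
    # Bayes estimator under LINEX loss L(d, t) = exp(a*(d - t)) - a*(d - t) - 1:
    # the minimizer of posterior expected loss is -(1/a) * ln E[exp(-a*theta)],
    # approximated here by a Monte Carlo average over posterior samples.
    n = len(posterior_samples)
    mgf = sum(math.exp(-a * th) for th in posterior_samples) / n
    return -math.log(mgf) / a
```

For a > 0 overestimation is penalized more heavily, so the estimate sits below the posterior mean; as a → 0 it tends to the posterior mean, recovering the squared-error case.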
Ershadi, Ali
2013-05-01
The influence of uncertainty in land surface temperature, air temperature, and wind speed on the estimation of sensible heat flux is analyzed using a Bayesian inference technique applied to the Surface Energy Balance System (SEBS) model. The Bayesian approach allows for an explicit quantification of the uncertainties in input variables: a source of error generally ignored in surface heat flux estimation. An application using field measurements from the Soil Moisture Experiment 2002 is presented. The spatial variability of selected input meteorological variables in a multitower site is used to formulate the prior estimates for the sampling uncertainties, and the likelihood function is formulated assuming Gaussian errors in the SEBS model. Land surface temperature, air temperature, and wind speed were estimated by sampling their posterior distribution using a Markov chain Monte Carlo algorithm. Results verify that Bayesian-inferred air temperature and wind speed were generally consistent with those observed at the towers, suggesting that local observations of these variables were spatially representative. Uncertainties in the land surface temperature appear to have the strongest effect on the estimated sensible heat flux, with Bayesian-inferred values differing by up to ±5°C from the observed data. These differences suggest that the footprint of the in situ measured land surface temperature is not representative of the larger-scale variability. As such, these measurements should be used with caution in the calculation of surface heat fluxes and highlight the importance of capturing the spatial variability in the land surface temperature: particularly, for remote sensing retrieval algorithms that use this variable for flux estimation.
Localization of deformable tumors from short-arc projections using Bayesian estimation
Energy Technology Data Exchange (ETDEWEB)
Hoegele, W.; Zygmanski, P.; Dobler, B.; Kroiss, M.; Koelbl, O.; Loeschel, R. [Department of Radiation Oncology, Regensburg University Medical Center, 93053 Regensburg (Germany) and Department of Computer Science and Mathematics, University of Applied Sciences, 93053 Regensburg (Germany); Department of Radiation Oncology, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States); Department of Radiation Oncology, Regensburg University Medical Center, 93053 Regensburg (Germany); Department of Radiation Oncology, Hospital of the Sisters of Mercy, 4010 Linz (Austria); Department of Radiation Oncology, Regensburg University Medical Center, 93053 Regensburg (Germany); Department of Computer Science and Mathematics, University of Applied Sciences, 93053 Regensburg (Germany)]
2012-12-15
Purpose: The authors present a stochastic framework for radiotherapy patient positioning directly utilizing radiographic projections. This framework is developed to be robust against anatomical nonrigid deformations and to cope with challenging imaging scenarios, involving only a few cone beam CT projections from short arcs. Methods: Specifically, a Bayesian estimator (BE) is explicitly derived for the given scanning geometry. This estimator is compared to reference methods such as chamfer matching (CM) and the minimization of the median absolute error, adapted as tools of robust image processing and statistics. In order to show the performance of the stochastic short-arc patient positioning method, a CIRS IMRT thorax phantom study is presented with movable markers and the utilization of an Elekta Synergy® XVI system. Furthermore, a clinical prostate CBCT scan from a Varian® On-Board Imager® system is utilized to investigate the robustness of the method for large variations of image quality (anterior-posterior vs lateral views). Results: The results show that the BE shifts reduce the initial setup error of up to 3 cm down to at most 3 mm for an imaging arc as short as 10°, while CM achieves residual errors of at most 7 mm only for arcs longer than 40°. Furthermore, the BE can compensate robustly for low image qualities by using several low-quality projections simultaneously. Conclusions: In conclusion, an estimation method for marker-based patient positioning for short imaging arcs is presented and shown to be robust and accurate for deformable anatomies.
Eadie, Gwendolyn; Harris, William E.; Springford, Aaron; Widrow, Larry
2017-01-01
The mass and cumulative mass profile of the Milky Way's dark matter halo is a fundamental property of the Galaxy, and yet these quantities remain poorly constrained and span almost two orders of magnitude in the literature. There are a variety of methods to measure the mass of the Milky Way, and a common way to constrain the mass uses kinematic information of satellite objects (e.g. globular clusters) orbiting the Galaxy. One reason precise estimates of the mass and mass profile remain elusive is that the kinematic data of the globular clusters are incomplete; for some both line-of-sight and proper motion measurements are available (i.e. complete data), and for others there are only line-of-sight velocities (i.e. incomplete data). Furthermore, some proper motion measurements suffer from large measurement uncertainties, and these uncertainties can be difficult to take into account because they propagate in complicated ways. Past methods have dealt with incomplete data by using either only the line-of-sight measurements (and throwing away the proper motions), or only using the complete data. In either case, valuable information is not included in the analysis. During my PhD research, I have been developing a coherent hierarchical Bayesian method to estimate the mass and mass profile of the Galaxy that 1) includes both complete and incomplete kinematic data simultaneously in the analysis, and 2) includes measurement uncertainties in a meaningful way. In this presentation, I will introduce our approach in a way that is accessible and clear, and will also present our estimates of the Milky Way's total mass and mass profile using all available kinematic data from the globular cluster population of the Galaxy.
Introducing Bayesian thinking to high-throughput screening for false-negative rate estimation.
Wei, Xin; Gao, Lin; Zhang, Xiaolei; Qian, Hong; Rowan, Karen; Mark, David; Peng, Zhengwei; Huang, Kuo-Sen
2013-10-01
High-throughput screening (HTS) has been widely used to identify active compounds (hits) that bind to biological targets. Because of cost concerns, the comprehensive screening of millions of compounds is typically conducted without replication. Real hits that fail to exhibit measurable activity in the primary screen due to random experimental errors will be lost as false-negatives. Conceivably, the projected false-negative rate is a parameter that reflects screening quality. Furthermore, it can be used to guide the selection of optimal numbers of compounds for hit confirmation. Therefore, a method that predicts false-negative rates from the primary screening data is extremely valuable. In this article, we describe the implementation of a pilot screen on a representative fraction (1%) of the screening library in order to obtain information about assay variability as well as a preliminary hit activity distribution profile. Using this training data set, we then developed an algorithm based on Bayesian logic and Monte Carlo simulation to estimate the number of true active compounds and potential missed hits from the full library screen. We have applied this strategy to five screening projects. The results demonstrate that this method produces useful predictions on the numbers of false negatives.
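As a simplified sketch of the underlying idea (assuming Gaussian assay noise with a known standard deviation; the actual algorithm above combines Bayesian logic with Monte Carlo simulation of the hit-activity distribution), the expected number of missed hits among a set of true actives follows from the normal CDF of the standardized margin to the hit threshold:

```python
import math

def expected_false_negatives(true_activities, threshold, assay_sd):
    # Probability that a genuinely active compound reads below the hit
    # threshold under Gaussian assay noise, summed over all true actives.
    def norm_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(norm_cdf((threshold - act) / assay_sd)
               for act in true_activities)
```

A compound sitting exactly at the threshold is lost half the time, while one several noise standard deviations above it is essentially never lost; summing these probabilities gives the projected false-negative count for the screen.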
Tak, Hyungsuk; van Dyk, David A; Kashyap, Vinay L; Meng, Xiao-Li; Siemiginowska, Aneta
2016-01-01
The gravitational field of a galaxy can act as a lens and deflect the light emitted by a more distant object such as a quasar. If the galaxy is a strong gravitational lens, it can produce multiple images of the same quasar in the sky. Since the light in each gravitationally lensed image traverses a different path length from the quasar to the Earth, fluctuations in the source brightness are observed in the several images at different times. The time delay between these fluctuations can be used to constrain cosmological parameters and can be inferred from the time series of brightness data or light curves of each image. To estimate the time delay, we construct a model based on a state-space representation for irregularly observed time series generated by a latent continuous-time Ornstein-Uhlenbeck process. We account for microlensing, an additional source of independent long-term extrinsic variability, via a polynomial regression. Our Bayesian strategy adopts a Metropolis-Hastings within Gibbs sampler. We impr...
Jennings, Elise
2016-01-01
Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communica...
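The core likelihood-free idea can be sketched with a plain rejection sampler (astroABC itself uses the more efficient Sequential Monte Carlo scheme with MPI parallelization; the toy Gaussian setup below is an assumption for illustration only):

```python
import random

def abc_rejection(observed_stat, simulate, prior_draw, epsilon, n_accept):
    # Likelihood-free inference: keep prior draws whose simulated summary
    # statistic falls within epsilon of the observed statistic. No explicit
    # likelihood evaluation is ever performed.
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw()
        if abs(simulate(theta) - observed_stat) <= epsilon:
            accepted.append(theta)
    return accepted

# Toy setup (assumed): infer the mean of a unit-variance Gaussian from the
# sample mean of 50 simulated draws.
random.seed(0)
posterior = abc_rejection(
    observed_stat=1.0,
    simulate=lambda th: sum(random.gauss(th, 1.0) for _ in range(50)) / 50,
    prior_draw=lambda: random.uniform(-5.0, 5.0),
    epsilon=0.1,
    n_accept=200,
)
```

The forward-model simulator can include arbitrary systematics, which is exactly what makes ABC attractive when the likelihood is intractable; SMC variants shrink epsilon over successive generations instead of fixing it.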
Nonlinear Bayesian Estimation of BOLD Signal under Non-Gaussian Noise
Directory of Open Access Journals (Sweden)
Ali Fahim Khan
2015-01-01
Full Text Available Modeling the blood oxygenation level dependent (BOLD) signal has been a subject of study for over a decade in the neuroimaging community. Inspired by fluid dynamics, the hemodynamic model provides a plausible yet convincing interpretation of the BOLD signal by amalgamating the effects of dynamic physiological changes in blood oxygenation, cerebral blood flow and volume. The nonautonomous, nonlinear set of differential equations of the hemodynamic model constitutes the process model, while the weighted nonlinear sum of the physiological variables forms the measurement model. Plagued by various noise sources, the time series of fMRI measurement data is mostly assumed to be affected by additive Gaussian noise. Though feasible, this assumption may cause the designed filter to perform poorly when made to work in a non-Gaussian environment. In this paper, we present a data assimilation scheme that assumes additive non-Gaussian noise, namely, the e-mixture noise, affecting the measurements. The proposed filter MAGSF and the celebrated EKF are put to the test by performing joint optimal Bayesian filtering to estimate both the states and parameters governing the hemodynamic model under a non-Gaussian environment. Analyses using both synthetic and real data reveal superior performance of the MAGSF as compared to the EKF.
Ebrahimian, Hossein; Jalayer, Fatemeh
2017-08-29
In the immediate aftermath of a strong earthquake, and in the presence of an ongoing aftershock sequence, scientific advisories in terms of seismicity forecasts play a crucial role in emergency decision-making and risk mitigation. Epidemic Type Aftershock Sequence (ETAS) models are frequently used for forecasting the spatio-temporal evolution of seismicity in the short term. We propose robust forecasting of seismicity based on the ETAS model, exploiting the link between Bayesian inference and Markov chain Monte Carlo simulation. The methodology considers not only the uncertainty in the model parameters, conditioned on the catalogue of events that occurred before the forecasting interval, but also the uncertainty in the sequence of events that are going to happen during the forecasting interval. We demonstrate the methodology by retrospective early forecasting of the seismicity associated with the 2016 Amatrice seismic sequence in central Italy. We provide robust spatio-temporal short-term seismicity forecasts for various time intervals in the first few days after each of the three main events within the sequence, which can predict the seismicity to within plus/minus two standard deviations of the mean estimate within a few hours of each main event.
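A minimal sketch of the ETAS conditional intensity (a standard temporal-only parameterization with modified-Omori decay; the spatial kernel and the Bayesian treatment of parameter and sequence uncertainty are omitted here):

```python
import math

def etas_intensity(t, history, mu, K, alpha, c, p, m0):
    # Conditional intensity lambda(t | H_t) = mu + sum over past events of
    # K * exp(alpha * (m_i - m0)) / (t - t_i + c)^p, i.e. a background rate
    # plus Omori-law aftershock triggering scaled by event magnitude.
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

Forecasts of expected event counts over an interval follow by integrating this intensity; in the Bayesian scheme above, that integral would be averaged over the posterior of (mu, K, alpha, c, p) and over simulated continuations of the sequence.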
A Bayesian Target Predictor Method based on Molecular Pairing Energies estimation.
Oliver, Antoni; Canals, Vincent; Rosselló, Josep L
2017-03-06
Virtual screening (VS) is applied in the early drug discovery phases for the quick inspection of huge molecular databases to identify those compounds that most likely bind to a given drug target. In this context, there is the necessity of the use of compact molecular models for database screening and precise target prediction in reasonable times. In this work we present a new compact energy-based model that is tested for its application to Virtual Screening and target prediction. The model can be used to quickly identify active compounds in huge databases based on the estimation of the molecule's pairing energies. The greatest molecular polar regions along with its geometrical distribution are considered by using a short set of smart energy vectors. The model is tested using similarity searches within the Directory of Useful Decoys (DUD) database. The results obtained are considerably better than previously published models. As a Target prediction methodology we propose the use of a Bayesian Classifier that uses a combination of different active compounds to build an energy-dependent probability distribution function for each target.
Bayesian network for estimating the interaction between ecological health and waterfowl abundance
Teng, Te Hui; Fang, Wei Ta; Yu, Hwa Lung
2013-04-01
The loss of biodiversity, driven mainly by habitat disappearance, is a serious issue for species conservation worldwide. The study area is Tauyuan County, a subtropical region containing the largest number of artificial farm ponds in Taiwan, with a total area of 27 km2. These ponds provide storage and irrigation, and they also supply a variety of habitats, such as refuges for migratory birds, especially waterbirds. Due to human development, many farm ponds in this county have disappeared in recent years, reducing both habitat and bird species. Because biological research usually involves incomplete and uncertain information, this study adopts a Bayesian network model to analyze the interaction between land use and waterbirds. The habitat parameters include elevation, urbanization, building area, farm area, land reconsolidation, forest area, irrigation area, farm pond area and lawn area; the biological factors include reproductive capacity, habitat condition, hydrological condition and food source. This structure can estimate the interaction between habitat and biological parameters in the spatiotemporal abundance distribution. In addition, the results can define reasonable relationships among the hidden states and provide decision-makers with a sound basis for evaluation.
A Bayesian estimate of the CMB-large scale structure cross-correlation
Santos, E Moura; Penna-Lima, M; Novaes, C P; Wuensche, C A
2015-01-01
Evidence for late-time acceleration of the Universe is provided by multiple complementary probes, such as observations of distant Type Ia supernovae (SNIa), the cosmic microwave background (CMB), baryon acoustic oscillations (BAO), large scale structure (LSS), and the integrated Sachs-Wolfe (ISW) effect. In this work we focus on the ISW effect, which consists of small secondary fluctuations in the CMB produced whenever the gravitational potentials evolve due to transitions between dominating fluids, e.g., from the matter to the dark energy dominated phase. Therefore, if we assume a flat universe, as supported by primary CMB data, then a detection of the ISW effect can be correlated to a measurement of dark energy and its properties. In this work, we present a Bayesian estimate of the CMB-LSS cross-correlation signal. As local tracers of the matter distribution at large scales we have used the Two Micron All Sky Survey (2MASS) galaxy catalog and, for the CMB temperature fluctuations, the nine-year data release of the W...
Minimum mean square error estimation and approximation of the Bayesian update
Litvinenko, Alexander
2015-01-07
Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(w), and a measurement operator Y(u(q); q), where u(q; w) is the uncertain solution. Aim: to identify q(w). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(w) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. a polynomial chaos expansion (PCE). New: we derive linear, quadratic, etc., approximations of the full Bayesian update.
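A sample-based sketch of the linear approximation of the Bayesian update mentioned above (a Kalman-like map built from empirical covariances of an ensemble; the quadratic and higher-order variants, and the PCE representation, are not shown):

```python
import numpy as np

def linear_bayes_update(q_samples, y_samples, z, noise_cov):
    # Linear conditional-expectation (Kalman-like) update applied per sample:
    #   q_a = q_f + K (z - y_f),  K = C_qy (C_yy + R)^-1,
    # with the covariances estimated from the ensemble.
    dq = q_samples - q_samples.mean(axis=0)
    dy = y_samples - y_samples.mean(axis=0)
    n = len(q_samples)
    C_qy = dq.T @ dy / (n - 1)
    C_yy = dy.T @ dy / (n - 1) + noise_cov
    K = C_qy @ np.linalg.inv(C_yy)
    return q_samples + (z - y_samples) @ K.T

# Sanity check of the mechanics: with an identity observation operator and
# nearly noise-free data, every prior sample is pulled onto the measurement z.
rng = np.random.default_rng(1)
q_prior = rng.normal(0.0, 1.0, size=(2000, 1))
q_post = linear_bayes_update(q_prior, q_prior.copy(), np.array([0.5]),
                             np.array([[1e-9]]))
```

For a genuinely nonlinear measurement operator, y_samples would be the forward-model outputs Y(u(q); q) evaluated at each prior sample, and the update is only the best *linear* approximation of the conditional expectation.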
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since exact inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6 hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method with unknown nuclide ratios is given to prove the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases
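A minimal sketch of the linear inverse formulation y = Mx with positivity. Here positivity is enforced by projected gradient descent on a Gaussian-prior MAP objective, a simple stand-in for the truncated-Gaussian Variational Bayes treatment described in the abstract; the prior precision is a fixed assumed value rather than estimated from the data:

```python
import numpy as np

def estimate_source_term(M, y, prior_prec=1e-3, iters=5000):
    # MAP sketch for y = M x with a weak zero-mean Gaussian prior on x.
    # Positivity is enforced by projecting each gradient step onto x >= 0.
    MtM = M.T @ M + prior_prec * np.eye(M.shape[1])
    Mty = M.T @ y
    lr = 1.0 / np.linalg.eigvalsh(MtM).max()  # step size from largest eigenvalue
    x = np.zeros(M.shape[1])
    for _ in range(iters):
        x = np.clip(x - lr * (MtM @ x - Mty), 0.0, None)  # projected gradient step
    return x
```

In the full method, the diagonal prior covariance built from the approximately known nuclide ratios would replace the isotropic `prior_prec` term and would itself be inferred alongside x.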
Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal
2017-04-01
The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant for understanding the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on linear mixing assumptions for the CSSI signatures of sources in the sediment, without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentrations in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land use sources. Soil samples from the land use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions, based on the aggregated FA concentrations of the sources, biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentrations on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of source contributions to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable
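The concentration dependence discussed above can be illustrated with the standard concentration-weighted mixing equation (a generic sketch of the forward model, not the MixSIAR implementation):

```python
def mixture_signature(fractions, concentrations, deltas):
    # Concentration-weighted tracer signature of a mixture: a source's
    # contribution to the mixture's delta-13C is weighted by its mass
    # fraction times its fatty-acid concentration.
    num = sum(f * c * d for f, c, d in zip(fractions, concentrations, deltas))
    den = sum(f * c for f, c in zip(fractions, concentrations))
    return num / den
```

With equal mass fractions, the FA-richer source pulls the mixture signature toward its own δ13C; ignoring the concentrations (i.e. assuming them equal) is exactly the bias that the concentration-independent model exhibits.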
Directory of Open Access Journals (Sweden)
Li Sen
2012-03-01
Full Text Available Abstract Background The Approximate Bayesian Computation (ABC) approach has been used to infer demographic parameters for numerous species, including humans. However, most applications of ABC still use limited amounts of data, from a small number of loci, compared to the large amount of genome-wide population-genetic data which have become available in the last few years. Results We evaluated the performance of the ABC approach for three 'population divergence' models - similar to the 'isolation with migration' model - when the data consist of several hundred thousand SNPs typed for multiple individuals, by simulating data from known demographic models. The ABC approach was used to infer demographic parameters of interest, and we compared the inferred values to the true parameter values that were used to generate the hypothetical "observed" data. For all three case models, the ABC approach inferred most demographic parameters quite well with narrow credible intervals, for example, population divergence times and past population sizes, but some parameters were more difficult to infer, such as present population sizes and migration rates. We compared the ability of different summary statistics to infer demographic parameters, including haplotype- and LD-based statistics, and found that the accuracy of the parameter estimates can be improved by combining summary statistics that capture different parts of the information in the data. Furthermore, our results suggest that poor choices of prior distributions can in some circumstances be detected using ABC. Finally, increasing the amount of data beyond a few hundred loci will substantially improve the accuracy of many parameter estimates using ABC. Conclusions We conclude that the ABC approach can accommodate realistic genome-wide population genetic data, which may be difficult to analyze with full likelihood approaches, and that ABC can provide accurate and precise inference of demographic parameters from
Real time bayesian estimation of the epidemic potential of emerging infectious diseases.
Directory of Open Access Journals (Sweden)
Luís M A Bettencourt
Full Text Available BACKGROUND: Fast changes in human demographics worldwide, coupled with increased mobility and modified land uses, make the threat of emerging infectious diseases increasingly important. Currently there is worldwide alert for H5N1 avian influenza becoming as transmissible in humans as seasonal influenza, and potentially causing a pandemic of unprecedented proportions. Here we show how epidemiological surveillance data for emerging infectious diseases can be interpreted in real time to assess changes in transmissibility with quantified uncertainty, and to perform running time predictions of new cases and guide logistics allocations. METHODOLOGY/PRINCIPAL FINDINGS: We develop an extension of standard epidemiological models, appropriate for emerging infectious diseases, that describes the probabilistic progression of case numbers due to the concurrent effects of (incipient) human transmission and multiple introductions from a reservoir. The model is cast in terms of surveillance observables and immediately suggests a simple graphical estimation procedure for the effective reproductive number R (mean number of cases generated by an infectious individual) of standard epidemics. For emerging infectious diseases, which typically show large relative case number fluctuations over time, we develop a Bayesian scheme for real time estimation of the probability distribution of the effective reproduction number and show how to use such inferences to formulate significance tests on future epidemiological observations. CONCLUSIONS/SIGNIFICANCE: Violations of these significance tests define statistical anomalies that may signal changes in the epidemiology of emerging diseases and should trigger further field investigation. We apply the methodology to case data from World Health Organization reports to place bounds on the current transmissibility of H5N1 influenza in humans and establish a statistical basis for monitoring its evolution in real time.
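The estimation scheme described above can be illustrated with a minimal conjugate sketch. Assuming new cases per generation are Poisson with mean R times the previous generation's count, and a Gamma prior on R, the posterior updates in closed form (the reservoir-introduction term of the actual model is omitted, and the counts below are hypothetical):

```python
# Sequential Bayesian estimate of the effective reproduction number R.
# Simplified conjugate sketch: new cases per generation ~ Poisson(R * previous
# cases), with a Gamma(a, b) prior on R. The paper's model additionally handles
# introductions from a reservoir; that term is omitted here.

def update_r_posterior(cases, a=1.0, b=1.0):
    """Return the Gamma(a, b) posterior for R after a series of case counts."""
    for prev, new in zip(cases, cases[1:]):
        a += new    # shape grows with observed "offspring" cases
        b += prev   # rate grows with exposure (currently infectious cases)
    return a, b

cases = [2, 3, 5, 8, 11]          # hypothetical surveillance counts per generation
a, b = update_r_posterior(cases)
print(round(a / b, 3))            # posterior mean of R (> 1 here: growing epidemic)
```

The full posterior, not just its mean, is available from the Gamma(a, b) form, which is what permits the significance tests on future observations described above.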
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov chain Monte Carlo approaches in cases where the likelihood is intractable or unknown. The ABC method is called "likelihood free" as it avoids explicit evaluation of the likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimates using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted
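The likelihood-free idea behind ABC samplers such as astroABC can be sketched with a minimal rejection sampler (astroABC itself uses the more efficient SMC scheme; the toy Gaussian forward model below is illustrative only, not astroABC's API):

```python
# Minimal rejection-ABC sketch of the likelihood-free idea: draw parameters
# from the prior, run the forward model, and keep draws whose summary
# statistic lands within a tolerance of the observed one.
import random

random.seed(0)

def forward_model(mu, n=50):
    """Toy simulator: n Gaussian draws with unknown mean mu, unit variance."""
    return [random.gauss(mu, 1.0) for _ in range(n)]

def summary(data):
    return sum(data) / len(data)        # sample mean as the summary statistic

observed = summary(forward_model(2.0))  # stand-in for the real observed data

accepted = []
for _ in range(5000):
    mu = random.uniform(-5, 5)          # draw from a flat prior
    sim = summary(forward_model(mu))
    if abs(sim - observed) < 0.1:       # distance metric and tolerance
        accepted.append(mu)

posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))         # typically close to the true value 2.0
```

SMC samplers improve on this by shrinking the tolerance over a sequence of weighted particle populations instead of rejecting most prior draws outright.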
A Bayesian Estimate of the CMB-Large-scale Structure Cross-correlation
Moura-Santos, E.; Carvalho, F. C.; Penna-Lima, M.; Novaes, C. P.; Wuensche, C. A.
2016-08-01
Evidence for late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs-Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter and dark energy (DE) dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB-LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined set consisting of a CMB temperature map and a galaxy contrast map, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, therefore avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.
Kittisuwan, Pichid
2015-03-01
The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems. The denoising of natural images corrupted by Gaussian noise is a classical problem in image processing, and an indispensable processing step. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is to estimate the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate the local observed variance, with a generalized Gamma density prior for the local observed variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our selection of prior distribution is motivated by the efficient and flexible properties of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
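The MAP shrinkage idea can be sketched for a single coefficient: with Gaussian noise of variance sigma^2 and a Laplacian prior of scale b, the MAP estimate reduces to soft thresholding (the paper's generalized Gamma prior on the local variance is a refinement omitted here; the numbers below are hypothetical):

```python
# MAP shrinkage of noisy wavelet coefficients: with Gaussian noise of
# variance sigma^2 and a Laplacian prior of scale b on the clean coefficient,
# the MAP estimate is soft thresholding with threshold sigma^2 / b.
# (Local-variance estimation via a generalized Gamma prior, as in the
# paper, is omitted in this sketch.)

def soft_threshold(y, sigma2, b):
    t = sigma2 / b          # threshold from noise variance and prior scale
    if y > t:
        return y - t        # large positive coefficients are shrunk
    if y < -t:
        return y + t        # large negative coefficients are shrunk
    return 0.0              # small coefficients are set to zero

noisy = [3.0, 0.4, -2.5, 0.1]
denoised = [soft_threshold(y, sigma2=1.0, b=2.0) for y in noisy]
print(denoised)             # small coefficients zeroed, large ones shrunk
```

A Gaussian prior would instead give linear (Wiener-style) shrinkage; the Laplacian prior's hard zeroing of small coefficients is what makes it sparsity-inducing.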
Chun, Kwang-Soo; Lee, Yong-Taek; Park, Jong-Wan; Lee, Joon-Youn; Park, Chul-Hyun; Yoon, Kyung Jae
2016-02-01
To compare diffusion tensor tractography (DTT) and motor evoked potentials (MEPs) for estimation of clinical status in patients in the subacute stage of stroke. Patients with hemiplegia due to stroke who were evaluated using both DTT and MEPs between May 2012 and April 2015 were recruited. Clinical assessments investigated upper extremity motor and functional status. Motor status was evaluated using Medical Research Council grading and the Fugl-Meyer Assessment of upper limb and hand (FMA-U and FMA-H). Functional status was measured using the Modified Barthel Index (MBI). Patients were classified into subgroups according to DTT findings, MEP presence, fractional anisotropy (FA) value, FA ratio (rFA), and central motor conduction time (CMCT). Correlations of clinical assessments with DTT parameters and MEPs were estimated. Fifty-five patients with hemiplegia were recruited. In motor assessments (FMA-U), MEPs had the highest sensitivity and negative predictive value (NPV) as well as the second highest specificity and positive predictive value (PPV). CMCT showed the highest specificity and PPV. Regarding functional status (MBI), FA showed the highest sensitivity and NPV, whereas CMCT had the highest specificity and PPV. Correlation analysis showed that the resting motor threshold (RMT) ratio was strongly associated with motor status of the upper limb, and MEP parameters were not associated with MBI. DTT and MEPs could be suitable complementary modalities for analyzing the motor and functional status of patients in the subacute stage of stroke. The RMT ratio was strongly correlated with motor status.
National Aeronautics and Space Administration — The application of the Bayesian theory of managing uncertainty and complexity to regression and classification in the form of Relevance Vector Machine (RVM), and to...
Carvalho, Pedro; Marques, Rui Cunha
2016-02-15
This study aims to search for economies of size and scope in the Portuguese water sector, applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). This study proves the usefulness and advantages of applying Bayesian statistics for making inference in SFA over traditional SFA, which uses only classical statistics. The resulting Bayesian methods allow overcoming some problems that arise in the application of traditional SFA, such as bias in small samples and skewness of residuals. In the present case study of the water sector in Portugal, these Bayesian methods provide more plausible and acceptable results. Based on the results obtained, we found that there are important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to the considerable advantages of undertaking mergers by joining the retail and wholesale components and by joining the drinking water and wastewater services.
Kim, J.; Kwon, H. H.
2014-12-01
The existing regional frequency analysis has disadvantages in that it is difficult to consider geographical characteristics in estimating areal rainfall. In this regard, this study aims to develop a hierarchical Bayesian model based regional frequency analysis in which spatial patterns of the design rainfall and geographical information are explicitly incorporated. This study assumes that the parameters of the Gumbel distribution are a function of geographical characteristics (e.g. altitude, latitude and longitude) within a general linear regression framework. Posterior distributions of the regression parameters are estimated by the Bayesian Markov chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the Gumbel distribution using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han River watershed. It was found that the proposed Bayesian regional frequency analysis model showed similar results to L-moment based regional frequency analysis. In addition, the model showed an advantage in terms of quantifying the uncertainty of the design rainfall and estimating the areal rainfall considering geographical information. Acknowledgement: This research was supported by a grant (14AWMP-B079364-01) from the Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
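The interpolation step can be sketched as follows: Gumbel parameters expressed as linear functions of geographical covariates, then inverted for a design quantile (the regression coefficients below are hypothetical, not the fitted Han River values):

```python
# Design rainfall from a Gumbel distribution whose parameters are linear
# functions of geographical covariates, as in the hierarchical model.
# The regression coefficients are hypothetical stand-ins, not fitted values.
import math

def gumbel_quantile(mu, beta, T):
    """Rainfall depth with return period T years: mu - beta*ln(-ln(1 - 1/T))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def location_params(altitude_m, lat):
    # Hypothetical linear links from the general linear regression framework.
    mu = 50.0 + 0.02 * altitude_m + 1.5 * (lat - 37.0)
    beta = 15.0 + 0.005 * altitude_m
    return mu, beta

mu, beta = location_params(altitude_m=200.0, lat=37.5)
x100 = gumbel_quantile(mu, beta, T=100)   # 100-year design rainfall (mm)
print(round(x100, 1))
```

In the full Bayesian version, each MCMC draw of the regression coefficients yields one such quantile surface, so the spread of draws quantifies the design-rainfall uncertainty directly.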
Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.
2008-12-01
Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh waves; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the
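The linearized inversion at the core of TDMT can be sketched as an ordinary least-squares problem, since the synthetic waveforms are linear in the moment tensor elements (the kernel and data below are random stand-ins, not real Green's functions):

```python
# Linearized waveform inversion sketch: with synthetics linear in the
# moment tensor elements (d = G m), the least-squares solution minimizes
# the misfit between observed and synthetic waveforms, as in TDMT.
# G and d are small hypothetical arrays, not real Green's functions.
import numpy as np

rng = np.random.default_rng(0)
m_true = np.array([1.0, -0.5, 0.3, 0.2, -0.1, 0.4])  # 6 moment tensor elements
G = rng.normal(size=(200, 6))                        # Green's-function kernel
d = G @ m_true + 0.01 * rng.normal(size=200)         # noisy "observed" waveforms

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.round(m_est, 2))    # recovered elements, close to m_true
```

The station time shifts mentioned above enter before this step, by realigning the rows of d against the synthetics; the inversion itself stays linear.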
Energy Technology Data Exchange (ETDEWEB)
Itagaki, H. [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H.; Ito, S. [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M.
1996-12-31
Risk-assessed structural positions in the pressurized fuselage of a transport-type aircraft designed for damage tolerance are taken up as the subject of discussion. A small number of data obtained from inspections of these positions was used to discuss a Bayesian reliability analysis that can also estimate a proper non-periodic inspection schedule while estimating proper values for the uncertain factors. As a result, the time period over which fatigue cracks are generated was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating values that are thought reasonable, and a proper inspection schedule using these values, despite placing the fatigue crack growth expression in a very simple form and treating both factors as uncertain. Thus, the effectiveness of the present analysis method was verified. This study also discussed the structural positions, the modeling of fatigue cracks generated and developing in these positions, the conditions for failure, damage factors, and the capability of the inspection from different viewpoints. This reliability analysis method is thought to be effective for other structures as well, such as offshore structures. 18 refs., 8 figs., 1 tab.
Modeling the vertical soil organic matter profile using Bayesian parameter estimation
Directory of Open Access Journals (Sweden)
M. C. Braakhekke
2013-01-01
Full Text Available The vertical distribution of soil organic matter (SOM) in the profile may constitute an important factor for soil carbon cycling. However, the formation of the SOM profile is currently poorly understood due to equifinality, caused by the entanglement of several processes: input from roots, mixing due to bioturbation, and organic matter leaching. In this study we quantified the contribution of these three processes using Bayesian parameter estimation for the mechanistic SOM profile model SOMPROF. Based on organic carbon measurements, 13 parameters related to decomposition and transport of organic matter were estimated for two temperate forest soils: an Arenosol with a mor humus form (Loobos, the Netherlands) and a Cambisol with mull-type humus (Hainich, Germany). Furthermore, the use of the radioisotope ^{210}Pb_{ex} as a tracer for vertical SOM transport was studied. For Loobos, the calibration results demonstrate the importance of organic matter transport with the liquid phase for shaping the vertical SOM profile, while the effects of bioturbation are generally negligible. These results are in good agreement with expectations given in situ conditions. For Hainich, the calibration offered three distinct explanations for the observations (three modes in the posterior distribution). With the addition of ^{210}Pb_{ex} data and prior knowledge, as well as additional information about in situ conditions, we were able to identify the most likely explanation, which indicated that root litter input is a dominant process for the SOM profile. For both sites the organic matter appears to comprise mainly adsorbed but potentially leachable material, pointing to the importance of organo-mineral interactions. Furthermore, organic matter in the mineral soil appears to be mainly derived from root litter, supporting previous studies that highlighted the importance of root input for soil carbon sequestration.
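The Bayesian parameter estimation used for such model calibrations can be sketched with a generic random-walk Metropolis sampler (a toy one-parameter Gaussian likelihood stands in for the actual SOMPROF model; the data are hypothetical):

```python
# Generic random-walk Metropolis sketch of Bayesian model calibration.
# A toy one-parameter Gaussian likelihood stands in for the mechanistic
# model; in practice theta would be the 13-dimensional SOMPROF vector.
import math
import random

random.seed(1)

data = [1.8, 2.2, 2.0, 1.9, 2.1]    # hypothetical observations

def log_posterior(theta):
    # Flat prior on [0, 10]; Gaussian likelihood with unit variance.
    if not 0.0 <= theta <= 10.0:
        return -math.inf
    return -0.5 * sum((x - theta) ** 2 for x in data)

theta, samples = 5.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.5)           # random-walk step
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                                # accept
    samples.append(theta)

burned = samples[5000:]                                 # discard burn-in
print(round(sum(burned) / len(burned), 2))              # posterior mean near 2.0
```

A histogram of `burned` would reveal multimodality when it exists, which is exactly how the three distinct explanations (posterior modes) for Hainich were detected.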
Modeling the vertical soil organic matter profile using Bayesian parameter estimation
Directory of Open Access Journals (Sweden)
M. C. Braakhekke
2012-08-01
Full Text Available The vertical distribution of soil organic matter (SOM) in the profile may constitute a significant factor for soil carbon cycling. However, the formation of the SOM profile is currently poorly understood due to equifinality, caused by the entanglement of several processes: input from roots, mixing due to bioturbation, and organic matter leaching. In this study we quantified the contribution of these three processes using Bayesian parameter estimation for the mechanistic SOM profile model SOMPROF. Based on organic carbon measurements, 13 parameters related to decomposition and transport of organic matter were estimated for two temperate forest soils: an Arenosol with a mor humus form (Loobos, The Netherlands) and a Cambisol with mull-type humus (Hainich, Germany). Furthermore, the use of the radioisotope ^{210}Pb_{ex} as a tracer for vertical SOM transport was studied.
For Loobos the calibration results demonstrate the importance of liquid phase transport for shaping the vertical SOM profile, while the effects of bioturbation are generally negligible. These results are in good agreement with expectations given in situ conditions. For Hainich the calibration offered three distinct explanations for the observations (three modes in the posterior distribution). With the addition of ^{210}Pb_{ex} data and prior knowledge, as well as additional information about in situ conditions, we were able to identify the most likely explanation, which identified root litter input as the dominant process for the SOM profile. For both sites the organic matter appears to comprise mainly adsorbed but potentially leachable material, pointing to the importance of organo-mineral interactions. Furthermore, organic matter in the mineral soil appears to be mainly derived from root litter, supporting previous studies that highlighted the importance of root input for soil carbon sequestration.
McAloon, Conor G; Doherty, Michael L; Whyte, Paul; O'Grady, Luke; More, Simon J; Messam, Locksley L McV; Good, Margaret; Mullowney, Peter; Strain, Sam; Green, Martin J
2016-06-01
Bovine paratuberculosis is a disease characterised by chronic granulomatous enteritis which manifests clinically as a protein-losing enteropathy causing diarrhoea, hypoproteinaemia, emaciation and, eventually, death. Some evidence exists to suggest a possible zoonotic link, and a national voluntary Johne's Disease (JD) Control Programme was initiated by Animal Health Ireland in 2013. The objective of this study was to estimate the herd-level true prevalence (HTP) and animal-level true prevalence (ATP) of paratuberculosis in Irish herds enrolled in the national voluntary JD control programme during 2013-14. Two datasets were used in this study. The first dataset had been collected in Ireland during 2005 (5822 animals from 119 herds) and was used to construct model priors. Model priors were updated with a primary (2013-14) dataset which included test records from 99,101 animals in 1039 dairy herds and was generated as part of the national voluntary JD control programme. The posterior estimate of HTP from the final Bayesian model was 0.23-0.34 with a 95% probability. Across all herds, the median ATP was found to be 0.032 (0.009, 0.145). This study represents the first use of Bayesian methodology to estimate the prevalence of paratuberculosis in Irish dairy herds. The HTP estimate was higher than previous Irish estimates but still lower than estimates from other major dairy producing countries.
Energy Technology Data Exchange (ETDEWEB)
Carvalho, Pedro, E-mail: pedrocarv@coc.ufrj.br [Computational Modelling in Engineering and Geophysics Laboratory (LAMEMO), Department of Civil Engineering, COPPE, Federal University of Rio de Janeiro, Av. Pedro Calmon - Ilha do Fundão, 21941-596 Rio de Janeiro (Brazil); Center for Urban and Regional Systems (CESUR), CERIS, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisbon (Portugal); Marques, Rui Cunha, E-mail: pedro.c.carvalho@tecnico.ulisboa.pt [Center for Urban and Regional Systems (CESUR), CERIS, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisbon (Portugal)
2016-02-15
This study aims to search for economies of size and scope in the Portuguese water sector, applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). This study proves the usefulness and advantages of applying Bayesian statistics for making inference in SFA over traditional SFA, which uses only classical statistics. The resulting Bayesian methods allow overcoming some problems that arise in the application of traditional SFA, such as bias in small samples and skewness of residuals. In the present case study of the water sector in Portugal, these Bayesian methods provide more plausible and acceptable results. Based on the results obtained, we found that there are important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to the considerable advantages of undertaking mergers by joining the retail and wholesale components and by joining the drinking water and wastewater services. - Highlights: • This study aims to search for economies of size and scope in the water sector; • The usefulness of the application of Bayesian methods is highlighted; • Important economies of output density, economies of size, economies of vertical integration and economies of scope are found.
Uncertainty of mass discharge estimates from contaminated sites using a fully Bayesian framework
DEFF Research Database (Denmark)
Troldborg, Mads; Nowak, Wolfgang; Binning, Philip John
2011-01-01
plane. The method accounts for: (1) conceptual model uncertainty through Bayesian model averaging, (2) heterogeneity through Bayesian geostatistics with an uncertain geostatistical model, and (3) measurement uncertainty. An ensemble of unconditional steady-state plume realizations is generated through...... Monte Carlo simulation. By use of the Kalman Ensemble Generator, these realizations are conditioned on site-specific data. Hereby a posterior ensemble of realizations, all honouring the measured data at the control plane, is generated for each of the conceptual models considered. The ensembles from...
DEFF Research Database (Denmark)
Troldborg, Mads; Nowak, W.; Binning, Philip John
2010-01-01
for quantifying the uncertainty in the mass discharge across a multilevel control plane. The method is based on geostatistical inverse modelling and accounts for i) conceptual model uncertainty through multiple conceptual models and Bayesian model averaging, ii) heterogeneity through Bayesian geostatistics...... with an uncertain geostatistical model and iii) measurement uncertainty. The method is tested on a TCE contaminated site for which four different conceptual models were set up. The mass discharge and the associated uncertainty are hereby determined. It is discussed which of the conceptual models is most likely...
Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin;
2015-01-01
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued m......
Salinas, Jose Luis; Kiss, Andrea; Viglione, Alberto; Blöschl, Günter
2016-04-01
Efforts of the historical environmental extremes community during the last decades have yielded long time series of historical floods, which in some cases extend more than 500 years into the past. In hydrological engineering, historical floods are useful because they provide additional information that improves the estimates of discharges with low annual exceedance probabilities, i.e. with high return periods, and may additionally reduce the uncertainty in those estimates. In order to use historical floods in formal flood frequency analysis, the precise value of the peak discharges would ideally be known, but in most cases the information related to historical floods is given, quantitatively, in a non-precise manner. This work presents an approach to dealing with non-precise historical floods by linking the descriptions in historical records to fuzzy numbers representing discharges. These fuzzy historical discharges are then introduced into a formal Bayesian inference framework, taking into account the arithmetic of non-precise numbers modelled by fuzzy logic theory, to obtain a fuzzy version of the flood frequency curve combining the fuzzy historical flood events and the instrumental data for a given location. Two case studies are selected from the historical literature, representing different facets of the fuzziness present in the historical sources. The results from the case studies are given in the form of fuzzy estimates of the flood frequency curves together with the fuzzy 5% and 95% Bayesian credibility bounds for these curves. The presented fuzzy Bayesian inference framework provides a flexible methodology to propagate, in an explicit way, the imprecision from the historical records into the flood frequency estimate, which makes it possible to assess the effect that the incorporation of non-precise historical information can have on the flood frequency regime.
Estimation of mutation rates from paternity cases using a Bayesian network
DEFF Research Database (Denmark)
Vicard, P.; Dawid, A.P.; Mortera, J.
and paternal mutation rates, while allowing a wide variety of mutation models. A Bayesian network is constructed to facilitate computation of the likelihood function for the mutation parameters. It can process both full and summary genotypic information, from both complete putative father-mother-child triplets...
Direct Estimation of the Minimum RSS Value for Training Bayesian Knowledge Tracing Parameters
Martori, Francesc; Cuadros, Jordi; González-Sabaté, Lucinio
2015-01-01
Student modeling can help guide the behavior of a cognitive tutor system and provide insight to researchers on understanding how students learn. In this context, Bayesian Knowledge Tracing (BKT) is one of the most popular knowledge inference models due to its predictive accuracy, interpretability and ability to infer student knowledge. However,…
A comparison of least-squares and Bayesian minimum risk edge parameter estimation
Mulder, Nanno J.; Abkar, Ali A.
1999-01-01
The problem considered here is to compare two methods for finding a common boundary between two objects with two unknown geometric parameters, such as edge position and edge orientation. We compare two model-based approaches: the least squares and the minimum Bayesian risk method. An expression is d
A method for real-time condition monitoring of haul roads based on Bayesian parameter estimation
CSIR Research Space (South Africa)
Heyns, T
2012-04-01
Full Text Available and to the vehicles. A recent idea is that vehicle on-board data collection systems could be used to monitor haul roads on a real-time basis by means of vibration signature analysis. This paper proposes a methodology based on Bayesian regression to isolate the effect...
Directory of Open Access Journals (Sweden)
M. Tahir
2016-12-01
Full Text Available Compared to simple models, mixture models of underlying lifetime distributions are intuitively more appropriate and appealing for modelling the heterogeneous nature of a process. This study focuses on the problem of estimating the parameters of a newly developed 3-component mixture of Burr Type-XII distributions using Type-I right censored data. Firstly, within a Bayesian framework, some mathematical properties of the 3-component mixture of Burr Type-XII distributions are discussed. These properties include Bayes estimators and posterior risks for the unknown component and proportion parameters using non-informative and informative priors under the squared error loss function, the precautionary loss function and the DeGroot loss function. Secondly, for the case when no or little prior information is available, the elicitation of hyperparameters is given. Also, the posterior predictive distribution for a future observation and the Bayesian predictive interval are constructed. Moreover, the limiting expressions for the Bayes estimators and posterior risks are derived. In addition, the performance of the Bayes estimators for different sample sizes, test termination times and parametric values under different loss functions is investigated. Finally, simulated datasets are designed for the different comparisons and the model is illustrated using real data.
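Given posterior draws for a parameter, the Bayes estimators under the three loss functions mentioned have simple closed forms: the posterior mean under squared error loss, the square root of the posterior second moment under precautionary loss, and the ratio of second to first posterior moments under DeGroot loss. A sketch with hypothetical draws:

```python
# Bayes estimators from posterior draws under the three loss functions
# named in the study: squared error (posterior mean), precautionary
# (sqrt of the posterior second moment) and DeGroot (E[theta^2]/E[theta]).
# The draws are hypothetical stand-ins for an MCMC sample.

def bayes_estimators(draws):
    n = len(draws)
    m1 = sum(draws) / n                    # posterior first moment E[theta]
    m2 = sum(t * t for t in draws) / n     # posterior second moment E[theta^2]
    return {
        "squared_error": m1,
        "precautionary": m2 ** 0.5,
        "degroot": m2 / m1,
    }

draws = [1.0, 1.2, 0.9, 1.1, 1.3]
est = bayes_estimators(draws)
print({k: round(v, 4) for k, v in est.items()})
```

Note that the precautionary and DeGroot estimators always exceed the posterior mean for a positive parameter, reflecting the asymmetric penalty those losses place on underestimation.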
Bouwknegt, M; Engel, B; Herremans, M M P T; Widdowson, M A; Worm, H C; Koopmans, M P G; Frankena, K; de Roda Husman, A M; De Jong, M C M; Van Der Poel, W H M
2008-04-01
Hepatitis E virus (HEV) is ubiquitous in pigs worldwide and may be zoonotic. Previous HEV seroprevalence estimates for groups of people working with swine were higher than for control groups. However, discordance among results of anti-HEV assays means that true seroprevalence estimates, i.e. seroprevalence due to previous exposure to HEV, depend on the choice of seroassay. We tested blood samples from three subpopulations (49 swine veterinarians, 153 non-swine veterinarians and 644 randomly selected individuals from the general population) with one IgM and two IgG ELISAs, and subsets with IgG and/or IgM Western blots. A Bayesian stochastic model was used to combine the results of all assays. The model accounted for the imperfection of each assay by estimating sensitivity and specificity, and accounted for dependence between serological assays. As expected, discordance among assay results occurred. Applying the model yielded seroprevalence estimates of approximately 11% for swine veterinarians, approximately 6% for non-swine veterinarians and approximately 2% for the general population. By combining the results of five serological assays in a Bayesian stochastic model we confirmed that exposure to swine or their environment was associated with elevated HEV seroprevalence.
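The core correction for assay imperfection can be sketched for a single test with assumed sensitivity and specificity (the study's full model jointly estimates these and the between-assay dependence, which this grid-posterior toy omits):

```python
# Grid posterior for true seroprevalence pi from one imperfect assay.
# Apparent prevalence: p_app = pi*Se + (1 - pi)*(1 - Sp). The Se = 0.85
# and Sp = 0.95 values are assumed for illustration, not the paper's;
# the full model estimates them jointly with assay dependence.
from math import lgamma, log, exp

def log_binom(k, n, p):
    """Log binomial likelihood of k positives out of n at probability p."""
    p = min(max(p, 1e-12), 1 - 1e-12)
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def posterior_mean_true_prev(positives, n, se=0.85, sp=0.95, grid=1000):
    pis = [(i + 0.5) / grid for i in range(grid)]          # flat prior on pi
    weights = [exp(log_binom(positives, n, pi * se + (1 - pi) * (1 - sp)))
               for pi in pis]
    total = sum(weights)
    return sum(p * w for p, w in zip(pis, weights)) / total

print(round(posterior_mean_true_prev(positives=10, n=100), 3))
```

With 10/100 positives the posterior centres near (0.10 - (1 - Sp)) / (Se + Sp - 1), i.e. the classical Rogan-Gladen correction, but the grid posterior also carries the uncertainty that point formula discards.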
Khoshravesh, Mojtaba; Sefidkouhi, Mohammad Ali Gholami; Valipour, Mohammad
2017-07-01
The proper evaluation of evapotranspiration is essential in food security investigation, farm management, pollution detection, irrigation scheduling, nutrient flows, carbon balance and hydrologic modeling, especially in arid environments. To achieve sustainable development and to ensure water supply, especially in arid environments, irrigation experts need tools to estimate reference evapotranspiration on a large scale. In this study, the monthly reference evapotranspiration was estimated by three different regression models, including the multivariate fractional polynomial (MFP), robust regression, and Bayesian regression, in Ardestan, Esfahan, and Kashan. The results were compared with the Food and Agriculture Organization (FAO) Penman-Monteith (FAO-PM) method to select the best model. The results show that at a monthly scale, all models provided close agreement with the FAO-PM values (R² > 0.95 and RMSE < 12.07 mm month⁻¹). However, the MFP model gives better estimates than the other two models for estimating reference evapotranspiration at all stations.
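The agreement statistics quoted above (R² and RMSE against FAO-PM) can be computed as follows; the monthly evapotranspiration values are invented placeholders:

```python
import numpy as np

# Hypothetical monthly reference ET values (mm/month): FAO-PM vs. a model.
et_fao = np.array([45.0, 60.0, 95.0, 140.0, 190.0, 230.0,
                   245.0, 225.0, 170.0, 110.0, 65.0, 48.0])
et_model = et_fao + np.array([3, -4, 5, -6, 8, -7, 6, -5, 4, -3, 2, -1], float)

# Root-mean-square error and coefficient of determination against FAO-PM.
rmse = np.sqrt(np.mean((et_model - et_fao) ** 2))
ss_res = np.sum((et_fao - et_model) ** 2)
ss_tot = np.sum((et_fao - et_fao.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(rmse, r2)  # this toy model meets the R² > 0.95 threshold from the abstract
```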
Application of bayesian networks to real-time flood risk estimation
Garrote, L.; Molina, M.; Blasco, G.
2003-04-01
This paper presents the application of a computational paradigm taken from the field of artificial intelligence - the Bayesian network - to model the behaviour of hydrologic basins during floods. The final goal of this research is to develop representation techniques for hydrologic simulation models in order to define, develop and validate a mechanism, supported by a software environment, oriented to build decision models for the prediction and management of river floods in real time. The emphasis is placed on providing decision makers with tools to incorporate their knowledge of basin behaviour, usually formulated in terms of rainfall-runoff models, in the process of real-time decision making during floods. A rainfall-runoff model is only a step in the process of decision making. If a reliable rainfall forecast is available and the rainfall-runoff model is well calibrated, decisions can be based mainly on model results. However, in most practical situations, uncertainties in rainfall forecasts or model performance have to be incorporated in the decision process. The computational paradigm adopted for the simulation of hydrologic processes is the Bayesian network: a directed acyclic graph that represents causal influences between linked variables. Under this representation, uncertain qualitative variables are related through causal relations quantified with conditional probabilities. The solution algorithm allows the computation of the expected probability distribution of unknown variables conditioned on the observations. An approach to representing hydrologic processes by Bayesian networks with temporal and spatial extensions is presented in this paper, together with a methodology for the development of Bayesian models using results produced by deterministic hydrologic simulation models.
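A Bayesian network reduces flood-risk queries to sums of products of conditional probabilities. A toy two-arc chain (rain → high runoff → flood, with made-up probabilities), evaluated by exact enumeration:

```python
# Minimal chain Rain -> HighRunoff -> Flood with illustrative (invented)
# conditional probabilities; inference by exact enumeration over HighRunoff.
p_runoff_given = {True: 0.8, False: 0.1}   # P(high runoff | rain observed?)
p_flood_given = {True: 0.6, False: 0.05}   # P(flood | high runoff?)

def p_flood(rain_observed: bool) -> float:
    """Marginalize the hidden runoff state given the rain observation."""
    total = 0.0
    for runoff in (True, False):
        p_r = p_runoff_given[rain_observed] if runoff else 1 - p_runoff_given[rain_observed]
        total += p_r * p_flood_given[runoff]
    return total

print(p_flood(True))   # 0.8*0.6 + 0.2*0.05 = 0.49
print(p_flood(False))  # 0.1*0.6 + 0.9*0.05 = 0.105
```

Real flood networks add many more nodes (forecasts, gauge readings, reservoir states), but the enumeration or its efficient variants work the same way.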
DEFF Research Database (Denmark)
Christensen, Lars P.B.; Larsen, Jan
2006-01-01
A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prio...
Villalba, Jesús
2015-01-01
In this document we derive the equations needed to implement a Variational Bayes estimation of the parameters of the simplified probabilistic linear discriminant analysis (SPLDA) model. This can be used to adapt SPLDA from one database to another with little development data, or to implement the fully Bayesian recipe. Our approach is similar to Bishop's VB PPCA.
Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J
2017-03-06
In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with the soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation, and the stochastic characteristics of nutrient loading can be investigated, which provides the inputs for decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries, and the associated system risk, through incorporating the concepts of possibility and necessity measures, which are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results not only facilitate identification of optimal effluent-trading schemes, but also give insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that the decision maker's preference towards risk affects the decision alternatives on the trading scheme as well as the system benefit. Compared with conventional optimization methods, BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties in nutrient transport behaviors to improve the accuracy of water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision
Moran, Emily V; Clark, James S
2011-03-01
The scale of seed and pollen movement in plants has a critical influence on population dynamics and interspecific interactions, as well as on their capacity to respond to environmental change through migration or local adaptation. However, dispersal can be challenging to quantify. Here, we present a Bayesian model that integrates genetic and ecological data to simultaneously estimate effective seed and pollen dispersal parameters and the parentage of sampled seedlings. This model is the first developed for monoecious plants that accounts for genotyping error and treats dispersal from within and beyond a plot in a fully consistent manner. The flexible Bayesian framework allows the incorporation of a variety of ecological variables, including individual variation in seed production, as well as multiple sources of uncertainty. We illustrate the method using data from a mixed population of red oak (Quercus rubra, Q. velutina, Q. falcata) in the NC piedmont. For simulated test data sets, the model successfully recovered the simulated dispersal parameters and pedigrees. Pollen dispersal in the example population was extensive, with an average father-mother distance of 178 m. Estimated seed dispersal distances at the piedmont site were substantially longer than previous estimates based on seed-trap data (average 128 m vs. 9.3 m), suggesting that, under some circumstances, oaks may be less dispersal-limited than is commonly thought, with a greater potential for range shifts in response to climate change.
Ghajarnia, Navid; Arasteh, Peyman D.; Araghinejad, Shahab; Liaghat, Majid A.
2016-07-01
Incorrect estimation of rainfall occurrence, so-called False Alarm (FA), is one of the major sources of bias error in satellite-based precipitation estimation products and may even cause problems during the bias reduction and calibration processes. In this paper, a hybrid statistical method is introduced to detect FA events in the PERSIANN dataset over the Urmia Lake basin in northwest Iran. The main FA detection model is based on Bayes' theorem, with four predictor parameters, including PERSIANN rainfall estimates, brightness temperature (Tb), precipitable water (PW) and near-surface air temperature (Tair), considered as its input dataset. In order to reduce the dimensions of the input dataset by summarizing its most important modes of variability and correlations with the reference dataset, singular value decomposition (SVD) is used. The application of the Bayesian-SVD method to FA detection in the Urmia Lake basin resulted in a trade-off between FA detection and the loss of Hit events. The results show the success of the proposed method in detecting about 30% of FA events in return for a loss of about 12% of Hit events, with better capability of the method observed in cold seasons.
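The Bayes-theorem classification step can be illustrated with a Gaussian naive Bayes sketch over two of the four predictors (Tb and PW); the class statistics and prior below are invented, and the paper's SVD dimension-reduction step is omitted:

```python
import numpy as np

# Illustrative Gaussian naive Bayes for classifying a satellite rain detection
# as False Alarm (FA) vs Hit from brightness temperature Tb (K) and
# precipitable water PW (mm); all class statistics here are invented.
prior_fa = 0.3
stats = {                            # (mean, std) per class and predictor
    "FA":  {"tb": (265.0, 8.0),  "pw": (10.0, 4.0)},
    "hit": {"tb": (235.0, 10.0), "pw": (25.0, 6.0)},
}

def log_norm(x, mu, sd):
    # Log density of N(mu, sd^2) at x.
    return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd) - 0.5 * np.log(2 * np.pi)

def p_false_alarm(tb, pw):
    lfa = np.log(prior_fa) + log_norm(tb, *stats["FA"]["tb"]) + log_norm(pw, *stats["FA"]["pw"])
    lhit = np.log(1 - prior_fa) + log_norm(tb, *stats["hit"]["tb"]) + log_norm(pw, *stats["hit"]["pw"])
    m = max(lfa, lhit)               # stabilize the softmax
    return np.exp(lfa - m) / (np.exp(lfa - m) + np.exp(lhit - m))

print(p_false_alarm(262.0, 11.0))    # warm, dry pixel: likely a false alarm
print(p_false_alarm(235.0, 25.0))    # cold, moist pixel: likely a real rain event
```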
Bayesian parameter estimation of core collapse supernovae using gravitational wave simulations
Edwards, Matthew C; Christensen, Nelson
2014-01-01
Using the latest numerical simulations of rotating stellar core collapse, we present a Bayesian framework to extract the physical information encoded in noisy gravitational wave signals. We fit Bayesian principal component regression models with known and unknown signal arrival times to reconstruct gravitational wave signals, and subsequently fit known astrophysical parameters on the posterior means of the principal component coefficients using a linear model. We predict the ratio of rotational kinetic energy to gravitational energy of the inner core at bounce by sampling from the posterior predictive distribution, and find that these predictions are generally very close to the true parameter values, with $90\\%$ credible intervals $\\sim 0.04$ and $\\sim 0.06$ wide for the known and unknown arrival time models respectively. Two supervised machine learning methods are implemented to classify precollapse differential rotation, and we find that these methods discriminate rapidly rotating progenitors particularly w...
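Principal component regression of the kind described reduces each signal to a few PC coefficients and regresses physical parameters on them. A non-Bayesian toy version (synthetic "signals" whose amplitude encodes the parameter, standing in for the gravitational-wave catalog):

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic catalog: 50 noisy waveforms whose amplitude encodes a parameter.
t = np.linspace(0.0, 1.0, 200)
params = rng.uniform(0.02, 0.2, 50)               # e.g. a rotation measure
signals = np.outer(params, np.sin(8 * np.pi * t)) + rng.normal(0, 0.01, (50, t.size))

# PCA via SVD of the mean-centred catalog; keep 3 leading components.
mean_sig = signals.mean(axis=0)
U, S, Vt = np.linalg.svd(signals - mean_sig, full_matrices=False)
coeffs = (signals - mean_sig) @ Vt[:3].T

# Linear model: parameter ~ PC coefficients (least squares, not Bayesian here).
X = np.column_stack([np.ones(50), coeffs])
beta, *_ = np.linalg.lstsq(X, params, rcond=None)
pred = X @ beta
print(np.corrcoef(pred, params)[0, 1])  # close to 1 for this clean toy problem
```

The paper's version places priors on the regression and samples the posterior predictive; the projection-then-regress structure is the same.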
Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).
Energy Technology Data Exchange (ETDEWEB)
Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre
2007-05-01
Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.
DEFF Research Database (Denmark)
Troldborg, Mads; Nowak, W.; Tuxen, N.
2010-01-01
..., it is important to quantify the associated uncertainties. Here a rigorous approach for quantifying the uncertainty in the mass discharge across a multilevel control plane is presented. The method accounts for (1) conceptual model uncertainty using multiple conceptual models and Bayesian model averaging (BMA), (2) heterogeneity through Bayesian geostatistics with an uncertain geostatistical model, and (3) measurement uncertainty. Through unconditional and conditional Monte Carlo simulation, ensembles of steady state plume realizations are generated. The conditional ensembles honor all measured data at the control plane for each of the conceptual models considered. The probability distribution of mass discharge is obtained by combining all ensembles via BMA. The method was applied to a trichloroethylene-contaminated site located in northern Copenhagen. Four essentially different conceptual models based on two source zone...
Recognition of Action as a Bayesian Parameter Estimation Problem over Time
DEFF Research Database (Denmark)
Krüger, Volker
2007-01-01
In this paper we will discuss two problems related to action recognition: The first problem is identifying, in a surveillance scenario, whether a person is walking or running and in what rough direction. The second problem is concerned with the recovery of action primitives from observed complex actions. Both problems will be discussed within a statistical framework. Bayesian propagation over time offers a framework to treat likelihood observations at each time step and the dynamics between the time steps in a unified manner. The first problem will be approached as a pattern recognition and tracking task by a Bayesian propagation of the likelihoods. The latter problem will be approached by explicitly specifying the dynamics, while the likelihood measure gives a measure of how well each dynamical model fits at each time step. Extensive experimental results show the applicability...
Directory of Open Access Journals (Sweden)
Moreira Paulo H. S.
2016-03-01
In this study the hydraulic and solute transport properties of an unsaturated soil were estimated simultaneously from a relatively simple small-scale laboratory column infiltration/outflow experiment. As governing equations we used the Richards equation for variably saturated flow and a physical non-equilibrium dual-porosity type formulation for solute transport. A Bayesian parameter estimation approach was used in which the unknown parameters were estimated with the Markov Chain Monte Carlo (MCMC) method through implementation of the Metropolis-Hastings algorithm. Sensitivity coefficients were examined in order to determine the most meaningful measurements for identifying the unknown hydraulic and transport parameters. Results obtained using the measured pressure head and solute concentration data collected during the unsaturated soil column experiment revealed the robustness of the proposed approach.
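A minimal random-walk Metropolis-Hastings sampler of the kind used here, shown on a one-parameter toy problem (Gaussian data with known variance) rather than the Richards/dual-porosity model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "observations" (a stand-in for the pressure-head and concentration
# data of the column experiment, which are not reproduced here).
data = rng.normal(2.0, 0.5, size=200)

def log_post(mu):
    # Flat prior; Gaussian likelihood with known sigma = 0.5.
    return -0.5 * np.sum((data - mu) ** 2) / 0.5**2

# Random-walk Metropolis-Hastings: propose, then accept with the MH ratio.
mu, chain = 0.0, []
lp = log_post(mu)
for _ in range(20_000):
    prop = mu + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    chain.append(mu)

print(np.mean(chain[5000:]))  # posterior mean, close to the true value 2.0
```

Discarding the first quarter of the chain as burn-in is a common (if crude) convergence precaution; the paper's multi-parameter case proposes a vector update instead of a scalar one.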
Empirical vs Bayesian approach for estimating haplotypes from genotypes of unrelated individuals
Directory of Open Access Journals (Sweden)
Cheng Jacob
2007-01-01
Abstract Background The completion of the HapMap project has stimulated further development of haplotype-based methodologies for disease associations. A key aspect of such development is the statistical inference of individual diplotypes from unphased genotypes. Several methodologies for inferring haplotypes have been developed, but they have not been evaluated extensively to determine which method not only performs well, but also can be easily incorporated in downstream haplotype-based association analyses. In this paper, we attempt to do so. Our evaluation was carried out by comparing the two leading Bayesian methods, implemented in PHASE and HAPLOTYPER, and the two leading empirical methods, implemented in PL-EM and HPlus. We used these methods to analyze real data, namely the dense genotypes on the X-chromosome of 30 European and 30 African trios provided by the International HapMap Project, and simulated genotype data. Our conclusions are based on these analyses. Results All programs performed very well on X-chromosome data, with an average similarity index of 0.99 and an average prediction rate of 0.99 for both European and African trios. On simulated data with approximation of coalescence, PHASE, implementing the Bayesian method based on the coalescence approximation, outperformed the other programs on small sample sizes. When the sample size increased, the other programs performed as well as PHASE. PL-EM and HPlus, implementing empirical methods, required much less running time than the programs implementing the Bayesian methods. They required only one-hundredth or one-thousandth of the running time required by PHASE, particularly when analyzing large sample sizes and large numbers of SNPs. Conclusion For large sample sizes (hundreds or more), which most association studies require, the two empirical methods might be used, since they infer the haplotypes as accurately as any Bayesian methods and can be incorporated easily into downstream haplotype
Bayesian Nonparametric Estimation for Dynamic Treatment Regimes with Sequential Transition Times.
Xu, Yanxun; Müller, Peter; Wahed, Abdus S; Thall, Peter F
2016-01-01
We analyze a dataset arising from a clinical trial involving multi-stage chemotherapy regimes for acute leukemia. The trial design was a 2 × 2 factorial for frontline therapies only. Motivated by the idea that subsequent salvage treatments affect survival time, we model therapy as a dynamic treatment regime (DTR), that is, an alternating sequence of adaptive treatments or other actions and transition times between disease states. These sequences may vary substantially between patients, depending on how the regime plays out. To evaluate the regimes, mean overall survival time is expressed as a weighted average of the means of all possible sums of successive transitions times. We assume a Bayesian nonparametric survival regression model for each transition time, with a dependent Dirichlet process prior and Gaussian process base measure (DDP-GP). Posterior simulation is implemented by Markov chain Monte Carlo (MCMC) sampling. We provide general guidelines for constructing a prior using empirical Bayes methods. The proposed approach is compared with inverse probability of treatment weighting, including a doubly robust augmented version of this approach, for both single-stage and multi-stage regimes with treatment assignment depending on baseline covariates. The simulations show that the proposed nonparametric Bayesian approach can substantially improve inference compared to existing methods. An R program for implementing the DDP-GP-based Bayesian nonparametric analysis is freely available at https://www.ma.utexas.edu/users/yxu/.
Using of bayesian networks to estimate the probability of "NATECH" scenario occurrence
Dobes, Pavel; Dlabka, Jakub; Jelšovská, Katarína; Polorecká, Mária; Baudišová, Barbora; Danihelka, Pavel
2015-04-01
In the twentieth century, Bayesian statistics and probability were little used (perhaps not a preferred approach) in the area of natural and industrial risk analysis and management, nor within the analysis of so-called NATECH accidents (chemical accidents triggered by natural events such as earthquakes, floods, lightning etc.; ref. E. Krausmann, 2011, doi:10.5194/nhess-11-921-2011). From the beginning, the main role was played by so-called "classical" frequentist probability (ref. Neyman, 1937), which relies up to now especially on the right/false results of experiments and monitoring, and does not allow for experts' beliefs, expectations and judgements (which are, on the other hand, one of the once again well-known pillars of the Bayesian approach to probability). In the last 20 or 30 years, a renaissance of Bayesian statistics can be observed, through publications and conferences, in many scientific disciplines (including various branches of the geosciences). Is the necessity of a certain level of trust in expert judgment within risk analysis back? After several decades of development in this field, the following hypothesis can be proposed (to be checked): "We cannot estimate the probabilities of complex crisis situations and their TOP events (many NATECH events can be classified as crisis situations or emergencies) by the classical frequentist approach alone, but also need the Bayesian approach (i.e. with the help of a pre-staged Bayesian network including expert belief and expectation as well as classical frequentist inputs), because there is not always enough quantitative information from the monitoring of historical emergencies, there may be several dependent or independent variables to consider, and, in general, every emergency situation has a slightly different course." On this topic, the team of authors presents its proposal of a pre-staged, typified Bayesian network model for a specified NATECH scenario
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm's input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probabilistic distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the contamination source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). The newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
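The essence of ABC is to accept prior draws whose simulated observations fall close to the measured ones. A one-parameter rejection-ABC toy (release rate only, with an invented linear forward model standing in for SCIPUFF):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy release-rate estimation: the "sensor" reading is a noisy function of the
# release rate q (a one-parameter stand-in for the seven-parameter problem).
q_true = 5.0
observed = q_true / 2.0 + rng.normal(0.0, 0.1)

def forward(q):
    # Hypothetical forward model mapping release rate to a sensor concentration.
    return q / 2.0

# Rejection ABC: keep prior draws whose simulated reading is close to observed.
prior_draws = rng.uniform(0.0, 20.0, size=200_000)
accepted = prior_draws[np.abs(forward(prior_draws) - observed) < 0.05]
print(accepted.size, accepted.mean())  # approximate posterior mean, near q_true
```

The sequential variant used in the paper tightens the acceptance tolerance over rounds and reweights particles instead of drawing everything from the prior at once.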
Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun
2017-08-01
Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD, spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model possesses a CV result (R² = 0.81) that reflects higher accuracy than that of GWR and LME (0.74 and 0.48, respectively). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
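Gaussian process regression in its basic form, sketched in one dimension with an RBF kernel and synthetic data (the paper's model is spatial and hierarchical, with AOD and other covariates):

```python
import numpy as np

rng = np.random.default_rng(3)
# 1-D GP regression sketch with synthetic data; hyperparameters are fixed
# by hand here rather than inferred hierarchically as in the paper.
x = np.linspace(0.0, 1.0, 25)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

def rbf(a, b, ls=0.15, var=1.0):
    # Squared-exponential covariance between point sets a and b.
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

noise = 0.1**2
K = rbf(x, x) + noise * np.eye(x.size)
x_star = np.array([0.25, 0.75])
K_s = rbf(x_star, x)
mean = K_s @ np.linalg.solve(K, y)                      # posterior mean
cov = rbf(x_star, x_star) - K_s @ np.linalg.solve(K, K_s.T)  # posterior covariance
print(mean)  # near sin(2*pi*0.25) = 1 and sin(2*pi*0.75) = -1
```

The posterior covariance `cov` is what gives GP-based PM2.5 maps their pointwise uncertainty estimates.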
Pidlisecky, A.; Haines, S.S.
2011-01-01
Conventional processing methods for seismic cone penetrometer data present several shortcomings, most notably the absence of a robust velocity model uncertainty estimate. We propose a new seismic cone penetrometer testing (SCPT) data-processing approach that employs Bayesian methods to map measured data errors into quantitative estimates of model uncertainty. We first calculate travel-time differences for all permutations of seismic trace pairs. That is, we cross-correlate each trace at each measurement location with every trace at every other measurement location to determine travel-time differences that are not biased by the choice of any particular reference trace and to thoroughly characterize data error. We calculate a forward operator that accounts for the different ray paths for each measurement location, including refraction at layer boundaries. We then use a Bayesian inversion scheme to obtain the most likely slowness (the reciprocal of velocity) and a distribution of probable slowness values for each model layer. The result is a velocity model that is based on correct ray paths, with uncertainty bounds that are based on the data error. © NRC Research Press 2011.
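The first processing step, cross-correlating trace pairs to obtain travel-time differences, can be sketched with synthetic traces:

```python
import numpy as np

# Travel-time difference between two seismic traces via cross-correlation,
# the first step of the SCPT processing described above (synthetic traces).
fs = 1000.0                              # sampling rate, Hz
t = np.arange(0.0, 0.5, 1.0 / fs)
wavelet = lambda tt, t0: np.exp(-((tt - t0) / 0.01) ** 2)
trace_a = wavelet(t, 0.100)
trace_b = wavelet(t, 0.138)              # same wavelet, arriving 38 ms later

# Peak of the full cross-correlation gives the lag of trace_b behind trace_a.
xcorr = np.correlate(trace_b, trace_a, mode="full")
lag = (np.argmax(xcorr) - (t.size - 1)) / fs
print(lag)  # 0.038 s
```

Real traces differ in shape as well as arrival time, which is why the paper uses all trace-pair permutations to characterize the scatter in these lag estimates.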
Bayesian Ability Estimation via 3PL (Three-Parameter Logistic) with Partially Known Item Parameters
1988-08-31
Bayesian estimate of the degree of a polynomial given a noisy data sample
Mana, Giovanni; Lago, Simona
2013-01-01
A widely used method to create a continuous representation of a discrete data set is regression analysis. When the regression model is not based on a mathematical description of the physics underlying the data, heuristic techniques play a crucial role and the model choice can have a significant impact on the result. In this paper, the problem of identifying the most appropriate model is formulated and solved in terms of Bayesian model selection; moreover, probability calculus provides a principled way to choose among the different alternatives. The results obtained are applied to the case of both univariate and bivariate polynomials used as trial solutions of systems of thermodynamic partial differential equations.
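Bayesian selection of a polynomial degree can be approximated cheaply with BIC, which penalizes the maximized likelihood by the number of parameters (a rough stand-in for the full evidence calculation, shown here on invented quadratic data):

```python
import numpy as np

rng = np.random.default_rng(4)
# Noisy sample from a quadratic; choose the polynomial degree by a BIC-style
# approximation to the Bayesian evidence.
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.2, x.size)

def bic(deg):
    # n*log(mse) + k*log(n): the Gaussian-likelihood BIC up to a constant.
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    n, k = x.size, deg + 1
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

best = min(range(6), key=bic)
print(best)  # degree 2 is expected to win for this data
```

BIC ignores the prior over coefficients that a full evidence computation would integrate over, but it captures the same fit-versus-complexity trade-off.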
A Simulation-Based Study on Bayesian Estimators for the Skew Brownian Motion
Directory of Open Access Journals (Sweden)
Manuel Barahona
2016-06-01
In analyzing a temporal data set from a continuous variable, diffusion processes can be suitable under certain conditions, depending on the distribution of increments. We are interested in processes where a semi-permeable barrier splits the state space, producing a skewed diffusion that can have different rates on each side. In this work, the asymptotic behavior of some Bayesian inferences for this class of processes is discussed and validated through simulations. As an application, we model the location of South American sea lions (Otaria flavescens) on the coast of Calbuco, southern Chile, which can be used to understand how the foraging behavior of apex predators varies temporally and spatially.
Bayesian sample size calculation for estimation of the difference between two binomial proportions.
Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Marriott, Paul; Gittins, John
2013-12-01
In this study, we discuss a decision theoretic or fully Bayesian approach to the sample size question in clinical trials with binary responses. Data are assumed to come from two binomial distributions. A Dirichlet distribution is assumed to describe prior knowledge of the two success probabilities p1 and p2. The parameter of interest is p = p1 - p2. The optimal size of the trial is obtained by maximising the expected net benefit function. The methodology presented in this article extends previous work by the assumption of dependent prior distributions for p1 and p2.
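The expected-net-benefit criterion can be approximated by Monte Carlo: draw (p1, p2) from the prior, simulate trials of size n, and score the resulting decision. A sketch with independent Beta priors (the paper uses a dependent Dirichlet prior) and invented benefit/cost figures:

```python
import numpy as np

rng = np.random.default_rng(5)
# Monte Carlo approximation of expected net benefit as a function of the
# per-arm sample size n; priors, gain and cost below are all invented.
a1, b1, a2, b2 = 6, 4, 4, 6      # Beta priors for p1 and p2
gain, cost = 1000.0, 1.0         # reward for a correct sign decision, cost per subject

def expected_net_benefit(n, sims=4000):
    p1 = rng.beta(a1, b1, sims)
    p2 = rng.beta(a2, b2, sims)
    x1 = rng.binomial(n, p1)
    x2 = rng.binomial(n, p2)
    # Decide "p1 > p2" iff the posterior mean difference is positive.
    decide_pos = (a1 + x1) / (a1 + b1 + n) > (a2 + x2) / (a2 + b2 + n)
    correct = decide_pos == (p1 > p2)
    return gain * correct.mean() - cost * 2 * n

for n in (10, 50, 200):
    print(n, expected_net_benefit(n))
```

The benefit of extra subjects flattens while the cost keeps growing linearly, which is what produces a finite optimal trial size.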
Low Complexity Sparse Bayesian Learning for Channel Estimation Using Generalized Mean Field
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Fleury, Bernard Henri
2014-01-01
We derive low complexity versions of a wide range of algorithms for sparse Bayesian learning (SBL) in underdetermined linear systems. The proposed algorithms are obtained by applying the generalized mean field (GMF) inference framework to a generic SBL probabilistic model. In the GMF framework, we constrain the auxiliary function approximating the posterior probability density function of the unknown variables to factorize over disjoint groups of contiguous entries in the sparse vector - the size of these groups dictates the degree of complexity reduction. The original high-complexity algorithms...
Anderson, Christian C; Bauer, Adam Q; Holland, Mark R; Pakula, Michal; Laugier, Pascal; Bretthorst, G Larry; Miller, James G
2010-11-01
Quantitative ultrasonic characterization of cancellous bone can be complicated by artifacts introduced by analyzing acquired data consisting of two propagating waves (a fast wave and a slow wave) as if only one wave were present. Recovering the ultrasonic properties of overlapping fast and slow waves could therefore lead to enhancement of bone quality assessment. The current study uses Bayesian probability theory to estimate phase velocity and normalized broadband ultrasonic attenuation (nBUA) parameters in a model of fast and slow wave propagation. Calculations are carried out using Markov chain Monte Carlo with simulated annealing to approximate the marginal posterior probability densities for parameters in the model. The technique is applied to simulated data, to data acquired on two phantoms capable of generating two waves in acquired signals, and to data acquired on a human femur condyle specimen. The models are in good agreement with both the simulated and experimental data, and the values of the estimated ultrasonic parameters fall within expected ranges.
Eadie, Gwendolyn; Harris, William; Widrow, Lawrence; Springford, Aaron
2016-08-01
The mass and cumulative mass profile of the Galaxy are its most fundamental properties. Estimating these properties, however, is not a trivial problem. We rely on the kinematic information from Galactic satellites such as globular clusters and dwarf galaxies, and this data is incomplete and subject to measurement uncertainty. In particular, the complete 3D velocity vectors of objects are sometimes unavailable, and there may be selection biases due to both the distribution of objects around the Galaxy and our measurement position. On the other hand, the uncertainties of these data are fairly well understood. Thus, we would like to incorporate these uncertainties and the incomplete data into our estimate of the Milky Way's mass. The Bayesian paradigm offers a way to deal with both the missing kinematic data and measurement errors using a hierarchical model. An application of this method to the Milky Way halo mass profile, using the kinematic data for globular clusters and dwarf satellites, is shown.
Waller, Niels G; Feuerstahler, Leah
2017-03-17
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated [Formula: see text] code that shows how to estimate 4PM item and person parameters in [Formula: see text] (Chalmers, 2012).
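For reference, the 4PM item response function adds an upper asymptote u (slipping) to the 3PL's lower asymptote g (guessing); the parameter values below are illustrative, not taken from the study:

```python
import numpy as np

# Four-parameter logistic (4PM) item response function: probability of a
# keyed response, bounded below by g (guessing) and above by u (slipping).
def irf_4pm(theta, a=1.5, b=0.0, g=0.15, u=0.95):
    return g + (u - g) / (1.0 + np.exp(-a * (theta - b)))

print(irf_4pm(-10.0))  # approaches the lower asymptote g = 0.15
print(irf_4pm(10.0))   # approaches the upper asymptote u = 0.95
print(irf_4pm(0.0))    # at theta = b the curve sits at (g + u) / 2 = 0.55
```

Setting u = 1 recovers the 3PL; setting also g = 0 recovers the 2PL, which is the nesting the study's model comparisons exploit.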
Directory of Open Access Journals (Sweden)
Hacene MELLAH
2016-07-01
The objective of this paper is to develop an Artificial Neural Network (ANN) model to simultaneously estimate the parameters and state of a brushed DC machine. The proposed ANN estimator is novel in the sense that it simultaneously estimates temperature, speed and rotor resistance based only on measurements of the voltage and current inputs. Many types of ANN estimators have been designed by researchers during the last two decades, each for a specific application. The thermal behavior of the motor is very slow, which leads to large data sets. The standard approach often uses a Multi-Layer Perceptron (MLP) with Levenberg-Marquardt Backpropagation (LMBP); however, LMBP has known limits for large amounts of data, so an MLP based on LMBP is no longer valid in our case. As a solution, we propose the use of a Cascade-Forward Neural Network (CFNN) based on Bayesian Regularization backpropagation (BRBP). To test the robustness of our estimator, random white Gaussian noise has been added to the data sets. The proposed estimator is, in our view, accurate and robust.
Lyons, James E.; Kendall, William; Royle, J. Andrew; Converse, Sarah J.; Andres, Brad A.; Buchanan, Joseph B.
2016-01-01
We present a novel formulation of a mark–recapture–resight model that allows estimation of population size, stopover duration, and arrival and departure schedules at migration areas. Estimation is based on encounter histories of uniquely marked individuals and relative counts of marked and unmarked animals. We use a Bayesian analysis of a state–space formulation of the Jolly–Seber mark–recapture model, integrated with a binomial model for counts of unmarked animals, to derive estimates of population size and arrival and departure probabilities. We also provide a novel estimator for stopover duration that is derived from the latent state variable representing the interim between arrival and departure in the state–space model. We conduct a simulation study of field sampling protocols to understand the impact of superpopulation size, proportion marked, and number of animals sampled on bias and precision of estimates. Simulation results indicate that relative bias of estimates of the proportion of the population with marks was low for all sampling scenarios and never exceeded 2%. Our approach does not require enumeration of all unmarked animals detected or direct knowledge of the number of marked animals in the population at the time of the study. This provides flexibility and potential application in a variety of sampling situations (e.g., migratory birds, breeding seabirds, sea turtles, fish, pinnipeds, etc.). Application of the methods is demonstrated with data from a study of migratory sandpipers.
Using robust Bayesian network to estimate the residuals of fluoroquinolone antibiotic in soil.
Li, Xuewen; Xie, Yunfeng; Li, Lianfa; Yang, Xunfeng; Wang, Ning; Wang, Jinfeng
2015-11-01
Prediction of antibiotic pollution and its consequences is difficult, due to the uncertainties and complexities associated with multiple related factors. This article employed domain knowledge and spatial data to construct a Bayesian network (BN) model to assess fluoroquinolone antibiotic (FQs) pollution in the soil of an intensive vegetable cultivation area. The results show: (1) Relationships between FQs pollution and contributory factors: three factors (cultivation methods, crop rotations, and chicken manure types) were consistently identified as predictors in the topological structures of the three FQs, indicating their importance in FQs pollution; as deduced from domain knowledge, the cultivation methods are determined by the crop rotations, which require different nutrients (derived from the manure) according to different plant biomass. (2) Performance of the BN model: the integrative robust Bayesian network model achieved the highest detection probability (pd) of high-risk areas and the largest receiver operating characteristic (ROC) area, since it incorporates domain knowledge and model uncertainty. Our encouraging findings have implications for the use of BNs as a robust approach to assessing FQs pollution and for informing decisions on appropriate remedial measures.
Kosmala, Margaret; Miller, Philip; Ferreira, Sam; Funston, Paul; Keet, Dewald; Packer, Craig
2016-01-01
Emerging infectious diseases of wildlife are of increasing concern to managers and conservation policy makers, but are often difficult to study and predict due to the complexity of host-disease systems and a paucity of empirical data. We demonstrate the use of an Approximate Bayesian Computation statistical framework to reconstruct the disease dynamics of bovine tuberculosis in Kruger National Park's lion population, despite limited empirical data on the disease's effects in lions. The modeling results suggest that, while a large proportion of the lion population will become infected with bovine tuberculosis, lions are a spillover host and long disease latency is common. In the absence of future aggravating factors, bovine tuberculosis is projected to cause a lion population decline of ~3% over the next 50 years, with the population stabilizing at this new equilibrium. The Approximate Bayesian Computation framework is a new tool for wildlife managers. It allows emerging infectious diseases to be modeled in complex systems by incorporating disparate knowledge about host demographics, behavior, and heterogeneous disease transmission, while allowing inference of unknown system parameters.
Directory of Open Access Journals (Sweden)
Laura Gosoniu
Full Text Available A national HIV/AIDS and malaria parasitological survey was carried out in Tanzania in 2007-2008. In this study the parasitological data were analyzed (i) to identify climatic/environmental, socio-economic and intervention factors associated with child malaria risk and (ii) to produce a contemporary, high spatial resolution parasitaemia risk map of the country. Bayesian geostatistical models were fitted to assess the association between parasitaemia risk and its determinants. Bayesian kriging was employed to predict malaria risk at unsampled locations across Tanzania and to quantify the uncertainty associated with the predictions. Markov chain Monte Carlo (MCMC) simulation methods were employed for model fit and prediction. Parasitaemia risk estimates were linked to population data and the number of infected children at province level was calculated. Model validation indicated a high predictive ability of the geostatistical model, with 60.00% of the test locations falling within the 95% credible interval. The results indicate that older children are significantly more likely to test positive for malaria than younger children, and that living in urban areas and in better-off households reduces the risk of infection. However, none of the environmental and climatic proxies or the intervention measures were significantly associated with the risk of parasitaemia. Low levels of malaria prevalence were estimated for Zanzibar island. The population-adjusted prevalence ranges from 0.29% in Kaskazini province (Zanzibar island) to 18.65% in Mtwara region. The pattern of predicted malaria risk is similar to that of previous maps based on historical data, although the estimates are lower. The predicted maps could be used by decision-makers to allocate resources and target interventions in the regions with the highest burden of malaria in order to reduce disease transmission in the country.
Ling, Daphne I; Pai, Madhukar; Schiller, Ian; Dendukuri, Nandini
2014-05-15
The absence of a gold standard, i.e., a diagnostic reference standard having perfect sensitivity and specificity, is a common problem in clinical practice and in diagnostic research studies. There is a need for methods to estimate the incremental value of a new, imperfect test in this context. We use a Bayesian approach to estimate the probability of the unknown disease status via a latent class model and extend two commonly-used measures of incremental value based on predictive values [difference in the area under the ROC curve (AUC) and integrated discrimination improvement (IDI)] to the context where no gold standard exists. The methods are illustrated using simulated data and applied to the problem of estimating the incremental value of a novel interferon-gamma release assay (IGRA) over the tuberculin skin test (TST) for latent tuberculosis (TB) screening. We also show how to estimate the incremental value of IGRAs when decisions are based on observed test results rather than predictive values. We showed that the incremental value is greatest when both sensitivity and specificity of the new test are better and that conditional dependence between the tests reduces the incremental value. The incremental value of the IGRA depends on the sensitivity and specificity of the TST, as well as the prevalence of latent TB, and may thus vary in different populations. Even in the absence of a gold standard, incremental value statistics may be estimated and can aid decisions about the practical value of a new diagnostic test.
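When disease status is treated as known (the latent-class approach above replaces it with posterior probabilities from the Bayesian model), the IDI reduces to a difference of mean predicted risks; a minimal sketch with hypothetical risk vectors:

```python
import numpy as np

def idi(p_old, p_new, disease):
    """Integrated discrimination improvement: gain in mean predicted risk
    among the diseased minus the gain among the non-diseased."""
    disease = np.asarray(disease, dtype=bool)
    gain_events = p_new[disease].mean() - p_old[disease].mean()
    gain_nonevents = p_new[~disease].mean() - p_old[~disease].mean()
    return gain_events - gain_nonevents

# Hypothetical predicted risks from an old and a new test for four subjects.
p_old = np.array([0.2, 0.3, 0.6, 0.7])
p_new = np.array([0.1, 0.2, 0.8, 0.9])
disease = np.array([0, 0, 1, 1])
improvement = idi(p_old, p_new, disease)  # positive: new test discriminates better
```

A positive IDI means the new test raises predicted risk for the diseased and/or lowers it for the non-diseased, which is the incremental-value notion the paper extends to the no-gold-standard setting.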
Rodhouse, T.J.; Irvine, K.M.; Vierling, K.T.; Vierling, L.A.
2011-01-01
Monitoring programs that evaluate restoration and inform adaptive management are important for addressing environmental degradation. These efforts may be well served by spatially explicit hierarchical approaches to modeling because of unavoidable spatial structure inherited from past land use patterns and other factors. We developed Bayesian hierarchical models to estimate trends from annual density counts observed in a spatially structured wetland forb (Camassia quamash [camas]) population following the cessation of grazing and mowing on the study area, and in a separate reference population of camas. The restoration site was bisected by roads and drainage ditches, resulting in distinct subpopulations ("zones") with different land use histories. We modeled this spatial structure by fitting zone-specific intercepts and slopes. We allowed spatial covariance parameters in the model to vary by zone, as in stratified kriging, accommodating anisotropy and improving computation and biological interpretation. Trend estimates provided evidence of a positive effect of passive restoration, and the strength of evidence was influenced by the amount of spatial structure in the model. Allowing trends to vary among zones and accounting for topographic heterogeneity increased precision of trend estimates. Accounting for spatial autocorrelation shifted parameter coefficients in ways that varied among zones depending on strength of statistical shrinkage, autocorrelation and topographic heterogeneity-a phenomenon not widely described. Spatially explicit estimates of trend from hierarchical models will generally be more useful to land managers than pooled regional estimates and provide more realistic assessments of uncertainty. The ability to grapple with historical contingency is an appealing benefit of this approach.
DEFF Research Database (Denmark)
Ziegel, Johanna; Nyengaard, Jens Randel; Jensen, Eva B. Vedel
In the present paper, statistical procedures for estimating shape and orientation of arbitrary three-dimensional particles are developed. The focus of this work is on the case where the particles cannot be observed directly, but only via sections. Volume tensors are used for describing particle s...
Parameter Estimation for a Turbulent Buoyant Jet Using Approximate Bayesian Computation
Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Lapointe, Caelan; Grooms, Ian; Rieker, Gregory B.; Hamlington, Peter E.
2016-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown model parameters in numerical simulations of real-world engineering systems. In this presentation, we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a simulation with known boundary conditions and problem parameters. Using spatially-sparse temperature statistics from the 2D buoyant jet truth simulation, we show that the ABC method provides accurate predictions of the true jet inflow temperature. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for engineering fluid dynamics research.
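A rejection-sampling version of ABC can be sketched on a toy problem; the Gaussian forward model, uniform prior, and tolerance eps below are all stand-ins assumed for illustration, not the paper's buoyant-jet simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Truth" data: a summary statistic from a run with a known parameter,
# mimicking the paper's truth simulation with known boundary conditions.
true_theta = 2.0
truth_stat = rng.normal(true_theta, 1.0, size=200).mean()

def simulate_stat(theta):
    """Forward-model stand-in: summary statistic of one simulated data set."""
    return rng.normal(theta, 1.0, size=200).mean()

# ABC rejection: draw parameters from the prior, keep those whose
# simulated summary lies within eps of the observed one.
prior_draws = rng.uniform(-5.0, 5.0, size=20000)
eps = 0.1
accepted = [t for t in prior_draws if abs(simulate_stat(t) - truth_stat) < eps]
posterior_mean = float(np.mean(accepted))
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is what makes ABC attractive when only sparse "truth" statistics are available.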
Converse, Sarah J.; Royle, J. Andrew; Urbanek, Richard P.
2012-01-01
Inbreeding depression is frequently a concern of managers interested in restoring endangered species. Decisions to reduce the potential for inbreeding depression by balancing genotypic contributions to reintroduced populations may exact a cost on long-term demographic performance of the population if those decisions result in reduced numbers of animals released and/or restriction of particularly successful genotypes (i.e., heritable traits of particular family lines). As part of an effort to restore a migratory flock of Whooping Cranes (Grus americana) to eastern North America using the offspring of captive breeders, we obtained a unique dataset which includes post-release mark-recapture data, as well as the pedigree of each released individual. We developed a Bayesian formulation of a multi-state model to analyze radio-telemetry, band-resight, and dead recovery data on reintroduced individuals, in order to track survival and breeding state transitions. We used studbook-based individual covariates to examine the comparative evidence for and degree of effects of inbreeding, genotype, and genotype quality on post-release survival of reintroduced individuals. We demonstrate implementation of the Bayesian multi-state model, which allows for the integration of imperfect detection, multiple data types, random effects, and individual- and time-dependent covariates. Our results provide only weak evidence for an effect of the quality of an individual's genotype in captivity on post-release survival as well as for an effect of inbreeding on post-release survival. We plan to integrate our results into a decision-analytic modeling framework that can explicitly examine tradeoffs between the effects of inbreeding and the effects of genotype and demographic stochasticity on population establishment.
Bayesian estimation of Karhunen-Loève expansions: a random subspace approach
Chowdhary, Kenny; Najm, Habib N.
2016-08-01
One of the most widely-used procedures for dimensionality reduction of high dimensional data is Principal Component Analysis (PCA). More broadly, low-dimensional stochastic representation of random fields with finite variance is provided via the well known Karhunen-Loève expansion (KLE). The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., which minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite dimensional stochastic process inspired by Brownian motion.
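For contrast with the Bayesian treatment, the classical sample-based KLE/PCA that the paper generalizes is a single SVD of the centered data matrix; a sketch on Brownian-motion-like paths (grid and sample sizes assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample paths of a Brownian-motion-like process: rows = realizations.
n_samples, n_grid = 500, 64
X = np.cumsum(rng.normal(0.0, 1.0, size=(n_samples, n_grid)), axis=1)
X -= X.mean(axis=0)  # the KLE is defined for the centered process

# Classical KLE: SVD of the data matrix; right singular vectors
# approximate the KL basis functions, s**2 the (scaled) eigenvalues.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95)) + 1  # modes capturing 95% of variance

# Low-dimensional surrogate: projection onto the first k modes.
X_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(X - X_k) / np.linalg.norm(X)
```

This point estimate of the basis ignores sampling error; the paper's contribution is to replace it with a matrix Bingham posterior over the orthonormal basis itself.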
Etingof, Pavel; Nikshych, Dmitri; Ostrik, Victor
2015-01-01
Is there a vector space whose dimension is the golden ratio? Of course not-the golden ratio is not an integer! But this can happen for generalizations of vector spaces-objects of a tensor category. The theory of tensor categories is a relatively new field of mathematics that generalizes the theory of group representations. It has deep connections with many other fields, including representation theory, Hopf algebras, operator algebras, low-dimensional topology (in particular, knot theory), homotopy theory, quantum mechanics and field theory, quantum computation, theory of motives, etc. This bo
Directory of Open Access Journals (Sweden)
Mohamed Khalaf-Allah
2008-01-01
Full Text Available The mobile terminal positioning problem is categorized into three different types according to the availability of (1) initial accurate location information and (2) motion measurement data. Location estimation refers to the mobile positioning problem when neither the initial location nor motion measurement data are available. If both are available, the positioning problem is referred to as position tracking. When only motion measurements are available, the problem is known as global localization. These positioning problems were solved within the Bayesian filtering framework. Filter derivation and implementation algorithms are provided, with emphasis on the mapping approach. The radio maps of the experimental area were created by a 3D deterministic radio propagation tool with a grid resolution of 5 m. Real-world experimentation was conducted in a GSM network deployed in a semi-urban environment in order to investigate the performance of the different positioning algorithms.
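The three problem types differ mainly in the prior over position; a minimal 1-D grid-based Bayes filter for the global-localization case (uniform prior, hypothetical monotone radio map) can be sketched as:

```python
import numpy as np

n_cells = 100
belief = np.full(n_cells, 1.0 / n_cells)         # global localization: uniform prior
radio_map = np.linspace(1.0, 0.0, n_cells) ** 2  # hypothetical predicted signal per cell

def motion_update(b, shift):
    """Predict step: deterministic shift plus a little diffusion for motion noise."""
    b = np.roll(b, shift)
    return 0.8 * b + 0.1 * np.roll(b, 1) + 0.1 * np.roll(b, -1)

def measurement_update(b, z, sigma=0.1):
    """Correct step: Gaussian likelihood of the measured signal strength z."""
    lik = np.exp(-0.5 * ((radio_map - z) / sigma) ** 2)
    b = b * lik
    return b / b.sum()

true_pos = 30
for _ in range(5):                 # terminal moves one cell per step
    true_pos += 1
    belief = motion_update(belief, 1)
    belief = measurement_update(belief, radio_map[true_pos])

estimate = int(np.argmax(belief))  # posterior concentrates near the true cell
```

Position tracking would start from a peaked rather than uniform prior, and location estimation would use the measurement update alone.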
Vďačný, Peter
2015-08-01
The class Litostomatea comprises a diverse assemblage of free-living and endosymbiotic ciliates. To understand diversification dynamic of litostomateans, divergence times of their main groups were estimated with the Bayesian molecular dating, a technique allowing relaxation of molecular clock and incorporation of flexible calibration points. The class Litostomatea very likely emerged during the Cryogenian around 680 Mya. The origin of the subclass Rhynchostomatia is dated to about 415 Mya, while that of the subclass Haptoria to about 654 Mya. The order Pleurostomatida, emerging about 556 Mya, was recognized as the oldest group within the subclass Haptoria. The order Spathidiida appeared in the Paleozoic about 442 Mya. The three remaining haptorian orders evolved in the Paleozoic/Mesozoic periods: Didiniida about 419 Mya, Lacrymariida about 269 Mya, and Haptorida about 194 Mya. The subclass Trichostomatia originated from a spathidiid ancestor in the Mesozoic about 260 Mya. A further goal of this study was to investigate the impact of various settings on posterior divergence time estimates. The root placement and tree topology as well as the priors of the rate-drift model, birth-death process and nucleotide substitution rate, had no significant effect on calculation of posterior divergence time estimates. However, removal of calibration points could significantly change time estimates at some nodes.
Energy Technology Data Exchange (ETDEWEB)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com [Physics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Grana, Dario [Department of Geology and Geophysics, University of Wyoming, Laramie (United States); Santos, Marcio; Figueiredo, Wagner [Physics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Roisenberg, Mauro [Informatic and Statistics Department, Federal University of Santa Catarina, Florianópolis (Brazil); Schwedersky Neto, Guenther [Petrobras Research Center, Rio de Janeiro (Brazil)
2017-05-01
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the well data multimodal distribution and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
Jakkareddy, Pradeep S.; Balaji, C.
2016-09-01
This paper employs the Bayesian Metropolis-Hastings Markov Chain Monte Carlo (MH-MCMC) algorithm to solve the inverse heat transfer problem of determining the spatially varying heat transfer coefficient on a flat plate with flush-mounted discrete heat sources, from temperatures measured at the bottom of the plate. The Nusselt number is assumed to be of the form Nu = a Re^b (x/l)^c. To input reasonable values of 'a' and 'b' into the inverse problem, limited two-dimensional conjugate convection simulations were first performed with Comsol. Guided by these, different values of 'a' and 'b' were input to a computationally less complex problem of conjugate conduction in the flat plate (15 mm thickness), and temperature distributions at the bottom of the plate, a more convenient location for measuring temperatures without disturbing the flow, were obtained. Since the goal of this work is to demonstrate the efficacy of the Bayesian approach in accurately retrieving 'a' and 'b', numerically generated temperatures with known values of 'a' and 'b' are treated as 'surrogate' experimental data. The inverse problem is then solved by repeatedly using the forward solutions together with the MH-MCMC approach. To speed up the estimation, the forward model is replaced by an artificial neural network. The mean, maximum a posteriori and standard deviation of the estimated parameters 'a' and 'b' are reported. The robustness of the proposed method is examined by synthetically adding noise to the temperatures.
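The surrogate-data calibration loop described above can be sketched with a random-walk Metropolis-Hastings sampler; the power-law forward model below stands in for the conduction solver (and its ANN surrogate), and the values of 'a', 'b', the noise level and the proposal width are all assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Surrogate "experiment": forward model with known parameters plus noise.
x = np.linspace(0.1, 1.0, 20)
def forward(a, b):
    return a * x ** b              # stand-in for the conjugate-conduction solver

a_true, b_true, sigma = 2.0, 0.5, 0.05
data = forward(a_true, b_true) + rng.normal(0.0, sigma, size=x.size)

def log_post(a, b):
    """Gaussian likelihood with a flat prior on a > 0."""
    if a <= 0.0:
        return -np.inf
    r = data - forward(a, b)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis-Hastings.
theta = np.array([1.0, 1.0])       # deliberately poor starting point
lp = log_post(*theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05, size=2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])    # discard burn-in
a_hat, b_hat = post.mean(axis=0)   # posterior means; post.std(axis=0) gives spread
```

The posterior mean, mode and standard deviation reported in the paper come from exactly this kind of chain, just with the ANN-accelerated forward model in place of the power law.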
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Institute of Scientific and Technical Information of China (English)
Konstantinos ANGELIS; Mario DOS REIS
2015-01-01
Although the effects of the coalescent process on sequence divergence and genealogies are well understood, the virtual majority of studies that use molecular sequences to estimate times of divergence among species have failed to account for the coalescent process. Here we study the impact of ancestral population size and incomplete lineage sorting on Bayesian estimates of species divergence times under the molecular clock when the inference model ignores the coalescent process. Using a combination of mathematical analysis, computer simulations and analysis of real data, we find that the errors on estimates of times and the molecular rate can be substantial when ancestral populations are large and when there is substantial incomplete lineage sorting. For example, in a simple three-species case, we find that if the most precise fossil calibration is placed on the root of the phylogeny, the age of the internal node is overestimated, while if the most precise calibration is placed on the internal node, then the age of the root is underestimated. In both cases, the molecular rate is overestimated. Using simulations on a phylogeny of nine species, we show that substantial errors in time and rate estimates can be obtained even when dating ancient divergence events. We analyse the hominoid phylogeny and show that estimates of the neutral mutation rate obtained while ignoring the coalescent are too high. Using a coalescent-based technique to obtain geological times of divergence, we obtain estimates of the mutation rate that are within experimental estimates and we also obtain substantially older divergence times within the phylogeny [Current Zoology 61 (5): 874–885, 2015].
A fully Bayesian approach to the parcel-based detection-estimation of brain activity in fMRI
Energy Technology Data Exchange (ETDEWEB)
Makni, S. [Univ Oxford, John Radcliffe Hosp, Oxford Ctr Funct Magnet Resonance Imaging Brain, Oxford OX3 9DU (United Kingdom); Idier, J. [IRCCyN CNRS, Nantes (France); Vincent, T.; Ciuciu, P. [CEA, NeuroSpin, Gif Sur Yvette (France); Vincent, T.; Dehaene-Lambertz, G.; Ciuciu, P. [Inst Imagerie Neurofonctionnelle, IFR 49, Paris (France); Thirion, B. [INRIA Futurs, Orsay (France); Dehaene-Lambertz, G. [INSERM, NeuroSpin, U562, Gif Sur Yvette (France)
2008-07-01
Within-subject analysis in fMRI essentially addresses two problems, i.e., the detection of activated brain regions in response to an experimental task and the estimation of the underlying dynamics, also known as the characterisation of the Hemodynamic Response Function (HRF). So far, both issues have been treated sequentially, although it is known that the HRF model has a dramatic impact on the localisation of activations and that the HRF shape may vary from one region to another. In this paper, we reconcile both issues in a region-based joint detection-estimation framework developed in the Bayesian formalism. Instead of considering a function basis to account for spatial variability, spatially adaptive General Linear Models are built upon region-based non-parametric estimation of brain dynamics. Regions are first identified as functionally homogeneous parcels in the grey-matter mask using a specific procedure [Thirion, B., Flandin, G., Pinel, P., Roche, A., Ciuciu, P., Poline, J.B., August 2006. Dealing with the shortcomings of spatial normalization: multi-subject parcellation of fMRI datasets. Hum. Brain Mapp. 27 (8), 678-693.]. Then, in each parcel, prior information is embedded to constrain this estimation. Detection is achieved by modelling activating, deactivating and non-activating voxels through mixture models within each parcel. From the posterior distribution, we infer the model parameters using Markov Chain Monte Carlo (MCMC) techniques. Bayesian model comparison on artificial datasets allows us to show, first, that inhomogeneous gamma-Gaussian mixture models outperform Gaussian mixtures in terms of the sensitivity/specificity trade-off and, second, that it is worthwhile modelling serial correlation through an AR(1) noise process at low signal-to-noise ratio (SNR). Our approach is then validated on an fMRI experiment that studies habituation to auditory sentence repetition. This phenomenon is clearly recovered as well as the hierarchical temporal
Odbert, Henry; Hincks, Thea; Aspinall, Willy
2015-04-01
Volcanic hazard assessments must combine information about the physical processes of hazardous phenomena with observations that indicate the current state of a volcano. Incorporating both these lines of evidence can inform our belief about the likelihood (probability) and consequences (impact) of possible hazardous scenarios, forming a basis for formal quantitative hazard assessment. However, such evidence is often uncertain, indirect or incomplete. Approaches to volcano monitoring have advanced substantially in recent decades, increasing the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Interpreting these multiple strands of parallel, partial evidence thus becomes increasingly complex. In practice, interpreting many timeseries requires an individual to be familiar with the idiosyncrasies of the volcano, monitoring techniques, configuration of recording instruments, observations from other datasets, and so on. In making such interpretations, an individual must consider how different volcanic processes may manifest as measurable observations, and then infer from the available data what can or cannot be deduced about those processes. We examine how parts of this process may be synthesised algorithmically using Bayesian inference. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple strands of evidence (e.g. observations, model results and interpretations) and their associated uncertainties in a methodical manner. BBNs are usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic data from the long-lived eruption at Soufriere Hills Volcano, Montserrat. We show how our method
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
Directory of Open Access Journals (Sweden)
A.Tuti Rumiati
2012-02-01
Full Text Available This paper discusses a Bayesian method of Small Area Estimation (SAE) based on a Binomial response variable. SAE methods are developed to estimate parameters in small areas where the sample is insufficient. The case study is literacy rate estimation at sub-district level in Sumenep district, East Java Province. Literacy rate is measured by the proportion of people aged 10 years or more who are able to read and write. In the case study we used Social Economic Survey (Susenas) data collected by BPS. The SAE approach was applied since the Susenas data are not representative enough to estimate the parameters at sub-district level, being designed to estimate parameters at regional level (a district/city at minimum). In this research, the response variable used was the logit transformation of pi (the parameter of the Binomial distribution). We applied direct and indirect approaches for parameter estimation, both using Empirical Bayes. For direct estimation we used a Beta prior distribution and a Normal prior distribution for the logit function of pi, and estimated the parameter by a numerical method, i.e. Monte Carlo integration. For the indirect approach, we used auxiliary variables which are combinations of sex and age (divided into five categories). Penalized Quasi Likelihood (PQL) was used to obtain parameter estimates of the SAE model and Restricted Maximum Likelihood (REML) for MSE estimation. In addition to the Bayesian approach, we also conducted direct estimation with a classical approach in order to evaluate the quality of the estimators. This research gives some findings: the Bayesian approach for the SAE model gives the best estimation, having the lowest MSE value compared to the other methods. For direct estimation, the Bayesian approach using Beta and logit-Normal prior distributions gives a very similar result to direct estimation with the classical approach since the weight of is too
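The direct Beta-prior route has a simple conjugate sketch: a method-of-moments empirical Bayes beta-binomial that shrinks small-sample areas toward the overall mean. The counts below are hypothetical, and the paper's actual estimation works on the logit scale with PQL/REML and Monte Carlo integration rather than this closed form:

```python
import numpy as np

# Hypothetical sub-district counts: y literate respondents out of n sampled.
y = np.array([ 8, 45,  3, 60, 12])
n = np.array([10, 50,  5, 80, 15])
p_direct = y / n                      # direct survey estimates per area

# Method-of-moments fit of a Beta(alpha, beta) prior to the direct estimates.
m, v = p_direct.mean(), p_direct.var()
common = m * (1.0 - m) / v - 1.0
alpha, beta = m * common, (1.0 - m) * common

# Empirical Bayes posterior means: small-n areas shrink toward the overall mean.
p_eb = (y + alpha) / (n + alpha + beta)
```

The shrinkage is strongest where the sample is smallest (here the n = 5 area), which is exactly the behaviour SAE relies on when direct estimates are unstable.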
Simmonds, E.J.; Portilla, E.; Skagen, D.; Beare, D.J.; Reid, D.G.
2010-01-01
Bayesian Markov chain Monte Carlo methods are ideally suited to analyses of situations where there are a variety of data sources, particularly where the uncertainties differ markedly among the data and the estimated parameters can be correlated. The example of Northeast Atlantic (NEA) mackerel is us
Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L
2015-08-18
Radon (²²²Rn) is a naturally occurring, chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (²³⁸U), which is ubiquitous in rocks and soils worldwide. Inhaled ²²²Rn is likely the second leading cause of lung cancer after cigarette smoking; exposure through untreated groundwater also contributes to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater ²²²Rn with anisotropic geological and ²³⁸U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated ²²²Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater ²²²Rn across North Carolina, including prediction uncertainty. The LUR-BME model of groundwater ²²²Rn achieves a leave-one-out cross-validation r² of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled ²²²Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock ²³⁸U defined on the basis of overlying stream-sediment ²³⁸U concentrations, which constitute widely distributed, consistently analyzed point-source data.
Inverse Bayesian Estimation of Gravitational Mass Density in Galaxies from Missing Kinematic Data
Chakrabarty, Dalia
2014-01-01
In this paper we focus on a type of inverse problem in which the data is expressed as an unknown function of the sought and unknown model function (or its discretised representation as a model parameter vector). In particular, we deal with situations in which training data is not available. Then we cannot model the unknown functional relationship between data and the unknown model function (or parameter vector) with a Gaussian Process of appropriate dimensionality. A Bayesian method based on state space modelling is advanced instead. Within this framework, the likelihood is expressed in terms of the probability density function ($pdf$) of the state space variable and the sought model parameter vector is embedded within the domain of this $pdf$. As the measurable vector lives only inside an identified sub-volume of the system state space, the $pdf$ of the state space variable is projected onto the space of the measurables, and it is in terms of the projected state space density that the likelihood is written; ...
O'Hare, A; Orton, R J; Bessell, P R; Kao, R R
2014-05-22
Fitting models with Bayesian likelihood-based parameter inference is becoming increasingly important in infectious disease epidemiology. Detailed datasets present the opportunity to identify subsets of these data that capture important characteristics of the underlying epidemiology. One such dataset describes the epidemic of bovine tuberculosis (bTB) in British cattle, which is also an important exemplar of a disease with a wildlife reservoir (the Eurasian badger). Here, we evaluate a set of nested dynamic models of bTB transmission, including individual- and herd-level transmission heterogeneity and assuming minimal prior knowledge of the transmission and diagnostic test parameters. We performed a likelihood-based bootstrapping operation on the model to infer parameters based only on the recorded numbers of cattle testing positive for bTB at the start of each herd outbreak considering high- and low-risk areas separately. Models without herd heterogeneity are preferred in both areas though there is some evidence for super-spreading cattle. Similar to previous studies, we found low test sensitivities and high within-herd basic reproduction numbers (R0), suggesting that there may be many unobserved infections in cattle, even though the current testing regime is sufficient to control within-herd epidemics in most cases. Compared with other, more data-heavy approaches, the summary data used in our approach are easily collected, making our approach attractive for other systems.
Directory of Open Access Journals (Sweden)
Feng Feng
Surface plasmon resonance (SPR) has previously been employed to measure the active concentration of analyte in addition to the kinetic rate constants in molecular binding reactions. Those approaches, however, have a few restrictions. In this work, a Bayesian approach is developed to determine both active concentration and affinity constants using SPR technology. With appropriate prior probabilities on the parameters and a derived likelihood function, a Markov chain Monte Carlo (MCMC) algorithm is applied to compute the posterior probability densities of both the active concentration and the kinetic rate constants from the collected SPR data. Compared with previous approaches, ours exploits information from the entire duration of the process, including both association and dissociation phases, under partial mass-transport conditions; it does not depend on calibration data; and multiple injections of analyte at varying flow rates are not necessary. Finally the method is validated by analyzing both simulated and experimental datasets. A software package implementing our approach has been developed with a user-friendly interface and made freely available.
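The flavor of such an MCMC computation can be sketched with a toy Metropolis-Hastings sampler. Here a single parameter with a Gaussian likelihood and flat prior stands in for the actual SPR binding model, and all tuning settings (step size, iteration counts, seeds) are illustrative assumptions.

```python
import math
import random

# Toy Metropolis-Hastings sampler: one parameter, Gaussian likelihood,
# flat prior. NOT the SPR binding model; settings are illustrative.

def log_likelihood(theta, data):
    return -0.5 * sum((y - theta) ** 2 for y in data)

def metropolis(data, n_iter=5000, step=0.5, seed=1):
    rng = random.Random(seed)
    theta, samples = 0.0, []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)
        log_ratio = log_likelihood(proposal, data) - log_likelihood(theta, data)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            theta = proposal  # accept the move
        samples.append(theta)
    return samples[n_iter // 10:]  # discard first 10% as burn-in

data_rng = random.Random(0)
data = [2.0 + data_rng.gauss(0.0, 1.0) for _ in range(20)]
samples = metropolis(data)
posterior_mean = sum(samples) / len(samples)
# With a flat prior, the posterior mean should sit near the sample mean.
```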
A Bayesian parameter estimation approach to pulsar time-of-arrival analysis
Messenger, C; Demorest, P; Ransom, S
2011-01-01
The increasing sensitivity of pulsar timing arrays to ultra-low frequency (nHz) gravitational waves promises to achieve direct gravitational-wave detection within the next 5-10 years. While many parallel efforts are being made in the improvement of telescope sensitivity, the detection of stable millisecond pulsars and the improvement of the timing software, there are reasons to believe that the methods used to accurately determine the time-of-arrival (TOA) of pulses from radio pulsars can be improved upon. More specifically, the determination of the uncertainties on these TOAs, which strongly affect the ability to detect GWs through pulsar timing, may be unreliable. We propose two Bayesian methods for the generation of pulsar TOAs starting from pulsar "search-mode" data and pre-folded data. These methods are applied to simulated toy-model examples, and in this initial work we focus on the issue of uncertainties in the folding period. The final results of our analysis are expressed in the form of poster...
Dikaios, Nikolaos; Atkinson, David; Tudisca, Chiara; Purpura, Pierpaolo; Forster, Martin; Ahmed, Hashim; Beale, Timothy; Emberton, Mark; Punwani, Shonit
2017-03-01
The aim of this work is to compare Bayesian inference for non-linear models with commonly used traditional non-linear regression (NR) algorithms for estimating tracer kinetics in Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI). The algorithms are compared in terms of accuracy and reproducibility under different initialization settings. Further, it is investigated how a more robust estimation of tracer kinetics affects cancer diagnosis. The tracer kinetics derived from the Bayesian algorithm were validated against traditional NR algorithms (i.e. Levenberg-Marquardt, simplex) in terms of accuracy on a digital DCE phantom and in terms of goodness-of-fit (Kolmogorov-Smirnov test) on ROI-based concentration time courses from two different patient cohorts. The first cohort consisted of 76 men, 20 of whom had significant peripheral zone prostate cancer (any cancer-core-length (CCL) with Gleason>3+3 or any-grade with CCL>=4 mm) following transperineal template prostate mapping biopsy. The second cohort consisted of 9 healthy volunteers and 24 patients with head and neck squamous cell carcinoma. The diagnostic ability of the derived tracer kinetics was assessed with receiver operating characteristic area under curve (ROC AUC) analysis. The Bayesian algorithm accurately recovered the ground-truth tracer kinetics for the digital DCE phantom, consistently improving the Structural Similarity Index (SSIM) across the 50 different initializations compared to NR. For optimized initialization, the Bayesian algorithm did not significantly improve the fitting accuracy on either patient cohort, and it only significantly improved the ve ROC AUC in the head and neck population, from ROC AUC=0.56 for the simplex to ROC AUC=0.76. For both cohorts, the values and the diagnostic ability of tracer kinetic parameters estimated with the Bayesian algorithm were not affected by their initialization. To conclude, the Bayesian algorithm led to a more accurate and reproducible quantification of tracer kinetic
Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H
2016-08-01
The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing as well as others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012.
Steventon, Adam; Roberts, Adam
2015-12-01
We estimated lifetime costs of publicly funded social care, covering services such as residential and nursing care homes, domiciliary care and meals. Like previous studies, we constructed microsimulation models. However, our transition probabilities were estimated from longitudinal, linked administrative health and social care datasets, rather than from survey data. Administrative data were obtained from three geographical areas of England, and we estimated transition probabilities in each of these sites flexibly using Bayesian methods. This allowed us to quantify regional variation as well as the impact of structural and parameter uncertainty regarding the transition probabilities. Expected lifetime costs at age 65 were £20,200-27,000 for men and £38,700-49,000 for women, depending on which of the three areas was used to calibrate the model. Thus, patterns of social care spending differed markedly between areas, with mean costs varying by almost £10,000 (25%) across the lifetime for people of the same age and gender. Allowing for structural and parameter uncertainty had little impact on expected lifetime costs, but slightly increased the risk of very high costs, which will have implications for insurance products for social care through increasing requirements for capital reserves.
Licquia, Timothy C
2014-01-01
We present improved estimates of several global properties of the Milky Way, including its current star formation rate (SFR), the stellar mass contained in its disk and bulge+bar components, as well as its total stellar mass. We do so by combining previous measurements from the literature using a hierarchical Bayesian (HB) statistical method that allows us to account for the possibility that any value may be incorrect or have underestimated errors. We show that this method is robust to a wide variety of assumptions about the nature of problems in individual measurements or error estimates. Ultimately, our analysis yields a SFR for the Galaxy of $\\dot{\\rm M}_\\star=1.65\\pm0.19$ ${\\rm M}_\\odot$ yr$^{-1}$. By combining HB methods with Monte Carlo simulations that incorporate the latest estimates of the Galactocentric radius of the Sun, $R_0$, the exponential scale-length of the disk, $L_d$, and the local surface density of stellar mass, $\\Sigma_\\star(R_0)$, we show that the mass of the Galactic bulge+bar is ${\\rm...
Roels, Joris; Aelterman, Jan; De Vylder, Jonas; Hiep Luong; Saeys, Yvan; Philips, Wilfried
2016-08-01
Microscopy is one of the most essential imaging techniques in life sciences. High-quality images are required in order to solve (potentially life-saving) biomedical research problems. Many microscopy techniques do not achieve sufficient resolution for these purposes, being limited by physical diffraction and hardware deficiencies. Electron microscopy addresses optical diffraction by measuring emitted or transmitted electrons instead of photons, yielding nanometer resolution. Despite pushing back the diffraction limit, blur should still be taken into account because of practical hardware imperfections and remaining electron diffraction. Deconvolution algorithms can remove some of the blur in post-processing but they depend on knowledge of the point-spread function (PSF) and should accurately regularize noise. Any errors in the estimated PSF or noise model will reduce their effectiveness. This paper proposes a new procedure to estimate the lateral component of the point spread function of a 3D scanning electron microscope more accurately. We also propose a Bayesian maximum a posteriori deconvolution algorithm with a non-local image prior which employs this PSF estimate and previously developed noise statistics. We demonstrate visual quality improvements and show that applying our method improves the quality of subsequent segmentation steps.
Directory of Open Access Journals (Sweden)
Ali Asghar Pourhaji Kazem
2013-04-01
Computational Grids have developed as a new approach to solving large-scale problems in scientific, engineering and business areas. The Open Grid Services Architecture is an adaptation of the service-oriented architecture that presents Grid operations as a set of service-oriented software components. Grid service composition allows users to submit their complex requirements as a single request. QoS-aware Grid service composition algorithms try to construct a composite Grid service that satisfies the user-defined constraints while optimizing the QoS parameters. All of the service composition approaches presented in the literature exclude Grid services with unknown QoS values from the composition process. However, estimating the unknown QoS values of Grid services gives them an opportunity to be selected as component Grid services in the constructed composite service. In this paper, a probabilistic QoS model is presented for estimating unknown QoS values using a Bayesian network. Experimental results indicate that estimating the unknown QoS values has high accuracy and leads to more efficient composite Grid services.
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms well established in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Multi-Pitch Estimation and Tracking Using Bayesian Inference in Block Sparsity
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jakobsson, Andreas; Jensen, Jesper Rindom;
2015-01-01
In this paper, we consider the problem of multi-pitch estimation and tracking of an unknown number of harmonic audio sources. The regularized least-squares is a solution for simultaneous sparse source selection and parameter estimation. Exploiting block sparsity, the method allows for reliable tr...
Pac-bayesian bounds for sparse regression estimation with exponential weights
Alquier, Pierre
2010-01-01
We consider the sparse regression model where the number of parameters $p$ is larger than the sample size $n$. The difficulty when considering high-dimensional problems is to propose estimators achieving a good compromise between statistical and computational performances. The BIC estimator for instance performs well from the statistical point of view \\cite{BTW07} but can only be computed for values of $p$ of at most a few tens. The Lasso estimator is solution of a convex minimization problem, hence computable for large value of $p$. However stringent conditions on the design are required to establish fast rates of convergence for this estimator. Dalalyan and Tsybakov \\cite{arnak} propose a method achieving a good compromise between the statistical and computational aspects of the problem. Their estimator can be computed for reasonably large $p$ and satisfies nice statistical properties under weak assumptions on the design. However, \\cite{arnak} proposes sparsity oracle inequalities in expectation for the emp...
Directory of Open Access Journals (Sweden)
Artur eDomurat
2015-08-01
This paper has two aims. First, we investigate how often people make choices conforming to Bayes' rule when natural sampling is applied. Second, we show that using Bayes' rule is not necessary to make choices satisfying Bayes' rule. Simpler methods, even fallacious heuristics, might prescribe correct choices reasonably often under specific circumstances. We considered elementary situations with binary sets of hypotheses and data. We adopted an ecological approach and prepared two-stage computer tasks resembling natural sampling. Probabilistic relations were to be inferred from a set of pictures, followed by a choice between the data which was made to maximize the chance of a preferred outcome. Use of Bayes' rule was deduced indirectly from choices. Study 1 (N=60) followed a 2 (gender: female vs. male) x 2 (education: humanities vs. pure sciences) between-subjects factorial design with balanced cells, and the number of correct choices as a dependent variable. Choices satisfying Bayes' rule were dominant. To investigate ways of making choices more directly, we replicated Study 1, adding a task with a verbal report. In Study 2 (N=76) choices conforming to Bayes' rule dominated again. However, the verbal reports revealed use of a new, non-inverse rule, which always renders correct choices but is easier than Bayes' rule to apply. It does not require inverting conditions (transforming P(H) and P(D|H) into P(H|D)) when computing chances. Study 3 examined the efficiency of three fallacious heuristics (pre-Bayesian, representativeness, and evidence-only) in producing choices concordant with Bayes' rule. Computer-simulated scenarios revealed that the heuristics produce correct choices reasonably often under specific base rates and likelihood ratios. Summing up, we conclude that natural sampling leads to most choices conforming to Bayes' rule. However, people tend to replace Bayes' rule with simpler methods, and even use of fallacious heuristics may
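The non-inverse rule described above can be illustrated with natural-frequency counts: conditioning on D directly in the joint counts gives the same answer as inverting P(D|H) with Bayes' rule. The counts below are hypothetical.

```python
# Natural-frequency illustration of the "non-inverse" rule: with joint
# counts at hand, P(H|D) can be read off directly, with no need to invert
# P(D|H) via Bayes' rule. All counts are hypothetical.

counts = {("H", "D"): 30, ("H", "~D"): 10,
          ("~H", "D"): 20, ("~H", "~D"): 40}
n = sum(counts.values())

# Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D)
p_h = (counts[("H", "D")] + counts[("H", "~D")]) / n
p_d_given_h = counts[("H", "D")] / (counts[("H", "D")] + counts[("H", "~D")])
p_d = (counts[("H", "D")] + counts[("~H", "D")]) / n
bayes = p_d_given_h * p_h / p_d

# Non-inverse rule: restrict attention to the D column of the counts.
direct = counts[("H", "D")] / (counts[("H", "D")] + counts[("~H", "D")])
# Both routes give the same posterior probability of H given D.
```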
Mani-Varnosfaderani, Ahmad; Kanginejad, Atefeh; Gilany, Kambiz; Valadkhani, Abolfazl
2016-10-12
The present work deals with the development of a new baseline correction method based on the comparative learning capabilities of artificial neural networks. The developed method uses the Bayes probability theorem to prevent over-fitting and to find a generalized baseline. The method has been applied to simulated and real metabolomic gas-chromatography (GC) and Raman data sets. The results revealed that the proposed method can handle different types of baselines with concave, convex, curvilinear, triangular and sinusoidal patterns. For further evaluation of its performance, it has been compared with benchmark baseline correction methods such as corner-cutting (CC), morphological weighted penalized least squares (MPLS), adaptive iteratively-reweighted penalized least squares (airPLS) and iterative polynomial fitting (iPF). To compare the methods, the projected difference resolution (PDR) criterion was calculated for the data before and after the baseline correction procedure. The calculated values of PDR after baseline correction using the iBRANN, airPLS, MPLS, iPF and CC algorithms for the GC metabolomic data were 4.18, 3.64, 3.88, 1.88 and 3.08, respectively. The results demonstrated that the developed iterative Bayesian regularized neural network (iBRANN) method thoroughly detects baselines and is superior to the CC, MPLS, airPLS and iPF techniques. A graphical user interface has been developed for the suggested algorithm and can be used for easy application of the iBRANN algorithm to the correction of different chromatography, NMR and Raman data sets.
Walczak, Michał; Grubmüller, Helmut
2014-08-01
We developed a Bayesian method to extract macromolecular structure information from sparse single-molecule x-ray free-electron laser diffraction images. The method addresses two possible scenarios. First, using a "seed" structural model, the molecular orientation is determined for each of the provided diffraction images, which are then averaged in three-dimensional reciprocal space. Subsequently, the real space electron density is determined using a relaxed averaged alternating reflections algorithm. In the second approach, the probability that the "seed" model fits the given set of diffraction images as a whole is determined and used to distinguish between proposed structures. We show that for a given x-ray intensity, unexpectedly, the achievable resolution increases with molecular mass such that structure determination should be more challenging for small molecules than for larger ones. For a sufficiently large number of recorded photons (>200) per diffraction image, an M^{1/6} scaling is seen. Using synthetic diffraction data for a small glutathione molecule as a challenging test case, successful determination of electron density was demonstrated for 20000 diffraction patterns with random orientations and an average of 82 elastically scattered and recorded photons per image, also in the presence of up to 50% background noise. The second scenario is exemplified and assessed for three biomolecules of different sizes. In all cases, determining the probability of a structure given a set of diffraction patterns allowed successful discrimination between different conformations of the test molecules. A structure model of the glutathione tripeptide was refined in a Monte Carlo simulation from a random starting conformation. Further, effective distinguishing between three differently arranged immunoglobulin domains of a titin molecule and also between different states of a ribosome in a tRNA translocation process was demonstrated. These results show that the proposed method is
Burgess, Ralph; Yang, Ziheng
2008-09-01
Estimation of population parameters for the common ancestors of humans and the great apes is important in understanding our evolutionary history. In particular, inference of population size for the human-chimpanzee common ancestor may shed light on the process by which the 2 species separated and on whether the human population experienced a severe size reduction in its early evolutionary history. In this study, the Bayesian method of ancestral inference of Rannala and Yang (2003. Bayes estimation of species divergence times and ancestral population sizes using DNA sequences from multiple loci. Genetics. 164:1645-1656) was extended to accommodate variable mutation rates among loci and random species-specific sequencing errors. The model was applied to analyze a genome-wide data set of approximately 15,000 neutral loci (7.4 Mb) aligned for human, chimpanzee, gorilla, orangutan, and macaque. We obtained robust and precise estimates for effective population sizes along the hominoid lineage extending back approximately 30 Myr to the cercopithecoid divergence. The results showed that ancestral populations were 5-10 times larger than modern humans along the entire hominoid lineage. The estimates were robust to the priors used and to model assumptions about recombination. The unusually low X chromosome divergence between human and chimpanzee could not be explained by variation in the male mutation bias or by current models of hybridization and introgression. Instead, our parameter estimates were consistent with a simple instantaneous process for human-chimpanzee speciation but showed a major reduction in X chromosome effective population size peculiar to the human-chimpanzee common ancestor, possibly due to selective sweeps on the X prior to separation of the 2 species.
Bayesian estimates of male and female African lion mortality for future use in population management
DEFF Research Database (Denmark)
Barthold, Julia A; Loveridge, Andrew; Macdonald, David;
2016-01-01
1. The global population size of African lions is plummeting, and many small fragmented populations face local extinction. Extinction risks are amplified through the common practice of trophy hunting for males, which makes setting sustainable hunting quotas a vital task. 2. Various demographic...... models evaluate consequences of hunting on lion population growth. However, none of the models use unbiased estimates of male age-specific mortality because such estimates do not exist. Until now, estimating mortality from resighting records of marked males has been impossible due to the uncertain fates...
Directory of Open Access Journals (Sweden)
M. A. Zotov
2016-01-01
An improved algorithm for synthesizing the secondary structure of algebraic Bayesian networks, represented by a minimal join graph, is proposed in the paper. The algorithm differs from the previously offered one in that it relies on the incremental principle, uses specially selected edges and, finally, eliminates redundant edges by a greedy algorithm. The correct operation of the incremental algorithm is mathematically proved. Comparison of the computational complexity of the new (incremental) algorithm implementation with two well-known ones (greedy and direct) is made by means of statistical estimates of complexity, based on sample values of the runtime ratio of software implementations of the two compared algorithms. Theoretical complexity estimates of the greedy and direct algorithms have been obtained earlier, but are not suitable for comparative analysis, as they are based on hidden characteristics of the secondary structure, which can be calculated only once it is built. To minimize the influence of random factors, the average program runtime over N launches on the same set of workloads is used when calculating the ratio. The sample values of the ratio are formed for M sets of equal power K. From the sample values the median is calculated, as well as other statistics that characterize the spread: the borders of the 97% confidence interval along with the first and third quartiles. Sets of loads are generated stochastically according to the specified parameters using the algorithm described in the paper. The stochastic algorithms for generating a set of loads of given power, as well as for collecting the statistical data and calculating the statistical estimates of the ratios of the direct and greedy algorithms' runtimes to that of the incremental algorithm, are also described in the paper. A series of experiments was carried out in which N was varied over the range 1, 2, ..., 9, 10, 26, 42, ..., 170. They showed that the incremental algorithm speed exceeds the
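The runtime-ratio summary (median plus spread statistics) might look like this in outline; the ratio values below are invented, not the paper's measurements.

```python
import statistics

# Sketch of a runtime-ratio summary: for several stochastically generated
# load sets, the ratio of a baseline algorithm's average runtime to the
# incremental algorithm's is collected, then summarized by the median and
# quartiles. The ratio values are made up for illustration.

ratios = [1.8, 2.1, 1.9, 2.4, 2.0, 2.2, 1.7, 2.3, 2.0, 1.9]
median = statistics.median(ratios)
q1, _, q3 = statistics.quantiles(ratios, n=4)  # first and third quartiles
# A median above 1 indicates the incremental algorithm is faster on average.
```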
Bayesian estimates of male and female African lion mortality for future use in population management
DEFF Research Database (Denmark)
Barthold, Julia A; Loveridge, Andrew; Macdonald, David
2016-01-01
models evaluate consequences of hunting on lion population growth. However, none of the models use unbiased estimates of male age-specific mortality because such estimates do not exist. Until now, estimating mortality from resighting records of marked males has been impossible due to the uncertain fates...... higher mortality across all ages in both populations. We discuss the role that different drivers of lion mortality may play in explaining these differences and whether their effects need to be included in lion demographic models. 5. Synthesis and applications. Our mortality estimates can be used......1. The global population size of African lions is plummeting, and many small fragmented populations face local extinction. Extinction risks are amplified through the common practice of trophy hunting for males, which makes setting sustainable hunting quotas a vital task. 2. Various demographic...
Directory of Open Access Journals (Sweden)
SANKU DEY
2010-11-01
The generalized exponential (GE) distribution proposed by Gupta and Kundu (1999) is an important lifetime distribution in survival analysis. In this article, we propose to obtain Bayes estimators and their associated risks based on a class of non-informative priors under three loss functions, namely the quadratic loss function (QLF), the squared log-error loss function (SLELF) and the general entropy loss function (GELF). The motivation is to explore the most appropriate loss function among these three. The performances of the estimators are therefore compared on the basis of their risks obtained under QLF, SLELF and GELF separately. The relative efficiency of the estimators is also obtained. Finally, Monte Carlo simulations are performed to compare the performances of the Bayes estimates under different situations.
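The Bayes estimators under the three loss functions can be illustrated with Monte Carlo posterior draws: under QLF the Bayes estimator is the posterior mean, under SLELF it is exp(E[log θ]), and under GELF with shape c=1 it is 1/E[1/θ]. Here a Gamma posterior stands in for the GE-model posterior, so all numbers are illustrative assumptions.

```python
import math
import random

# Bayes estimates under three loss functions, computed from Monte Carlo
# posterior draws. A Gamma(5, scale=0.5) posterior (mean 2.5) stands in
# for the GE-model posterior; it is purely illustrative.

rng = random.Random(42)
draws = [rng.gammavariate(5.0, 0.5) for _ in range(200_000)]

qlf_est = sum(draws) / len(draws)                          # QLF: posterior mean
slelf_est = math.exp(sum(math.log(d) for d in draws) / len(draws))  # SLELF
gelf_est = 1.0 / (sum(1.0 / d for d in draws) / len(draws))         # GELF, c=1

# GELF with c=1 penalizes overestimation hardest, so the estimates order as
# gelf_est < slelf_est < qlf_est (harmonic < geometric < arithmetic mean).
```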
Yuan, Ying; MacKinnon, David P.
2009-01-01
In this article, we propose Bayesian analysis of mediation effects. Compared with conventional frequentist mediation analysis, the Bayesian approach has several advantages. First, it allows researchers to incorporate prior information into the mediation analysis, thus potentially improving the efficiency of estimates. Second, under the Bayesian…
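A minimal sketch of the Monte Carlo flavor of such a mediation analysis: given hypothetical normal posterior draws for the two path coefficients a and b, the indirect effect ab is summarized by its posterior mean and a 95% credible interval. All means and standard deviations are invented for illustration.

```python
import random
import statistics

# Posterior summary of a mediation (indirect) effect a*b from hypothetical
# normal posterior draws of the two path coefficients.

rng = random.Random(7)
a_draws = [rng.gauss(0.5, 0.1) for _ in range(50_000)]
b_draws = [rng.gauss(0.4, 0.1) for _ in range(50_000)]
ab = sorted(a * b for a, b in zip(a_draws, b_draws))

point = statistics.mean(ab)
lower = ab[int(0.025 * len(ab))]   # 95% equal-tailed credible interval
upper = ab[int(0.975 * len(ab))]
# The product's posterior is skewed, so the interval need not be
# symmetric around the point estimate.
```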
Hosseini, Bamdad; Pichardo, Samuel; Constanciel, Elodie; Drake, James M; Stockie, John M
2016-01-01
High intensity focused ultrasound is a non-invasive method for treatment of diseased tissue that uses a beam of ultrasound in order to generate heat within a small volume. A common challenge in application of this technique is that heterogeneity of the biological medium can defocus the ultrasound beam. In this study, the problem of refocusing the beam is reduced to the Bayesian inverse problem of estimating the acoustic aberration due to the biological tissue from acoustic radiative force imaging data. The solution to this problem is a posterior probability density on the aberration which is sampled using a Metropolis-within-Gibbs algorithm. The framework is tested using both a synthetic and experimental dataset. This new approach has the ability to obtain a good estimate of the aberrations from a small dataset, as little as 32 sonication tests, which can lead to significant speedup in the treatment process. Furthermore, this framework is very flexible and can work with a wide range of sonication tests and so...
Adel, Amel; Abatih, Emmanuel; Speybroeck, Niko; Soukehal, Abdelkrim; Bouguedour, Rachid; Boughalem, Karim; Bouhbal, Abdelmalek; Djerbal, Mouloud; Saegerman, Claude; Berkvens, Dirk
2015-01-01
A large-scale study on canine Leishmania infection (CanL) was conducted in six localities along a west-east transect in the Algerian littoral zone (Tlemcen, Mostaganem, Tipaza, Boumerdes, Bejaia, Jijel) and covering two sampling periods. In total 2,184 dogs were tested with an indirect fluorescent antibody test (IFAT) and a direct agglutination test (DAT). Combined multiple-testing and several statistical methods were compared to estimate the CanL true prevalence and tests characteristics (sensitivity and specificity). The Bayesian full model showed the best fit and yielded prevalence estimates between 11% (Mostaganem, first period) and 38% (Bejaia, second period). Sensitivity of IFAT varied (in function of locality) between 86% and 88% while its specificity varied between 65% and 87%. DAT was less sensitive than IFAT but showed a higher specificity (between 80% and 95% in function of locality or/and season). A general increasing trend of the CanL prevalence was noted from west to east. A concordance between the present results and the incidence of human cases of visceral leishmaniasis was observed, where also a maximum was recorded for Bejaia. The results of the present study highlight the dangers when using IFAT as a gold standard.
Dediu, Dan
2011-02-07
Language is a hallmark of our species and understanding linguistic diversity is an area of major interest. Genetic factors influencing the cultural transmission of language provide a powerful and elegant explanation for aspects of the present day linguistic diversity and a window into the emergence and evolution of language. In particular, it has recently been proposed that linguistic tone-the usage of voice pitch to convey lexical and grammatical meaning-is biased by two genes involved in brain growth and development, ASPM and Microcephalin. This hypothesis predicts that tone is a stable characteristic of language because of its 'genetic anchoring'. The present paper tests this prediction using a Bayesian phylogenetic framework applied to a large set of linguistic features and language families, using multiple software implementations, data codings, stability estimations, linguistic classifications and outgroup choices. The results of these different methods and datasets show a large agreement, suggesting that this approach produces reliable estimates of the stability of linguistic data. Moreover, linguistic tone is found to be stable across methods and datasets, providing suggestive support for the hypothesis of genetic influences on its distribution.
Yagüe, G; Goyache, F; Becerra, J; Moreno, C; Sánchez, L; Altarriba, J
2009-08-01
A total of 5253 records obtained from 2081 Rubia Gallega beef cows managed using artificial insemination as the only reproduction system were analysed to estimate genetic parameters for days to first insemination (DFI), days from first insemination to conception (FIC), number of inseminations per conception (IN), days open (DO), gestation length (GL) and calving interval (CI) via multitrait Bayesian procedures. Estimates of the mean of the posterior distribution of the heritability of DFI, FIC, IN, DO, GL and CI were, respectively, 0.050, 0.078, 0.071, 0.053, 0.037 and 0.085, and the corresponding estimates for repeatability of these traits were 0.116, 0.129, 0.147, 0.138, 0.082 and 0.132, respectively. No significant genetic correlations associated with DFI or GL were found. However, genetic correlations between the other four analysed traits were high and significant. Genetic correlations between FIC and IN, DO and CI were similar and higher than 0.85. Genetic correlations of IN-DO and IN-CI were over 0.65. The highest genetic correlation was estimated for the pair DO-CI (0.992), which can be considered the same trait in genetic terms. Results indicated that DFI can be highly affected by non-genetic factors, thus limiting its usefulness as an early indicator of reproductive performance in beef cattle. Moreover, GL could not be associated with the reproductive performance of the cow before conception. The other four analysed traits, FIC, IN, DO and CI, have close genetic relationships. The inclusion of IN as an early indicator of fertility in beef cattle improvement programs using artificial insemination as the main reproductive system is advisable due to the low additional recording effort needed.
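The heritability and repeatability figures quoted above follow directly from variance components. A minimal sketch of that arithmetic (the component values below are chosen for illustration so that the days-open estimates of 0.053 and 0.138 are reproduced when the total variance is scaled to one; they are not the paper's actual components):

```python
def h2_and_repeatability(va, vpe, ve):
    """Heritability and repeatability from additive-genetic (va),
    permanent-environment (vpe) and residual (ve) variance components:
    h2 = va / total, repeatability = (va + vpe) / total."""
    tot = va + vpe + ve
    return va / tot, (va + vpe) / tot

# illustrative components scaled so the total phenotypic variance is 1
h2, rep = h2_and_repeatability(va=0.053, vpe=0.085, ve=0.862)
```

Repeatability is always at least as large as heritability, since the permanent-environment variance is added to the numerator; the quoted estimates follow that ordering for all six traits.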
Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions
Directory of Open Access Journals (Sweden)
uma srivastava
2012-01-01
Full Text Available The paper deals with estimating a shift point occurring in a sequence of independent observations from a Poisson model in statistical process control. The shift occurs at an unknown point m of the sequence, i.e., after m life data are observed. The Bayes estimators of the shift point m and of the process means before and after the shift are derived for symmetric and asymmetric loss functions under informative and non-informative priors. A sensitivity analysis of the Bayes estimators is carried out by simulation and numerical comparisons with R programming. The results show the effectiveness of the shift in the sequence of the Poisson distribution.
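For fixed before- and after-shift rates and a uniform prior on m, the posterior over the shift point can be computed by direct enumeration. A hedged sketch of that special case (known rates only; the paper additionally estimates the process means and works under asymmetric losses):

```python
import math

def shiftpoint_posterior(x, lam1, lam2):
    """Posterior over the shift point m (the last index generated at
    rate lam1) for a Poisson sequence, assuming both rates are known
    and a uniform prior on m in 1..n-1."""
    n = len(x)
    def loglik(m):
        ll = sum(xi * math.log(lam1) - lam1 for xi in x[:m])
        ll += sum(xi * math.log(lam2) - lam2 for xi in x[m:])
        return ll  # the common log(x_i!) terms cancel after normalization
    lls = [loglik(m) for m in range(1, n)]
    mx = max(lls)
    w = [math.exp(l - mx) for l in lls]
    z = sum(w)
    return {m: wi / z for m, wi in zip(range(1, n), w)}

# synthetic data with a shift after the 5th observation: rate 2, then rate 8
x = [1, 2, 3, 1, 2, 9, 7, 8, 10, 6]
post = shiftpoint_posterior(x, 2.0, 8.0)
m_hat = max(post, key=post.get)  # posterior mode
```

Under squared-error loss the Bayes estimate would instead be the posterior mean; asymmetric losses such as LINEX shift the estimate away from the mode in a controlled direction.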
Directory of Open Access Journals (Sweden)
Olga L. Quintero
Full Text Available Biotechnological processes represent a challenge in the control field, due to their high nonlinearity. In particular, continuous alcoholic fermentation from Zymomonas mobilis (Z.m.) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits an oscillatory behavior in process variables due to the influence of inhibition dynamics (rate of ethanol concentration) over biomass, substrate, and product concentrations. In this work a new solution for control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator in Z.m and without the use of complex calculations.
DEFF Research Database (Denmark)
Shutin, Dmitriy; Fleury, Bernard Henri
2011-01-01
channels. The application context of the algorithm considered in this contribution is parameter estimation from channel sounding measurements for radio channel modeling purpose. The new sparse VB-SAGE algorithm extends the classical SAGE algorithm in two respects: i) by monotonically minimizing...... scattering, calibration and discretization errors, allowing for a robust extraction of the relevant multipath components. The performance of the sparse VB-SAGE algorithm and its advantages over conventional channel estimation methods are demonstrated in synthetic single-input-multiple-output (SIMO) time......-invariant channels. The algorithm is also applied to real measurement data in a multiple-input-multiple-output (MIMO) time-invariant context....
SMC methods to avoid self-resolving for online Bayesian parameter estimation
Aoki, E.H.; Boers, Y.; Mandal, Pranab K.; Bagchi, Arunabha
2012-01-01
The particle filter is a powerful filtering technique that is able to handle a broad scope of nonlinear problems. However, it has also limitations: a standard particle filter is unable to handle, for instance, systems that include static variables (parameters) to be estimated together with the dynam
DEFF Research Database (Denmark)
Burgess, Stephen; Thompson, Simon G; Andrews, G
2010-01-01
Genetic markers can be used as instrumental variables, in an analogous way to randomization in a clinical trial, to estimate the causal relationship between a phenotype and an outcome variable. Our purpose is to extend the existing methods for such Mendelian randomization studies to the context o...
Directory of Open Access Journals (Sweden)
M. J. Vilar
2015-01-01
Full Text Available Bayesian analysis was used to estimate the pig’s and herd’s true prevalence of enteropathogenic Yersinia in serum samples collected from Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA which is used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps; the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data of the apparent prevalence (cross-sectional study data), prior true prevalence (first step), and estimated sensitivity and specificity of the diagnostic methods were used for building the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2–70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3–82.4). The estimates of sensitivity and specificity values of the ELISA were 79.5% and 96.9%.
Vilar, M J; Ranta, J; Virtanen, S; Korkeala, H
2015-01-01
Bayesian analysis was used to estimate the pig's and herd's true prevalence of enteropathogenic Yersinia in serum samples collected from Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA which is used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps; the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data of the apparent prevalence (cross-sectional study data), prior true prevalence (first step), and estimated sensitivity and specificity of the diagnostic methods were used for building the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2-70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3-82.4). The estimates of sensitivity and specificity values of the ELISA were 79.5% and 96.9%.
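The link between apparent and true prevalence used in such models is P(test+) = Se·p + (1−Sp)·(1−p). A minimal grid-based sketch of the one-test, known-accuracy special case (the paper's model jointly estimates Se and Sp with informative priors; the counts below are illustrative, not the study's data):

```python
import math

def true_prev_posterior(k, n, se, sp, grid=2000):
    """Grid posterior for true prevalence p given k test-positives out
    of n, with known sensitivity/specificity and a flat prior on p.
    Apparent prevalence: P(test+) = se*p + (1-sp)*(1-p)."""
    ps = [(i + 0.5) / grid for i in range(grid)]
    logw = []
    for p in ps:
        ap = se * p + (1.0 - sp) * (1.0 - p)
        logw.append(k * math.log(ap) + (n - k) * math.log(1.0 - ap))
    mx = max(logw)
    w = [math.exp(l - mx) for l in logw]
    z = sum(w)
    return ps, [wi / z for wi in w]

# illustrative data: 600 positives in 1000 pigs, ELISA accuracy as quoted
ps, post = true_prev_posterior(k=600, n=1000, se=0.795, sp=0.969)
p_mean = sum(p * w for p, w in zip(ps, post))
# frequentist Rogan-Gladen point estimate for comparison
rg = (0.600 + 0.969 - 1.0) / (0.795 + 0.969 - 1.0)
```

With an imperfect test the true prevalence exceeds the apparent one here, because false negatives outnumber false positives at these accuracy values.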
BayesLine: Bayesian Inference for Spectral Estimation of Gravitational Wave Detector Noise
Littenberg, Tyson B
2014-01-01
Gravitational wave data from ground-based detectors is dominated by instrument noise. Signals will be comparatively weak, and our understanding of the noise will influence detection confidence and signal characterization. Mis-modeled noise can produce large systematic biases in both model selection and parameter estimation. Here we introduce a multi-component, variable dimension, parameterized model to describe the Gaussian-noise power spectrum for data from ground-based gravitational wave interferometers. Called BayesLine, the algorithm models the noise power spectral density using cubic splines for smoothly varying broad-band noise and Lorentzians for narrow-band line features in the spectrum. We describe the algorithm and demonstrate its performance on data from the fifth and sixth LIGO science runs. Once fully integrated into LIGO/Virgo data analysis software, BayesLine will produce accurate spectral estimation and provide a means for marginalizing inferences drawn from the data over all plausible noise s...
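The two-component structure described above, a smoothly varying broadband part plus narrow Lorentzian line features, can be sketched as follows. Piecewise-linear interpolation stands in for the cubic spline, and the knot and line values are made up for illustration:

```python
def lorentzian(f, f0, amp, width):
    # narrow-band line feature centred at f0
    return amp * width ** 2 / ((f - f0) ** 2 + width ** 2)

def broadband(f, knots):
    # piecewise-linear interpolation between (frequency, power) knots;
    # a stand-in for the cubic spline of the actual algorithm
    for (fa, pa), (fb, pb) in zip(knots, knots[1:]):
        if fa <= f <= fb:
            t = (f - fa) / (fb - fa)
            return pa + t * (pb - pa)
    raise ValueError("frequency outside knot range")

def psd(f, knots, lines):
    """Model PSD = smooth broadband part + sum of Lorentzian lines."""
    return broadband(f, knots) + sum(lorentzian(f, *ln) for ln in lines)

# hypothetical knots and a hypothetical power-line feature at 60 Hz
knots = [(10.0, 1e-40), (100.0, 1e-42), (1000.0, 1e-41)]
lines = [(60.0, 5e-41, 0.5)]  # (centre Hz, amplitude, width Hz)
```

In the variable-dimension setting, the number of knots and of Lorentzians are themselves sampled, so the model complexity adapts to the data.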
Efficient Bayesian estimation of Markov model transition matrices with given stationary distribution
Trendelkamp-Schroer, Benjamin
2013-01-01
Direct simulation of biomolecular dynamics in thermal equilibrium is challenging due to the metastable nature of conformation dynamics and the computational cost of molecular dynamics. Biased or enhanced sampling methods may improve the convergence of expectation values of equilibrium probabilities and expectation values of stationary quantities significantly. Unfortunately the convergence of dynamic observables such as correlation functions or timescales of conformational transitions relies on direct equilibrium simulations. Markov state models are well suited to describe both stationary properties and properties of slow dynamical processes of a molecular system, in terms of a transition matrix for a jump process on a suitable discretization of continuous conformation space. Here, we introduce statistical estimation methods that allow a priori knowledge of equilibrium probabilities to be incorporated into the estimation of dynamical observables. Both maximum likelihood methods and an improved Monte Carlo...
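One standard way to write down a reversible transition matrix with a prescribed stationary distribution, the constraint these estimators enforce, is the Metropolis construction from a symmetric proposal matrix. This is a sketch of the constraint itself, not of the paper's estimators:

```python
def metropolis_transition_matrix(pi, q):
    """Build a transition matrix T that is reversible with respect to
    the prescribed stationary distribution pi, using the Metropolis
    construction T[i][j] = q[i][j] * min(1, pi[j]/pi[i]) for j != i,
    with the diagonal absorbing the remaining probability."""
    n = len(pi)
    T = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j != i:
                T[i][j] = q[i][j] * min(1.0, pi[j] / pi[i])
        T[i][i] = 1.0 - sum(T[i])  # off-diagonal mass computed so far
    return T

pi = [0.5, 0.3, 0.2]                       # target stationary distribution
q = [[0.0, 1/3, 1/3],
     [1/3, 0.0, 1/3],
     [1/3, 1/3, 0.0]]                      # symmetric proposal matrix
T = metropolis_transition_matrix(pi, q)
```

Reversibility (detailed balance, pi_i T_ij = pi_j T_ji) implies stationarity of pi; the estimators in the paper sample over all matrices satisfying this constraint rather than fixing a single one.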
Long, Quan
2014-01-06
Shannon-type expected information gain is an important utility in evaluating the usefulness of a proposed experiment that involves uncertainty. Its estimation, however, cannot rely solely on Monte Carlo sampling methods, which are generally too computationally expensive for realistic physical models, especially those involving the solution of stochastic partial differential equations. In this work we present a new methodology, based on the Laplace approximation of the posterior probability density function, to accelerate the estimation of expected information gain in the model parameters and predictive quantities of interest. Furthermore, in order to deal with the issue of dimensionality in a complex problem, we use sparse quadratures for the integration over the prior. We show the accuracy and efficiency of the proposed method via several nonlinear numerical examples, including a single-parameter design of a one-dimensional cubic polynomial function and the current pattern for impedance tomography.
Bayesian model comparison and model averaging for small-area estimation
Aitkin, Murray; Liu, Charles C.; Chadwick, Tom
2009-01-01
This paper considers small-area estimation with lung cancer mortality data, and discusses the choice of upper-level model for the variation over areas. Inference about the random effects for the areas may depend strongly on the choice of this model, but this choice is not a straightforward matter. We give a general methodology for both evaluating the data evidence for different models and averaging over plausible models to give robust area effect distributions. We reanalyze the data of Tsutak...
Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.
Directory of Open Access Journals (Sweden)
Xiaoying Tang
Full Text Available This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.
Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.
Tang, Xiaoying; Oishi, Kenichi; Faria, Andreia V; Hillis, Argye E; Albert, Marilyn S; Mori, Susumu; Miller, Michael I
2013-01-01
This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.
Estimating the diets of animals using stable isotopes and a comprehensive Bayesian mixing model.
Directory of Open Access Journals (Sweden)
John B Hopkins
Full Text Available Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals--each with their benefits and their limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMM models and describe a comprehensive SIMM that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies as well as estimating pollution inputs.
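At its deterministic core, a stable isotope mixing model solves mass-balance equations: the consumer's isotope values are a proportion-weighted average of the source values, with the proportions summing to one. A sketch for two isotopes and three hypothetical sources (SIMMs such as IsotopeR place priors and error terms around this system rather than solving it exactly):

```python
def det3(m):
    # determinant of a 3x3 matrix, expanded along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def diet_proportions(sources, mixture):
    """Solve the exact two-isotope, three-source mass balance by
    Cramer's rule; rows are d13C, d15N, and the sum-to-one constraint."""
    A = [[s[0] for s in sources],
         [s[1] for s in sources],
         [1.0, 1.0, 1.0]]
    b = [mixture[0], mixture[1], 1.0]
    d = det3(A)
    props = []
    for k in range(3):
        Ak = [[b[i] if j == k else A[i][j] for j in range(3)] for i in range(3)]
        props.append(det3(Ak) / d)
    return props

# hypothetical (d13C, d15N) source signatures and a consistent mixture
sources = [(-25.0, 2.0), (-20.0, 8.0), (-12.0, 5.0)]
props = diet_proportions(sources, mixture=(-20.9, 4.4))
```

With more sources than isotopes-plus-one, this system is underdetermined, which is precisely why Bayesian SIMMs report posterior distributions over the diet proportions instead of a unique solution.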
Long, Quan
2013-06-01
Shannon-type expected information gain can be used to evaluate the relevance of a proposed experiment subjected to uncertainty. The estimation of such gain, however, relies on a double-loop integration. Moreover, its numerical integration in multi-dimensional cases, e.g., when using Monte Carlo sampling methods, is therefore computationally too expensive for realistic physical models, especially for those involving the solution of partial differential equations. In this work, we present a new methodology, based on the Laplace approximation for the integration of the posterior probability density function (pdf), to accelerate the estimation of the expected information gains in the model parameters and predictive quantities of interest. We obtain a closed-form approximation of the inner integral and the corresponding dominant error term in the cases where parameters are determined by the experiment, such that only a single-loop integration is needed to carry out the estimation of the expected information gain. To deal with the issue of dimensionality in a complex problem, we use a sparse quadrature for the integration over the prior pdf. We demonstrate the accuracy, efficiency and robustness of the proposed method via several nonlinear numerical examples, including the designs of the scalar parameter in a one-dimensional cubic polynomial function, the design of the same scalar in a modified function with two indistinguishable parameters, the resolution width and measurement time for a blurred single peak spectrum, and the boundary source locations for impedance tomography in a square domain. © 2013 Elsevier B.V.
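The double-loop integration that the Laplace approximation is designed to avoid can be sketched on a toy linear-Gaussian model y = θ + ε, where the expected information gain has the closed form ½·ln(1 + σθ²/σε²). All model choices here are illustrative:

```python
import math
import random

def eig_double_loop(sig_th, sig_eps, n_outer=1000, n_inner=1000):
    """Nested Monte Carlo estimate of E[log p(y|theta) - log p(y)] for
    the toy model y = theta + eps, theta ~ N(0, sig_th^2),
    eps ~ N(0, sig_eps^2). The Gaussian normalization constants cancel
    between the likelihood and evidence terms."""
    random.seed(1)
    total = 0.0
    for _ in range(n_outer):
        th = random.gauss(0.0, sig_th)
        y = th + random.gauss(0.0, sig_eps)
        ll = -0.5 * ((y - th) / sig_eps) ** 2
        # inner loop: prior-predictive (evidence) estimate for this y
        ev = sum(math.exp(-0.5 * ((y - random.gauss(0.0, sig_th)) / sig_eps) ** 2)
                 for _ in range(n_inner)) / n_inner
        total += ll - math.log(ev)
    return total / n_outer

analytic = 0.5 * math.log(1.0 + 1.0 ** 2 / 0.5 ** 2)  # 0.5 * ln(5)
estimate = eig_double_loop(sig_th=1.0, sig_eps=0.5)
```

The cost is n_outer × n_inner likelihood evaluations; replacing the inner loop with a Laplace approximation of the posterior reduces this to a single loop, which is the point of the paper's methodology.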
Bayesian Prediction Model Based on Attribute Weighting and Kernel Density Estimations
Directory of Open Access Journals (Sweden)
Zhong-Liang Xiang
2015-01-01
Full Text Available Although the naïve Bayes learner has been proven to show reasonable performance in machine learning, it often suffers from a few problems in handling real-world data. The first problem is the conditional independence assumption; the second is the use of frequency estimators. Therefore, we have proposed methods to solve these two problems revolving around naïve Bayes algorithms. By using an attribute weighting method, we have been able to handle the conditional independence assumption issue, whereas, for the case of the frequency estimators, we have found a way to weaken the negative effects through our proposed smooth kernel method. In this paper, we propose a compact Bayes model in which a smooth kernel augments weights on likelihood estimation. We have also chosen an attribute weighting method which employs a mutual information metric to cooperate with the framework. Experiments have been conducted on UCI benchmark datasets and the accuracy of our proposed learner has been compared with that of standard naïve Bayes. The experimental results have demonstrated the effectiveness and efficiency of our proposed learning algorithm.
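The idea of replacing frequency or parametric estimators with a smooth kernel in naïve Bayes can be sketched as per-class, per-feature Gaussian kernel density estimates. This is a generic illustration with a fixed bandwidth, not the authors' attribute-weighting scheme:

```python
import math

def kde_logpdf(x, samples, bw):
    # smooth Gaussian kernel density estimate in place of a raw
    # frequency/parametric estimate of the class-conditional density
    s = sum(math.exp(-0.5 * ((x - xi) / bw) ** 2) for xi in samples)
    return math.log(s / (len(samples) * bw * math.sqrt(2.0 * math.pi)))

def kernel_nb_predict(x, classes, bw=0.5):
    """classes maps label -> list of feature rows; the naive Bayes
    independence assumption multiplies per-feature densities."""
    total = sum(len(rows) for rows in classes.values())
    best, best_lp = None, -math.inf
    for label, rows in classes.items():
        lp = math.log(len(rows) / total)  # class prior
        for d in range(len(x)):
            lp += kde_logpdf(x[d], [r[d] for r in rows], bw)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# tiny synthetic training set: two well-separated 2-D classes
classes = {
    "a": [[0.0, 0.0], [0.3, -0.2], [-0.1, 0.2], [0.2, 0.1]],
    "b": [[5.0, 5.0], [4.8, 5.2], [5.1, 4.9], [5.2, 5.1]],
}
```

The bandwidth plays the role of the smoothing parameter: too small and the estimator degenerates toward raw frequencies, too large and class boundaries blur.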
Florez, C.; Romero, M. A.; Ramirez, M. I.; Monsalve, G.
2013-05-01
In the elaboration of a hydrogeological conceptual model in regions of mining exploration with a significant presence of crystalline massif rocks, the influence of the physical and geometrical properties of rock discontinuities must be evaluated. We present the results of a structural analysis of rock discontinuities in a region of the Central Cordillera of Colombia (the upper and middle Bermellon Basin) in order to establish its hydrogeological characteristics for the improvement of the conceptual hydrogeological model for the region. The geology of the study area consists of schists with quartz and mica and porphyritic rocks, in a region of high slopes with a nearly 10 m thick weathered layer. The main objective of this research is to infer the preferential flow directions of groundwater and to estimate the tensor of potential hydraulic conductivity by using surface information, avoiding the use of wells and packer tests. The first step of our methodology is an analysis of drainage directions to detect patterns of structural controls in the run-off; after a field campaign of structural data collection, in which we compiled information on strike, dip, continuity, spacing, roughness, aperture and frequency, we built equal-area hydro-structural polar diagrams that indicate the potential directions of groundwater flow. These results are confronted with records of Rock Quality Designation (RQD) that have been systematically taken from several mining exploration boreholes in the area of study. Using all this information we estimate the potential tensor of hydraulic conductivity from a cubic law, obtaining the three principal directions with conductivities of the order of 10⁻⁵ and 10⁻⁶ m/s; the most conductive joint family has a NE strike with a nearly vertical dip.
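The cubic-law step can be sketched as follows: for a set of parallel smooth fractures of aperture b and spacing s, the equivalent hydraulic conductivity is K = ρgb³/(12μs). The aperture and spacing values below are illustrative, chosen only so the result lands in the 10⁻⁵-10⁻⁶ m/s range reported; the paper's exact parameterization may differ:

```python
def cubic_law_conductivity(aperture, spacing, rho=1000.0, g=9.81, mu=1.0e-3):
    """Equivalent hydraulic conductivity (m/s) of a set of parallel
    smooth fractures via the cubic law K = rho*g*b^3 / (12*mu*s),
    with water density rho (kg/m^3), gravity g (m/s^2) and dynamic
    viscosity mu (Pa*s)."""
    return rho * g * aperture ** 3 / (12.0 * mu * spacing)

# hypothetical joint family: 0.1 mm aperture, 0.5 m spacing
K = cubic_law_conductivity(aperture=1.0e-4, spacing=0.5)  # about 1.6e-6 m/s
```

Because K scales with the cube of the aperture, field estimates of aperture dominate the uncertainty of the resulting conductivity tensor.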
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung Jun; Jung, Wondea Jung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
Some researchers recognized the Bayesian belief network (BBN) method to be a promising method of quantifying software reliability. Brookhaven National Laboratory (BNL) comprehensively reviewed various quantitative software reliability methods to identify the most promising ones for use in probabilistic safety assessments (PSAs) of digital systems of NPPs, against a set of the most desirable characteristics developed therein. BBNs are recognized as a promising way of quantifying software reliability and are useful for integrating many aspects of software engineering and quality assurance. The method explicitly incorporates important factors relevant to reliability, such as the quality of the developer, the development process, problem complexity, testing effort, and the operating environment. In this work, a BBN model was developed to estimate the number of remaining defects in safety-critical software based on a quality evaluation of the software development life cycle (SDLC). Even though a number of software reliability evaluation methods exist, none of them is applicable to safety-critical software in an NPP, because the PSA requires software quality in terms of a probability density function (PDF).
Wiegmann, Douglas A.
2005-01-01
The NASA Aviation Safety Program (AvSP) has defined several products that will potentially modify airline and/or ATC operations, enhance aircraft systems, and improve the identification of potentially hazardous situations within the National Airspace System (NAS). Consequently, there is a need to develop methods for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested. One approach is to use expert judgments to develop Bayesian Belief Networks (BBNs) that model the potential impact that specific interventions may have. Specifically, the present report summarizes methodologies for improving the elicitation of probability estimates during expert evaluations of AvSP products for use in BBNs. The work involved joint efforts between Professor James Luxhoj from Rutgers University and researchers at the University of Illinois. The Rutgers project to develop BBNs received funding from NASA under the title "Probabilistic Decision Support for Evaluating Technology Insertion and Assessing Aviation Safety System Risk." The proposed project was funded separately but supported the existing Rutgers program.
Olive, Marie-Marie; Grosbois, Vladimir; Tran, Annelise; Nomenjanahary, Lalaina Arivony; Rakotoarinoro, Mihaja; Andriamandimby, Soa-Fy; Rogier, Christophe; Heraud, Jean-Michel; Chevalier, Veronique
2017-01-01
The force of infection (FOI) is one of the key parameters describing the dynamics of transmission of vector-borne diseases. Following the occurrence of two major outbreaks of Rift Valley fever (RVF) in Madagascar in 1990–91 and 2008–09, recent studies suggest that the pattern of RVF virus (RVFV) transmission differed among the four main eco-regions (East, Highlands, North-West and South-West). Using Bayesian hierarchical models fitted to serological data from cattle of known age collected during two surveys (2008 and 2014), we estimated RVF FOI and described its variations over time and space in Madagascar. We show that the patterns of RVFV transmission strongly differed among the eco-regions. In the North-West and Highlands regions, these patterns were synchronous with a high intensity in mid-2007/mid-2008. In the East and South-West, the peaks of transmission were later, between mid-2008 and mid-2010. In the warm and humid northwestern eco-region favorable to mosquito populations, RVFV is probably transmitted at a low level all year long during inter-epizootic periods, allowing its maintenance, and is regularly introduced into the Highlands through ruminant trade. The RVF surveillance of animals of the northwestern region could be used as an early warning indicator of an increased risk of RVF outbreak in Madagascar. PMID:28051125
Directory of Open Access Journals (Sweden)
Heringstad Bjørg
2010-07-01
Full Text Available Abstract Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative" or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviation from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models, residual variance on the underlying scale is not identifiable. Hence, the variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full relationship matrix, but genetic (co)variance components are inferred from the sampled breeding values and relationships between "informative" individuals (usually parents) only. The latter is analogous to a sire-dam model (in cases with no individual records on the parents). Results When applied to simulated data sets, the standard animal threshold model failed to produce useful results since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given the fact that no individual records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to
A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for definition of an interesting penalized maximum probabilistic Rand estimator with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
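The Rand measure underlying the fusion model counts pixel pairs on which two label fields agree about being in the same segment. A minimal, quadratic-time sketch of the plain (non-probabilistic) Rand index on flattened label arrays:

```python
from itertools import combinations

def rand_index(a, b):
    """Fraction of pixel pairs on which two labelings agree about
    same-segment vs different-segment membership; invariant to
    permutations of the label values themselves."""
    pairs = list(combinations(range(len(a)), 2))
    agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
    return agree / len(pairs)
```

A fusion estimator of the kind described above seeks the label field maximizing the (penalized) average of such pairwise agreements with all the input segmentations.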
Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi
2016-01-01
Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.
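The BME machinery itself is involved, but the abstract's central point, that auxiliary soil temperature improves interpolation from few monitoring points, can be illustrated with a much simpler stand-in: regress respiration on temperature (a Q10-style exponential), interpolate only the residuals with inverse-distance weighting, and compare against interpolating the raw values. Everything below (station layout, coefficients, noise levels) is a synthetic assumption, not the study's data or method:

```python
import numpy as np

# Synthetic field: respiration driven by a smooth temperature gradient.
rng = np.random.default_rng(6)
n_grid = 200
xy = rng.uniform(0, 100, size=(n_grid, 2))                 # locations (m)
temp = 15 + 0.08 * xy[:, 0] + rng.standard_normal(n_grid)  # soil temp (deg C)
resp = 0.5 * 2.0 ** ((temp - 10) / 10) + 0.1 * rng.standard_normal(n_grid)

obs = rng.choice(n_grid, size=9, replace=False)   # 9 monitoring points
rest = np.setdiff1d(np.arange(n_grid), obs)

def idw(xy_obs, v_obs, xy_new, p=2):
    """Inverse-distance-weighted interpolation."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** p + 1e-12)
    return (w * v_obs).sum(axis=1) / w.sum(axis=1)

# (a) interpolate respiration directly from the 9 points
pred_raw = idw(xy[obs], resp[obs], xy[rest])
# (b) fit log(resp) ~ temp at the 9 points, add the temperature-based
#     trend everywhere, and interpolate only the residuals
coef = np.polyfit(temp[obs], np.log(resp[obs]), 1)
trend = np.exp(np.polyval(coef, temp))
pred_aux = trend[rest] + idw(xy[obs], resp[obs] - trend[obs], xy[rest])

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("RMSE, raw IDW:          ", round(rmse(pred_raw, resp[rest]), 3))
print("RMSE, with aux temp:    ", round(rmse(pred_aux, resp[rest]), 3))
```

This is only a conceptual analogue: BME treats the temperature-derived information as probabilistic "soft data" rather than a deterministic trend, which is what gives it the edge over OK and Co-OK reported above.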
Thompson, R. L.; Gerbig, C.; Roedenbeck, C.; Heimann, M.
2009-04-01
The nitrous oxide (N2O) mixing ratio has been increasing in the atmosphere since the industrial revolution, from 270 ppb in 1750 to 320 ppb in 2007, with a steady growth rate of around 0.26% per year since the early 1980s. The increase in N2O is worrisome for two main reasons. First, it is a greenhouse gas; its atmospheric increase translates to an enhancement in radiative forcing of 0.16 ± 0.02 W m-2, making it currently the fourth most important long-lived greenhouse gas; it is predicted to soon overtake CFCs to become the third most important. Second, it plays an important role in stratospheric ozone chemistry. Human activities are the primary cause of the atmospheric N2O increase. The largest anthropogenic source of N2O is the use of N-fertilizers in agriculture, but fossil fuel combustion and industrial processes, such as adipic and nitric acid production, are also important. We present a Bayesian inversion approach for estimating N2O fluxes over central and western Europe using high-frequency in-situ concentration data from the Ochsenkopf tall tower (50°01′N, 11°48′E, 1022 m a.s.l.). For the inversion, we employ a Lagrangian-type transport model, STILT, which provides source-receptor relationships at 10 km resolution using ECMWF meteorological data. The a priori flux estimates used were from IER for anthropogenic fluxes and GEIA for natural fluxes. N2O fluxes were retrieved monthly at 2° × 2° spatial resolution for 2007. The retrieved N2O fluxes showed significantly more spatial heterogeneity than the a priori field and considerable seasonal variability. The timing of peak emissions differed between regions, but in general the months with the strongest emissions were May and August. Overall, the retrieved flux (anthropogenic and natural) was lower than in the a priori field.
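A minimal sketch of the linear-Gaussian Bayesian inversion that underlies flux estimates of this kind: observations y = H x + e, a Gaussian prior on the fluxes x, and the standard optimal-estimation update. The transport matrix, prior, and error covariances below are toy stand-ins for the STILT/IER/GEIA inputs, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_flux, n_obs = 4, 20                      # hypothetical flux regions, obs
H = rng.random((n_obs, n_flux))            # source-receptor (transport) matrix
x_true = np.array([1.0, 0.2, 0.8, 0.5])    # "true" fluxes (arbitrary units)
y = H @ x_true + 0.05 * rng.standard_normal(n_obs)

x0 = np.full(n_flux, 0.5)                  # a priori flux estimate
B = 0.25 * np.eye(n_flux)                  # prior error covariance
R = 0.05**2 * np.eye(n_obs)                # observation error covariance

# Posterior mean and covariance (optimal-estimation / Kalman form):
#   x_hat = x0 + B H^T (H B H^T + R)^{-1} (y - H x0)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_hat = x0 + K @ (y - H @ x0)
A_post = (np.eye(n_flux) - K @ H) @ B      # posterior covariance

print("posterior fluxes:", np.round(x_hat, 2))
```

The data pull the posterior away from the flat prior toward the true fluxes, and the posterior variances shrink below the prior variances, which is how the "retrieved flux lower than the a priori field" result arises formally.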
Directory of Open Access Journals (Sweden)
Brad L Smith
Full Text Available Previous genetic studies of Atlantic swordfish (Xiphias gladius L.) revealed significant differentiation among Mediterranean, North Atlantic and South Atlantic populations using both mitochondrial and nuclear DNA data. However, limitations in geographic sampling coverage, and the use of single loci, precluded an accurate placement of boundaries and accurate estimates of admixture. In this study, we present multilocus analyses of 26 single nucleotide polymorphisms (SNPs) within 10 nuclear genes to estimate population differentiation and admixture based on the characterization of 774 individuals representing North Atlantic, South Atlantic, and Mediterranean swordfish populations. Pairwise FST values, AMOVA, PCoA, and Bayesian individual assignments support the differentiation of swordfish inhabiting these three basins, but not the current placement of the boundaries that separate them. Specifically, the range of the South Atlantic population extends beyond the 5°N management boundary to 20°N-25°N from 45°W. Likewise, the Mediterranean population extends beyond the current management boundary at the Strait of Gibraltar to approximately 10°W. Further, admixture zones, characterized by asymmetric contributions of adjacent populations within samples, are confined to the Northeast Atlantic. While South Atlantic and Mediterranean migrants were identified within these Northeast Atlantic admixture zones, no North Atlantic migrants were identified in either of the two neighboring basins. Owing both to the characterization of a larger number of loci and to more ample spatial sampling coverage, it was possible to provide a finer resolution of the boundaries separating Atlantic swordfish populations than in previous studies. Finally, the patterns of population structure and admixture are discussed in the light of the reproductive biology, the known patterns of dispersal, and oceanographic features that may act as barriers to gene flow for Atlantic swordfish.
Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier
2013-04-01
Precipitation is a major component of the water cycle that returns atmospheric water to the ground. Without precipitation there would be no water cycle: water would run down the rivers into the seas, and the rivers would eventually dry up. Although precipitation measurement seems a simple procedure, it is affected by several systematic errors which lead to underestimation of the actual precipitation. Hence, precipitation measurements should be corrected before their use. Several correction approaches have already been suggested. Nevertheless, focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this presentation we propose a Bayesian model averaging (BMA) approach for correcting rain gauge measurement errors. In the present study we used meteorological data recorded every 10 minutes at the Condoriri station in the Bolivian Andes. Comparing rain gauge measurements with totalizer rain measurements, it was possible to estimate the rain underestimation. First, different deterministic models were optimized for the correction of precipitation considering wind effect and precipitation intensities. Then, probabilistic BMA correction was performed. The corrected precipitation was then separated into rainfall and snowfall considering typical Andean temperature thresholds of -1°C and 3°C. Hence, precipitation was separated into rainfall, snowfall and mixed precipitation. Then, relating the total snowfall with the glacier ice density, it was possible to estimate the glacier accumulation. Results show a yearly glacier accumulation of 1200 mm/year. Besides, results confirm that in tropical glaciers winter is not an accumulation period but one of low ablation. Results show that neglecting such correction may induce an underestimation higher than 35 % of total precipitation. Besides, the uncertainty range may induce differences up
Noguera, J L; Varona, L; Babot, D; Estany, J
2002-10-01
A total of 66,620 records from the first six parities for number of piglets born alive (NBA) from 20,120 Landrace sows and 24,426 records for weight (WT) and backfat thickness (BT) at 175 d of age were analyzed to estimate genetic parameters. The pedigree consisted of 47,186 individuals, including 392 sires and 5,394 dams. Estimates were based on marginal posterior distribution of the genetic parameters obtained using Bayesian inference implemented via the Gibbs sampling procedure with a Data Augmentation step. The posterior means and posterior standard deviation (PSD) for heritability of NBA ranged from 0.064 (PSD 0.005) in the first parity to 0.146 (PSD 0.019) in the sixth parity, always increasing with the order of the parity. The posterior means for genetic correlations of litter size between adjacent parities were, in most cases, greater than 0.80. However, genetic correlations were much lower between nonadjacent parities. For example, the genetic correlation was 0.534 (PSD 0.061) between the fourth and the sixth parity for NBA. The posterior means of heritability for WT and BT were 0.229 (PSD 0.018) and 0.350 (PSD 0.019), respectively. Posterior mean for genetic correlation between WT and BT was 0.339 (PSD 0.044). The posterior means for genetic correlation between production (WT and BT) and reproduction traits (NBA in different parities) were close to zero in most cases. Results from this study suggest that different parities should be considered as different traits. Moreover, selection for growth and backfat should result in no or very little correlated response in litter size.
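The Gibbs sampling machinery used above can be sketched in its simplest form: a one-way "sire model" y_ij = a_i + e_ij with a_i ~ N(0, s2a), e_ij ~ N(0, s2e), and conjugate inverse-gamma priors on both variance components. The animal model, multi-trait structure, and data augmentation of the paper are omitted; all numbers are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
G, n = 200, 10                        # groups (e.g. sires) x records per group
s2a_true, s2e_true = 0.3, 0.7
a_true = rng.normal(0, np.sqrt(s2a_true), G)
y = a_true[:, None] + rng.normal(0, np.sqrt(s2e_true), (G, n))

a = np.zeros(G)
s2a, s2e = 1.0, 1.0
keep = []
for it in range(3000):
    # a_i | rest: normal, precision n/s2e + 1/s2a, mean from the group sums
    prec = n / s2e + 1.0 / s2a
    mean = (y.sum(axis=1) / s2e) / prec
    a = rng.normal(mean, np.sqrt(1.0 / prec))
    # variances | rest: inverse-gamma with weak Inv-Gamma(0.001, 0.001) priors
    s2a = 1.0 / rng.gamma(0.001 + G / 2, 1.0 / (0.001 + (a ** 2).sum() / 2))
    s2e = 1.0 / rng.gamma(0.001 + G * n / 2,
                          1.0 / (0.001 + ((y - a[:, None]) ** 2).sum() / 2))
    if it >= 1000:                    # discard burn-in
        keep.append((s2a, s2e))

s2a_hat, s2e_hat = np.mean(keep, axis=0)
print("posterior means:", round(s2a_hat, 2), round(s2e_hat, 2))
```

Each sweep draws every parameter from its full conditional; posterior means of the retained draws recover the simulated variance components, which is the same logic the paper applies to heritabilities and genetic correlations.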
Inazu, Daisuke; Pulido, Nelson; Fukuyama, Eiichi; Saito, Tatsuhiko; Senda, Jouji; Kumagai, Hiroyuki
2016-05-01
We have developed a near-field tsunami forecast system based on automatic centroid moment tensor (CMT) estimation using regional broadband seismic observation networks in the regions of Indonesia, the Philippines, and Chile. The automatic procedure of the CMT estimation has been implemented to estimate tsunamigenic earthquakes. A tsunami propagation simulation model is used for the forecast and hindcast. A rectangular fault model based on the estimated CMT is employed to represent the initial condition of tsunami height. The forecast system considers uncertainties due to two possible fault planes and two possible scaling laws, and thus shows four possible scenarios with these associated uncertainties for each estimated CMT. The system requires approximately 15 min to estimate the CMT after the occurrence of an earthquake and approximately another 15 min to make the tsunami forecast results, including the maximum tsunami height and its arrival time at the epicentral region and near-field coasts, available. The retrospectively forecasted tsunamis were evaluated against the deep-sea pressure and tide gauge observations for the past eight tsunamis (Mw 7.5-8.6) that occurred within the regions covered by the seismic networks. The forecasts ranged from half to double the amplitudes of the deep-sea pressure observations and ranged mostly within the same order of magnitude as the maximum heights of the tide gauge observations. It was found that the forecast uncertainties increased for greater earthquakes (e.g., Mw > 8) because the tsunami source was no longer approximated as a point source for such earthquakes. The forecast results for the coasts nearest to the epicenter should be used carefully because these coasts often experience the highest tsunamis with the shortest arrival time (e.g., <30 min).
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial
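The core of the argument, that correlated availability inflates count variance beyond what a binomial detection model can absorb, is easy to demonstrate by simulation: let the per-visit detection probability itself be Beta-distributed, as in the beta-binomial extension. All numbers below are toy choices, not the manatee survey values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, reps = 50, 5, 20000            # true abundance, visits, replicate sites
p_mean, rho = 0.4, 0.3               # mean detection, overdispersion parameter

# Beta(a, b) chosen so E[p] = p_mean and correlation parameter rho:
a = p_mean * (1 - rho) / rho
b = (1 - p_mean) * (1 - rho) / rho

p_t = rng.beta(a, b, size=(reps, T))          # visit-level detection probs
counts_bb = rng.binomial(N, p_t)              # beta-binomial counts
counts_b = rng.binomial(N, p_mean, size=(reps, T))  # plain binomial counts

# Same mean, very different variance: Var = N p q (1 + (N-1) rho) for the
# beta-binomial vs. N p q for the binomial.
print("means:    ", counts_b.mean().round(1), counts_bb.mean().round(1))
print("variances:", counts_b.var().round(1), counts_bb.var().round(1))
```

A binomial mixture model fitted to the overdispersed counts must explain the extra variance through abundance, which is the mechanism behind the overestimation reported in the simulations above.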
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Directory of Open Access Journals (Sweden)
Mohammed Hussni O
2010-06-01
Full Text Available Abstract Background Cryptosporidium parvum is one of the most important biological contaminants in drinking water that produces life-threatening infection in people with compromised immune systems. Dairy calves are thought to be the primary source of C. parvum contamination in watersheds. Understanding the spatial and temporal variation in the risk of C. parvum infection in dairy cattle is essential for designing cost-effective watershed management strategies to protect drinking water sources. Crude and Bayesian seasonal risk estimates for Cryptosporidium in dairy calves were used to investigate the spatio-temporal dynamics of C. parvum infection on dairy farms in the New York City watershed. Results Both global (Global Moran's I) and specific (SaTScan) cluster analysis methods revealed statistically significant clustering of C. parvum infection in all herds in the summer (p = 0.002), compared to the rest of the year. Bayesian estimates did not show significant spatial autocorrelation in any season. Conclusions Although we were not able to identify seasonal clusters using the Bayesian approach, crude estimates highlighted both temporal and spatial clusters of C. parvum infection in dairy herds in a major watershed. We recommend that further studies focus on the factors that may lead to the presence of C. parvum clusters within the watershed, so that monitoring and prevention practices such as stream monitoring, riparian buffers, fencing and manure management can be prioritized and improved, to protect drinking water supplies and public health.
Directory of Open Access Journals (Sweden)
Xiaokang Kou
2016-01-01
Full Text Available Land surface temperature (LST) plays a major role in the study of surface energy balances. Remote sensing techniques provide ways to monitor LST at large scales. However, due to atmospheric influences, significant missing data exist in LST products retrieved from satellite thermal infrared (TIR) remotely sensed data. Although passive microwaves (PMWs) are able to overcome these atmospheric influences while estimating LST, the data are constrained by low spatial resolution. In this study, to obtain complete and high-quality LST data, the Bayesian Maximum Entropy (BME) method was introduced to merge 0.01° and 0.25° LSTs inverted from MODIS and AMSR-E data, respectively. The result showed that the missing LSTs in cloudy pixels were filled completely, and the availability of merged LSTs reached 100%. Because the depths of LST and soil temperature measurements are different, before validating the merged LST, the station measurements were calibrated with an empirical equation between MODIS LST and 0-5 cm soil temperatures. The results showed that the accuracy of merged LSTs increased with the increasing quantity of utilized data, and as the availability of utilized data increased from 25.2% to 91.4%, the RMSEs of the merged data decreased from 4.53 °C to 2.31 °C. In addition, compared with the gap-filling method in which MODIS LST gaps were filled with AMSR-E LST directly, the merged LSTs from the BME method showed better spatial continuity. The different penetration depths of TIR and PMWs may influence fusion performance and still require further studies.
Dettmer, J.; Hossen, M. J.; Cummins, P. R.
2014-12-01
This paper develops a Bayesian inversion to infer spatio-temporal parameters of the tsunami source (sea surface) due to megathrust earthquakes. To date, tsunami-source parameter uncertainties are poorly studied. In particular, the effects of parametrization choices (e.g., discretisation, finite rupture velocity, dispersion) on uncertainties have not been quantified. This approach is based on a trans-dimensional self-parametrization of the sea surface, avoids regularization, and provides rigorous uncertainty estimation that accounts for the model-selection ambiguity associated with the source discretisation. The sea surface is parametrized using self-adapting irregular grids which match the local resolving power of the data and provide parsimonious solutions for complex source characteristics. Finite and spatially variable rupture velocity fields are addressed by obtaining causal delay times from the Eikonal equation. Data are considered from ocean-bottom pressure and coastal wave gauges. Data predictions are based on Green's-function libraries computed from ocean-basin scale tsunami models for cases that include/exclude dispersion effects. Green's functions are computed for elementary waves of Gaussian shape and grid spacing which is below the resolution of the data. The inversion is applied to tsunami waveforms from the great Mw = 9.0 2011 Tohoku-Oki (Japan) earthquake. Posterior results show a strongly elongated tsunami source along the Japan trench, as obtained in previous studies. However, we find that the tsunami data are fit with a source that is generally simpler than obtained in other studies, with a maximum amplitude of less than 5 m. In addition, the data are sensitive to the spatial variability of rupture velocity and require a kinematic source model to obtain satisfactory fits, which is consistent with other work employing linear multiple time-window parametrizations.
Strictly nonnegative tensors and nonnegative tensor partition
Institute of Scientific and Technical Information of China (English)
HU ShengLong; HUANG ZhengHai; QI LiQun
2014-01-01
We introduce a new class of nonnegative tensors: strictly nonnegative tensors. A weakly irreducible nonnegative tensor is a strictly nonnegative tensor, but not vice versa. We show that the spectral radius of a strictly nonnegative tensor is always positive. We give some necessary and sufficient conditions for the six well-conditioned classes of nonnegative tensors introduced in the literature, and a full relationship picture about strictly nonnegative tensors with these six classes of nonnegative tensors. We then establish global R-linear convergence of a power method for finding the spectral radius of a nonnegative tensor under the condition of weak irreducibility. We show that for a nonnegative tensor T, there always exists a partition of the index set such that every tensor induced by the partition is weakly irreducible, and the spectral radius of T can be obtained from the spectral radii of the induced tensors. In this way, we develop a convergent algorithm for finding the spectral radius of a general nonnegative tensor without any additional assumption. Some preliminary numerical results show the feasibility and effectiveness of the algorithm.
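A minimal sketch of the kind of power method referred to above, for an entrywise-positive (hence weakly irreducible) order-3 tensor; the partition-based extension for general nonnegative tensors is not shown. The iteration and the ratio bounds follow the standard NQZ scheme:

```python
import numpy as np

def spectral_radius(T, iters=200, tol=1e-10):
    """Power method for rho(T) of a nonnegative order-3 tensor (n x n x n).

    Iterates x <- (T x^{m-1})^{1/(m-1)} (normalized, m = 3); the ratios
    (T x^{m-1})_i / x_i^{m-1} bracket the spectral radius at every step.
    """
    n = T.shape[0]
    x = np.ones(n)
    for _ in range(iters):
        Tx = np.einsum('ijk,j,k->i', T, x, x)        # (T x^{m-1})_i
        lo, hi = (Tx / x**2).min(), (Tx / x**2).max()  # rho in [lo, hi]
        if hi - lo < tol:
            return 0.5 * (lo + hi)
        y = Tx ** 0.5                                 # exponent 1/(m-1)
        x = y / np.linalg.norm(y)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
T = rng.random((4, 4, 4)) + 0.1     # entrywise positive example tensor
rho = spectral_radius(T)
print("spectral radius:", round(rho, 6))
```

For nonnegative tensors the spectral radius always lies between the smallest and largest "row sums" sum_{jk} T_ijk, which the first iteration's bounds reproduce; the R-linear convergence proved in the paper is what makes the bracketing interval shrink.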
Wang, Yan; Huang, Hong; Huang, Lida; Ristic, Branko
2017-03-01
Source term estimation for atmospheric dispersion deals with estimation of the emission strength and location of an emitting source using all available information, including site description, meteorological data, concentration observations and prior information. In this paper, Bayesian methods for source term estimation are evaluated using Prairie Grass field observations. The methods include those that require the specification of the likelihood function and those which are likelihood-free, also known as approximate Bayesian computation (ABC) methods. The performances of five different likelihood functions in the former and six different distance measures in the latter case are compared for each component of the source parameter vector, based on the Nemenyi test over all 68 data sets available in the Prairie Grass field experiment. Several likelihood functions and distance measures are introduced to source term estimation for the first time. The ABC method is also improved in several respects. Results show that discrepancy measures, which refer to likelihood functions and distance measures collectively, have a significant influence on source estimation. There is no single winning algorithm, but these methods can be used collectively to provide more robust estimates.
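The likelihood-free branch can be sketched with rejection ABC: draw source parameters from the prior, simulate concentrations, and keep draws whose simulated data lie within a tolerance of the observations under a chosen distance measure. The dispersion model and all numbers below are deliberately simple toy assumptions, not the Prairie Grass setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def plume(q, x_src, x_obs):
    """Toy 1-D dispersion model: concentration ~ q / (1 + distance^2)."""
    return q / (1.0 + (x_obs - x_src) ** 2)

x_obs = np.linspace(0.0, 10.0, 15)                # receptor positions
q_true, x_true = 5.0, 4.0                         # emission rate, location
y = plume(q_true, x_true, x_obs) + 0.05 * rng.standard_normal(x_obs.size)

# Rejection ABC: flat priors, Euclidean distance as the discrepancy measure.
n_draws, eps = 200_000, 0.4
q_s = rng.uniform(0.0, 10.0, n_draws)
x_s = rng.uniform(0.0, 10.0, n_draws)
sims = plume(q_s[:, None], x_s[:, None], x_obs[None, :])
dist = np.linalg.norm(sims - y, axis=1)
keep = dist < eps
q_post, x_post = q_s[keep], x_s[keep]

print("accepted draws:", keep.sum())
print("posterior means (q, x):",
      q_post.mean().round(2), x_post.mean().round(2))
```

Swapping the Euclidean norm for another distance measure changes which draws are accepted, which is exactly the sensitivity the paper quantifies across its six distance measures.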
A Bayesian method for microseismic source inversion
Pugh, D. J.; White, R. S.; Christie, P. A. F.
2016-08-01
Earthquake source inversion is highly dependent on location determination and velocity models. Uncertainties in both the model parameters and the observations need to be rigorously incorporated into an inversion approach. Here, we show a probabilistic Bayesian method that allows formal inclusion of the uncertainties in the moment tensor inversion. This method allows the combination of different sets of far-field observations, such as P-wave and S-wave polarities and amplitude ratios, into one inversion. Additional observations can be included by deriving a suitable likelihood function from the uncertainties. This inversion produces samples from the source posterior probability distribution, including a best-fitting solution for the source mechanism and associated probability. The inversion can be constrained to the double-couple space or allowed to explore the gamut of moment tensor solutions, allowing volumetric and other non-double-couple components. The posterior probability of the double-couple and full moment tensor source models can be evaluated from the Bayesian evidence, using samples from the likelihood distributions for the two source models, producing an estimate of whether or not a source is double-couple. Such an approach is ideally suited to microseismic studies where there are many sources of uncertainty and it is often difficult to produce reliability estimates of the source mechanism, although this can be true of many other cases. Using full-waveform synthetic seismograms, we also show the effects of noise, location, network distribution and velocity model uncertainty on the source probability density function. The noise has the largest effect on the results, especially as it can affect other parts of the event processing. This uncertainty can lead to erroneous non-double-couple source probability distributions, even when no other uncertainties exist. Although including amplitude ratios can improve the constraint on the source probability
Li, Yan-Rong; Ho, Luis C; Du, Pu; Bai, Jin-Ming
2013-01-01
This is the first paper in a series devoted to systematic study of the size and structure of the broad-line region (BLR) in active galactic nuclei (AGNs) using reverberation mapping (RM) data. We employ a recently developed Bayesian approach that statistically describes the variability as a damped random walk process and delineates the BLR structure using a flexible disk geometry that can account for a variety of shapes, including disks, rings, shells, and spheres. We allow for the possibility that the line emission may respond non-linearly to the continuum, and we detrend the light curves when there is clear evidence for secular variation. We use a Markov chain Monte Carlo implementation based on Bayesian statistics to recover the parameters and uncertainties for the BLR model. The corresponding transfer function is obtained self-consistently. We tentatively constrain the virial factor used to estimate black hole masses; more accurate determinations will have to await velocity-resolved RM data. Application...
Kelbert, Anna; Balch, Christopher C.; Pulkkinen, Antti; Egbert, Gary D.; Love, Jeffrey J.; Rigler, E. Joshua; Fujii, Ikuko
2017-07-01
Geoelectric fields at the Earth's surface caused by magnetic storms constitute a hazard to the operation of electric power grids and related infrastructure. The ability to estimate these geoelectric fields in close to real time and provide local predictions would better equip the industry to mitigate negative impacts on their operations. Here we report progress toward this goal: development of robust algorithms that convolve a magnetic storm time series with a frequency domain impedance for a realistic three-dimensional (3-D) Earth, to estimate the local, storm time geoelectric field. Both frequency domain and time domain approaches are presented and validated against storm time geoelectric field data measured in Japan. The methods are then compared in the context of a real-time application.
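The frequency-domain approach described above amounts to filtering the magnetic time series with an impedance: transform B(t), multiply by Z(omega), and transform back. The sketch below uses the textbook uniform half-space impedance rather than a realistic 3-D Earth response, and a synthetic storm-time signal, so all values are illustrative assumptions:

```python
import numpy as np

mu0 = 4e-7 * np.pi
dt, n = 1.0, 4096                     # 1 s sampling, record length
t = np.arange(n) * dt

# Synthetic storm-time magnetic variation (nT): two periodicities + noise
rng = np.random.default_rng(4)
B = 100 * np.sin(2 * np.pi * t / 600) + 40 * np.sin(2 * np.pi * t / 120)
B = (B + 5 * rng.standard_normal(n)) * 1e-9          # convert to tesla

freqs = np.fft.rfftfreq(n, dt)
omega = 2 * np.pi * freqs
rho = 100.0                                          # half-space resistivity
Z = np.sqrt(1j * omega * mu0 * rho)                  # impedance (ohm)

# E(t) = F^{-1}[ Z(omega) * H(omega) ], with H = B / mu0
E = np.fft.irfft(Z * np.fft.rfft(B) / mu0, n)        # V/m

print("peak |E|:", round(np.abs(E).max() * 1000, 3), "V/km")
```

In the paper's setting Z(omega) comes from a 3-D magnetotelluric impedance tensor at the site of interest, and the equivalent time-domain formulation convolves B with the impulse response of that filter.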
Current trends in Bayesian methodology with applications
Upadhyay, Satyanshu K; Dey, Dipak K; Loganathan, Appaia
2015-01-01
Collecting Bayesian material scattered throughout the literature, Current Trends in Bayesian Methodology with Applications examines the latest methodological and applied aspects of Bayesian statistics. The book covers biostatistics, econometrics, reliability and risk analysis, spatial statistics, image analysis, shape analysis, Bayesian computation, clustering, uncertainty assessment, high-energy astrophysics, neural networking, fuzzy information, objective Bayesian methodologies, empirical Bayes methods, small area estimation, and many more topics. Each chapter is self-contained and focuses on
Granade, Christopher; Cory, D G
2015-01-01
In recent years, Bayesian methods have been proposed as a solution to a wide range of issues in quantum state and process tomography. State-of-the-art Bayesian tomography solutions suffer from three problems: numerical intractability, a lack of informative prior distributions, and an inability to track time-dependent processes. Here, we solve all three problems. First, we use modern statistical methods, as pioneered by Huszár and Houlsby and by Ferrie, to make Bayesian tomography numerically tractable. Our approach allows for practical computation of Bayesian point and region estimators for quantum states and channels. Second, we propose the first informative priors on quantum states and channels. Finally, we develop a method that allows online tracking of time-dependent states and estimates the drift and diffusion processes affecting a state. We provide source code and animated visual examples for our methods.
Gurau, Razvan
2017-01-01
Written by the creator of the modern theory of random tensors, this book is the first self-contained introductory text to this rapidly developing theory. Starting from notions familiar to the average researcher or PhD student in mathematical or theoretical physics, the book presents in detail the theory and its applications to physics. The recent detections of the Higgs boson at the LHC and gravitational waves at LIGO mark new milestones in Physics confirming long standing predictions of Quantum Field Theory and General Relativity. These two experimental results only reinforce today the need to find an underlying common framework of the two: the elusive theory of Quantum Gravity. Over the past thirty years, several alternatives have been proposed as theories of Quantum Gravity, chief among them String Theory. While these theories are yet to be tested experimentally, key lessons have already been learned. Whatever the theory of Quantum Gravity may be, it must incorporate random geometry in one form or another....
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems involved in the maximum likelihood method by effectively making use of the information from the prior distribution and that from the discovery sequence according to posterior probabilities. All statistical inferences about the parameters of the model and total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, statistical errors of the samples can be easily assessed and the convergence properties can be monitored during the sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify his subjective estimates of the required parameters and his degree of uncertainty about the estimates in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same data of the North Sea on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach has really improved the overly pessimistic results and downward bias of the maximum likelihood procedure.
Hess, Siegfried
2015-01-01
This book presents the science of tensors in a didactic way. The various types and ranks of tensors are presented, together with their physical basis. Cartesian tensors are needed for the description of directional phenomena in many branches of physics and for the characterization of the anisotropy of material properties. The first sections of the book provide an introduction to vector and tensor algebra and analysis, with applications to physics, at undergraduate level. Second-rank tensors, in particular their symmetries, are discussed in detail. Differentiation and integration of fields, including generalizations of the Stokes law and the Gauss theorem, are treated. The physics relevant for the applications in mechanics, quantum mechanics, electrodynamics and hydrodynamics is presented. The second part of the book is devoted to tensors of any rank, at graduate level. Special topics are irreducible, i.e. symmetric traceless tensors, isotropic tensors, multipole potential tensors, spin tensors, integration and spin-...
InfTucker: t-Process based Infinite Tensor Decomposition
Xu, Zenglin; Yuan,; Qi,
2011-01-01
Tensor decomposition is a powerful tool for multiway data analysis. Many popular tensor decomposition approaches---such as the Tucker decomposition and CANDECOMP/PARAFAC (CP)---conduct multi-linear factorization. They are insufficient to model (i) complex interactions between data entities, (ii) various data types (e.g. missing data and binary data), and (iii) noisy observations and outliers. To address these issues, we propose a tensor-variate latent $t$ process model, InfTucker, for robust multiway data analysis: it conducts robust Tucker decomposition in an infinite feature space. Unlike classical tensor decomposition models, it handles both continuous and binary data in a probabilistic framework. Unlike previous nonparametric Bayesian models on matrices and tensors, our latent $t$-process model focuses on multiway analysis and uses nonlinear covariance functions. To efficiently learn InfTucker from data, we develop a novel variational inference technique on tensors. Compared with classical implementation,...
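For contrast with InfTucker's nonparametric extension, the classical multi-linear Tucker factorization it generalizes can be sketched via higher-order SVD. The helper names (`unfold`, `tucker_hosvd`) are illustrative, not from the paper's code:

```python
import numpy as np

def unfold(X, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def tucker_hosvd(X, ranks):
    """Tucker decomposition via higher-order SVD (multi-linear baseline)."""
    # Factor matrices: leading left singular vectors of each unfolding.
    factors = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    # Core tensor: project X onto the factor subspaces, mode by mode.
    G = X
    for n, U in enumerate(factors):
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, factors

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 5, 6))
G, factors = tucker_hosvd(X, (4, 5, 6))   # full ranks -> exact recovery

# Reconstruct by multiplying the core with each factor along its mode.
Xhat = G
for n, U in enumerate(factors):
    Xhat = np.moveaxis(np.tensordot(U, np.moveaxis(Xhat, n, 0), axes=1), 0, n)
```

Truncating `ranks` below the tensor dimensions gives the usual low-rank Tucker approximation; InfTucker replaces this linear projection with a latent *t*-process in an infinite feature space.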
Yapi, Richard B; Chammartin, Frédérique; Hürlimann, Eveline; Houngbedji, Clarisse A; N'Dri, Prisca B; Silué, Kigbafori D; Utzinger, Jürg; N'Goran, Eliézer K; Vounatsou, Penelope; Raso, Giovanna
2016-03-21
Soil-transmitted helminthiasis affects more than a billion people in the world and accounts for a global burden of 5.1 million disability-adjusted life years. The objectives of this study were (i) to map and predict the risk of soil-transmitted helminth infections among school-aged children in Côte d'Ivoire; (ii) to estimate school-aged children population-adjusted risk; and (iii) to estimate annual needs for preventive chemotherapy. In late 2011/early 2012, a cross-sectional survey was carried out among school-aged children in 92 localities of Côte d'Ivoire. Children provided a single stool sample that was subjected to duplicate Kato-Katz thick smears for the diagnosis of soil-transmitted helminths. A Bayesian geostatistical variable selection approach was employed to identify environmental and socioeconomic risk factors for soil-transmitted helminth infections. Bayesian kriging was used to predict soil-transmitted helminth infections on a grid of 1 × 1 km spatial resolution. The number of school-aged children infected with soil-transmitted helminths and the amount of doses needed for preventive chemotherapy according to World Health Organization guidelines were estimated. Parasitological data were available from 5246 children aged 5-16 years. Helminth infections with hookworm were predominant (17.2 %). Ascaris lumbricoides and Trichuris trichiura were rarely found; overall prevalences were 1.9 % and 1.2 %, respectively. Bayesian geostatistical variable selection identified rural setting for hookworm, soil acidity and soil moisture for A. lumbricoides, and rainfall coefficient of variation for T. trichiura as main predictors of infection. The estimated school-aged children population-adjusted risk of soil-transmitted helminth infection in Côte d'Ivoire is 15.5 % (95 % confidence interval: 14.2-17.0 %). We estimate that approximately 1.3 million doses of albendazole or mebendazole are required for school-based preventive chemotherapy, and we provide school
Wen, Fang-Qing; Zhang, Gong; Ben, De
2015-11-01
This paper addresses the direction of arrival (DOA) estimation problem for co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, can be applied to scenarios with limited data support and low signal-to-noise ratio (SNR). Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), China.
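The support-recovery idea can be conveyed with a minimal CS-DOA sketch: build an over-complete dictionary of steering vectors on an angular grid and recover the sparse support greedily. Note this uses orthogonal matching pursuit as a simple stand-in; the paper's structural sparsity Bayesian learning algorithm is more elaborate, and the array parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 16                                    # array elements (illustrative)
grid = np.arange(-90.0, 91.0, 2.0)        # coarse DOA grid, degrees
# Over-complete dictionary of steering vectors, half-wavelength spacing.
A = np.exp(1j * np.pi * np.arange(M)[:, None]
           * np.sin(np.deg2rad(grid))[None, :])

true_idx = [35, 60]                       # true DOAs at -20 and +30 degrees
x = np.zeros(len(grid), dtype=complex)
x[true_idx] = [1.0, 0.8]
noise = 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = A @ x + noise                         # received snapshot

def omp(A, y, k):
    """Greedy support recovery (orthogonal matching pursuit)."""
    support = []
    r = y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef                # residual orthogonal to support
    return sorted(support)

est = omp(A, y, 2)                        # estimated grid indices
```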
Kim, Jang-Gyeong; Kwon, Hyun-Han; Kim, Dongkyun
2017-01-01
Poisson cluster stochastic rainfall generators (e.g., modified Bartlett-Lewis rectangular pulse, MBLRP) have been widely applied to generate synthetic sub-daily rainfall sequences. The MBLRP model reproduces the underlying distribution of the rainfall generating process. The existing optimization techniques are typically based on individual parameter estimates that treat each parameter as independent. However, parameter estimates sometimes compensate for the estimates of other parameters, which can cause high variability in the results if the covariance structure is not formally considered. Moreover, uncertainty associated with model parameters in the MBLRP rainfall generator is not usually addressed properly. Here, we develop a hierarchical Bayesian model (HBM)-based MBLRP model to jointly estimate parameters across weather stations and explicitly consider the covariance and uncertainty through a Bayesian framework. The model is tested using weather stations in South Korea. The HBM-based MBLRP model improves the identification of parameters with better reproduction of rainfall statistics at various temporal scales. Additionally, the spatial variability of the parameters across weather stations is substantially reduced compared to that of other methods.
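The rectangular-pulse idea behind MBLRP-type generators can be sketched in a few lines. This is a simplified Poisson cluster generator with illustrative (not fitted) parameters, omitting the modified Bartlett-Lewis randomization of cell durations across storms:

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_cluster_rain(t_max=240.0, storm_rate=0.02, cells_mean=5,
                         cell_dur_mean=2.0, cell_int_mean=3.0, dt=1.0):
    """Simplified rectangular-pulse Poisson cluster rainfall generator:
    storms arrive as a Poisson process in time; each storm spawns a
    Poisson number of cells whose rectangular pulses (duration x
    intensity) are superposed. Parameter values are illustrative."""
    n_steps = int(t_max / dt)
    rain = np.zeros(n_steps)
    n_storms = rng.poisson(storm_rate * t_max)
    storm_starts = rng.uniform(0, t_max, n_storms)
    for s in storm_starts:
        for _ in range(rng.poisson(cells_mean)):
            start = s + rng.exponential(1.0)        # cell offset from storm
            dur = rng.exponential(cell_dur_mean)    # cell duration (h)
            inten = rng.exponential(cell_int_mean)  # cell intensity (mm/h)
            i0 = int(start / dt)
            i1 = min(int((start + dur) / dt) + 1, n_steps)
            if i0 < n_steps:
                rain[i0:i1] += inten                # superpose pulses
    return rain

series = poisson_cluster_rain()           # synthetic hourly series
```

Calibration then means choosing the rate and mean parameters so that statistics of `series` (mean, variance, autocorrelation, dry fraction at several aggregation scales) match observations; the hierarchical Bayesian model in the abstract does this jointly across stations with an explicit covariance structure.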
Joint maximum likelihood and Bayesian channel estimation
Institute of Scientific and Technical Information of China (English)
沈壁川; 郑建宏; 申敏
2008-01-01
Statistical Bayesian channel estimation is effective in suppressing the noise floor at high SNR, but its performance degrades at low SNR because the noise estimate becomes unreliable. Based on a robust nonlinear de-noising technique for small signals, a simplified joint maximum likelihood and Bayesian channel estimator is proposed and investigated. Simulation results and analysis show that the method improves channel estimation and joint detection performance in both low- and high-SNR situations.
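The trade-off the abstract describes, ML estimation versus a Bayesian estimator that exploits prior structure, can be illustrated with a toy pilot-based channel estimation. Everything below (real-valued signals, random pilots, the N(0, I/L) channel prior) is an assumption for illustration, not the paper's simplified joint estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
L, N = 8, 32                              # channel taps, pilot length
snr_db = -10.0                            # low-SNR regime
P = rng.standard_normal((N, L))           # known pilot matrix (illustrative)
h = rng.standard_normal(L) / np.sqrt(L)   # true channel, prior ~ N(0, I/L)
sigma2 = np.mean((P @ h) ** 2) / 10 ** (snr_db / 10)
y = P @ h + np.sqrt(sigma2) * rng.standard_normal(N)

# Maximum likelihood (least squares) estimate -- ignores the prior:
h_ml = np.linalg.lstsq(P, y, rcond=None)[0]

# Bayesian LMMSE estimate -- shrinks toward the Gaussian prior, which is
# what typically makes it more robust when the noise dominates:
C = np.eye(L) / L                         # prior covariance
h_mmse = C @ P.T @ np.linalg.solve(P @ C @ P.T + sigma2 * np.eye(N), y)

err_ml = np.sum((h_ml - h) ** 2)
err_mmse = np.sum((h_mmse - h) ** 2)
```

A joint scheme of the kind the paper proposes would, roughly speaking, use a robust de-noised estimate of `sigma2` so that the Bayesian shrinkage remains reliable at low SNR.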
Directory of Open Access Journals (Sweden)
Getachew, T.; Getachew, G.; Sintayehu, G.; Getenet, M.; Fasil, A.
2016-01-01
Test evaluation in the absence of a gold standard test was conducted for the diagnosis and screening of bovine brucellosis using three commercially available tests, RBPT, CFT, and I-ELISA, at the National Animal Health Diagnostic and Investigation Center (NAHDIC), Ethiopia. A total of 278 sera samples from five dairy herds were collected and tested. Each serum sample was subjected to the three tests, the results were recorded, and the test outcomes were cross-classified to estimate the sensitivity and specificity of the tests using a Bayesian model. Prior information on the sensitivity and specificity of bovine brucellosis tests from published data was used in the model. The three-test, one-population Bayesian model was modified and applied using WinBUGS software, with the assumption that the dairy herds have a similar management system and unknown disease status. The Bayesian posterior estimates for sensitivity were 89.6 (95% PI: 79.9–95.8), 96.8 (95% PI: 92.3–99.1), and 94 (95% PI: 87.8–97.5), and for specificity 84.5 (95% PI: 68–94.98), 96.3 (95% PI: 91.7–98.8), and 88.5 (95% PI: 81–93.8), for RBPT, I-ELISA, and CFT, respectively. In this study, I-ELISA was found to have the best sensitivity and specificity estimates, 96.8 (95% PI: 92.3–99.1) and 96.3 (95% PI: 91.7–98.8), compared with both CFT and RBPT.
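As a much-reduced illustration of the Bayesian machinery (not the paper's three-test latent-class model), one can compute a grid posterior for true prevalence from a single imperfect test, plugging in the abstract's I-ELISA sensitivity/specificity as fixed values; the positive count `k` is invented for the example:

```python
import numpy as np

se, sp = 0.968, 0.963         # assumed I-ELISA sensitivity/specificity
n, k = 278, 60                # n sera tested; k positives (illustrative)

pi = np.linspace(0, 1, 2001)  # grid over true prevalence
# Apparent prevalence under an imperfect test:
p_app = pi * se + (1 - pi) * (1 - sp)
# Binomial log-likelihood for k positives out of n, flat prior on pi:
log_lik = k * np.log(p_app) + (n - k) * np.log(1 - p_app)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()            # normalized posterior over the grid

post_mean = np.sum(pi * post)
ci = pi[np.searchsorted(post.cumsum(), [0.025, 0.975])]  # 95% interval
```

The full model in the paper instead treats the unknown disease status as latent and uses the cross-classified outcomes of all three tests, which is what lets it estimate each test's sensitivity and specificity without a gold standard.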
Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix
2017-04-01
It is important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is Bayesian Joint Inference of the hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that both approaches generally provide similar predictive performance. However, differences can arise in basins with complex hydrology (e.g. ephemeral basins), because the results obtained with Bayesian Joint Inference depend strongly on the suitability of the hypothesized error model. Similarly, the results of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the full probability distribution model selected for the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the differing results. This research also explores a combination of both methodologies which could be useful to achieve less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor. Second, this predictive distribution is used to derive the corresponding additive error model which is employed for the hydrological parameter
Overstall, Antony M; Woods, David C
2013-06-01
Bayesian inference is considered for statistical models that depend on the evaluation of a computationally expensive computer code or simulator. For such situations, the number of evaluations of the likelihood function, and hence of the unnormalized posterior probability density function, is determined by the available computational resource and may be extremely limited. We present a new example of such a simulator that describes the properties of human embryonic stem cells using data from optical trapping experiments. This application is used to motivate a novel strategy for Bayesian inference which exploits a Gaussian process approximation of the simulator and allows computationally efficient Markov chain Monte Carlo inference. The advantages of this strategy over previous methodology are that it is less reliant on the determination of tuning parameters and allows the application of model diagnostic procedures that require no additional evaluations of the simulator. We show the advantages of our method on synthetic examples and demonstrate its application on stem cell experiments.
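The strategy of the paper, fitting a cheap Gaussian process surrogate to a handful of expensive likelihood evaluations and then running MCMC on the surrogate, can be sketched as follows. The quadratic "simulator", kernel hyperparameters, and prior range are all illustrative assumptions, not the stem-cell model:

```python
import numpy as np

rng = np.random.default_rng(5)

def expensive_loglik(theta):
    """Stand-in for the expensive simulator-based log-likelihood."""
    return -0.5 * (theta - 2.0) ** 2 / 0.25

design = np.linspace(-1.0, 5.0, 15)       # the few affordable evaluations
y = np.array([expensive_loglik(t) for t in design])

def kernel(a, b, ell=1.0, s2=25.0):
    """Squared-exponential covariance (hyperparameters illustrative)."""
    return s2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = kernel(design, design) + 1e-6 * np.eye(len(design))
alpha = np.linalg.solve(K, y)             # precompute GP weights once

def log_post(theta):
    """GP posterior mean as a cheap surrogate; flat prior on [-1, 5]."""
    if not (-1.0 <= theta <= 5.0):
        return -np.inf
    return float(kernel(np.array([theta]), design) @ alpha)

# Random-walk Metropolis on the surrogate -- no further simulator calls.
chain, theta, lp = [], 0.0, log_post(0.0)
for _ in range(4000):
    prop = theta + 0.5 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post_mean = np.mean(chain[500:])          # burn-in discarded
```

The appeal, as in the paper, is that the MCMC cost is decoupled from the simulator cost: the expensive code is evaluated only at the design points, and diagnostics can be run on the surrogate chain for free.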
Directory of Open Access Journals (Sweden)
Hesham M. Sallam
2016-03-01
The Fayum Depression of Egypt has yielded fossils of hystricognathous rodents from multiple Eocene and Oligocene horizons that range in age from ∼37 to ∼30 Ma and document several phases in the early evolution of crown Hystricognathi and one of its major subclades, Phiomorpha. Here we describe two new genera and species of basal phiomorphs, Birkamys korai and Mubhammys vadumensis, based on rostra and maxillary and mandibular remains from the terminal Eocene (∼34 Ma) Fayum Locality 41 (L-41). Birkamys is the smallest known Paleogene hystricognath, has very simple molars, and, like derived Oligocene-to-Recent phiomorphs (but unlike contemporaneous and older taxa), apparently retained dP4∕4 late into life, with no evidence for P4∕4 eruption or formation. Mubhammys is very similar in dental morphology to Birkamys, and also shows no evidence for P4∕4 formation or eruption, but is considerably larger. Though parsimony analysis with all characters equally weighted places Birkamys and Mubhammys as sister taxa of extant Thryonomys to the exclusion of much younger relatives of that genus, all other methods (standard Bayesian inference, Bayesian "tip-dating," and parsimony analysis with scaled transitions between "fixed" and polymorphic states) place these species in more basal positions within Hystricognathi, as sister taxa of Oligocene-to-Recent phiomorphs. We also employ tip-dating as a means for estimating the ages of early hystricognath-bearing localities, many of which are not well constrained by geological, geochronological, or biostratigraphic evidence. By simultaneously taking into account phylogeny, evolutionary rates, and uniform priors that appropriately encompass the range of possible ages for fossil localities, dating of tips in this Bayesian framework allows paleontologists to move beyond vague and assumption-laden "stage of evolution" arguments in biochronology to provide relatively rigorous age assessments of poorly